Some notes on Rails nested routing

Today I spent a lot of time figuring out how to organize a controller in Rails where we needed to filter based on some query params. I was initially inspired to follow a pattern from DHH, which turned out to be a bit challenging because of some incomplete information in the original blog post. The first thing the post failed to explain was how to organize files when you use these co-controllers. The answer is that you set up a folder named after the parent controller and name the .rb files after the sub-controllers. So in the linked example the “Inboxes::PendingsController” class would go into app/controllers/inboxes/pendings_controller.rb. The second gotcha is that in the routes.rb file the nested route for the index action must come before the route for the parent resource’s show action. For example, in this commit I made /applicants/interests work, but if I had put lines 12-14 below line 15, Rails would have tried to find an applicant with the ID “interests” and failed.
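The ordering issue can be sketched in routes.rb. This is a minimal sketch using the applicants/interests names from my example; the exact route definitions in my commit may differ, and the `if defined?(Rails)` guard is only there so the snippet loads outside a Rails app:

```ruby
# Sketch of config/routes.rb illustrating why ordering matters.
Rails.application.routes.draw do
  # The namespaced index must be declared first. Otherwise
  # GET /applicants/interests would match applicants#show below,
  # with params[:id] set to the string "interests".
  namespace :applicants do
    resources :interests, only: :index  # Applicants::InterestsController#index
  end

  resources :applicants
end if defined?(Rails)
```

Following the folder convention above, Applicants::InterestsController would live in app/controllers/applicants/interests_controller.rb.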

However, I also learned that the sub-controller approach is not great for situations where the client sends filter params to the controller. After some research I determined the best practice was simply to concede the point: check for the presence of params in the applicants index action and filter on that. It is important to note that after a while your controller can get cluttered with if statements checking params. Several friends have suggested that the natural evolution of this pattern is to extract the query into a query object that takes params and returns a scope on applicants. That way the query object is testable and my controller does not get bloated.
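The query-object evolution my friends suggested might look like this minimal sketch. ApplicantsQuery and the specific filter params are hypothetical names, and the scope can be any chainable object such as an ActiveRecord relation:

```ruby
# A query object: takes params, returns a narrowed scope on applicants.
# Each filter is applied only when its param is present, which keeps the
# if statements out of the controller's index action.
class ApplicantsQuery
  def initialize(scope)
    @scope = scope
  end

  def call(params)
    scope = @scope
    scope = scope.where(status: params[:status]) if params[:status]
    scope = scope.where(city: params[:city]) if params[:city]
    scope
  end
end
```

The index action then collapses to something like `@applicants = ApplicantsQuery.new(Applicant.all).call(params)`, and the filtering logic can be unit tested with any stand-in scope.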

Finally, we have also been struggling with nested controllers in relation to some has_many relationships between our models. The struggle was largely due to the fact that the Rails routing documentation on nested resources does not explain the changes you need to make to a controller when you start nesting routes under it. However, as you can see later in that thread, I was lucky enough to find an old Railscast with a useful pattern.
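The change nested routes force on a controller is roughly this: the child controller has to load the parent from the nested param and scope everything through it. The sketch below uses hypothetical model names and strips out the Rails base class and request plumbing so the scoping logic stands alone:

```ruby
# With a nested route such as GET /applicants/:applicant_id/interests,
# the interests controller can no longer just return all interests. It must
# first find the parent applicant, then go through the has_many association.
class InterestsController
  def initialize(params)
    @params = params
  end

  # Mirrors a Rails index action: load the parent, return its children.
  def index
    applicant = Applicant.find(@params[:applicant_id])
    applicant.interests
  end
end
```

In a real Rails controller those two lines usually live in a before_action that sets @applicant, so every action in the nested controller is scoped consistently.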

Is your computer or software secure?

Is your computer or software secure? Despite what the experts or vendors want to tell you, the answer is probably no. This point was driven home to me yesterday when Citizen Lab published a report on three iPhone exploits. These exploits are called 0-days because the vendor and security researchers have had zero days to fix them before they are used. They fetch a large price on the private market, and the only reason the intended target was not infected is that he was suspicious of the link. No software update or other measure can defend you against a properly executed 0-day. It is up to you to recognize an attempt to infect you and remain vigilant.

Security is a moving target. As companies find exploits, they patch them with software updates. I was shocked when I put a new server on the Internet a few months ago and enabled something called LogWatch, which showed me how many times automated computers scan and probe my server to see if it is vulnerable to known exploits. It is a lot.

So how can we protect ourselves? A lot of the recommended security measures involve locking down the machine in ways that make common attack vectors more difficult. For most online accounts, using a complex password and two-factor authentication makes breaking in more work. This deterrence approach is the same as getting a home alarm system: it will not make your system impenetrable, but it makes it less appealing to opportunists. The second goal is to minimize the potential damage if your system is penetrated. Encrypt important data; restrict permissions. If you use a different password for each account, then an adversary who gets one password cannot access your other accounts without also obtaining the rest. Keeping your system updated and running modern operating system software prevents you from being exploited by known vulnerabilities.

For software developers the approach is different. We are not merely worried about protecting our machines or accounts, but about protecting our code itself. Many programming languages have security guides that cover common pitfalls and attacks. For the applications I develop with Ruby on Rails, I have learned there is a static code analyzer called Brakeman that checks for common security holes in your code. Deppbot keeps the modules updated when security updates occur. The last line of defense is vigilance and having peers review your code. If someone discloses a vulnerability, it is incumbent upon a developer to fix it as soon as possible. A code review by another experienced programmer can also help spot errors. Despite all this there will still be security vulnerabilities; after all, Apple and Google have some of the smartest software engineers in the world, and they still have to fix vulnerabilities.

The final solution is resilience. Besides being vigilant, making sure that the failure state of being attacked will not hurt you badly is a good way to protect yourself. Avoid storing sensitive data if you do not need it, and use current best practices around encryption if you do. Back up your data offsite so that you still have it if an attacker deletes it. Make a plan for what to do if you are attacked so you can recover quickly.

Anatomy of a Good Pull Request

As I have been working on my Code for Boston project, I have spent lots of time reviewing pull requests. While it is easy to review pull requests when you are working with someone in the same space every day, it is a bit more challenging when someone is remote. I think good pull requests tend to have a few things in common:

  1. They are as small as possible. Often a feature can be broken down into multiple steps.
  2. They include tests for new functionality.
  3. They pass the existing tests.
  4. They describe what the feature or bug fix does so that the tester can test it.
  5. Merge conflicts have been resolved.
  6. They actually check in all the files necessary for the change.

Some of these are not applicable to every project, but in general these are the issues I encounter when I’m reviewing pull requests. The most common are an oversized pull request, a missing description, and a forgotten file. While the command line is cool, I strongly recommend a GUI client like Tower 2 to help with git. Otherwise it often takes practice, and seeing the reviewer’s side, before you develop an intuitive sense of whether your pull request is good.

What I Learned from Forbidden Research

Earlier this week I had the privilege of attending the Forbidden Research conference with others from Berkman Klein. There the speakers posed the question of “[h]ow can we most effectively harness responsible, ethical disobedience aimed at challenging the norms, rules, or laws that sustain society’s injustices?” This was explored through panel discussions as well as announcements made at the conference. By the end of the conference I felt refreshed and motivated.

The panel that most resonated with me was the one about the hacks at MIT. The panelists discussed the process and rules that MIT and its students follow in relation to the famous hacks that involve placing objects on top of the MIT dome. That panel helped me understand that while the students were breaking the rules, there was still a set of norms that governed that rule breaking. Both they and the administration understood it was something that happens, and the students would endeavor to conduct their activities in a responsible and ethical manner despite the fact it was not allowed.

The biggest takeaway I got from the panel is that it is often better to reach out proactively to enforcement agencies than to have egos clash in public later. The other big takeaway is that breaking the rules is often fine if it works, but you can quickly be disavowed if it fails. I think it is fairly evident that we do not have enough room for people to experiment and fail in this world, and while I appreciate the idea of the prize that was presented at the conference, I think another good action item would have been to spend time figuring out how to create more spaces for rule breakers to fail safely and ethically.

Delegation versus Just Doing It

One of the things I consistently struggle with when I run organizations is deciding when to delegate something versus when to do it myself. A well-functioning organization should have lots of people in it who can split the work and get things done without large amounts of intervention. However, getting an organization to that point is a challenge. In a well-functioning organization, people need the authority, time, and ability to get things done, yet when new people join they are often still learning how things work. There is a cost to onboarding on both ends.

In my civic technology projects I have two kinds of delegation. The first kind is for the members of the team who visit every week. I can give one of them a high-level task because they know the history of the project and are able and willing to learn what they need to complete it; since they have been there every week, I trust them to do it. The other kind is tightly specified micro-tasks designed to let a coder participate in the one session they are present for. These tasks take more work to put together, and while skipping them and just writing the code myself would save me real time, creating them provides an opportunity for new community members to engage with the project in a meaningful way.

As a software developer who is now fairly talented at what I do, it is tempting to just write the software I want to create. Removing the management overhead of a group project feels like it would make things move faster, and in many ways it would. However, the goal of a successful project is not just to have functioning software but to have a functioning community around that software. Communities are resilient: they will patch bugs and keep your software up to date, and they bring new and fresh perspectives that can take the project in directions you did not consider. In the short term delegation may not feel or be worthwhile, but I believe it is in the long term.

This work by Matt Zagaja is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.