Is your computer or software secure? Despite what the experts or vendors tell you, the answer is probably no. This point was driven home to me yesterday when Citizen Lab published a report on three iPhone exploits. These exploits are called 0-days because defenders have had zero days to patch them: they are used before the vendor or security researchers even know they exist. Such exploits fetch a large price on the private market, and the only reason the intended target was not infected is that he was suspicious of the link. No software update or other measure can defend you against a properly executed 0-day. It is up to you to recognize an attempt to infect you and stay vigilant.
Security is a moving target. As exploits are found, companies patch them with software updates. I was shocked when I put a new server on the Internet a few months ago and enabled a tool called LogWatch, which showed me how many times automated scanners probe my server for known vulnerabilities. It is a lot.
So how can we protect ourselves? A lot of the recommended security measures involve locking down the machine in ways that make common attack vectors more challenging. Using a complex password and two-factor authentication on your online accounts makes breaking in more work. This deterrence approach is the same as getting a home alarm system: it will not make your system impenetrable, but it makes it less appealing to opportunists. The second goal is to minimize the potential damage if your system is penetrated. Encrypt important data; restrict permissions. If you use a different password for each account, then an adversary who gets one password cannot access your other accounts with it. Keeping your operating system and software updated prevents you from being exploited through known vulnerabilities.
For software developers the approach is different. We are not merely worried about protecting our machines or accounts, but about protecting our code itself. Many programming languages have security guides that cover common pitfalls and attacks. For the applications I develop with Ruby on Rails, I have learned there is a static code analyzer called Brakeman that checks for common security holes in your code, and Deppbot keeps my dependencies updated when security fixes are released. The last line of defense is vigilance and having peers review your code. If someone discloses a vulnerability, it is incumbent upon a developer to fix it as soon as possible, and a code review by another experienced programmer can help spot errors. Despite all this there will still be security vulnerabilities; after all, Apple and Google employ some of the smartest software engineers in the world, and they still have to fix vulnerabilities.
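To make the Brakeman point concrete, here is a sketch of the classic SQL-injection pattern it flags in Rails code, next to the parameterized form it prefers. The method names are hypothetical, not from any real project:

```ruby
# Hypothetical illustration of a pattern Brakeman warns about.

def unsafe_query(name)
  # User input interpolated straight into the SQL text -- Brakeman
  # reports this as a potential SQL injection.
  "SELECT * FROM users WHERE name = '#{name}'"
end

def safe_query(name)
  # Parameterized form: the value travels separately from the SQL,
  # so a malicious name cannot rewrite the query's logic.
  ["SELECT * FROM users WHERE name = ?", name]
end

puts unsafe_query("a' OR '1'='1")
# The injected input turns the WHERE clause into a tautology:
# SELECT * FROM users WHERE name = 'a' OR '1'='1'
```

Brakeman finds this kind of mistake mechanically, which is exactly why running it alongside human code review is worthwhile.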
The final piece is resilience. Besides being vigilant, making sure that the failure state of being attacked will not hurt you badly is a good way to protect yourself. Avoid storing sensitive data you do not need; if you do store it, use current best practices for encryption. Back up your data offsite so that you still have it if an attacker deletes it. Make a plan for what to do if you are attacked and how you will handle it, so you can recover quickly.
As I have been working on my Code for Boston project I have spent lots of time reviewing pull requests. While it is easy to review pull requests when you are working with someone in the same space every day, it is a bit more challenging when someone is remote. I think the good pull requests tend to have a few things in common:
- They are as small as possible. Often a feature can be broken down into multiple steps.
- They include tests for new functionality.
- They pass the existing tests.
- They describe what the feature or bug fix does so that the tester can test it.
- Merge conflicts have been resolved.
- They actually check in all the files necessary for the change.
Some of these do not apply to every project, but in general these are the issues I encounter when reviewing pull requests. The most common are an oversized pull request, a missing description, and a forgotten file. While the command line is cool, I strongly recommend a GUI client like Tower 2 to help with git. Beyond that, it takes practice, and seeing the reviewer's side, before you develop an intuitive sense for whether your pull request is good.
Earlier this week I had the privilege of attending the Forbidden Research conference with others from Berkman Klein. There the speakers posed the question of "[h]ow can we most effectively harness responsible, ethical disobedience aimed at challenging the norms, rules, or laws that sustain society's injustices?" This was explored through panel discussions as well as announcements made at the conference. At the end of it I felt refreshed and motivated.
The panel that most resonated with me was the one about the hacks at MIT. The panelists discussed the process and rules that MIT and its students follow in relation to the famous hacks that involve placing objects on top of the MIT dome. That panel helped me understand that while the students were breaking the rules, there was still a set of norms governing that rule breaking. Both the students and the administration understood it was something that happens, and the students endeavored to conduct their activities in a responsible and ethical manner despite the fact it was not allowed.
The biggest takeaway I got from the panel is that it is often better to reach out proactively to enforcement agencies than to have egos clash in public later. The other big takeaway is that breaking the rules is often fine if it works, but you can quickly be disavowed if it fails. I think it is fairly evident that we do not have enough room for people to experiment and fail in this world, and while I appreciate the idea of the prize that was presented at the conference, I think another good action item would have been to spend time figuring out how to create more spaces for rule breakers to fail safely and ethically.
One of the things I consistently struggle with when I run organizations is deciding when to delegate something versus when to do it myself. A well-functioning organization should have lots of people who can split the work and get things done without large amounts of intervention. However, getting an organization to that point is a challenge. In a well-functioning organization, people need the authority, time, and ability to get things done, and people who are new to an organization are often still learning how things work. There is a cost to onboarding on both ends.
In my civic technology projects I have two kinds of delegation. The first kind is for the members of the team who show up every week. I can give one of them a high-level task: they know the history of the project and are able and willing to learn what they need to complete it, and since they have been there every week I trust them to do it. The other kind, which takes more work to put together, is the tightly specified micro-task designed to let a coder contribute in the one session they are present. While it might feel quicker to just write the code myself, these tasks save me real time in the end and give new community members a meaningful way to engage with the project.
As a software developer who is now fairly talented at what I do, it is tempting to just write the software I want to create. Removing the management overhead of working on a group project feels like it would make things go faster, and in many ways it would. However, the goal of a successful project is not just to have functioning software but to have a functioning community around that software. Communities are resilient: they will patch bugs and keep your software up to date. Communities also bring fresh perspectives to the project and can take it in directions you did not consider. In the short term delegation may not feel or be worthwhile, but I believe it is worthwhile in the long term.
One of my goals when I started my fellowship at the Berkman Center for Internet and Society was to incorporate automated software testing into my workflow. Sections on automated testing are included in popular tomes like the Rails Tutorial, but these tutorials also suggest that you can skip learning about testing, and many people do. I think the main reasons people skip testing are that it is effectively a second programming language to learn and that its purpose is not well explained.
If your software is a book of math problems, your tests are the answers at the back of the book. Testing frameworks like RSpec provide a language for describing what your software should do, and a way of making sure that it in fact does those things. While automated tests are not a complete replacement for human testing, they give you a general sense of what is and is not working by pretending to be a person using your software. This saves time when manual testing would be tedious; for example, an automated test can load your site 300 times very quickly to exercise a rate limiter.
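As a sketch of that "answers at the back of the book" idea, here is a tiny hand-rolled check in plain Ruby. The Calculator class and its expected answer are hypothetical; in a Rails project you would express the same thing with RSpec:

```ruby
# Minimal, hypothetical example of a test as the "answer at the back
# of the book". Real projects would use a framework like RSpec.

class Calculator
  def add(a, b)
    a + b
  end
end

def check(label, expected, actual)
  # The test records the expected answer; rerunning it after every
  # change confirms the code still produces that answer.
  raise "FAIL: #{label}" unless expected == actual
  puts "PASS: #{label}"
end

check("add returns the sum", 5, Calculator.new.add(2, 3))
```

A framework like RSpec adds a richer vocabulary (`describe`, `it`, matchers) on top of this same core idea: state the expected behavior, then verify the code matches it.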
Testing is not without pitfalls. Sometimes a feature works while the test for that feature fails. This can be due to a mismatch between how your testing framework performs a step and how it happens in the real world, or a bug in the framework's execution of the steps. Some people write lots of tests for a model or program and then want to make changes; each change creates extra work, because the tests must also be updated to accommodate it. The upside is that tests force you to be more deliberate about how your code works; the downside is that the cost of change increases. That is likely why many developers do not regularly write tests.
Ultimately, learning to write tests has benefited my code and made it easier to be confident when integrating changes from others into codebases I maintain. I am not religious about writing tests for every single change, but I believe that having a test suite, and including tests with bug fixes, is a good practice that prevents regressions and helps others understand your program.