- Fix Mac OS X — each time you start typing in Spotlight (to open an application or search for a file on your computer), your local search terms and location are sent to Apple and third parties (including Microsoft) under default settings on Yosemite (10.10). See also Net Monitor, an open source toolkit for finding phone-home behaviour.
- A/B Testing at Netflix (ACM) — “Using a combination of static analysis to build a dependency tree, which is then consumed at request time to resolve conditional dependencies, we’re able to build customized payloads for the millions of unique experiences across Netflix.com.”
- Leslie Lamport Interview Summary — One idea about formal specifications that Lamport tries to dispel is that they require mathematical capabilities that are not available to programmers: “The mathematics that you need in order to write specifications is a lot simpler than any programming language [...] Anyone who can write C code, should have no trouble understanding simple math, because C code is a hell of a lot more complicated than” first-order logic, sets, and functions. When I was at uni, profs worked on distributed data, distributed computation, and formal correctness. We now have the first two, but software is still so flawed that I can only dream of the third arriving.
- Fake Identity — generate fake identity data when testing systems; a short sketch follows this list.
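As a rough sketch of what that looks like in practice (assuming the third-party Faker package for Python; the linked tool may well work differently), seeding the generator keeps the fake identities reproducible between test runs:

```python
# pip install Faker
from faker import Faker

Faker.seed(1234)  # fixed seed: the same identities appear on every run
fake = Faker()

# Generate a few throwaway identities for test fixtures.
for _ in range(3):
    print(fake.name(), "|", fake.email(), "|", fake.address().replace("\n", ", "))
```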
ENTRIES TAGGED "testing"
Variations in Test-Driven Development
“Red-Green-Refactor” is a familiar slogan from test-driven development (TDD), describing a widely used approach to writing software. It’s been both popular and controversial since the 2000s (see the recent heated discussions between David Heinemeier Hansson, Bob Martin, and others). I find that it’s useful but limiting. Here I’ll describe some interesting exceptions to the rule, which have expanded the way I think about tests.
The standard three-step cycle goes like this. After choosing a small improvement, which can be either a feature or a bug fix, you add a failing test which shows that the improvement is missing (“Red”); add production code to make the test pass (“Green”); and clean up the production code while making sure the tests still pass (“Refactor”). It’s a tight loop with minimal changes at each step, so you’re never far from code that runs and has good test coverage.
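To make the cycle concrete, here’s a minimal sketch of one loop iteration in Python’s built-in unittest; the word_count function and its tests are hypothetical, invented purely to illustrate the three phases:

```python
# One pass through Red-Green-Refactor, compressed into a single file.
import unittest

# Red: these tests were written first and failed (word_count didn't exist yet).
class WordCountTest(unittest.TestCase):
    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("red green refactor"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# Green: the simplest production code that makes both tests pass.
def word_count(text):
    return len(text.split())

# Refactor: with the tests green, you could now rename things, extract
# helpers, or handle punctuation, re-running the suite after each change.

if __name__ == "__main__":
    unittest.main()
```

In practice each phase is a separate edit-and-run step; the comments above mark where the file stood at each point in the loop.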
By the way, to simplify things, I’ll just say “tests” and be vague about whether they’re technically “unit tests,” “specs,” “integration tests,” or “functional tests”; the main thing is that they’re written in code and they run automatically.
Red-Green-Refactor is a very satisfying rhythm when it works. Starting from the test keeps the focus on adding value, and writing a test forces you to clarify where you want to go. Many people say it promotes clean design: it’s just easier to write tests when you have well-separated modules with reasonable interfaces between them. My personal favorite part, though, is not the Red but the Refactor: the support from tests allows you to clean things up with confidence, and worry less about regressions.
Now for the exceptions. Read more…
Getting apps into the store is a non-deterministic process
One of the major topics of my Enterprise iOS book is how to plan release schedules around Apple’s peril-filled submission process. I don’t think you can count yourself a truly bloodied iOS dev until you’ve gotten your first rejection notice from iTunes Connect, especially under deadline pressure.
Traditionally, the major reason an application would bounce was that the developer had been a Bad Person. They had grossly abused the Human Interface Guidelines, shipped a flaky app that crashed when the tester fired it up, or used undocumented internal system calls. In most cases, the rejection could have been anticipated if the developer had done his homework. There were occasional apps that got rejected for bizarre reasons, such as perceived adult content, or because of some secret Apple agenda, but they were the rare exception. If you followed the rules, your app would get into the store.