
Four short links: 4 May 2012

Statistical Fallacies, Sensors via Microphone, Peak Plastic, and Go Web Framework

  1. Common Statistical Fallacies (Flowing Data) — once you know to look for them, you see them everywhere. Or is that confirmation bias?
  2. Project Hijack — hijacking power and bandwidth from the mobile phone’s audio interface to create a cubic-inch peripheral sensor ecosystem for the mobile phone.
  3. Peak Plastic — Deb Chachra points out that if we’re running out of oil, that also means that we’re running out of plastic. Compared to fuel and agriculture, plastic is small potatoes. Even though plastics are made on a massive industrial scale, they still account for less than 10% of the world’s oil consumption. So recycling plastic saves plastic and reduces its impact on the environment, but it certainly isn’t going to save us from the end of oil. Peak oil means peak plastic. And that means that much of the physical world around us will have to change. I hadn’t pondered plastics in medicine before. (via BoingBoing)
  4. web.go (GitHub) — web framework for the Go programming language.
  • Alex Tolley

    Re: Common Statistical Fallacies:

    “You can’t always judge how likely or improbable a sample is based on how it compares to a known population. For example, let’s say you flip a coin four times and get four tails in a row (TTTT). Then you flip four more times and get HTHT. In the long run, heads and tails are going to be split 50/50, but that doesn’t mean the second sequence is more likely.”

    The example given isn’t very useful. By definition, all unique sequences of H/T are equally probable. However, it is not the sequence that is usually observed, but rather the relative frequencies of H and T in the sample. In that case, HTHT = 2H2T is more likely than HHHH = 4H.
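The distinction Alex draws between a specific sequence and the relative frequencies of H and T can be checked numerically. This is an illustrative sketch added for this point, not something from the original comment:

```python
from math import comb

n = 4  # four flips of a fair coin

# Every specific length-4 sequence (HHHH, HTHT, ...) has the same probability.
p_specific = 0.5 ** n               # 1/16 for any one sequence

# But the *count* of heads is not uniform: 2 heads out of 4 can occur
# comb(4, 2) = 6 ways, while 4 heads can occur only one way.
p_two_heads = comb(n, 2) * 0.5 ** n    # 6/16
p_four_heads = comb(n, 4) * 0.5 ** n   # 1/16
```

So HTHT as a specific sequence is no more likely than HHHH, but the outcome "two heads and two tails" is six times as likely as "four heads".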

    The author pretty much undermines his example with the last line:

    “Similarly, a sequence of ten heads in a row isn’t the same as getting a million heads in a row.”

    It isn’t similar at all. Here he is making the case that with longer sequences, the probability of getting 1 million heads is much lower than that of getting 4 heads; but by his first argument, any specific sequence of 1 million flips has a lower probability than any specific sequence of 4 flips, whatever the mix of heads and tails.

    What he meant to say was that a sequence of 1 million heads is very unlikely, whilst a sequence of 4 heads has a probability of 1/16.
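The scaling argument is easy to make concrete: the probability of n heads in a row halves with every extra flip. A minimal sketch (again, added for illustration, not part of the comment):

```python
from fractions import Fraction

def p_all_heads(n: int) -> Fraction:
    """Exact probability of n consecutive heads from a fair coin: (1/2)**n."""
    return Fraction(1, 2) ** n

# Four heads in a row: 1/16, matching the figure above.
# Ten in a row is already under one in a thousand (1/1024),
# and a million in a row has probability 2**-1000000 -- astronomically small.
```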