"algorithms" entries

Four short links: 4 March 2016

Snapchat's Business, Tracking Voters, Testing for Discriminatory Associations, and Assessing Impact

  1. How Snapchat Built a Business by Confusing Olds (Bloomberg) — Advertisers don’t have a lot of good options to reach under-30s. The audiences of CBS, NBC, and ABC are, on average, in their 50s. Cable networks such as CNN and Fox News have it worse, with median viewerships near or past Social Security age. MTV’s median viewers are in their early 20s, but ratings have dropped in recent years. Marketers are understandably anxious, and Spiegel and his deputies have capitalized on those anxieties brilliantly by charging hundreds of thousands of dollars when Snapchat introduces an ad product.
  2. Tracking Voters — On the night of the Iowa caucus, Dstillery flagged all the [ad network-mediated ad] auctions that took place on phones in latitudes and longitudes near caucus locations. It wound up spotting 16,000 devices on caucus night, as those people had granted location privileges to the apps or devices that served them ads. It captured those mobile IDs and then looked up the characteristics associated with those IDs in order to make observations about the kind of people that went to Republican caucus locations (young parents) versus Democrat caucus locations. It drilled down further (e.g., ‘people who like NASCAR voted for Trump and Clinton’) by looking at which candidate won at a particular caucus location.
  3. Discovering Unwarranted Associations in Data-Driven Applications with the FairTest Testing Toolkit (arXiv) — We describe FairTest, a testing toolkit that detects unwarranted associations between an algorithm’s outputs (e.g., prices or labels) and user subpopulations, including sensitive groups (e.g., defined by race or gender). FairTest reports statistically significant associations to programmers as association bugs, ranked by their strength and likelihood of being unintentional, rather than necessary effects. See also slides from PrivacyCon. Source code not yet released.
  4. Inferring Causal Impact Using Bayesian Structural Time-Series Models (Adrian Colyer) — understanding the impact of an intervention by building a predictive model of what would have happened without the intervention, then diffing reality to that model.
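The counterfactual idea in link 4 (model what would have happened without the intervention, then diff reality against that model) can be sketched with a toy linear-trend stand-in; the paper itself uses Bayesian structural time-series, and the function name and data below are illustrative only:

```python
import numpy as np

def causal_impact(series, intervention_idx):
    """Estimate intervention impact by diffing reality against a
    counterfactual forecast fit on the pre-intervention period.
    (Toy linear-trend model, not the paper's Bayesian STS.)"""
    t = np.arange(len(series))
    pre_t, pre_y = t[:intervention_idx], series[:intervention_idx]
    slope, intercept = np.polyfit(pre_t, pre_y, 1)  # fit pre-period trend
    counterfactual = slope * t + intercept          # "what would have happened"
    effect = series - counterfactual                # diff reality vs. model
    return effect[intervention_idx:].mean()         # average post-period lift

# Series rises by 1 per step, then jumps +5 after the intervention at t=10.
y = np.concatenate([np.arange(10, dtype=float),
                    np.arange(10, 20, dtype=float) + 5.0])
print(round(causal_impact(y, 10), 2))  # → 5.0
```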
Four short links: 4 February 2016

Shmoocon Video, Smart Watchstrap, Generalizing Learning, and Dataflow vs Spark

  1. Shmoocon 2016 Videos (Internet Archive) — videos of the talks from an astonishingly good security conference.
  2. TipTalk — Samsung watchstrap that is itself the smart device … put your finger in your ear to hear the call. You had me at put my finger in my ear. (via WaPo)
  3. Ecorithms — Leslie Valiant at Harvard broadened the concept of an algorithm into an “ecorithm,” which is a learning algorithm that “runs” on any system capable of interacting with its physical environment. Algorithms apply to computational systems, but ecorithms can apply to biological organisms or entire species. The concept draws a computational equivalence between the way that individuals learn and the way that entire ecosystems evolve. In both cases, ecorithms describe adaptive behavior in a mechanistic way.
  4. Dataflow/Beam vs Spark (Google Cloud) — To highlight the distinguishing features of the Dataflow model, we’ll be comparing code side-by-side with Spark code snippets. Spark has had a huge and positive impact on the industry thanks to doing a number of things much better than other systems had done before. But Dataflow holds distinct advantages in programming model flexibility, power, and expressiveness, particularly in the out-of-order processing and real-time session management arenas.
Four short links: 26 January 2016

Inequality, Conversational Commerce, Minsky Lectures, and Trust vs Transparency

  1. What Paul Graham is Missing About Inequality (Tim O’Reilly) — When a startup doesn’t have an underlying business model that will eventually produce real revenues and profits, and the only way for its founders to get rich is to sell to another company or to investors, you have to ask yourself whether that startup is really just a financial instrument, not that dissimilar to the CDOs of the 2008 financial crisis — a way of extracting value from the economy without actually creating it.
  2. 2016 The Year of Conversational Commerce (Chris Messina) — I really hope that these conversations with companies are better than the state-of-the-art delights of “press 5 to replay” phone hell.
  3. Society of Mind (MIT) — Marvin Minsky’s course, with lectures.
  4. Trust vs Transparency (PDF) — explanation facilities can potentially drop both a user’s confidence and make the process of search more stressful. Aka “few takers for sausage factory tours.” (via ACM Queue)
Four short links: 11 January 2016

Productivity Mystery, Detecting Bullshit, Updating Cars, and Accountable Algorithms

  1. Why Americans Can’t Stop Working (The Atlantic) — summary of a paywalled economics paper on the mystery of increasing productivity yet increasing length of the work week. (via BoingBoing)
  2. On the Reception and Detection of Pseudo-Profound Bullshit (PDF) — These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. (via Rowan Crawford)
  3. Tesla Model S Can Now Drive Without You (TechCrunch) — the upside of the Internet of Things is that objects get smarter while you sleep. (In fairness, they can also be pwned by Ukrainian teenagers while you sleep.)
  4. Replacing Judgement with Algorithms (Bruce Schneier) — We can get the benefits of automatic algorithmic systems while avoiding the dangers. It’s not even hard. Transparency and oversight with accountability.
Four short links: 22 September 2015

Ant Algorithms, Git Commit, NASA's Deep Learning, and Built-In Empathy

  1. Ant Algorithms for Discrete Optimization (Adrian Colyer) — Stigmergy is the generic term for the stimulation of workers by the performance they have achieved – for example, termite nest-building works in a similar way. Stigmergy is a form of indirect communication “mediated by physical modifications of environmental states which are only locally accessible to the communicating agents.”
  2. How to Write a Git Commit Message (Chris Beams) — A diff will tell you what changed, but only the commit message can properly tell you why.
  3. Deep Belief Networks at the Heart of NASA Image Classification — The two new labeled satellite data sets were ultimately put to the test with a modified deep-belief-networks-driven approach. The results show classification accuracy of 97.95%, outperforming the unmodified pure deep belief networks, convolutional neural networks, and stacked de-noising auto-encoders by around 11%.
  4. The Consequences of An Insightful Algorithm (Carina C. Zona) — We design software for humans. Balancing human needs and business specs can be tough. It’s crucial that we learn how to build in systematic empathy. (via Rowan Crawford)
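The stigmergy mechanism in link 1 (trails reinforced in proportion to quality, balanced by evaporation) can be sketched as a toy two-path colony. Everything here, from the function name to the parameter values, is an illustrative assumption rather than the paper's algorithm:

```python
import random

def ant_colony_two_paths(lengths=(1.0, 2.0), n_ants=100, n_rounds=30,
                         evaporation=0.5, seed=42):
    """Minimal stigmergy demo: ants pick a path with probability
    proportional to its pheromone; shorter paths get a larger deposit
    per trip (~ 1/length), so the colony converges on the short path."""
    random.seed(seed)
    pheromone = [1.0, 1.0]
    for _ in range(n_rounds):
        deposits = [0.0, 0.0]
        for _ in range(n_ants):
            total = pheromone[0] + pheromone[1]
            path = 0 if random.random() < pheromone[0] / total else 1
            deposits[path] += 1.0 / lengths[path]  # shorter path, bigger deposit
        # Evaporation keeps early (possibly poor) trails from dominating forever.
        pheromone = [evaporation * p + d for p, d in zip(pheromone, deposits)]
    return pheromone

p = ant_colony_two_paths()
print(p[0] > p[1])  # trail on the shorter path dominates
```

The environment (the pheromone array) is the only channel of communication between ants, which is exactly the "indirect communication mediated by physical modifications of environmental states" the excerpt describes.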
Four short links: 11 September 2015

Wishful CS, Music Big Data, Better Queues, and Data as Liability

  1. Computer Science Courses that Don’t Exist, But Should (James Hague) — CSCI 3300: Classical Software Studies. Discuss and dissect historically significant products, including VisiCalc, AppleWorks, Robot Odyssey, Zork, and MacPaint. Emphases are on user interface and creativity fostered by hardware limitations.
  2. Music Science: How Data and Digital Content Are Changing Music — O’Reilly research report on big data and the music industry. Researchers estimate that it takes five seconds to decide if we don’t like a song, but 25 to conclude that we like it.
  3. The Curse of the First-In First-Out Queue Discipline (PDF) — the research paper behind the “more efficient to serve the last person who joined the queue” newspaper stories going around.
  4. Data is Not an Asset, It Is a Liability — regardless of the boilerplate in your privacy policy, none of your users have given informed consent to being tracked. Every tracker and beacon script on your website increases the privacy cost they pay for transacting with you, chipping away at the trust in the relationship.
Four short links: 19 August 2015

Privacy-Respecting Algorithms, Dealers Growing, Book Recommendations, and End of Internet Dreams

  1. Efficient Algorithms for Public-Private Social Networks — Google Research paper on privacy-respecting algorithms for social networks. From the overview: the models of privacy we’re landing on (nodes or edges in the graph are marked as “private” by a user) mean that enforcing these privacy guarantees translates to solving a different algorithmic problem for each user in the network, and for this reason, developing algorithms that process these social graphs and respect these privacy guarantees can become computationally expensive. The paper shows how to efficiently approximate some of the graph operations required to run a social network.
  2. Rise of Networked Platforms for Physical World Services (Tim O’Reilly) — the central player begins by feeding its network of suppliers, but eventually begins to compete with it. […] Over time, as networks reach monopoly or near-monopoly status, they must wrestle with the issue of how to create more value than they capture — how much value to take out of the ecosystem, versus how much they must leave for other players in order for the marketplace to continue to thrive.
  3. Book Recommendations from BLDBLOG — Winslow memorably pointed out how farmers in the Sinaloa region of Mexico had been swept up into the cartel’s infinitely flexible method of production, and that, despite any ensuing role growing and harvesting marijuana or even poppies, the cartel offered them new jobs in logistics, not agriculture. “They didn’t want to be farmers,” Winslow said at Bookcourt, “they wanted to be FedEx.”
  4. The End of the Internet Dream (Jennifer Granick) — this is all gold. Something resonating with my current meditations: People are sick and tired of crappy software. And they aren’t going to take it any more. The proliferation of networked devices — the Internet of Things — is going to mean all kinds of manufacturers traditionally subject to products liability are also software purveyors. If an autonomous car crashes, or a networked toaster catches on fire, you can bet there is going to be product liability. […] I think software liability is inevitable. I think it’s necessary. I think it will make coding more expensive, and more conservative. I think we’ll do a crappy job of it for a really long time.
Four short links: 12 August 2015

Economic Futures, Space War, State of Security, and Algorithmic Fairness

  1. Possible Economics Models (Jamais Cascio) — economic futures filtered through Doctorovian prose. Griefer Economics: Information is power, especially when it comes to finance, and the increasing use of ultra-fast computers to manipulate markets (and drive out “weaker” competitors) is moving us into a world where market position isn’t determined by having the best offering, but by having the best tool. Rules are gamed, opponents are beaten before they even know they’re playing, and it all feels very much like living on a PvP online game server where the referees have all gone home. Relevant to Next:Economy.
  2. War in Space May Be Closer Than Ever (SciAm) — Today, the situation is much more complicated. Low- and high-Earth orbits have become hotbeds of scientific and commercial activity, filled with hundreds upon hundreds of satellites from about 60 different nations. Despite their largely peaceful purposes, each and every satellite is at risk, in part because not all members of the growing club of military space powers are willing to play by the same rules — and they don’t have to, because the rules remain as yet unwritten. There’s going to be a bitchin’ S-1 risks section when Planet Labs files for IPO.
  3. Not Even Close: The State of Computer Security (Vimeo) — In this bleak, relentlessly morbid talk, James Mickens will describe why making computers secure is an intrinsically impossible task. He will explain why no programming language makes it easy to write secure code. He will then discuss why cloud computing is a black hole for privacy, and only useful for people who want to fill your machine with ads, viruses, or viruses that masquerade as ads. At this point in the talk, an audience member may suggest that bitcoins can make things better. Mickens will laugh at this audience member and then explain why trusting the bitcoin infrastructure is like asking Dracula to become a vegan. Mickens will conclude by describing why true love is a joke and why we are all destined to die alone and tormented. The first ten attendees will get balloon animals, and/or an unconvincing explanation about why Mickens intended to (but did not) bring balloon animals. Mickens will then flee on horseback while shouting “The Prince of Lies escapes again!”
  4. Algorithms and Bias (NYTimes) — interview w/Cynthia Dwork from Microsoft Research. Fairness means that similar people are treated similarly. A true understanding of who should be considered similar for a particular classification task requires knowledge of sensitive attributes, and removing those attributes from consideration can introduce unfairness and harm utility.
Four short links: 13 July 2015

Improving Estimates, Robot Bother, Robotics Nations, and Potential Futures of Work

  1. Kalman Filter — an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone.
  2. Interview with Bruce Sterling — Singapore is like a science fictional society without the fiction. Dubai is like a science fictional society without the science. […] Robots just don’t want to live. They’re inventions, not creatures; they don’t have any appetites or enthusiasms. I don’t think they’d maintain themselves very long without our relentlessly pushing them uphill against their own lifeless entropy. They’re just not entities in the same sense that we are entities; they don’t have much skin in our game. They don’t care and they can’t be bothered. We don’t yet understand how and why we ourselves care and bother, so we’d be hard put to install that capacity inside our robot vacuum cleaners.
  3. Japan’s Robot Revolution — Fugitt said Japan’s weakness was in application and deployment of its advanced technologies. “The Japanese expect other countries and people to appreciate their technology, but they’re inwardly focused. If it doesn’t make sense to them, they typically don’t do it,” he said, citing the example of Japanese advanced wheelchairs having 100 kilogram weight limits. […] South Korea could be a threat [to Japan’s lead in robotics] if the chaebol opened up [and shared technologies], but I don’t see it happening. The U.S. will come in and disrupt things; they’ll cause chaos in a particular market and then run away.
  4. A World Without Work (The Atlantic) — In 1962, President John F. Kennedy said, “If men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.” […] Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5% of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications.
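The Kalman filter described in link 1 can be illustrated with a minimal one-dimensional sketch (the function name, noise settings, and readings below are all illustrative assumptions, not from the linked article):

```python
def kalman_1d(measurements, process_var=1e-4, meas_var=0.5,
              init_estimate=0.0, init_var=100.0):
    """Minimal 1D Kalman filter for a roughly constant hidden value:
    each step blends the prediction with the new noisy measurement,
    weighted by their uncertainties (the Kalman gain). The large
    initial variance lets early measurements swamp the arbitrary prior."""
    estimate, variance = init_estimate, init_var
    estimates = []
    for z in measurements:
        variance += process_var                    # predict: uncertainty grows
        gain = variance / (variance + meas_var)    # how much to trust the reading
        estimate += gain * (z - estimate)          # update toward the measurement
        variance *= (1 - gain)                     # uncertainty shrinks after update
        estimates.append(estimate)
    return estimates

# Noisy readings of a true value of 10; the filtered estimate settles near it.
readings = [9.8, 10.3, 9.9, 10.1, 10.2, 9.7, 10.0, 10.1]
print(round(kalman_1d(readings)[-1], 1))
```

Each estimate combines every measurement seen so far, which is why it tends to beat any single noisy reading, exactly the property the excerpt highlights.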
Four short links: 30 April 2015

Managing Complex Data Projects, Graphical Linear Algebra, Consistent Hashing, and NoTCP Manifesto

  1. More Tools for Managing and Reproducing Complex Data Projects (Ben Lorica) — As I survey the landscape, the types of tools remain the same, but interfaces continue to improve, and domain specific languages (DSLs) are starting to appear in the context of data projects. One interesting trend is that popular user interface models are being adapted to different sets of data professionals (e.g. workflow tools for business users).
  2. Graphical Linear Algebra — or “Graphical The-Subject-That-Kicked-Nat’s-Butt” as I read it.
  3. Consistent Hashing: A Guide and Go Implementation — easy-to-follow article (and source).
  4. NoTCP Manifesto — a nice summary of the reasons to build custom protocols over UDP, masquerading as church-nailed heresy. Today’s heresy is just the larval stage of tomorrow’s constricting orthodoxy.
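The linked consistent-hashing implementation in link 3 is in Go, but the core idea fits in a short sketch; the class name and parameters below are illustrative, not taken from that article:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: a key maps to the first node position at or
    after its hash, wrapping around. Virtual nodes (replicas) smooth out the
    distribution of keys across nodes."""
    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas
        self._keys = []   # sorted ring positions
        self._ring = {}   # ring position -> node name
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}:{i}")
            bisect.insort(self._keys, h)
            self._ring[h] = node

    def remove(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}:{i}")
            self._keys.remove(h)
            del self._ring[h]

    def get(self, key):
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[self._keys[idx]]

ring = ConsistentHashRing(["a", "b", "c"])
before = {k: ring.get(k) for k in map(str, range(1000))}
ring.remove("c")
moved = sum(before[k] != ring.get(k) for k in before)
# Only keys that lived on "c" remap (roughly a third), not all 1000.
print(moved)
```

The payoff is the last few lines: removing a node disturbs only the keys that were on it, which is what makes the technique attractive for caches and sharded stores.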