"teams" entries

Four short links: 26 February 2016

High-Performing Teams, Location Recognition, Assessing Computational Thinking, and Values in Practice

  1. What Google Learned From Its Quest to Build the Perfect Team (NY Times) — As the researchers studied the groups, however, they noticed two behaviors that all the good teams generally shared. First, on the good teams, members spoke in roughly the same proportion […] Second, the good teams all had high “average social sensitivity” — a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions, and other nonverbal cues.
  2. Photo Geolocation with Convolutional Neural Networks (arXiv) — 377MB gets you a neural net, trained on geotagged Web images, that can suggest the location of an image. From MIT TR’s coverage: To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6% of the images at street-level accuracy and 10.1% at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4% of the photos and the continent in 48.0% of them. (A sketch of how such distance-threshold accuracy can be computed follows this list.)
  3. Assessing the Development of Computational Thinking (Harvard) — we have relied primarily on three approaches: (1) artifact-based interviews, (2) design scenarios, and (3) learner documentation. (via EdSurge)
  4. Values in Practice (Camille Fournier) — At some point, I realized there was a pattern. The people in the company who were beloved by all, happiest in their jobs, and arguably most productive, were the people who showed up for all of these values. They may not have been the people who went to the best schools, or who wrote the most beautiful code; in fact, they often weren’t the “on-paper” superstars. But when it came to the job, they were great, highly in-demand, and usually promoted quickly. They didn’t all look the same, they didn’t all work in the same team or have the same skill set. Their only common thread was that they didn’t have to stretch too much to live the company values because the company values overlapped with their own personal values.
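
To make the quoted numbers concrete: a standard way to score a geolocation model is to compute the great-circle (haversine) distance between the predicted and true coordinates, then count how many predictions fall within a distance threshold for each granularity. The Python sketch below is a minimal illustration under assumed thresholds and hypothetical function names; it is not PlaNet’s actual evaluation code.

    import math

    # Illustrative distance thresholds in kilometers. The PlaNet paper defines
    # its own granularities, so treat these numbers as assumptions.
    THRESHOLDS_KM = {"street": 1.0, "city": 25.0, "country": 750.0, "continent": 2500.0}

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between two (latitude, longitude) points."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(a))  # 6371 km = mean Earth radius

    def accuracy_by_level(predictions, truths):
        """Fraction of predicted (lat, lon) pairs within each threshold of the truth."""
        counts = dict.fromkeys(THRESHOLDS_KM, 0)
        for (plat, plon), (tlat, tlon) in zip(predictions, truths):
            d = haversine_km(plat, plon, tlat, tlon)
            for level, limit in THRESHOLDS_KM.items():
                if d <= limit:
                    counts[level] += 1
        return {level: c / len(truths) for level, c in counts.items()}

    # Example: a prediction about 3 km off counts as city-level (within 25 km)
    # but not street-level (within 1 km).
    print(accuracy_by_level([(48.8584, 2.2945)], [(48.8606, 2.3376)]))

Under this kind of scoring, “street-level” accuracy only means something relative to the cutoff you choose, which is why a paper’s exact thresholds matter when comparing systems.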

Kate Matsudaira: If You Don’t Understand People, You Don’t Understand Ops

Velocity 2013 Speaker Series

While automation is clearly making life much better for everyone who works in Operations, startup founder Kate Matsudaira (@katemats) acknowledges that “No one ever does their work in a vacuum.” You can try as much as possible to Automate All The Things, but you can’t automate trust. And trust is key to a healthy, thriving operations team (and to your own professional growth, too).

In this interview, Kate discusses some of the things she’ll be talking about at Velocity next month. Key highlights include:

  • The word “people” is pretty broad. What aspects of working with people should operations teams care about? [Discussed at 1:32]
  • Ultimately, you depend on the people around you to help get work done, especially when you need to get funding, be it externally for a startup, or internally for an infrastructure or refactoring project. The more people trust you, the more likely that is to happen. [Discussed at 3:17]
  • Cultural change takes leadership, but that leadership doesn’t have to come from the top. [Discussed at 5:00]
  • You can be ridiculously technically competent, but if you can’t communicate well, it hinders your success in the long run. [Discussed at 5:40]

This is one of a series of posts related to the upcoming Velocity Conference in Santa Clara, CA (June 18-20). We’ll be highlighting speakers in a variety of ways, from video and email interviews to posts by the speakers themselves.

Leading Indicators

In a conversation with Q Ethan McCallum (who should be credited as co-author), we wondered how to evaluate data science groups. If you’re looking at an organization’s data science group from the outside, possibly as a potential employee, what can you use to evaluate it? It’s not a simple problem under the best of conditions: you’re not an insider, so you don’t know the full story: how many projects the group has tried, which have succeeded or failed, how the group gets along with management and other departments, and all the other things you’d like to know but will never be told.

Our starting point was remote: Q told me about Tyler Brûlé’s travel writing for the Financial Times (behind a paywall, unfortunately), in which he says that a club sandwich is a good proxy for hotel quality. Walk into the restaurant and order one. A club sandwich isn’t hard to make: there’s no secret recipe or technique that will make Hotel A’s sandwich significantly better than Hotel B’s. But it’s easy to cut corners on ingredients and preparation. And if a hotel is cutting corners on its club sandwiches, it’s probably cutting corners in other places.

This reminded me of when my daughter was in first grade and we looked (briefly) at private schools. All the schools talked the same talk. But if you looked at classes, it was pretty clear that the quality of the music program was a proxy for the quality of the school. After all, it’s easy to shortchange music, and both hard and expensive to do it right. Oddly enough, using the music program as a proxy for school quality has continued to work through middle school and (public) high school. Music is the first thing to be cut when the budget gets tight; if a school has a good music program with excellent teachers, it’s probably not shortchanging the kids elsewhere.

How does this connect to data science? What are the proxies that let you evaluate a data science program from the “outside,” using the information you might be able to glean from company blogs, a job interview, or even a job posting? We came up with a few ideas:

  • Are the data scientists simply human search engines, or do they have real projects that allow them to explore and be curious? If they have management support for learning what can be learned from the organization’s data, and if management listens to what they discover, they’re accomplishing something significant. If they’re just playing Q&A with the company data, finding answers to specific questions without providing any insight, they’re not really a data science group.
  • Do the data scientists live in a silo, or are they connected with the rest of the company? In Building Data Science Teams, DJ Patil wrote about the value of seating data scientists with designers, marketers, and the entire product group, so that they don’t do their work in isolation and can bring their insights to bear on all aspects of the company.
  • When the data scientists do a study, is the outcome predetermined by management? Is it OK to say “we don’t have an answer” or to come up with a solution that management doesn’t like? Granted, you aren’t likely to be able to answer this question without insider information.
  • What do job postings look like? Does the company have a mission and know what it’s looking for, or is it asking for someone with a huge collection of skills in the hope that some of them will come in useful? The latter is a sign of data science cargo culting.
  • Does management know what its tools are for, or has it just installed Hadoop because that’s what the management magazines say to do? Can managers talk intelligently to data scientists?
  • What sort of documentation does the group produce for its projects? Like a club sandwich, documentation is easy to shortchange.
  • Is the business built around the data? Or is the data science team an add-on to an existing company? A data science group can be integrated into an older company, but you have to ask a lot more questions; you have to worry a lot more about silos and management relations than you do in a company that is built around data from the start.

Coming up with these questions was an interesting thought experiment; we don’t know whether this approach holds water, but we suspect it does. Any ideas or opinions?

The many sides to shipping a great software project

An interview with Shipping Greatness author Chris Vander Mey.

Chris Vander Mey, CEO of Scaled Recognition and author of the new O’Reilly book Shipping Greatness, lays out in this video some of the deep lessons he learned during his years working on very high-impact, high-priority projects at Google and Amazon.

Chris takes a very expansive view of project management, stressing the crucial decisions and attitudes that leaders need to take at every stage from the team’s initial mission statement through the design, coding, and testing to the ultimate launch. By merging technical, organizational, and cultural issues, he unravels some of the magic that makes projects successful.

Moneyball for software engineering, part 2

What if Billy Beane managed a software team?

A look at the "Moneyball"-style metrics and techniques managers can employ to get the most out of their software teams.

Scale your JavaScript, scale your team

The challenges of building big JavaScript apps with big teams.

"High Performance JavaScript" author Nicholas Zakas discusses the issues that pop up when you build big JavaScript apps with big teams.