The O’Reilly Design Podcast: Aaron Irizarry on getting and keeping a seat at the table.
Subscribe to the O’Reilly Design Podcast to explore how experience design — and experience designers — are shaping business, the Internet of Things, and other domains.
Welcome to the inaugural episode of our newly launched O’Reilly Design Podcast. In this podcast episode, I chat with Aaron Irizarry. Irizarry is the director of UX for product design at Nasdaq, co-author of “Discussing Design” with Adam Connor, and a member of the program committee for O’Reilly’s Design Conference.
Design at Nasdaq: A growing team
I first noted Nasdaq’s commitment to design when talking to Irizarry about his book and the design conference hosted by Nasdaq that Irizarry helps develop:
It’s interesting to see an organization that didn’t have a product design team as of, what — 2011, I believe. To see the need for that, bring someone in, hire them to establish a team (which is my boss, Chris), and then see the transition and the growth within the company, and how they embraced product design. We had to work a lot, and really educate and pitch in the beginning, explain to them the value of certain aspects of the job we were doing, whether that was research, usability testing, why we wanted to do more design in the browser and rapid prototyping, and things like that.
We believe we’re helping structure and build, and I think we still have work to do as a design-led organization. We recently did our Pro/Design conference in New York. Our opening speaker was the president of Nasdaq, and to hear her reference the design team’s research, and to be in marketing meetings discussing the personas that we created, and to hear the president of Nasdaq speak about these kinds of artifacts and items that we feel are crucial to design and the design process, it was a moment for us like, ‘We’re really starting to make a mark here. We’re starting to show the value of what these things are,’ not just because we want design, but because we believe that this approach to design is going to be really good for the product, and in the end, good for the business. Read more…
We need to provide people with proper access, interaction, and use of technology so that it serves their needs.
Download a free copy of “The New Design Fundamentals,” a curated collection of chapters from the O’Reilly Design library. Editor’s note: this post is an excerpt from “Tragic Design,” by Jonathan Shariat, which is included in the collection.

I love people.
I love technology and I love design, and I love the power they have to help people.
That is why when I learned they had cost a young girl her life, it hurt me deeply and I couldn’t stop thinking about it for weeks.
My wife, a nursing student, was sharing with her teacher how passionate I am about technology in health care. Her teacher rebutted, saying she thought we needed less technology in health care and shared a story that caused her to feel so strongly that way.
This is the story that inspired me to write this book and I would like to share it with you.
Jenny, as we will call her to protect the patient’s identity, was a young girl who was diagnosed with cancer. She was in and out of the hospital for a number of years and was finally discharged. A while later she relapsed and returned to be given a very strong chemotherapy medicine. This medicine is so strong and so toxic that it requires pre-hydration and post-hydration for three days with I.V. fluid.
However, after the medicine was administered, the nurses attending to the charting software, entering everything required of them and placing the appropriate orders, missed a very critical piece of information: Jenny was supposed to be given three days of I.V. hydration post-treatment. The experienced nurses made this critical error because they were too distracted trying to figure out the software they were using.
When the morning nurse came in the next day, they saw that Jenny had died of toxicity and dehydration, all because these very seasoned nurses were preoccupied trying to figure out this interface (figure 1-1). Read more…
Using topology to uncover the shape of your data: An interview with Gurjeet Singh.
Get notified when our free report, “Future of Machine Intelligence: Perspectives from Leading Practitioners,” is available for download. The following interview is one of many that will be included in the report.
As part of our ongoing series of interviews surveying the frontiers of machine intelligence, I recently interviewed Gurjeet Singh. Singh is CEO and co-founder of Ayasdi, a company that leverages machine intelligence software to automate and accelerate discovery of data insights. Author of numerous patents and publications in top mathematics and computer science journals, Singh has developed key mathematical and machine learning algorithms for topological data analysis.
- The field of topology studies the mapping of one space into another through continuous deformations.
- Machine learning algorithms produce functional mappings from an input space to an output space and lend themselves to being understood using the formalisms of topology.
- A topological approach allows you to study data sets without assuming a shape beforehand and to combine various machine learning techniques while maintaining guarantees about the underlying shape of the data.
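The construction most often used to study a data set’s shape this way is the Mapper algorithm. The sketch below is a minimal, illustrative version (not Ayasdi’s implementation; the function names and default parameters are invented for this example): it covers the range of a filter function with overlapping intervals, clusters each interval’s preimage, and links clusters that share points, so a loop-shaped point cloud yields a loop-shaped graph without any shape being assumed up front.

```python
import numpy as np

def single_linkage(pts, eps):
    """Group points whose pairwise distances chain together below eps."""
    n = len(pts)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < eps:
                parent[find(i)] = find(j)  # union the two clusters
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

def mapper_graph(points, filter_values, n_intervals=5, overlap=0.25, eps=0.3):
    """Summarize the shape of a point cloud as a graph (Mapper sketch).

    1. Cover the range of the filter with overlapping intervals.
    2. Cluster the points falling in each interval's preimage (one node each).
    3. Connect nodes whose clusters share points.
    """
    lo, hi = filter_values.min(), filter_values.max()
    length = (hi - lo) / n_intervals
    nodes = []  # each node is a set of point indices
    for i in range(n_intervals):
        start = lo + i * length - overlap * length
        end = lo + (i + 1) * length + overlap * length
        idx = np.where((filter_values >= start) & (filter_values <= end))[0]
        for cluster in single_linkage(points[idx], eps):
            nodes.append(set(idx[cluster].tolist()))
    edges = [(a, b) for a in range(len(nodes))
             for b in range(a + 1, len(nodes)) if nodes[a] & nodes[b]]
    return nodes, edges
```

Run on points sampled from a circle, with the x-coordinate as the filter, the resulting graph of nodes and edges closes into a cycle, recovering the loop in the data.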
David Beyer: Let’s get started by talking about your background and how you got to where you are today.
Gurjeet Singh: I am a mathematician and a computer scientist, originally from India. I got my start in the field at Texas Instruments, building integrated software and performing digital design. While at TI, I got to work on a project using clusters of specialized chips called Digital Signal Processors (DSPs) to solve computationally hard math problems.
As an engineer by training, I had a visceral fear of advanced math. I didn’t want to be found out as a fake, so I enrolled in the Computational Math program at Stanford. There, I was able to apply some of my DSP work to solving partial differential equations and demonstrate that a fluid dynamics researcher need not buy a supercomputer anymore; they could just employ a cluster of DSPs to run the system. I then spent some time in mechanical engineering building similar GPU-based partial differential equation solvers for mechanical systems. Finally, I worked in Andrew Ng’s lab at Stanford, building a quadruped robot and programming it to learn to walk by itself. Read more…
BioCoder 8: neuroscience, robotics, gene editing, microbiome sequence analysis, and more.
Download a free copy of the new edition of BioCoder, our newsletter covering the biological revolution.

We are thrilled to announce the eighth issue of BioCoder. This marks two years of diverse, educational, and cutting-edge content, and this issue is no exception. Highlighted in this issue are technologies and tools that span neuroscience, diagnostics, robotics, gene editing, microbiome sequence analysis, and more.
Daniel Modulevsky, Charles Cuerrier, and Andrew Pelling from Pelling Lab at the University of Ottawa discuss different types of open source biomaterials for regenerative medicine and their use of de-cellularized apple tissue to generate 3D scaffolds for cells. If you follow their tutorial, you can do it, too!
aBioBot, highlighted by co-founder Raghu Machiraju, is a device that uses visual sensing and feedback to perform encodable laboratory tasks. Machiraju argues that “progress in biotechnology will come from the use of open user interfaces and open-specification middleware to drive and operate flexible robotic platforms.” Read more…
The O'Reilly Data Show Podcast: Ben Recht on optimization, compressed sensing, and large-scale machine learning pipelines.
As we put the finishing touches on what promises to be another outstanding Hardcore Data Science Day at Strata + Hadoop World in New York, I sat down with my co-organizer Ben Recht for the latest episode of the O’Reilly Data Show Podcast. Recht is a UC Berkeley faculty member and a member of AMPLab, and his research spans many areas of interest to data scientists, including optimization, compressed sensing, statistics, and machine learning.
At the 2014 Strata + Hadoop World in NYC, Recht gave an overview of a nascent AMPLab research initiative into machine learning pipelines. The research team behind the project recently released an alpha version of a new software framework called KeystoneML, which gives developers a chance to test out some of the ideas that Recht outlined in his talk last year. We devoted a portion of this Data Show episode to machine learning pipelines in general, and a discussion of KeystoneML in particular.
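KeystoneML itself is a Scala framework, so rather than guess at its API, here is a toy Python sketch of the general pipeline idea under discussion: chaining feature transformers and a final estimator into one object that can be fit and applied end to end. All class names here are invented for illustration.

```python
import numpy as np

class Scaler:
    """Standardize each feature to zero mean and unit variance."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-12  # avoid division by zero
        return self
    def transform(self, X):
        return (X - self.mean) / self.std

class NearestMean:
    """Classify a point by its distance to each class's mean vector."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means[None, :, :], axis=2)
        return self.classes[d.argmin(axis=1)]

class Pipeline:
    """Fit the transformer stages in order, then the final estimator."""
    def __init__(self, stages):
        self.stages = stages
    def fit(self, X, y):
        for stage in self.stages[:-1]:
            X = stage.fit(X).transform(X)
        self.stages[-1].fit(X, y)
        return self
    def predict(self, X):
        for stage in self.stages[:-1]:
            X = stage.transform(X)
        return self.stages[-1].predict(X)
```

The appeal of the abstraction is that swapping a stage — say, an image featurizer for the `Scaler` — leaves the rest of the pipeline untouched, which is the property that lets a framework ship reusable sample pipelines for new data types.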
Since its release in May, I’ve had a chance to play around with KeystoneML, and while it’s quite new, there are several things I already like about it:
KeystoneML opens up new data types
Most data scientists don’t normally work with images or audio files. KeystoneML ships with easy-to-use sample pipelines for computer vision and speech. As more data loaders get created, KeystoneML will enable data scientists to leverage many more new data types and tackle new problems. Read more…