Manifold Learning, Calculus & Friendship, and Other Math Links

One of the largest gatherings of mathematicians, the joint meetings of the AMS/MAA/SIAM, took place last week in San Francisco. Knowing that there would be over 6,000 pure and applied mathematicians at Moscone West, I took some time off from work and attended several sessions. Below are a few (somewhat technical) highlights. (It’s the only conference I’ve attended where the person managing the press room was also working on some equations in between helping the media.)

The Machine-Learning Bubble in Computational Medicine (Challenges in Computational Medicine and Biology)

Donald Geman gave a nice survey of the problems and mathematical techniques frequently used in computational biology. He also raised something that struck a chord with me. While computational biology has things in common with other fields (the “small n, large d” problem: small samples relative to the number of dimensions), techniques that work in fields like computer vision don’t automatically translate to biology. First, sample sizes in biology and medicine are orders of magnitude smaller than in other fields. Second, while black boxes (think SVMs or neural nets) are acceptable in other fields, biologists want accurate predictions and explanations for why/how algorithms work. Finally, it isn’t clear whether there are underlying low-dimensional structures in biological data. Taken together, these issues make Geman wonder if machine-learning’s possible role in biology and medicine has been overhyped.

Using Unlabeled Data To Identify Optimal Classifiers (A Geometric Perspective on Learning Theory and Algorithms)

Revisiting the “small n, large d” problem, Partha Niyogi gave an overview of recent geometric approaches to machine-learning†. In order to mitigate the curse of dimensionality, Niyogi and his fellow researchers exploit the tendency of (natural) data to be non-uniformly distributed. In particular, they use the shape of the data to determine optimal machine-learning classifiers. In their version of manifold learning, they assume that the space of target functions (e.g. all possible classifiers) consists of functions supported on a submanifold†† of the original high-dimensional Euclidean/feature space. One of the most interesting features of their geometric approach is their use of both labeled and unlabeled data††† to identify optimal classifiers. The traditional approaches to training classifiers require labeled data. So while one can use Mechanical Turk to increase the amount of labeled data for learning purposes, the geometric techniques outlined by Dr. Niyogi actually take advantage of any unlabeled data you may already have. Lest you think that these are purely academic/theoretical techniques, Dr. Niyogi cites a company that uses these algorithms to analyze and classify child speech patterns. With so much Data Exhaust available, I can’t help but think that techniques that can leverage unlabeled data will prove useful in many domains. (Niyogi and his collaborators have many papers on Manifold Learning, including one that describes the algorithms, and another that provides the theoretical foundations.)
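To make the labeled-plus-unlabeled idea concrete, here is a minimal sketch of graph-based semi-supervised classification. To be clear, this is not Niyogi’s algorithm (his work centers on Laplacian Eigenmaps and manifold regularization); it’s a simple label-propagation routine in the same spirit, where a similarity graph built from all the points, labeled and unlabeled alike, spreads the few known labels along the shape of the data. The function names, the Gaussian affinity, and the toy data are my own illustrative assumptions.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Gaussian (heat-kernel) affinities between all points, labeled or not."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops in the similarity graph
    return W

def propagate_labels(X, y, sigma=1.0, alpha=0.99, n_iter=200):
    """Spread the known labels along the similarity graph.

    y holds a class index for each labeled point and -1 for unlabeled points.
    Returns a predicted class for every point, unlabeled ones included.
    """
    classes = np.unique(y[y >= 0])
    # One-hot label matrix; rows for unlabeled points start at zero.
    Y = np.zeros((X.shape[0], classes.size))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    W = rbf_affinity(X, sigma)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))            # symmetrically normalized affinities
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse labels, stay anchored to the known ones
    return classes[F.argmax(axis=1)]

# Toy example: two well-separated blobs, with only one labeled point per blob.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = -np.ones(40, dtype=int)  # -1 marks unlabeled points
y[0], y[20] = 0, 1           # only two labels are observed
print(propagate_labels(X, y, sigma=0.5))
```

With only two labeled points, the predictions for the remaining 38 come entirely from the geometry of the point cloud; the alpha knob controls how strongly the propagated labels stay anchored to the observed ones.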


The Calculus of Friendship

Mathematician Steven Strogatz is known to many Radar readers for his work in network theory (“small-world networks”) with his student Duncan Watts. I went to his talk thinking it would cover recent developments in random graphs. The talk turned out to be about his recent book chronicling his long friendship with his high school math teacher. What started out as letters that talked only about calculus and math problems evolved into a deeper relationship over the last decade. The letters ranged from humorous calculus problems to moving personal correspondence. For a preview of his book, listen to the recent Radiolab segment featuring Dr. Strogatz and his teacher.

[What made his teacher into a great instructor/mentor? Dr. Strogatz mentioned a few characteristics, many of which could be repurposed into advice for business leaders and managers. Yet another reason to read his book.]

Geomathematics (Mathematics and the Geological Sciences)

Another highlight of the conference was a symposium devoted to the emerging field of geomathematics. Given that the geological sciences routinely deal with Big Data sets, developments in geomathematics are worth paying attention to.

(†) To illustrate how geometric these techniques are, Niyogi outlined versions of the Laplace-Beltrami operator, the Heat Kernel, and Homology in his short talk. I went to another interesting talk on geometric structures and discrete graphs, but from what I could gather, it was mostly theoretical in nature.

(††) Niyogi and his fellow researchers assert that “… for almost any imaginable source of meaningful high-dimensional data, the space of possible configurations occupies only a tiny portion of the total volume available. One therefore suspects that a non-linear low-dimensional manifold may yield a useful approximation to this structure.”
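In symbols (my notation, not the authors’), this manifold assumption says that although each observation is recorded as a vector in a high-dimensional ambient space, the observations concentrate on, or near, a submanifold of much lower intrinsic dimension:

```latex
% Manifold assumption: data points x_i are recorded in R^D (D large),
% but lie on or near a smooth submanifold M of intrinsic dimension d << D.
x_i \in \mathcal{M} \subset \mathbb{R}^{D}, \qquad \dim(\mathcal{M}) = d \ll D
```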

(†††) In classification problems, labeled data are ordered pairs of feature vectors and their corresponding class labels. In the geometric approach to learning classifiers, unlabeled data can be used to recover the “intrinsic geometric structure” of marginal probability density functions.
