How Microsoft’s 1 Percenters Balance Basic Research with Short-Term Success

When Microsoft launched its research labs in 1991, the personal computer was just beginning to blossom into a worldwide phenomenon, thanks in no small part to Windows. The company’s head count had swelled above 8,000 employees, global sales were about $1.8 billion and its biggest battleground was the desktop.

Fast-forward to 2014, and the era that spawned Microsoft Research seems quaint by comparison. Microsoft now sells more than $77 billion in products and services annually, boasts an international workforce of 99,000 and has poured its considerable resources into dozens of different technologies (tablets, smartphones, video game systems and cloud storage, to name a few) with varying degrees of success. Microsoft is also searching for a new chief executive for the first time in nearly 14 years, someone who can help restore at least some of the company’s former luster through skillful management and, perhaps more important, through the ability to develop groundbreaking new technologies.

Microsoft Research’s role in the latter is paramount. The organization’s 1,100 researchers across 13 labs around the world (a 14th opens next summer in Brazil) are working on a broad swath of projects that cut across several disciplines, ranging from basic research into software algorithms and computer science theory to more pragmatic examinations of how machine-learning and speech-recognition technologies can improve Windows Phone and Xbox.

Peter Lee’s job is to strike a balance between fundamental engineering that may someday transform the foundation of computer science and the more incremental advances that keep his company competitive. In July, Microsoft tapped Lee to lead Microsoft Research, after nearly three years as managing director of the Microsoft Research lab in Redmond, Wash. Lee, a former Defense Advanced Research Projects Agency (DARPA) scientist, spoke with Scientific American about Microsoft’s need to advance the state of the art, the value of basic research that may never directly add to the bottom line, and the looming management shakeup.

[An edited transcript of the interview follows.]

To what extent does Microsoft Research’s work find its way to the Microsoft technologies that so many people use?
First, I’d like to point out that while our role in product development is important, it’s not the reason that Microsoft Research exists. In fact, if there is any shred of concern I have, it’s that all of our researchers are perhaps too devoted to helping Microsoft win in the market today. They are aware that Microsoft isn’t the leader in a lot of areas, but we don’t want to lose sight of the fact that our group should see beyond the horizon, not just the horizon itself.

Having said that, I would point to several research areas that are key to Microsoft’s future. Machine learning, in particular an area called deep learning, is perhaps Microsoft Research’s largest investment area. When you use Windows 8, you’ll notice that as you tap on the same tiles over time, the apps launched by those tiles begin to load faster. That is because there’s machine learning built into Windows 8 that learns from your tendencies and predicts which tiles you’ll tap next. Bing also has machine-learning capabilities: search for "pavlova," and the search engine figures out whether you’re talking about cakes or ballet.
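To make the tile example concrete, here is a minimal sketch of usage-based prediction. It assumes nothing about Windows 8’s actual implementation: the TilePredictor class, its method names and the simple frequency ranking are hypothetical, chosen only to illustrate the idea of learning a user’s tendencies and pre-loading the apps they are most likely to launch next.

```python
# Hypothetical sketch of usage-based tile prediction (not Microsoft's code):
# rank Start-screen tiles by how often the user has tapped them, so a shell
# could pre-load the most likely next apps.
from collections import Counter


class TilePredictor:
    def __init__(self) -> None:
        self.tap_counts = Counter()

    def record_tap(self, tile_id: str) -> None:
        """Update the model each time the user launches an app tile."""
        self.tap_counts[tile_id] += 1

    def predict(self, k: int = 3) -> list[str]:
        """Return the k tiles the user is most likely to tap next."""
        return [tile for tile, _ in self.tap_counts.most_common(k)]


if __name__ == "__main__":
    predictor = TilePredictor()
    for tile in ["mail", "news", "mail", "weather", "mail", "news"]:
        predictor.record_tap(tile)
    print(predictor.predict(2))  # ['mail', 'news'] -> candidates to pre-load
```

A real system would weigh recency, time of day and other context rather than raw counts, but the frequency model is enough to show why repeatedly tapped tiles can be made to launch faster.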

What have been the biggest challenges to developing machine learning?
Around 2010 we discovered that layered or deep convolutional neural networks could help computers learn to recognize human speech from very, very large amounts of training data. Before 2010 if you wanted to train a speech-recognition system, you could give it a few hundred hours of speech data, and it would start to recognize certain spoken information. But if you gave it too much data, it would start to interpret sounds in a way that was too specific to the training data and essentially stop learning. In fact, the performance would start to degrade. Deep neural networks overcome these limitations, allowing computers to keep learning as they are exposed to more data. One reason is that deep neural networks learn well even when the data they are trained on are noisy or distorted; the injection of noise during training helps to avoid the overfitting problems we’ve struggled with in the past.
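As an illustration of the noise-injection idea Lee describes (not of Microsoft’s speech system), here is a minimal NumPy sketch: a small classifier is trained by gradient descent while random noise is added to its inputs on every pass, the kind of corruption that acts as a regularizer and discourages the model from memorizing exact training examples. The synthetic data, the logistic model and the noise_std parameter are all illustrative assumptions.

```python
# Minimal sketch of input-noise injection as a regularizer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for acoustic feature vectors.
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + 0.5 * rng.normal(size=1000) > 0).astype(float)

w = np.zeros(20)
lr = 0.1
noise_std = 0.3  # strength of the injected input noise (a tuning knob)

for epoch in range(200):
    # Noise injection: each pass sees a slightly different, corrupted version
    # of the same inputs, discouraging overfitting to the exact samples.
    X_noisy = X + noise_std * rng.normal(size=X.shape)
    p = 1.0 / (1.0 + np.exp(-(X_noisy @ w)))   # sigmoid predictions
    grad = X_noisy.T @ (p - y) / len(y)        # logistic-loss gradient
    w -= lr * grad

train_acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == y)
print(f"training accuracy: {train_acc:.3f}")
```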

What is the secret to keeping a machine-learning system from being overwhelmed by training data?
I wish I could answer that question. It’s somewhat mysterious to us now. And that’s another reason why basic research is so important. In the specific case of speech recognition, there was a period of about 10 or 11 years where the performance of practical speech-recognition systems really didn’t improve at all. That’s what makes the recent big improvements we’ve made all the more remarkable.
