AI Advances

  • How AI Transforms Business – A New Microsoft Series
    Microsoft is privileged to work with leading-edge customers and partners who are taking the power of the cloud and artificial intelligence and applying it to their businesses in novel ways. Our new series, How AI Transforms Business, features insights from a selection of these customers and partners. Join us in these conversations and see how your company and customers may be able to benefit from these solutions and insights. All Episodes 1. How Can Autonomous Drones Help the Energy and Utilities Industry? Headquartered in Norway, eSmart Systems develops digital intelligence for the energy industry and for smart communities. When it comes to next-generation grid management systems or efficiently running operations for the connected cities of the future, the company is at the forefront of digital transformation. In a conversation with Joseph Sirosh, CTO of AI in Microsoft’s Worldwide Commercial Business, Davide Roverso, Chief Analytics Officer at eSmart Systems, talks about interesting new AI-enabled scenarios in the world of energy, utilities and physical infrastructure. Read more »
  • How Can Autonomous Drones Help the Energy and Utilities Industry?
    Welcome to How AI Transforms Business, a new series featuring insights from conversations with Microsoft partners who are combining deep industry knowledge with AI in novel ways and, in doing so, creating leading-edge intelligent business solutions for our digital age. Our first episode features eSmart Systems, which is in the business of creating solutions to accelerate global progress towards sustainable societies. Headquartered in the heart of Østfold county, Norway, eSmart Systems develops digital intelligence for the energy industry and for smart communities. The company is strategically co-located with the NCE Smart Energy Markets cluster and the Østfold University College and thrives in a very innovative environment. When it comes to next-generation grid management systems, efficiently running operations for the connected cities of the future, or driving citizen engagement, the company is at the forefront of digital transformation. We recently caught up with Davide Roverso, Chief Analytics Officer at eSmart Systems. Davide has many interesting things to share about where and how AI is being applied in the infrastructure industry. Among other things, he talks about how utility companies are forced to fly manned helicopter missions over live electrical power lines today, just to perform routine inspections, and how – using AI – it is possible to have safer and more effective inspections that do not expose humans to this sort of risk. Davide Roverso, Chief Analytics Officer, eSmart Systems, in conversation with Joseph Sirosh, Chief Technology Officer of Artificial Intelligence in Microsoft’s Worldwide Commercial Business. Video and podcast versions of this session are available via the links below. Alternatively, just continue reading a transcript of their conversation below.
Podcast / audio versions are available via the iOS Podcast app, Google Play, and Spotify. Joseph Sirosh: Davide, would you tell us a little about eSmart Systems and yourself? Davide Roverso: eSmart Systems is a small Norwegian startup that was established in 2013. The main area in which we work is building SaaS for the energy and utilities sector. So basically, it was founded by a group of people who had been working together for over 20 years in the energy and utilities space. They were first working a lot on power exchange software, and delivered power exchange to California, among others. And then, around 2012, they went on a kind of exploration trip to the US, to Silicon Valley and that area, and they visited Google and Amazon and Microsoft and Cloudera and tried to find out what the biggest new trends were. And they came back home with a clear idea that they had to focus on cloud and AI. And of course, they used that in their core business, and that was power and utilities. So that’s how eSmart Systems started. JS: And so, you have an analytics team, or now is it an AI team? DR: We have 10 data scientists, so more than 10% of the company is data scientists, so we have… Read more »
  • Machine Reading at Scale – Transfer Learning for Large Text Corpuses
    This post is authored by Anusua Trivedi, Senior Data Scientist at Microsoft. This post builds on the MRC Blog where we discussed how machine reading comprehension (MRC) can help us “transfer learn” any text. In this post, we introduce the notion of and the need for machine reading at scale, and for transfer learning on large text corpuses. Introduction Machine reading for question answering has become an important testbed for evaluating how well computer systems understand human language. It is also proving to be a crucial technology for applications such as search engines and dialog systems. The research community has recently created a multitude of large-scale datasets over text sources including: Wikipedia (WikiReading, SQuAD, WikiHop). News and newsworthy articles (CNN/Daily Mail, NewsQA, RACE). Fictional stories (MCTest, CBT, NarrativeQA). General web sources (MS MARCO, TriviaQA, SearchQA). These new datasets have, in turn, inspired an even wider array of new question answering systems. In the MRC blog post, we trained and tested different MRC algorithms on these large datasets. We were able to successfully transfer learn smaller text excerpts using these pretrained MRC algorithms. However, when we tried creating a QA system for the Gutenberg book corpus (English only) using these pretrained MRC models, the algorithms failed. MRC usually works on text excerpts or documents but fails for larger text corpuses. This leads us to a newer concept – machine reading at scale (MRS). Building machines that can perform machine reading comprehension at scale would be of great interest for enterprises. Machine Reading at Scale (MRS) Instead of focusing on only smaller text excerpts, Danqi Chen et al. came up with a solution to a much bigger problem, which is machine reading at scale.
To accomplish the task of reading Wikipedia to answer open-domain questions, they combined a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. MRC is about answering a query about a given context paragraph. MRC algorithms typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an open-domain QA system. In sharp contrast, methods that use information retrieval over documents must employ search as an integral part of the solution. MRS strikes a balance between the two approaches: it maintains the challenge of machine comprehension, which requires deep understanding of text, while keeping the realistic constraint of searching over a large open resource. Why is MRS Important for Enterprises? The adoption of enterprise chatbots has been increasing rapidly in recent times. To further advance these scenarios, research and industry have turned toward conversational AI approaches, especially in use cases such as banking, insurance and telecommunications, where there are large corpuses of text logs involved. One of the major challenges for conversational AI is to understand complex sentences of human speech in the same way humans do. The challenge becomes more complex when we need to do this… Read more »
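The two-stage design described above – a lightweight retriever that narrows a huge corpus down to a few candidate documents, followed by a neural reader – can be sketched compactly. The following is a toy, pure-Python TF-IDF retriever for illustration only; the actual DrQA system additionally uses bigram hashing, and the reader stage (a recurrent neural network) is not shown here.

```python
# Toy "retriever" stage of machine reading at scale: rank documents for a
# query with TF-IDF cosine similarity and return the best match, which a
# reader model would then scan for the answer span.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def build_df(tokenized_docs):
    """Document frequency of each term across the corpus."""
    df = Counter()
    for toks in tokenized_docs:
        df.update(set(toks))
    return df

def tfidf(toks, df, n_docs):
    """Sparse TF-IDF vector as a {term: weight} dict."""
    tf = Counter(toks)
    return {t: c * math.log((1 + n_docs) / (1 + df[t])) for t, c in tf.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    tokenized = [tokenize(d) for d in docs]
    df = build_df(tokenized)
    doc_vecs = [tfidf(toks, df, len(docs)) for toks in tokenized]
    q_vec = tfidf(tokenize(query), df, len(docs))
    return max(zip(doc_vecs, docs), key=lambda p: cosine(q_vec, p[0]))[1]
```

In an MRS pipeline, the document returned here would be split into paragraphs and fed to the comprehension model, which extracts the answer span.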
  • Power Bat – How Spektacom is Powering the Game of Cricket with Microsoft AI
    A special guest post by cricket legend and founder of Spektacom Technologies, Anil Kumble. This post was co-authored by Tara Shankar Jana, Senior Technical Product Marketing Manager at Microsoft. While cricket is an old sport with a dedicated following of fans across the globe, the game has been revolutionized in the 21st century with the advent of the Twenty20 format. This shorter format has proven to be very popular, resulting in a massive growth of interest in the game and a big fan following worldwide. This has, in turn, led to increased competitiveness and the desire on the part of both professionals and amateurs alike to take their game quality to the next level. As the popularity of the game has increased, so have innovative methods of improving batting techniques. This has resulted in a need for data-driven assistance for players, information that will allow them to digitally assess their quality of game. Spektacom was born from the idea of using non-intrusive sensor technology to harness data from “power bats” and using that data to power insights driven by the cloud and artificial intelligence. Before we highlight how Spektacom built this solution using Microsoft AI, there are a couple of important questions we must address first. What Differentiation Can Technology and AI Create in the Sports Industry? In the last several years, the industry has realized the value of data across almost every sport and found ways to collect and organize that data. Even though data collection has become an essential part of the sporting industry, it is insufficient to drive future success unless coupled with the ability to derive intelligent insights that can then be put to use. 
Data, when harnessed strategically with the help of intelligent technologies such as machine learning and predictive analytics, can help teams, leagues and governing bodies transform their sports through better insights and decision-making in three critical areas of the business, namely: Fan engagement and management. Team and player performance. Team operations and logistics. For many professional sports teams and governing bodies, the factors that led to past success will not necessarily translate into future victories. The things that gave teams a competitive advantage in the past have now become table stakes. People are consuming sports in new ways and fans have come to expect highly personalized experiences, being able to track the sports content they want, whenever and wherever they want it. AI can help transform the way sports are played, teams are managed, and sports businesses are run. New capabilities such as machine learning, chatbots and more are helping improve even very traditional sports in unexpected new areas. Spektacom’s Solution What Impact will Spektacom’s Technology Have on the Game? Spektacom’s technology will help in a few critical areas of the game. It will: Enhance the fan experience and fan engagement with the sport. Enable broadcasters to use insights for detailed player analysis. Allow grassroots players and aspiring cricketers to increase their technical proficiency. Allow coaches to provide more focused guidance to… Read more »
  • Deep Learning Without Labels
    Announcing new open source contributions to the Apache Spark community for creating deep, distributed, object detectors – without a single human-generated label. This post is authored by members of the Microsoft ML for Apache Spark Team – Mark Hamilton, Minsoo Thigpen, Abhiram Eswaran, Ari Green, Courtney Cochrane, Janhavi Suresh Mahajan, Karthik Rajendran, Sudarshan Raghunathan, and Anand Raman. In this day and age, if data is the new oil, labelled data is the new gold. Here at Microsoft, we often spend a lot of our time thinking about “Big Data” issues, because these are the easiest to solve with deep learning. However, we often overlook the much more ubiquitous and difficult problems that have little to no data to train with. In this work we will show how, even without any data, one can create an object detector for almost anything found on the web. This effectively bypasses the costly and resource-intensive processes of curating datasets and hiring human labelers, allowing you to jump directly to intelligent models for classification and object detection, completely in silico. We apply this technique to help monitor and protect the endangered population of snow leopards. This week at the Spark + AI Summit in Europe, we are excited to share with the community the following additions to the Microsoft ML for Apache Spark library, which make this workflow easy to replicate at massive scale using Apache Spark and Azure Databricks: Bing on Spark: Makes it easier to build applications on Spark using Bing search. LIME on Spark: Makes it easier to deeply understand the output of Convolutional Neural Network (CNN) models trained using SparkML. High-performance Spark Serving: Innovations that enable ultra-fast, low-latency serving using Spark.
We illustrate how to use these capabilities through the Snow Leopard Conservation use case, where machine learning is a key ingredient towards building powerful image classification models for identifying snow leopards from images. Use Case – The Challenges of Snow Leopard Conservation Snow leopards are facing a crisis. Their numbers are dwindling as a result of poaching and mining, yet little is known about how to best protect them. Part of the challenge is that there are only about four thousand to seven thousand individual animals within a potential 1.5 million square kilometer range. In addition, snow leopard territory is in some of the most remote, rugged mountain ranges of central Asia, making it nearly impossible to get there without backpacking equipment. Figure 1: Our team’s second flat tire on the way to snow leopard territory. To truly understand the snow leopard and what influences its survival rates, we need more data. To this end, we have teamed up with the Snow Leopard Trust to help them gather and understand snow leopard data. “Since visual surveying is not an option, biologists deploy motion-sensing cameras in snow leopard habitats that capture images of snow leopards, prey, livestock, and anything else that moves,” explains Rhetick Sengupta, Board President of Snow Leopard Trust. “They then need to sort through the… Read more »
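The core trick behind training "without a single human-generated label" is to treat a web image search engine as the labeler: the query string becomes the class label for every image it returns. The sketch below is a hypothetical stand-in for the Bing on Spark workflow, not the MMLSpark API; `search_fn` is a placeholder for whatever image search client you use.

```python
# Build a weakly labeled image dataset from search results alone.
# `search_fn(label, count)` stands in for an image search client (e.g. Bing);
# every URL it returns is labeled with the query that produced it.

def build_weakly_labeled_dataset(classes, search_fn, per_class=100):
    """Return deduplicated (url, label) pairs, using search queries as labels."""
    dataset, seen = [], set()
    for label in classes:
        for url in search_fn(label, count=per_class):
            if url not in seen:  # the same image can match several queries
                seen.add(url)
                dataset.append((url, label))
    return dataset
```

In the snow leopard scenario, `classes` might be something like `["snow leopard", "mountain goat"]`; the resulting pairs feed a standard image classifier, and a tool such as LIME can then be used to check that the model attends to the animal rather than the background.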
  • “Snip Insights” – An Open Source Cross-Platform AI Tool for Intelligent Screen Capture
    This post is authored by Tara Shankar Jana, Senior Technical Product Marketing Manager at Microsoft. What if we could infuse AI into the everyday tools we use, to delight everyday users? With just a little bit of creativity – and the power of the Microsoft AI platform behind us – it’s now become easier than ever to create AI-enabled apps that can wow users. Introducing Snip Insights! An open source cross-platform AI tool for intelligent screen capture, Snip Insights is a step change in terms of how users can generate insights from their screen captures. The initial prototype of Snip Insights, built for Windows OS and released at Microsoft Build 2018 in May, was created by Microsoft Garage interns based out of Vancouver, Canada. Our team at Microsoft AI Lab, in collaboration with the Microsoft AI CTO team, took Snip Insights to the next level by giving the tool an intuitive new user interface, adding cross-platform support (for MacOS, Linux, and Windows), and offering free download and usage under the MSA license. Snip Insights taps into Microsoft Azure Cognitive Services APIs and helps increase user productivity by automatically providing them with intelligent insights on their screen captures. Solution Overview Snip Insights taps into cloud AI services and – depending on the image that was screen-captured – can convert it into translated text, automatically detect and tag images, and provide smart image suggestions that improve the user workflow. This simple act of combining a familiar everyday desktop tool with Azure Cognitive Services has helped us create a one-stop shop for image insights. For instance, imagine that you’ve scanned a textbook or work report. Rather than having to manually type out the information in it, snipping it will now provide you with editable text, thanks to the power of OCR. 
Or perhaps you’re scrolling through your social media feed and come across someone wearing a cool pair of shoes – you can now snip that to find out where to purchase them. Snip Insights can even help you identify famous people and popular landmarks. In the past, you would have to take the screen shot, save the picture, upload it to an image search engine, and then draw your conclusions and insights from there. This is so much smarter, isn’t it? Key Capabilities Celebrity Search: Snip a celebrity image and the tool will provide you with relevant information about them. Object Detection and Bing Visual Search: You dig that T-shirt your friend is wearing in their latest social media post and want to know where you can buy it from. No problem! Just use Snip Insights and you can see matching product images and where to buy them from – all in a matter of seconds! OCR, Language Translation and Cross-Platform Support: You find a quotation or phrase in English and wish to convert that to French or another language. Just use Snip Insights and you can do so effortlessly. What’s more, the tool is free and works on Windows, Linux and MacOS,… Read more »
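Under the hood, the OCR flow described above is essentially: POST the captured image bytes to a Cognitive Services endpoint, then flatten the JSON response into editable text. The sketch below targets the Computer Vision OCR REST endpoint; the path, version, and header names follow the public API documentation, but treat them as assumptions to verify against your own subscription, and the region and key shown are placeholders.

```python
# Sketch of the OCR round trip behind "snip -> editable text".
# build_ocr_request prepares the HTTP call; extract_text flattens the
# regions -> lines -> words JSON structure the OCR endpoint returns.
from urllib import request

def build_ocr_request(region, key, image_bytes, language="unk"):
    """Prepare (but do not send) an OCR request for a screen capture."""
    url = (f"https://{region}.api.cognitive.microsoft.com"
           f"/vision/v3.2/ocr?language={language}&detectOrientation=true")
    headers = {
        "Ocp-Apim-Subscription-Key": key,            # your Cognitive Services key
        "Content-Type": "application/octet-stream",  # raw image bytes in the body
    }
    return request.Request(url, data=image_bytes, headers=headers, method="POST")

def extract_text(ocr_json):
    """Flatten the nested OCR response into newline-separated lines of text."""
    lines = []
    for region in ocr_json.get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(w["text"] for w in line.get("words", [])))
    return "\n".join(lines)
```

Sending the prepared request with `urllib.request.urlopen` and passing the parsed JSON body to `extract_text` yields the plain text that a tool like Snip Insights can drop into the clipboard.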
  • Can AI Generate Programs to Help Automate Busy Work?
    By Joseph Sirosh, Corporate Vice President and CTO of AI, and Sumit Gulwani, Partner Research Manager, at Microsoft. There are an estimated 250 million “knowledge workers” in the world, a term that encompasses anybody engaged in professional, technical or managerial occupations. These are individuals who, for the most part, perform non-routine work that requires handling information and exercising intellect and judgement. We, the authors of this blog post, count ourselves among them. So are most of you reading this post, regardless of whether you’re a developer, data scientist, business analyst or manager. Although a majority of knowledge work tends to be non-routine, there are, nevertheless, many situations in which we knowledge workers find ourselves doing tedious, repetitive tasks as part of our day jobs, especially tasks that involve manipulating data. In this blog post, we take a look at Microsoft PROSE, an AI technology that can automatically produce software code snippets at just the right time and in just the right situations, to help knowledge workers automate routine tasks that involve data manipulation. These are generally tasks that most users would otherwise find exceedingly tedious or too time-consuming to even contemplate. Details of Microsoft PROSE can be found on GitHub here: https://microsoft.github.io/prose/. Examples of Tedious Everyday Knowledge Worker Tasks Let’s take a couple of examples from the familiar world of spreadsheets to motivate this problem. Figures 1a (above), 1b (below): A couple of examples of “data cleaning” tasks, and how Excel “Flash Fill” saves the user a ton of tedious manual data entry. Look at the task being performed by the user in the Excel screen in Figure 1a above. If you look at the text the user is entering in cell B2, it appears they have modified the data in the corresponding column A to fit a certain desired format for phone numbers.
You can also see them starting to attempt an identical transformation manually in the next cell below, i.e. cell B3. Similarly, in cell E2 in Figure 1b above, it seems like the user is transforming the first and last names fields available in columns C and D, changing them into a format with just the last name followed by comma and capitalized first initial. They next attempt to accomplish an identical transformation, manually, in cell E3 which is right below it. Excel recognizes that the user-entered data in cells B2 and B3 represents their desired “output” (i.e. for a certain format of telephone numbers) and that it corresponds to the “input” data available in column A. Similarly, Excel recognizes that the user-entered data in cells E2 and E3 represents a transformed output of the corresponding input data present in columns C and D. Having recognized the desired transformation pattern, Excel is able to display the [likely] desired user output – shown in gray font in the images above – in all the cells of columns B and E, in these two examples. Regular Excel users among you will readily recognize this as… Read more »
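What Excel is doing in the scenario above is a form of program synthesis, or programming by example: search a space of string-transformation programs for one consistent with the user's input→output examples, then run it on the remaining rows. PROSE searches an enormous domain-specific language; the toy below makes the same point with just four hand-written candidate programs (all names here are illustrative, not PROSE APIs).

```python
# Toy programming-by-example: enumerate a tiny DSL of string transformations,
# keep the first program consistent with the user's (input, output) examples,
# then apply it to the rest of the column, Flash Fill-style.

CANDIDATES = {
    "identity": lambda s: s,
    "upper": lambda s: s.upper(),
    "digits_only": lambda s: "".join(ch for ch in s if ch.isdigit()),
    "last_comma_first_initial": lambda s: (
        s.split()[-1] + ", " + s.split()[0][0].upper() + "."
        if len(s.split()) >= 2 else s
    ),
}

def synthesize(examples):
    """Return the first candidate program matching all (input, output) pairs."""
    for name, prog in CANDIDATES.items():
        if all(prog(i) == o for i, o in examples):
            return name, prog
    return None, None

def flash_fill(examples, column):
    """Fill the rest of a column using the program learned from examples."""
    _, prog = synthesize(examples)
    return [prog(cell) for cell in column] if prog else column
```

For the phone-number task, a single example like `("(425) 555-1212", "4255551212")` selects the digits-only program; for the names task, `("Anil Kumble", "Kumble, A.")` selects the last-name-plus-initial program. The real system's power comes from synthesizing programs it has never seen, rather than picking from a fixed menu.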
  • This New [AI] Software Constantly Improves – and that Makes all the Difference
    Based on a recent conversation between Joseph Sirosh, CTO for AI at Microsoft, and Roger Magoulas, VP of Radar at O’Reilly Media. Link to video recording below. Joseph and Roger had an interesting conversation at the recently concluded O’Reilly AI Conference in San Francisco, where Joseph delivered a keynote talk on Connected Arms. Their discussion initially focused on a new low-cost 3D-printed prosthetic arm that can “see” and which connects to cloud AI services to generate customized behaviors, such as the different types of grips needed to grasp nearby objects. But the conversation soon pivoted to the unlimited set of possibilities that open up when devices such as this are embedded with low-cost sensors, take advantage of cloud connectivity and sophisticated cloud services such as AI, and link to other datasets and other things in the world around them. True digital transformation is not about running a neural network or just about AI, as Joseph observes. It is about the ability to tap into software running as a service in the cloud, with the connectivity and global access that it brings. That can endow unexpected and almost magical new powers to ordinary everyday things. Joseph draws a parallel between this gadget and the digital transformation that every company and every piece of software is going through. Eventually, nearly everything of value in this world will be backed by a cloud service and will rely on similar connectivity and the ability to pool data to synthesize new behaviors – behaviors that are learned in the cloud and which can be tailored to each individual or situation. That, along with the ability to improve continuously, is what sets apart this current wave of digital disruption. Joseph concludes with the latter observation, i.e.
that the key differentiator of this AI-powered platform of today is that – while traditional software does not improve (on its own accord) – this new software constantly improves, and that makes all the difference.  You can watch their full interview below:   AI / ML Blog Team Read more »
  • AI-Based Virtual Tutors – The Future of Education?
    This post is co-authored by Chun Ming Chin, Technical Program Manager, and Max Kaznady, Senior Data Scientist, of Microsoft, with Luyi Huang, Nicholas Kao and James Tayali, students at the University of California, Berkeley. This blog post is about the UC Berkeley Virtual Tutor project and the speech recognition technologies that were tested as part of that effort. We share best practices in machine learning and artificial intelligence techniques for selecting models and engineering training data for speech and image recognition. These speech recognition models, which are integrated with immersive games, are currently being tested at middle schools in California. Context The University of California, Berkeley has a new program, founded by alum and philanthropist Coleman Fung, called the Fung Fellowship. In this program, students develop technology solutions to address education challenges, such as enabling underserved children to help themselves in their learning. The solution involves building a Virtual Tutor that listens to what children say and interacts with them when playing educational games. The games were developed by Blue Goji, a technology company also founded by Coleman. This work is being done in collaboration with the Partnership for a Healthier America, a nonprofit organization chaired by Michelle Obama. GoWings Safari, a safari-themed educational game, is enabled with a Virtual Tutor that interacts with the user. One of the students working on the project is James Tayali, a first-generation UC Berkeley graduate from Malawi. James said: “This safari game is important for kids who grow up in environments that expose them to childhood trauma and other negative experiences. Such kids struggle to pay attention and excel academically. Combining the educational experience with interactive, immersive games can improve their learning focus.” As an orphan from Malawi who struggled to focus in school, James can personally relate to this.
James had to juggle family issues and work part-time jobs to support himself. Despite humble beginnings, James worked hard and attended UC Berkeley with scholarship support from the MasterCard Foundation. He is now paying it forward to the next generation of children. “This project can help children who share similar stories as me by helping them to let go of their past traumatic experiences, focus on their present education and give them hope for their future,” James added. James Tayali (left), UC Berkeley Public Health major, class of 2017 alum, and Coleman Fung (right), posing with the safari game shown on the monitor screen. The fellowship program was taught by Chun Ming, a Microsoft Search and Artificial Intelligence program manager who is also a UC Berkeley alum. He also advised the team that built the Virtual Tutor, which includes James Tayali, who majored in Public Health and served as the team’s product designer; Luyi Huang, an Electrical Engineering & Computer Science (EECS) student who led the programming tasks; and Nicholas Kao, an Applied Math and Data Science student who drove the data collection and analysis. Much of this work was done remotely across three locations – Redmond, WA, Berkeley, CA,… Read more »
  • How to Implement AI-First Business Models at Scale
    Earlier this week, MIT, in collaboration with Boston Consulting Group, released its second global study looking at AI adoption in industry. A top finding of this report is that the leading companies in AI adoption are now convinced of the value of AI and are facing the challenge of moving beyond individual point solutions toward broad, systematic use of AI across the company, at scale. In the report, Joseph Sirosh, CTO of AI at Microsoft, discusses how Microsoft is building a complete AI platform that empowers enterprises to implement these AI-first business models and do so at scale. Scaling AI across an entire business requires companies to look far beyond just building that initial model. As Joseph says, companies need an “AI Oriented Architecture capable of constantly running AI experiments reliably, with continuous integration and development, and then learning from those experiments and continuing to improve its operations.” For those of you attending Microsoft Ignite in Orlando next week, you can hear Joseph talk about AI Oriented Architectures firsthand and get guidance on how enterprises can build successful AI solutions at scale. Adopting Microsoft AI is super easy – you can get started here. AI / ML Blog Team Read more »

Author: hits1k
