AI ML MarketPlace

  • Astra makes machine learning bet with Benevolent AI deal
Today’s tie-up between AstraZeneca and Benevolent AI is another example of a big pharma jumping on the machine learning bandwagon, in this case to try to improve its drug discovery processes. Some industry watchers are cynical about the potential of artificial intelligence, and even Astra admits that the jury is still out. But the company maintains that this is an area it has to be involved in, otherwise it risks falling behind its competitors.

“There’s a lot of hyperbole around machine learning and what it can do to transform drug discovery,” Mene Pangalos, head of Astra’s innovative medicines division, tells Vantage. “But I think it’s something worth us investing in, because if it is useful it could be transformational.” Still, it will be a long while before this is proven either way, he concedes: “The proof of the pudding will be if we can deliver proof-of-concept data or launch a medicine. And that’s obviously some years away.”

New targets

The aim of the deal with Benevolent is to give Astra new insights into the biology of diseases, initially chronic kidney disease (CKD) and idiopathic pulmonary fibrosis (IPF), so it can discover targets or drug candidates that it might not otherwise have found. Key to this effort will be Benevolent’s capabilities in natural language processing (NLP): the way machines analyse human language and pull out relevant information. In this case, the companies will scour publicly available sources, including scientific meetings, journals and patents, to look for links between the diseases of interest and specific genes or proteins. This will generate an evolving dataset that is updated as new findings become available, making the resource more powerful over time, Mr Pangalos explains.

Astra is already sitting on a lot of data in CKD and IPF, which is one reason it chose to look first at these disorders; another is that they are complex and poorly understood.
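Benevolent's actual pipeline is far more sophisticated, but the basic idea described here – scanning text for passages that mention both a disease and a gene, and counting the resulting links – can be sketched in a few lines of Python. Everything below (the corpus, the gene symbols, the disease terms) is invented purely for illustration:

```python
import re
from collections import Counter

# Toy corpus standing in for abstracts and patents; the gene symbols and
# sentences are made up for illustration, not real findings.
documents = [
    "GENE1 expression is elevated in chronic kidney disease patients.",
    "No association between GENE2 and asthma was observed.",
    "Knockdown of GENE1 slowed fibrosis progression in a CKD model.",
    "GENE3 variants correlate with idiopathic pulmonary fibrosis risk.",
]

DISEASE_TERMS = {"chronic kidney disease", "ckd",
                 "idiopathic pulmonary fibrosis", "ipf", "fibrosis"}
GENE_PATTERN = re.compile(r"\bGENE\d+\b")

def cooccurrence_counts(docs):
    """Count how often each gene symbol appears in a passage that also
    mentions a disease term -- a crude proxy for a literature link."""
    counts = Counter()
    for doc in docs:
        text = doc.lower()
        if any(term in text for term in DISEASE_TERMS):
            counts.update(GENE_PATTERN.findall(doc))
    return counts

ranked = cooccurrence_counts(documents).most_common()
print(ranked)  # GENE1 ranks first: it co-occurs with disease terms twice
```

Because the dataset is rebuilt each time the corpus grows, rankings like this can shift as new findings are published – the "evolving and adapting" quality Mr Pangalos describes.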
But the company will expand into all disease areas if the machine learning approach proves its worth, Mr Pangalos says. Early hints that the technology is doing what it is supposed to could include the company finding a genetically linked target that it would not otherwise have looked for, he adds.

No black box

As for why Astra chose Benevolent above the many other AI specialists, Mr Pangalos cites the group’s “excellent” NLP skills, an area in which Astra does not have its own capabilities. Another important factor was that Benevolent was willing to work with Astra, rather than simply asking the big pharma to trust a “black box” algorithm with no input into, or understanding of, how – or if – it worked.

“There were companies that wanted us to just trust their capability, and for me that wasn’t acceptable because that won’t strengthen our own ability to do this. I didn’t want to become dependent on any one person or group – I wanted the expertise internally to judge whether this is good or not,” Mr Pangalos says, though he refuses to point the finger at any AI company in particular.

However, one player, IBM Watson Health, could provide a cautionary tale on the dangers of putting too much faith in machine learning. The company has struggled with various issues, most recently reportedly stopping sales of Watson for Drug Discovery, used in preclinical drug development. The group’s director of global life sciences, Christina Busmalis, told Vantage last week that Watson was still supporting the drug discovery product. When pressed on whether this meant that the company would not sell Watson for Drug Discovery to new customers, she replied: “We’re supporting our existing clients, and it’s a case-by-case basis of how we go to market.” She also admitted that the company was “investing more heavily into other areas, specifically clinical development”. Astra did meet Watson when it was looking for a machine learning partner, along with many other AI players, Mr Pangalos says.
For now he will not give any financial details, but says the Benevolent deal represents “exceptionally good value for money” for Astra. Of course, this will ultimately depend on the collaboration producing results in the long term. “We still don’t know how useful this will be,” Mr Pangalos admits. “But we’ll never get there if we don’t work on it, so it’s important we’re trying it.”

Source: Evaluate
  • Eye on A.I.— Retail Has Big Hopes For A.I. But Shoppers May Have Other Ideas
Walmart has opened a store in Levittown, N.Y., that is intended to showcase the power of artificial intelligence. The store, announced last week, is packed with video cameras, digital screens, and over 100 servers, making it appear more like a corporate data center than a discount retailer. All that machinery helps Walmart automatically track inventory so that it knows when toilet paper is running low or when milk needs restocking. The company’s goal is to create “a glimpse into the future of retail,” in which computers rather than humans are expected to do a lot of retail’s grunt work.

Walmart’s push into artificial intelligence highlights how retailers are increasingly adding data crunching to their brick-and-mortar stores. But it also sheds light on some of the potential pitfalls as consumers grow increasingly wary of technology amid an endless stream of privacy mishaps at companies like Facebook.

Walmart isn’t alone, of course, in trying to reinvent itself in an industry that is facing a huge threat from tech-savvy Amazon. Grocery chain Kroger, for instance, said earlier this year that it had tapped Microsoft to help it build two “connected experience” stores in which shoppers would, among other things, get personalized deals—possibly on their phones as they walk inside or on screens mounted on the shelves. Mark Russinovich, the chief technology officer of Microsoft Azure, told Fortune in a recent interview that such futuristic stores would need to handle their data crunching nearby, rather than far away in the cloud, to avoid lag time.

Equipping these Internet-connected stores could be a lucrative business for companies like Microsoft that want to sell computing power to retailers. The vendors even have a name for this emerging market: edge computing. But there’s no guarantee that retailers will be saved by it, because consumers may balk at cameras tracking their movements as they walk up and down the aisles, and at being bombarded by discount offers.
In apparent anticipation of the blowback, Walmart has filled its new store with kiosks that tell shoppers more about the technology it has installed. For retailers to be successful, consumers must feel comfortable with how their data is being used and with how they’re being tracked. Companies that pitch A.I. as the future shouldn’t just assume that customers will go along for the ride.

EYE ON A.I. NEWS

Facebook’s A.I. failure. Facebook said that its A.I. systems failed to detect a video by the New Zealand mosque shooter because the video was taken from a first-person point of view. But tech publication Motherboard fed some of the New Zealand shooting footage into Amazon’s image and video detection software and found that it was able to detect guns and weapons.

White House A.I. National Plan Version 2.0. Later this year, the White House will release an updated version of its national A.I. plan, reported policy news site FedScoop. The newer version will contain more recommendations to government agencies about A.I. policies.

Driver’s Ed needs an update. As more companies like Tesla debut autonomous-driving features, drivers should be trained to use them, said auto news site Jalopnik. The report details how airline pilot training evolved as the aviation industry introduced automation, and suggests that similar training is necessary for drivers so that they know how to safely use self-driving features.

Slack’s not slacking on A.I. Workplace chat company Slack filed last week to list its shares directly on the New York Stock Exchange. The filing repeatedly mentions the use of machine learning technology to improve how its chat app recommends relevant topics, people, and documents in workplace discussions.

MACHINE LEARNING BABY STEPS

Despite machine learning’s buzz in the business world, few companies are using the technology widely; most are instead testing it, according to tech publication ZDNet. For those starting out, Warwick Business School associate professor Dr. Panos Constantinides recommends that companies try machine learning in non-critical areas of their operations, such as chatbots that field customer inquiries.

EYE ON A.I. HIRES

The Lustgarten Foundation named Elizabeth Jaffee as its chief medical advisor; she will help develop a national pancreatic cancer database based on data from clinical and non-clinical trials. Jaffee is also the deputy director of the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University School of Medicine.

Financial services company AllocateRite hired Michael Spece as the firm’s chief of artificial intelligence and data science. Spece was a data science immersion program fellow at e-commerce company Wayfair.

The University of Chicago’s National Opinion Research Center picked Susan Paddock as its executive vice president and chief statistician. She was previously a senior statistician at the RAND Corporation think tank.

EYE ON A.I. RESEARCH

Ludwig-Maximilians University of Munich and the Schoen Clinic Schwabing in Munich were among the institutions that published a paper about using deep learning techniques to better screen for hard-to-detect body movements related to Parkinson’s disease. The researchers used wearable devices containing motion sensors to gather data from patients, and then used a variety of machine-learning and other statistical approaches to analyze the information.

A.I. in the power grid. Researchers from the Global Energy Interconnection Research Institute North America published a paper about using reinforcement learning—in which computers learn from repetition—to automatically control voltage settings in a power grid. The researchers used a power grid simulator from the Pacific Northwest National Laboratory as a test bed for their Grid Mind A.I. system.

Source: Fortune
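The Grid Mind item above rests on the reinforcement-learning loop: an agent repeatedly adjusts a control, observes the resulting voltage, and is rewarded for staying near the nominal level. That loop can be sketched with tabular Q-learning on a toy one-bus model; everything below, from the disturbance model to the reward, is a simplified stand-in for the paper's setup, not the Grid Mind system itself:

```python
import random

ACTIONS = (-0.01, 0.0, 0.01)  # tap down, hold, tap up (per-unit voltage)

def step(voltage, action, rng):
    """Apply the tap action plus a small random load disturbance; reward
    is highest when voltage stays near 1.0 per unit."""
    voltage = voltage + action + rng.uniform(-0.005, 0.005)
    return voltage, -abs(voltage - 1.0)

def discretize(voltage):
    return round(voltage, 2)  # coarse state bins for tabular Q-learning

def train(episodes=200, alpha=0.3, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # maps (state, action) -> estimated long-run value
    for _ in range(episodes):
        v = rng.uniform(0.95, 1.05)
        for _ in range(20):
            s = discretize(v)
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q.get((s, act), 0.0))
            v, r = step(v, a, rng)
            s2 = discretize(v)
            best_next = max(q.get((s2, act), 0.0) for act in ACTIONS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q

q = train()
# Greedy action the agent has learned for an undervoltage state:
print(max(ACTIONS, key=lambda act: q.get((discretize(0.95), act), 0.0)))
```

In the real system the "simulator step" is a full power-flow computation and the state and action spaces are far richer, but the learn-by-repetition structure is the same.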
  • Rise of AI: Should Humans Be Worried?
AI — A Decades-Old Problem

The bogeyman of the modern workplace is undoubtedly AI. It seems every day a new article pops up about how machines are going to steal jobs from humans – and soon. The worry around AI has only continued to grow as the technology advances, with more and more enterprises undergoing digital transformation and implementing AI in their everyday business processes.

This isn’t a new problem. People have been concerned about machines taking their jobs for decades. Looking back at the Indian Freedom Movement, the concern over automation displacing agrarian jobs was a key influence in pushing the movement forward. Mahatma Gandhi elaborated: “I have a great concern about introducing machine industry. The machine produces too much too fast and brings with it a sort of economic system, which I cannot grasp. I do not want to accept something when I see its evil effects, which outweigh whatever good comes with it.” This view certainly remains pervasive today.

However, while AI has certainly evolved quite a bit from its roots, it will never be able to fully take over for humans in decision-making roles. Humans will always play a key role in the workplace. Ultimately, I predict humans will be living and working in an augmented world, where machines are useful supplements and tools in the future workplace rather than completely in charge.

Evolution of AI

AI has been around for decades and, much like humans, has continued to evolve over time. Human intelligence evolved as people started talking to one another about their ideas, developing better ideas over time through increased collaboration and communication. Machines are now experiencing the same phenomenon. With the introduction of the internet, machines were able to communicate with each other in a way that was previously impossible, allowing them to develop the ability to learn and perform basic tasks.
However, there is an important distinction to make between the respective evolutions of machines and humans. Humans have evolved in three dimensions – thought, action, and emotion – and are capable of bringing all three to decision-making processes, while machines have evolved in only two dimensions: thought and action. AI technology has grown greatly in these two realms – AI can learn as it goes along, and machines can process input, make decisions, and take actions much as humans do. The key difference is that with machines, the aspect of emotion is missing.

The type of work that humans choose to pursue, the way that we spend our time, and the decisions we make aren’t always logical. In fact, they’re often ruled more by emotion than anything else. Without the trigger of emotion, an integral part of decision-making and productivity, humans can never be taken fully out of the equation in the workplace, despite the pervasive fear that AI’s capabilities will take over.

Recently, it was revealed that Amazon’s experimental hiring AI was discriminatory against women, having taught itself that male candidates were preferable. Without any concrete way to guarantee that the AI wouldn’t continue to be biased, Amazon abandoned the project. This shows that we’re still a long way off from ensuring AI is completely fair and accountable – and until we reach that point, humans will continue to be the driving force of the workplace.

What Will the World with AI Really Look Like?

Interestingly, we can view humans as similar to AI – both are able to continually learn and perform endless varieties of specific tasks. Meanwhile, actual AI is trained to perform very specific tasks that it cannot stray from. A driverless car powered by AI technology can’t perform the same tasks as an AI-powered spam email detector, as a machine can’t do anything other than what it was designed for.
However, humans have the ability both to drive a car and to identify spam email. In this sense, humans can be considered a generalized AI, while machines are specialized AI for specific use cases. AI’s clear-cut applications will only supplement the human ability to perform a variety of tasks, in the workplace and outside of it.

Ultimately, the tasks and skills that humans and AI bring to the table are complementary, and will not detract from one another. In fact, it is expected that Assisted Intelligence will come to be the norm, rather than Augmented Intelligence (where the decision rights sit solely with the human) or Autonomous Intelligence (where they sit solely with the machine). This will only amplify the value of existing activity, and allow humans to perform higher-level tasks rather than focusing on the minutiae. Look at telephones, for example – prior to phones, humans were still able to communicate. But the telephone created a new opportunity for humans to communicate on a larger and more efficient scale. Today, machines are continuing to enter our lives across industries, forging the way ahead for co-automation.

While jobs across industries will leverage AI technology advantageously, there will always be aspects of jobs that cannot be automated. For doctors and nurses, a certain level of compassion and empathy will always be necessary when working with patients; in education, people don’t want a machine teaching their children – they look for human interaction and warmth. While co-automation will be beneficial, it cannot take away the necessity of humans in the workplace.

Final Thoughts

A co-automated future is already beginning, and eventually every domain will be affected. In order to prepare for this digital era, upcoming generations must learn to communicate with both AI and humans – knowing how to speak the same language as machines will only help to remove the fear around AI.
Teaching children programming and computer fluency will allow them to start working side by side with machines from a young age, ready for co-automation in the workplace and beyond. There have been leaps and bounds forward in the abilities of AI. Ultimately, there is no need for mass anxiety about the capabilities of emerging technology – at least in our lifetime, machines will not be able to work towards their own self-interests and take over the job market, despite what some of the media hype would have you believe. We should instead view AI as a helpful partner and tool to bolster our roles in the workplace, and work towards that vision rather than fear it.

Source: MarTech Series
  • A Promising future: How AI is making big strides in breast cancer treatment
Breast cancer is the most common cancer in the UK. It accounts for 15% of all new cases in the country, and about one in eight women will be diagnosed with it during their lifetime. In the NHS, breast cancer screening routinely includes a mammogram, which is essentially an X-ray of the breast. But the future of this early test is at risk as the number of specialists able to read them declines. While this skills shortfall can’t be made up immediately, promising advances in artificial intelligence may be able to help.

Interpreting a mammogram is a complex process normally performed by specially trained radiologists and radiographers. Their skills are vital to the early detection and diagnosis of this cancer. They visually scrutinise batches of mammograms in a session for signs of breast cancer. But these signs are often ambiguous or hard to see. False negative rates – where cancers are incorrectly diagnosed or missed – are between 20% and 30% for mammography. These are either errors in perception or errors in interpretation, and can be attributed to the sensitivity or specificity of the reader.

It is widely believed that the key to developing the expertise needed to interpret mammograms is rigorous training, extensive practice and experience. And while researchers like myself are looking into training strategies and perceptual learning modules that can expedite the transition from novice reader to expert, others have been investigating how AI could be used to speed up diagnosis and improve its accuracy.

Machine diagnosis

As in countless other fields, the potential for AI algorithms to help with cancer diagnosis has not gone unrecognised. Along with breast cancer, researchers have been looking at how AI can improve the efficacy and efficiency of care for lung, brain and prostate cancer, in order to meet ever-increasing diagnosis demands. Even Google is looking at how AI can be used to diagnose cancer.
The search giant has trained an algorithm to detect tumours that have metastasised, with a 99% success rate. For breast cancer, the focus so far has been on how AI can help diagnose the disease from mammograms. Every mammogram is read by two specialists, which can lead to potential delays in diagnosis if there is a shortfall in expertise. But researchers have been looking at introducing AI systems at the time of screening. The idea is that the AI would support a specialist’s findings without the wait for a second opinion from another professional. This would reduce the waiting time, and the associated anxiety, for the women who have been tested.

AI has already made substantial strides in cancer image recognition. In late 2018, researchers reported that one commercial system matched the accuracy of over 28,000 interpretations of screening mammograms by 101 radiologists – a cancer detection accuracy comparable to that of an expert radiologist. In another study led by the same researcher, radiologists using an AI system for support showed an improved rate of breast cancer detection, rising from 83% to 86%. This research also found that using an AI system reduced the amount of time radiologists spent analysing the images on screen.

Fine tuning

But while the potential of AI has been welcomed by some radiologists, it has brought suspicion from others. And though other researchers have also found that AI is just as good at detecting breast cancers from mammograms as its human counterparts, this comes with the caveat that more fine-tuning and software improvement is needed before it can be safely introduced into breast screening programmes. Exciting as it may be to think that AI could help detect such a prevalent cancer, specialist and public confidence needs to be taken into consideration before it is introduced. Acceptance of the technology is vital so that patients and medical professionals know they are receiving the correct results.
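The screening systems above are deep neural networks trained on large mammogram archives; at their core, though, each is a learned image classifier. That core idea can be sketched with logistic regression on toy 3×3 "images" in which a bright central blob stands in for a suspicious finding. The data and model below are purely illustrative and bear no resemblance to a clinical system:

```python
import math
import random

def make_image(has_blob, rng):
    """A 3x3 'image' as a flat list of pixel intensities: dim background,
    optionally a bright central pixel standing in for a suspicious mass."""
    img = [rng.uniform(0.0, 0.3) for _ in range(9)]
    if has_blob:
        img[4] += rng.uniform(0.5, 0.7)
    return img

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train(n=200, epochs=50, lr=0.5, seed=0):
    """Fit logistic regression on pixel intensities with plain SGD."""
    rng = random.Random(seed)
    data = [(make_image(bool(y), rng), y) for y in [0, 1] * (n // 2)]
    w, b = [0.0] * 9, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

w, b = train()
rng = random.Random(1)
print(predict(w, b, make_image(True, rng)))  # a blob image should classify as True
```

A real mammography model replaces the nine hand-fed pixels with millions of learned convolutional features, and its errors carry clinical consequences, which is why the fine-tuning and validation caveats above matter so much.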
As yet, there has been little research into the public perception of AI in breast cancer screening, but more general studies into AI and healthcare have found that 39% of people are willing to engage with artificial intelligence and robotics for healthcare. This rises to 55% for the 18- to 24-year-old demographic. The AI systems are still in the research phase, with no firm plans to use them to diagnose patients in the UK yet. But these promising results show there is a tremendous opportunity for the delivery of radiology healthcare services, and ultimately the potential to detect breast and other cancers in more patients.

Source: Business Standard News
  • Amazing AI Generates Entire Bodies of People Who Don’t Exist
Embodied AI: A new deep learning algorithm can generate high-resolution, photorealistic images of people – faces, hair, outfits, and all – from scratch. The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.

Catalog From Hell: The algorithm was developed by DataGrid, a tech company housed on the campus of Japan’s Kyoto University, according to a press release. In a video showing off the tech, the AI morphs and poses model after model as their outfits transform, bomber jackets turning into winter coats and dresses melting into graphic tees. Specifically, the new algorithm is a Generative Adversarial Network (GAN) – the kind of AI typically used to churn out new imitations of something that exists in the real world, whether video game levels or images that look like hand-drawn caricatures.

Photorealistic Media: Past attempts to create photorealistic portraits with GANs focused just on generating faces. Those faces had flaws like asymmetrical ears or jewelry, bizarre teeth, and glitchy blotches of color that bled out from the background. DataGrid’s system does away with the extraneous information that can confuse algorithms, instead posing the AI models in front of a nondescript white background and shining realistic-looking light down on them. Each time scientists build a new algorithm that can generate realistic images or deepfakes that are indistinguishable from real photos, it reads like a fresh warning that AI-generated media could be misused to create manipulative propaganda. Here’s hoping that this algorithm stays confined to the realm of fashion catalogs.

Source: Futurism
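DataGrid has not published its architecture, but the adversarial training the article describes – a generator and a discriminator improving against each other – can be shown at its smallest possible scale: a two-parameter generator learning to imitate samples from a one-dimensional Gaussian. Every detail below is a toy stand-in for the image-scale system:

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train(steps=3000, lr=0.05, seed=0):
    """Alternate single-sample updates: the discriminator learns to tell
    real samples (drawn from N(4, 1)) from generated ones, while the
    generator g(z) = a*z + b is updated with the non-saturating GAN loss
    to fool the discriminator."""
    rng = random.Random(seed)
    a, b = 1.0, 0.0  # generator parameters
    w, c = 0.0, 0.0  # discriminator parameters (logistic in x)
    for _ in range(steps):
        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        x_real = rng.gauss(4, 1)
        x_fake = a * rng.gauss(0, 1) + b
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * ((1 - d_real) - d_fake)
        # Generator step: push D(fake) toward 1.
        z = rng.gauss(0, 1)
        d_fake = sigmoid(w * (a * z + b) + c)
        a += lr * (1 - d_fake) * w * z
        b += lr * (1 - d_fake) * w
    return a, b

a, b = train()
print(round(b, 2))  # the generator's offset should drift toward the real mean of 4
```

Image GANs replace the scalar generator with a deep network that maps noise vectors to pixels and the scalar discriminator with a convolutional classifier, but the same two-player loop drives both.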
