Expert system

  • It doesn’t have to be Artificial Intelligence vs. Machine Learning: the most effective approach merges the best of both worlds
    Artificial Intelligence (AI) and Machine Learning (ML) are often treated as synonyms. In reality, ML is an application of AI, and it is important to understand its key elements and the potential they offer for business value.
    Artificial Intelligence vs. Machine Learning: the key differences. ML is an application, or subfield, of AI based on the idea that a machine can learn on its own from data sets. In practice, however, ML systems are limited in their ability to understand natural language because there is no way to refine the algorithm's content analysis: the model is a black box. Your only option is to feed more examples to the algorithm, but this does not guarantee greater accuracy, because you need training documents that cover all the use cases. Machine learning is most applicable when you have a large, relevant sample of documents and low complexity. But what if you don't have large data sets? What if your use case is difficult and ambiguous? The more complex the use case, the harder it is to identify correct samples for every situation. Watch the video "Don't Settle for Less than True AI" to learn more.
    The reality is that most business scenarios do not fall into this category. In those cases, other AI approaches, such as Natural Language Understanding based on a knowledge graph, offer concrete benefits for a range of cognitive tasks and can be used even when only a small, distributed set of sample documents of average complexity is available. Thanks to a deep and wide representation of knowledge, AI solutions that leverage a knowledge graph understand and process natural language and any kind of unstructured text faster and more accurately than a pure ML approach.
    Combining the two worlds: knowledge graph and machine learning. The combination of ML (and deep learning) backed by a knowledge graph can provide high-quality data that optimizes the implementation of AI applications (a minimal sketch of the idea follows this summary). To learn more, read the recent press release "Expert System Fuels Advances in Artificial Intelligence by Pushing further the Joint Work of Machine Learning and Knowledge Graphs".
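    A minimal sketch of the hybrid idea described above, assuming Python and scikit-learn: a toy, hand-built knowledge graph normalizes surface forms into concepts before a standard ML classifier is trained. The graph, documents and labels are invented for illustration and are not Expert System's Cogito implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "knowledge graph": surface forms mapped to canonical concepts (illustrative only).
KNOWLEDGE_GRAPH = {
    "car": "CONCEPT_vehicle", "truck": "CONCEPT_vehicle", "sedan": "CONCEPT_vehicle",
    "loan": "CONCEPT_credit", "mortgage": "CONCEPT_credit", "refinance": "CONCEPT_credit",
}

def enrich(text: str) -> str:
    """Append canonical concepts so the classifier sees them as extra features."""
    concepts = [KNOWLEDGE_GRAPH[t] for t in text.lower().split() if t in KNOWLEDGE_GRAPH]
    return text + " " + " ".join(concepts)

docs = [
    "I need a loan for my business",
    "please refinance my mortgage",
    "my truck broke down again",
    "the car engine is making noise",
]
labels = ["lending", "lending", "auto_claim", "auto_claim"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([enrich(d) for d in docs], labels)

# "sedan" never appears in training, but its concept (CONCEPT_vehicle) does.
print(model.predict([enrich("the sedan will not start")]))  # expected: auto_claim
```

    The point of the sketch is that the injected concept features let the classifier generalize across synonyms it has never seen in training, even with a tiny sample, which is the practical benefit of backing ML with a knowledge graph.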
  • From Rookie to Master: how to become a legend in AI
    In 2004, in his very first season with the Cavs, LeBron James was named NBA Rookie of the Year. He soon established himself as one of the league's premier players and, according to some, the greatest player of all time (even if I'll always be partial to the stars of my day like Magic Johnson and Michael Jordan). Maybe Aussies now hope that Ben Simmons will soon follow in his footsteps. No matter which player you root for, sports history is full of rookies who have gone on to become masters and even legends in their field.
    We borrowed this well-known sports paradigm when we refined our Cogito® Training and Certification Program for Partners. The goal of the program is to prepare our business and technology partners to get the best possible results from the Artificial Intelligence capabilities of Cogito®, to help them identify the best use cases for customers and to support them in correctly sizing and designing the right solution for a given scenario.
    The first step of the program is a fully online, self-paced course that introduces professionals to Artificial Intelligence and Semantic Analysis. Professionals who complete this first step and pass the online test are certified as Rookies. With this knowledge, Rookies can hold their own alongside more experienced practitioners and propose innovative solutions involving Natural Language Understanding, Natural Language Processing and the strategic role of unstructured information for intelligent automation, knowledge discovery and insight. With more project implementation experience, Rookies will be ready to move on to the Pro and Master certifications, which require classroom training and lead to full independence in the definition and implementation of projects based on Cogito technology.
    Whether you're looking to learn more about AI, add talent and new skills to your team or increase your team's capacity to propose pioneering solutions based on state-of-the-art technologies, our new Training and Certification Program for Partners offers a trusted way to close the knowledge gaps. In the meantime, we'll continue to build out additional skill-building resources, refine our training and certifications and add new partners covering more countries and markets.
    Who will be the next Rookie to become an AI legend? Take the challenge and register for our Rookie Certification!
    Maurizio Mencarini, VP Global Strategic Partnership, Expert System (https://it.linkedin.com/in/mencarini)
  • Insurance & Artificial Intelligence: A Powerful Combination
    AI's ability to automate time-consuming, knowledge-based tasks provides unprecedented insight and productivity gains for the world of insurance. It can also help insurers reduce their exposure to risk. Learn more about the benefits of using AI to combat risk in insurance in our new infographic! It is inspired by our recent webinar, "Intelligent Underwriting: Future Risk Engineering", with Oliwia Berdak, VP & Research Director at Forrester Research, and Pamela Negosanti, Global VP of Insurance at Expert System. Click here to watch the recorded webinar!
  • Artificial Intelligence and Robotic Process Automation: a Match Made in Heaven
    While Robotic Process Automation (RPA) has exceeded market predictions, Artificial Intelligence (AI) continues to drive greater value and, according to Forrester's 2019 predictions, automation technologies including RPA and AI are becoming a strategic investment in the enterprise world. In the four primary areas where AI and RPA are quickly converging (Figure 1), the ability to capture greater insight from unstructured data is at center stage in making robots more intelligent.
    Figure 1. Forrester Research, Inc., "Look to Four Use Case Categories to Push RPA and AI Convergence. Text Analytics Leads the Way."
    Why is unstructured data so important for intelligent process automation? With 60% of global data and analytics decision makers saying that their company is sitting on 100 or more terabytes of unstructured data, it is no surprise that organizations are looking for proven solutions that accelerate the use of unstructured information and unleash the full potential of intelligent RPA. To close the process gap, the main focus remains on analytics (Figure 2), where nearly half of organizations will use a combination of AI and automation to create a new digital workforce.
    Figure 2.
    Consider the following scenarios that affect the use of unstructured data while building an RPA process (a minimal extraction sketch follows this summary):
    • extracting qualified information from the required documents, depending on the context
    • classifying documents against specific customer taxonomies and behaving accordingly
    • understanding the content of documents when it is relevant to the process
    • applying reasoning before extracting information
    • discovering relations linked to the extracted data so they can be used in the process
    In all of these scenarios, the collaboration between AI and RPA extends and improves the reach of intelligent automation by accelerating the use of unstructured information. AI makes all relevant data immediately useful and actionable for RPA: it analyzes, categorizes and extracts relevant information trapped in unstructured data (such as the text fields of business documents, purchase orders, invoices, emails, survey reports, forms, etc.) and organizes it into clean records for RPA.
    AI and RPA: a match made in heaven. AI is the perfect match for RPA because it analyzes, categorizes and extracts unstructured data, improving the output of complex, even mission-critical intelligent RPA workflows. At the same time, RPA is the ideal complement for adopting cognitive capabilities at scale. As a result, companies can benefit from both technologies by using an all-in-one platform to automate end-to-end processes and easily exploit the value of intelligent functionality.
    Join Expert System and Blue Prism at Blue Prism World 2019! We're making robots more intelligent: the integration of Expert System's AI capabilities into the Blue Prism RPA platform delivers improved labor efficiency and productivity along with higher accuracy in unstructured data access, extending business automation to new strategic areas by automating tasks that were once reserved for humans. At Blue Prism World 2019, Expert System will show how organizations accelerate the use of unstructured information by leveraging Artificial Intelligence while building a process in the Blue Prism platform.
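    As a hedged illustration of the extraction step mentioned above, the sketch below pulls a few fields out of an unstructured invoice email and hands the RPA layer a clean, structured record. The field names, regular expressions and sample email are assumptions made for the example; they are not Blue Prism or Cogito APIs.

```python
import json
import re

# Invented sample input: an unstructured email a human would otherwise read and re-key.
email_body = """
Hello, please process invoice INV-20931 dated 12 March 2019.
Total amount due: EUR 4,250.00. Vendor: Acme Logistics GmbH.
"""

invoice_id = re.search(r"INV-\d+", email_body)
amount = re.search(r"(?:EUR|USD)\s[\d,]+\.\d{2}", email_body)
vendor = re.search(r"Vendor:\s*([^.\n]+)", email_body)

# Clean, structured record; missing fields stay None and can be routed to human review.
record = {
    "invoice_id": invoice_id.group(0) if invoice_id else None,
    "amount": amount.group(0) if amount else None,
    "vendor": vendor.group(1).strip() if vendor else None,
}
print(json.dumps(record, indent=2))  # e.g. hand this JSON to the downstream RPA workflow
```

    In a real deployment the regular expressions would be replaced by semantic extraction, but the hand-off pattern is the same: unstructured text in, a clean record out for the robot to act on.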
  • How Insurtech Is Disrupting the Insurance Industry (and AXA XL Is an Example)
    The Insurtech boom is well known: insurance companies are integrating digital technologies into their traditional processes and everyday workflows in order to reduce manual effort, time and costs. Process automation and new digital experiences affect every sector, but insurance is perhaps the most affected of all. The World Insurtech Report 2018, released last October by Capgemini, states that the Insurtech sector saw "investment increase at a compound annual growth rate of 36.5 percent between 2014 and 2017." In its Global Insurance Trends Analysis 2018, EY said that "Insurtech continues to be a hot area within the overall Fintech investment space having seen deal values rise 32% YoY and 45% CAGR since 2012." According to Technavio, which published a new research report on the global Insurtech market, "the global insurtech market will grow by almost USD 15.63 billion during 2019-2023," at a CAGR of more than 41%. Insurtech is clearly attracting attention and substantial investment, and it will continue to play a key role in using new technologies to provide a strategic advantage in the insurance sector.
    Driving value for insurance: Insurtech and AI. Transforming the value chain, improving the customer experience, enhancing human capabilities, increasing efficiency, creating new products and redefining business models: new technologies can affect the insurance industry in several ways. Among the emerging technologies affecting insurance companies, Artificial Intelligence is effective because it can be applied to the most data-intensive processes, delivering productivity gains and extracting insight from textual information. AI can read documents the way people do, automatically recognizing the useful information they contain. This reduces the time spent on insurance tasks from hours down to seconds. Using AI, insurance companies can accelerate claims management, accelerate underwriting and reduce leakage. (To learn more about cognitive automation and AI applied to insurance, download the white paper "Three unique ways to improve the economics of insurance with artificial intelligence".)
    The example of AXA XL: Cogito AI is a great partner for insurtech. "AXA XL is using artificial intelligence software to help reduce its risk consultants' workload; the latest example of an enterprise adopting software that can help it do more with less." This is how AXA XL was described in a recent CIO.com interview with Tim Heinze, director of strategic operations for the company's North American property unit. AXA XL leverages AI to enhance its property risk engineering capabilities using Expert System's Cogito Platform. The core technology used in this project is Expert System's Cogito Knowledge Graph, which embeds millions of concepts and their lexical forms, properties and relationships to help disambiguate the meaning of words and expressions contained in text (a toy disambiguation sketch follows this summary); the software also leverages machine learning algorithms to enrich its knowledge automatically from text. By capturing all the useful data from the reports, Expert System's Cogito AI technology provides AXA XL with several benefits, such as improved accuracy, time savings and a digital edge over competitors.
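    To make the disambiguation idea concrete, here is a toy, Lesk-style sketch: the sense whose related concepts overlap most with the surrounding context wins. The miniature graph below is purely illustrative and is in no way the Cogito Knowledge Graph.

```python
# Toy "knowledge graph": one ambiguous word, two senses, each linked to related concepts.
TOY_GRAPH = {
    "policy": {
        "insurance_contract": {"premium", "claim", "coverage", "underwriting", "insured"},
        "guideline": {"government", "rule", "procedure", "regulation", "company"},
    }
}

def disambiguate(word: str, context: str) -> str:
    """Pick the sense whose related concepts overlap most with the context (Lesk-style)."""
    context_words = set(context.lower().split())
    senses = TOY_GRAPH[word]
    return max(senses, key=lambda sense: len(senses[sense] & context_words))

print(disambiguate("policy", "the premium on this policy covers the insured property claim"))
# -> insurance_contract (overlap with "premium", "insured", "claim")
```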
  • Types of Chatbots: Deep Learning for Chatbots
    Deep learning for chatbots remains a hot topic as more and more companies look for different approaches to develop their chatbots. Deep learning techniques are only one of several approaches that use Artificial Intelligence (AI) to simulate human conversations. When a chatbot has to answer complex questions and/or understand a wide range of different intents with good accuracy (e.g., more than 100 user intents), a more sophisticated approach is required.
    Pros and cons of deep learning chatbots. In a previous post we talked about how chatbots simplify the interaction between people and machines (What Is a Chatbot?). Today we will explore the pros and cons of chatbots based on deep learning or machine learning. Machine learning and deep learning for chatbots are on the rise, especially thanks to virtual assistants like Siri, Alexa or Cortana that allow consumers to interact by voice via smartphone or even home appliances and devices. There are essentially two techniques at the core of these systems (a minimal sketch of the supervised variant follows this summary):
    • Supervised learning. The chatbot software is trained on a large set of requests, each tagged with the specific user intent it represents. Pros: a supervised deep learning or machine learning approach can reach a good level of accuracy. Cons: it needs a very large set of good, correctly tagged examples, otherwise the training process will not be successful.
    • Unsupervised learning. The chatbot software relies on a very high number of examples to independently identify requests and the corresponding user intents. Pros: the system potentially requires no human supervision and no explicitly tagged examples. Cons: a fully independent approach is not possible, because the training set must be not only extremely wide and varied but also of high quality to ensure that the chatbot is properly trained.
    Both approaches may leverage some basic Natural Language Processing (NLP) capabilities, but they also take a long time to train and require a vast amount of good, appropriate data. A system like Siri receives more than 1 billion requests per week, while the data volumes available for training a deep learning system in today's enterprise world are considerably lower. Poor data means poor results: a lack of good examples with which to train the system leads to approximate results or, even worse, completely wrong answers or no answers at all, unless the training phase is continuously repeated.
    Conclusion. Thanks to consumer applications like Siri, chatbots have been receiving a lot of attention from private and public organizations, and they are expected to continue their rise in the Artificial Intelligence solutions market. According to Gartner, "by 2020 customers will manage 85% of their relationship with the enterprise without interacting with a human," and "by 2020, over 50% of medium to large enterprises will have deployed product chatbots." There are several reasons why consumers love chatbots (including curiosity and entertainment), and there are as many reasons why companies benefit from them, such as the opportunity to streamline routine tasks, process multiple user requests simultaneously, reduce customer care effort and costs, better understand customer issues and expectations, and ultimately strengthen customer loyalty.
    The problem is that the typical enterprise data scenario is completely different from the consumer one, especially in how chatbots can be implemented. Deep learning for chatbots may be appropriate in a consumer context, while in the enterprise world it is usually difficult to gather large volumes of good examples ready to be used for training. For this reason, more and more companies are looking for more sophisticated hybrid approaches that combine different AI algorithms, machine learning / deep learning and advanced NLP. In the end, don't forget that chatbots, like other forms of AI, are not magic: any AI solution is always based on programming.
    Learn more about Expert System Customer Support and Chatbot Applications.
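    A minimal sketch of the supervised approach described above, assuming Python and scikit-learn: each training utterance is tagged with an intent, and a classifier learns to map new requests to one of those intents. The utterances and intent names are invented; a real chatbot would need far more tagged examples per intent.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: each utterance is tagged with the user intent it expresses.
utterances = [
    "what is my account balance", "how much money do I have",
    "I lost my credit card", "my card was stolen yesterday",
    "what time do you open", "what are your opening hours",
]
intents = [
    "check_balance", "check_balance",
    "report_lost_card", "report_lost_card",
    "opening_hours", "opening_hours",
]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(utterances, intents)

print(model.predict(["when do you open on Saturday"]))  # expected: opening_hours
```

    With only a handful of examples per intent, as here, accuracy degrades quickly for phrasings far from the training set, which is exactly the data-volume limitation the post describes.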
  • Natural Language Understanding: What is it and how is it different from NLP
    Have you ever asked, "Alexa, what's the weather like outside?" or "Siri, how is the traffic this morning?" If so, you received a data-supported answer from your personal digital assistant. How can it understand that you are interested in the weather in a specific location, or in the traffic on the exact route you travel every morning? The answer is NLU: Natural Language Understanding. This AI-centered capability enables voice technology, like Siri, Cortana, Alexa and Google Assistant, to deduce what you actually mean, regardless of how you express it.
    Natural Language Understanding (NLU) is defined by Gartner as "the comprehension by computers of the structure and meaning of human language (e.g., English, Spanish, Japanese), allowing users to interact with the computer using natural sentences". In other words, NLU is Artificial Intelligence that uses computer software to interpret text and any type of unstructured data. NLU can digest a text, translate it into computer language and produce an output in a language that humans can understand.
    NLU & NLP: what is the difference? NLU and Natural Language Processing (NLP) are often confused, but they are different parts of the same process of natural language elaboration. In fact, NLU is a component of NLP: more precisely, it is the subset concerned with understanding and comprehension. Natural Language Understanding interprets the meaning that the user communicates and classifies it into the proper intents. For example, it is relatively easy for humans who speak the same language to understand each other, even though mispronunciations, choice of vocabulary or phrasing may complicate this. NLU is responsible for this task of distinguishing what is meant by applying a range of processes such as text categorization, content analysis and sentiment analysis, which enable the machine to handle different inputs. Natural Language Processing, on the other hand, is an umbrella term for the whole process of turning unstructured data into structured data (a toy sketch of the split follows this summary). NLP helps technology engage in communication using natural human language. As a result, we now have the opportunity to hold a conversation with virtual technology in order to accomplish tasks and answer questions.
    Formulating meaningful answers in response to users' questions is the domain of Cogito Answers. This Cogito solution supports businesses through customer experience management and automated personal customer assistants. By employing Cogito Answers, businesses provide precise, relevant answers to customer requests on first contact. Interested in improving your business's customer support experience? Cogito Answers makes every step of the support process easier, faster and less expensive for both the customer and the support staff.
    Want to learn more? Download the Cogito Answers brochure.
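    A tiny illustration of the NLP/NLU split described above: the NLP step does structural processing (tokenization and normalization), while the NLU step interprets the processed tokens as an intent plus a slot. The keyword rules below are assumptions made for the example, not how Cogito or any commercial NLU engine works.

```python
import re

def nlp_preprocess(utterance: str) -> list:
    """NLP step: structural processing only (lowercasing + tokenization)."""
    return re.findall(r"[a-z']+", utterance.lower())

# Illustrative intent lexicon; a real NLU engine would use far richer linguistic knowledge.
INTENT_KEYWORDS = {
    "get_weather": {"weather", "rain", "sunny", "forecast"},
    "get_traffic": {"traffic", "route", "commute", "road"},
}

def nlu_interpret(tokens: list) -> dict:
    """NLU step: map the processed tokens to an intent and a naive 'location' slot."""
    scores = {intent: len(keywords & set(tokens)) for intent, keywords in INTENT_KEYWORDS.items()}
    intent = max(scores, key=scores.get)
    location = tokens[tokens.index("in") + 1] if "in" in tokens[:-1] else None
    return {"intent": intent, "location": location}

print(nlu_interpret(nlp_preprocess("Alexa, what's the weather like in Boston?")))
# -> {'intent': 'get_weather', 'location': 'boston'}
```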
  • Proof of Concept: An Essential Step Before Adopting AI Solutions
    Why is a proof of concept one of the key success factors for an AI-based project in the enterprise data world? Artificial Intelligence (AI) is one of the most important business opportunities for 2019. While there are many definitions of AI (and many people still don't have a clear idea of the range of technologies we encounter when talking about AI), we can all agree that AI is changing our lives. It is essentially a technology that simulates capabilities we normally associate with the human brain, and it can address a variety of issues such as process automation, customer interaction and information intelligence.
    The PoC: a taste of Artificial Intelligence. As with other technologies, any business considering AI should use a proof of concept (PoC) approach to move from concept to deployment. Take the case of text analytics for enterprise AI, where Gartner recommends that data and analytics leaders "conduct flash or quick proofs of concept to test a vendor's capability with a small sample of data to narrow down the options, then plan for a full ontology of domain-specific data for a comprehensive evaluation of the tools' capabilities" ("Market Guide for Text Analytics", November 2018).
    Why you need a PoC. Implementing an AI-based text analytics solution doesn't have to be long and complicated. Choosing the right partner for your projects is key to starting your AI journey with solid forethought, planning, expertise and best practices. This way, you'll be positioned to implement the solution best suited to your needs and stay ahead of the competition without straining your budget. A proof of concept provides a concrete demonstration, using real output, of the technology's ability to solve your real problem. It is the best way to understand whether a use case can really achieve positive benefits from that specific application; essentially, it can determine whether an AI solution will be successful or not. To get started, identify the problems you need to solve and make sure that what you're asking of the technology is realistic: some issues may not be suited to an AI-based technology. In addition to the problems you'd like to solve, you'll also want to focus on the internal skills your teams may need. Your partner vendor should provide the guidance you need by working with you to evaluate the scope of your AI projects.
    Today, implementing a project that can rapidly deliver business value and demonstrate relevant business impact is mandatory. For emerging technologies such as AI, a PoC is generally the most cost-effective path to demonstrate quick wins and build confidence. A PoC allows you to quickly compare different solutions and test a vendor's capability (one simple way to score a PoC is sketched after this summary). Vendors who promote a PoC approach will accelerate your benchmark of AI-based solutions, and you'll want to work with an AI technology partner that can provide experience and guidance. Among the main factors in a successful AI-based project are proper analysis and effective implementation: a proof of concept can help a business understand whether an existing process should be replicated as-is with AI or whether it is better to combine AI with other processes and techniques.
    In summary, PoCs support the decision-making process by allowing the business to:
    • test the technologies and methodologies to be used
    • deliver more immediate and concrete value
    • quickly compare different solutions and approaches
    • gain experience, skills and confidence in AI
    What's next? A PoC is a "closed" but working solution that helps an organization understand whether an AI-based project can be successful. The next step is the creation of a detailed approach and an implementation plan, including cost estimates, timelines and a high-level estimate of the work required to take the PoC to the next stage and begin building the real project. With 300+ AI projects and counting, Expert System has consolidated a winning AI methodology. Request a demo to find out how our AI technology works, and request a PoC to understand and test how it can be applied to your use case!
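    As a sketch of one concrete way to score a "flash" PoC on a small hand-labelled sample, assuming scikit-learn: compare the candidate solution's categories against the ground truth and read off precision and recall per category. The category labels and outputs below are invented for illustration.

```python
from sklearn.metrics import classification_report

# Invented hand-labelled sample (ground truth) and a candidate solution's output.
ground_truth  = ["claim", "claim", "underwriting", "claim", "underwriting", "other"]
vendor_output = ["claim", "other", "underwriting", "claim", "underwriting", "other"]

# Per-category precision/recall gives a like-for-like basis for comparing vendors
# before committing to a full domain ontology and a comprehensive evaluation.
print(classification_report(ground_truth, vendor_output))
```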
  • Explainable Artificial Intelligence: How Does it Work?
    It has been said that questions of "why" are best left to philosophy, while science is the best realm to tackle questions of "how." The field of Artificial Intelligence (AI) is often where these worlds intersect and, at times, where they collide. "I don't know why it works, but it works" doesn't actually work when we talk about AI. Instead, we need Explainable Artificial Intelligence (XAI). For example, why did the software used by the cardiac surgery department select Mr. Rossi out of hundreds of people on the list for a heart transplant? Why did the autopilot divert the vehicle off the road, injuring the driver but avoiding the mother pushing a stroller across the street? Why did the monitoring system at the airport choose to search Mr. Verdi instead of other passengers? In a previous post we talked about the need to understand the logic behind an AI system, but we still need to go deeper to understand the benefits of XAI versus an unknown. AI presents many dilemmas for us, and the answers are neither immediate nor unambiguous. However, I believe we can say that both "how" and "why" should be addressed. "To trust AI, I have to better understand how it works" is one of the comments we hear most often.
    The wave of excitement around AI, no doubt fueled by the marketing investments of major IT players, is now being tempered. The disappointment caused by the resounding failure of software that promised to learn perfectly and automatically (and magically), and therefore to reason at the human level, is bringing expectations about the real uses and benefits of AI back in line with reality. Today AI is generally a little bit "artificial" and a little bit "intelligent." In many situations it is not very different from traditional software: a computer program capable of processing input, based on precise pre-coded instructions (the so-called source code), that returns an output. The difference compared to 20 years ago is the greater computing power (today we have supercomputers) and a much larger number of inputs (the infamous big data).
    So, to understand how AI works, you need to know how the software works and whether it operates with any prejudices; that is, is an output the result of the input, or is it predetermined regardless of the input? The first aspect can be tackled by an XAI system, where X stands for explainable: the mechanisms and criteria by which the software reasons, and why it produces certain results, must be clear and evident (a minimal "glass box" sketch follows this summary). The second aspect requires an ethical AI that is free from prejudices, whether introduced by those who programmed the software, by the dataset used for learning or by way of a cyber attack. An AI system whose functioning is impossible to understand is a black box, the opposite of an XAI and ethical system. To me, a black box is a far worse solution than a glass box (or a white or clear box, for that matter). That is why the "I don't know why it works, but it works" approach can't work when we talk about Artificial Intelligence.
    English translation by Analisi Difesa.
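    A minimal "glass box" illustration of the explainability idea, assuming scikit-learn: a linear model whose per-feature contributions to a single decision can be read off directly. The features, data and prioritization framing are invented for the example and are not a real scoring system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: three inspectable features per candidate and a binary "prioritized" label.
feature_names = ["age", "severity_score", "months_on_waitlist"]
X = np.array([[45, 0.9, 12], [60, 0.4, 3], [50, 0.8, 10], [70, 0.3, 2]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = prioritized, 0 = not prioritized

model = LogisticRegression().fit(X, y)

candidate = np.array([55.0, 0.85, 9.0])
contributions = model.coef_[0] * candidate  # per-feature contribution to the decision score
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")          # reasons a reviewer can inspect and challenge
print("decision:", int(model.predict(candidate.reshape(1, -1))[0]))
```

    The same question can be asked of a deep network, but there the contributions are not directly readable, which is exactly the black-box problem the post describes.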
  • What is a Thesaurus and Why is it a Whole Other Thing from a Dictionary?
    What is a thesaurus? To understand it better, let's look at a simple example. Consider the word "house," which a dictionary defines as "a building for human habitation, in which people live." Alongside the definition, a dictionary also gives the etymology, the spelling and the pronunciation. If you look up the same word in a thesaurus, it suggests synonyms ("home, habitation, place of residence, homestead", etc.) but it does not explain the meaning of words. While a dictionary shows a word's definition and how it is used in speech (noun, verb, adjective, etc.), when you want to find similar words you turn to a thesaurus, which sometimes also includes words with the opposite meaning, antonyms. So, to briefly explain the difference between a dictionary and a thesaurus: a thesaurus is not an ordinary dictionary, that is, a list of single terms and their definitions in alphabetical order; instead, it groups "clusters" of words with similar meanings together.
    The importance of a thesaurus for text analytics. Any text analytics tool needs a detailed thesaurus to be able to understand and identify all the relevant concepts and data (a toy sketch of the idea follows this summary). An organization's thesaurus includes and describes the objects and relationships (products, materials, geographies, people, etc.) that are essential to its business. However, available thesauri can hardly cover 100% of the terminology related to a specific domain, either because it would be too costly to identify all possible forms of a term, or because the ways these forms occur cannot be predicted, let alone their various misspellings.
    Based on Expert System's Cogito technology, Cogito Studio Express is the ontology editor that coordinates the workflows and resources required to create, manage, enrich and apply thesauri and ontologies to content for semantic enrichment. Cogito Studio Express helps companies efficiently leverage domain-specific thesauri for entity extraction, extending the terminology of the thesaurus with high quality, and allows multiple users to work daily on multilingual thesauri. It lets you enrich existing thesauri or import new ones, showing immediately how they apply to content. Compared to other applications, Cogito Studio Express makes it simple for end users to design and maintain their taxonomy/ontology and enables the development of powerful text analytics solutions for categorization or extraction. Thanks to these features, organizations can simplify and accelerate their development while significantly reducing their operating costs.
    Want to learn more? Download the Cogito Studio Express brochure.
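    A toy sketch of how a thesaurus supports text analytics, under the assumption that a flat Python dictionary stands in for the thesaurus and simple whitespace tokenization is enough: clusters of synonyms are mapped back to one preferred term so that extraction works on concepts rather than surface spellings.

```python
# Toy thesaurus: each preferred term maps to a cluster of synonyms (illustrative only).
THESAURUS = {
    "house": ["home", "residence", "dwelling", "homestead"],
    "vehicle": ["car", "automobile", "truck", "van"],
}

# Invert once: every synonym (and the preferred term itself) points back to the concept.
LOOKUP = {form: concept for concept, forms in THESAURUS.items() for form in forms + [concept]}

def extract_concepts(text: str) -> list:
    """Return the preferred terms for every thesaurus entry found in the text."""
    return [LOOKUP[token] for token in text.lower().split() if token in LOOKUP]

print(extract_concepts("the automobile was parked outside the dwelling"))
# -> ['vehicle', 'house']
```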

Author: hits1k
