There are five important elements to your success in business:
Our proprietary technology allows us to offer you unprecedented services:
- Unlimited traffic
- Fast reaction, weeks instead of months or years
- And best of all, we deliver or it is free
The advantage of using technology to generate traffic is that it frees you to work on the other four factors that determine your success.
So even if you fail more than once, and even alienate your prospects, our technology keeps bringing you more and more people, so you can keep correcting your mistakes until you get it right and succeed in your business. This is what we do for you; you do the rest.
This is what we do for you: provide unlimited traffic, as much as you want!
Just contact us for a free consultation.
Your products or services are of paramount importance; your visitors must want what you have to offer. It is obvious, right? But even if you fail here, you can correct the issue because we keep bringing you people. What good is the best product if no one comes to see it?
All these visitors found you online, so if your price is not competitive, they can just as easily find your competitors.
You must be able to explain quickly and clearly what you offer.
How you will deliver, get paid, and so on is probably the easiest element, but it is just as important.
- Five Principles to Advance AI at the Air Force
The Air Force has been on an almost three-year journey to integrate AI into operations, and that effort will soon be more apparent as the service declassifies its strategy, Capt. Michael Kanaan, the service’s co-chair for artificial intelligence, said June 26 at the AI World Government conference in Washington, D.C. “We had to find a way to get us to a place where we could talk about AI in a pragmatic, principled, meaningful way,” said Kanaan, in an account in C4ISRNet. During his speech, Kanaan laid out five principles that have guided the Air Force’s work with artificial intelligence in the meantime. Among them:
  - Technological barriers will be a significant hurdle. While the service has made it a point to limit technological obstacles, contractors may face higher-priced products geared toward security-driven government programs compared with less expensive commercial programs. A new attitude toward commercial off-the-shelf technology within the service can help, he said.
  - Data needs to be treated like a strategic asset. “We used to ask the question, if a tree falls in the forest, does it make a sound? Well, in the 21st century the real question to ask is, was something there to measure it?” he said. He explained this involves looking at when and how to digitize workflows.
  - The Air Force must be able to democratize access to AI. “This is an opportunity now to say, machine learning as our end state, if done right, should be readable to everyone else,” Kanaan said. This will involve balancing support and operations and taking into consideration the reality that the demographics of the traditional workforce are going to shift, Kanaan explained. “Not looking at the top one percent, but focusing on the 99 percent of our workforce,” he said.
“The Air Force, of those 450,000 people, 88 percent are millennials [adults under 40].” Looking to digital natives in the integration process will be valuable because this younger slice of the workforce already has insights into how this technology works, he suggested. Read the source post at C4ISRNet.
- Game Theory and AI Systems: Use Case For Autonomous Cars
By Lance Eliot, the AI Trends Insider
When you get onto the freeway, you are essentially entering into the matrix. For those of you familiar with the movie of the same name, you’ll realize that I am suggesting that you are entering into a kind of simulated world as your car proceeds up the freeway onramp and into the flow of traffic. Whether you know it or not, you are indeed opting into playing a game, though one much more serious than an amusement park bumper cars arena. On the freeway, you are playing a game of life-and-death. It might seem like you are merely driving to work or trying to get to the ballgame, but the reality is that for every moment you are on the freeway you are risking your life. Your car can go awry, say it suddenly loses a tire, and you swerve across the lanes, possibly ramming into other cars or going off a freeway embankment. Or, you might be driving perfectly well, and all of a sudden a truck ahead of you unexpectedly slams on its brakes and you crash into the truck. I hope this doesn’t seem morbid. Nor do I want to appear to be an alarmist. But, you have to admit, these scenarios are all possible and you are in fact risking your life while on the freeway. For a novice driver, such as a teenager starting to drive, you can usually see on their face an expression that sums up the freeway driving circumstance: abject fear. They know that one wrong move can be fatal. They are usually somewhat surprised that anyone would trust a teenager to be in such a situation of great responsibility. Most teenagers are held in contempt by adults for a lack of taking responsibility seriously, and yet we let them get behind the wheel of a multi-ton car and drive amongst the rest of us. That’s not to suggest that it’s only teenage drivers who understand this matter. There are many everyday drivers who know how serious being on the freeway is.
They grip the steering wheel with both hands, arch their backs, and pay close attention to every moment while on the freeway. Meanwhile, there are drivers that have gotten so used to driving on the freeway that they act as though they are at a bumper car ride and don’t care whether they cut off other drivers or nearly cause accidents. They zoom along and seem to not have a care in the world. One always wonders whether those drivers are the ones that get into the accidents that you see while on the freeway. Are they more prone to accidents, or are they actually more able to skirt around accidents they perhaps indirectly caused but managed to avoid getting entangled in themselves? For my article about how greed motivates drivers and its impacts on self-driving cars, see: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/
Leveraging Game Theory
Anyway, if you are willing to concede that we can think of freeway driving as a game, you then might also be willing to accept the idea that we can potentially use game theory to help understand and model driving behavior. With game theory, we can consider the freeway driving and the traffic to be something that can be mathematically modeled. This mathematical model can take into account conflict. A car cuts off another car. One car is desperately trying to get ahead of another car. And so on. The mathematical model can also take into account cooperation. As you enter onto the freeway, perhaps other cars let you in by purposely slowing down and making an open space for you. Or, you are in the fast lane and want to get over to the slow lane, so you turn on your blinker and other cars let you make your way from one lane to the next. There is at times cooperative behavior on the freeway, and likewise at times there is behavior involving conflict.
If this topic generally interests you, there’s key work by John Glen Wardrop, who produced what are considered the core principles of equilibrium in traffic assignment. Traffic assignment is the formal name given to modeling traffic situations. He developed mathematical models that showcase how we seek to minimize our cost of travel, and that we potentially can reach various points of equilibrium in doing so. At times, traffic suffers and can be modeled as doing so due to the “price of anarchy,” which is based on presumably selfish behavior. See my article about the prisoner’s dilemma and tit-for-tat driving: https://aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/ See my article about bounded irrationality and driving behavior: https://aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/ For those of you who are into computer science, you likely are familiar with the work of John von Neumann. Among his many contributions to the field of computing and mathematics, he’s also known for his work involving zero-sum games. Indeed, he made use of Brouwer’s fixed-point theorem in topology, and had observed that when you dissolve sugar in a cup of coffee, there’s always a point without motion. We’ll come back to this later on in this exploration of game theory and freeway traffic. Let’s first define what a zero-sum game consists of. In a zero-sum game, the choices by the players will neither decrease nor increase the amount of available resources, and thus they are competing over a bounded set of resources. Each player wants their piece of the pie, and in so doing is keeping that piece away from the other players. The pie is not going to expand or contract; it stays the same size. Meanwhile, the players are fighting over the slices, and when someone else takes a slice it means there’s one less for the other players to have.
A non-zero-sum game allows for the pie to be increased, and thus one player doesn’t necessarily benefit at the expense of the other players. When you are on the freeway, you at times experience a zero-sum game, while at other times it is a non-zero-sum game. Suppose you come upon a bunch of stopped traffic up ahead of you. You realize that there’s an accident and it has led to the traffic halting. You are going to get stuck behind the traffic and be late to work. Looking around, you see that there’s a freeway offramp that you could use to get off the freeway and take side streets to get around the snarl. It turns out that the freeway traffic is slowly moving forward up toward the blockage, and meanwhile other cars are also realizing that maybe they should try to get to the offramp. You are in the fast lane, which is the furthest lane from the exit ramp. The cars in the other, closer lanes are all vying to make the exit. They don’t care about you. They care about themselves making the exit. If they were to let you into their lane, it would increase your chances of getting to the offramp but simultaneously decrease their chances, because the traffic is slowly moving forward and will gradually push past the offramp. There’s a short time window involved and it’s a dog-eat-dog situation. A zero-sum game. But suppose instead the situation involved all the cars behind the snarl sharing access to the offramp. Politely and with civility, the cars each allowed other cars around them to make the offramp. Furthermore, there was an emergency lane that the cars opted to use, which otherwise wasn’t supposed to be used, and it opened up more available resources to allow the traffic to flow over to the exit. A non-zero-sum game (of sorts). Game theory attempts to use applied mathematics to model the behavior of humans and animals, and in so doing explain how games are played.
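The offramp scenario can be written down as a pair of payoff tables. This is a minimal sketch for illustration only: the strategy names ("push" and "yield") and the payoff numbers are hypothetical, chosen just to make the zero-sum versus non-zero-sum contrast visible, and are not drawn from any actual traffic model.

```python
# Two drivers vying for the same offramp gap: each can "push" (take the
# gap) or "yield" (let the other driver in). Payoffs are listed as
# (row driver, column driver).

# Zero-sum version: the gap is a fixed pie, so one driver's gain is
# exactly the other's loss, and every outcome sums to zero.
zero_sum = {
    ("push", "yield"):  (1, -1),
    ("yield", "push"):  (-1, 1),
    ("push", "push"):   (0, 0),   # standoff: neither driver gains
    ("yield", "yield"): (0, 0),   # both hang back: neither driver gains
}

# Non-zero-sum version: opening the emergency lane adds capacity, so the
# pie can grow and mutual cooperation leaves both drivers better off.
non_zero_sum = {
    ("push", "yield"):  (1, -1),
    ("yield", "push"):  (-1, 1),
    ("push", "push"):   (-2, -2),  # near-collision hurts both drivers
    ("yield", "yield"): (2, 2),    # both reach the exit via the extra lane
}

def is_zero_sum(game):
    """A game is zero-sum when every outcome's payoffs sum to zero."""
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False
```

In the non-zero-sum table, mutual yielding (with the extra lane) pays both drivers more than any win-lose outcome, which is exactly the "growing pie" described above.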
This can be done in a purely descriptive manner, meaning that game theory will only describe what is going on in a game. It can also be done in a prescriptive manner, meaning that game theory can advise about what should be done when playing a game.
Applying Game Theory To Autonomous Cars
What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are using game theory to aid in modeling the traffic that will occur with the advent of AI self-driving cars. There are some that believe in a nirvana world whereby all cars on the roadways will be exclusively AI self-driving cars. This provides a grand opportunity to essentially control all cars, and do so in a macroscopic manner. Presumably, either by government efforts or by other means, we could set up some master system that would direct the traffic on our roads. Imagine that when you got onto the freeway, all of the cars on the freeway were under the control of a master traffic flow system. Each car would strictly obey the master traffic flow system. It alone would determine which lane each car would be in, what the speed of the car would be, when it would change lanes, and so on. In this scenario, it is assumed that there would never be traffic snarls again. Somehow the master traffic flow system would prevent traffic snarls from occurring. All traffic would magically flow along at maximum speeds, and we could increase the speed limit to, say, 120 miles per hour. Pretty exciting! But this is something that seems less based on mathematics and more on a hunch and a dream. It’s also somewhat hard to believe that humans are going to be willing to give up the control of their cars to a master traffic flow system.
I realize you might immediately point out that if people are willing to cede control of the driving task to an AI-based self-driving car, it’s a simple next step to then accept that their particular AI self-driving car must obey some master traffic control system. We’ll have to wait and see whether people will want their AI self-driving car to be an individualized driver, or whether they’ll accept that their individualized driver will become a robot Borg of the larger collective. Anyway, even if all of this is interesting to postulate, it still omits the real-world aspect that we are not going to have all and only AI self-driving cars for a very long time. In the United States alone, there are 200+ million conventional cars. Those conventional cars are not going to disappear overnight and be replaced with AI self-driving cars. It’s just not economically feasible. As such, we’re going to have a mixture of AI self-driving cars and conventional cars for quite some time. Let’s make that even longer, too, due to the fact that there are different levels of AI self-driving cars. A true self-driving car is considered to be at Level 5. That’s a self-driving car for which the AI does all of the driving. There is no need for a human driver. There is indeed usually no provision for a human driver, and the driving controls such as the steering wheel and pedals are not even present. For self-driving cars less than Level 5, the driving task is co-shared between the human driver and the AI. We might as well then say that the self-driving cars that are less than Level 5 are pretty much in the same boat as the conventional cars. This is because the human driver can still take over the driving task (though, for Level 4, this is not yet quite settled: some view that a Level 4 would still have car controls for humans, while others insist it should not).
If we have even one ounce of human driving, we’re back to the situation that it’s going to be problematic to have a master traffic flow system that commands all cars to obey. You might argue that maybe when a less-than-Level-5 self-driving car gets onto the freeway we could jury-rig those cars to obey the master traffic flow system, but this seems like a stretch of credibility as to how it would play out. You could even stand this topic on its ear by making the following proposal. Forget about AI self-driving cars per se. Instead, let’s require all cars to have some kind of remote car driving capability. We add this into all cars, conventional or otherwise. When any car gets onto the freeway, it must have this remote control feature included; otherwise it is banned from getting onto the freeway. So, we’ve now reduced all such cars to follow-along automata that the master traffic flow system can control. We would somehow lock out the human driving controls and only allow the use of the remote control during the time that the car is on the freeway. See my article about the levels of AI self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/ See my article about swarms and AI self-driving cars: https://aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/ If we did this, it might give us the nirvana traffic flow advantages that some claim they see in the future. And it would still allow for human driving, but just not on the freeways, or maybe only on the freeways at off-hours, with the master flow system taking over all such cars on the freeways during the morning and evening traffic times. We then wouldn’t need to be in a rush to perfect the AI self-driving cars, since instead we’ve just outfitted cars with this remote control feature. It would be a lot easier than trying to get a car to drive like a human does, which is what the AI self-driving car efforts are trying to achieve.
Well, I really doubt we’ll all accept the notion of having a remote control driving feature placed into our conventional cars. This seems like something that society at large would have severe heartburn over. It has too much of a Big Brother smell to it. That’s actually why, so far, there seems to be such overall support for AI self-driving cars. Most people tend to assume that an AI self-driving car will obey whoever the human occupant is. It’s like having your own electronic chauffeur that is always at your beck and call. If instead it was being pitched that AI self-driving cars would allow for governmental control of all car traffic, and that wherever you wanted your AI self-driving car to go would first need to be cleared by the government, I’d bet we’d have a lot of people saying let’s put the brakes on this AI self-driving car thing. Now, it could be that we at first have AI self-driving cars that are all about individual needs. You are able to use your AI self-driving car to drive you wherever you want to go, and however you want to get there. But then there’s a gradual realization that it might be prudent to collectively guide those AI self-driving cars. And so, via V2I (vehicle-to-infrastructure) communication, we creep down that path by having the roads tell your AI self-driving car which road to take and how fast to go. This then expands and eventually reaches a point whereby all AI self-driving cars are doing this. The next step becomes master control. Ouch, we got back to that. It’s just that it might happen by happenstance over time, rather than as part of a large-scale master plan.
You might enjoy reading my piece about conspiracy theories and AI self-driving cars: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/
Predicting A Point Of Equilibrium
Returning to the use of game theory, we can at least try to do traffic simulations and attempt to see what might happen as more and more cars become AI self-driving cars, especially those that are at the vaunted Level 5. These simulations use various payoff matrices to gauge what will happen as an AI self-driving car drives alongside human-driven cars. A symmetric payoff is one that depends upon the strategy being deployed and not on the AI or person deploying it, while an asymmetric payoff does depend on who deploys the strategy. We also include varying degrees of cooperative behavior versus non-cooperative behavior. See my article about human and AI driving styles: https://aitrends.com/selfdrivingcars/driving-styles-and-ai-self-driving-cars/ See my article about simulations for AI self-driving cars: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/ John Nash made some crucial contributions to game theory and ultimately was awarded the Nobel Memorial Prize in Economic Sciences for it. His mathematical formulation suggested that when there are two or more players in a game, at some point there will be an equilibrium state such that no player can do any better than they are already doing. The sad thing is that we cannot yet predict when that equilibrium point is going to be reached – well, let’s say it is very hard to do. This is still an open research problem, and if you can solve it, good for you; it might get you your very own Nobel Prize too. Why would we want to be able to predict that point of equilibrium? Because we could then potentially guide the players toward it. On the freeway, imagine that you have a hundred cars all moving along. Some are not doing so well and are behind in terms of trying to get to work on time.
Others are doing really well, ahead of schedule, and will get to work with plenty of time to spare. All else being equal, if we had a master traffic flow system, suppose it could reposition and guide the cars so that they would all be at their best possible state. But if we aren’t able to figure out that best possible state, there’s no means to guide everyone toward it. We instead have to use a hit-and-miss approach (not literally hit, just metaphorically). In more formal terms, Nash showed that for a game with a finite number of moves, there exists a means by which the players can randomly choose their moves (a mixed strategy) such that they will ultimately reach a collective point of equilibrium, and at that point no player can further improve their situation. You might say that everyone has reached the happiest point to which they can arrive, given the status of everyone else involved too. When I earlier said it was hard to calculate the point of equilibrium, I was suggesting that it can be found but that it is computationally expensive to do so. Some of you might be familiar with classes of mathematical problems that are considered computable in polynomial time (P), and others that are NP (non-deterministic polynomial time). We aren’t sure yet whether the calculation of Nash’s point of equilibrium is P or NP. Right now it seems hard to calculate; that much we can say for sure. By the way, for those of you looking for a Nobel Prize, please let us know if P = NP.
Conclusion
Game theory will increasingly become important to designing and shaping traffic flow on our roads, particularly once we begin to see the advent of true Level 5 AI self-driving cars. The effort to mathematically model conflict and cooperation in our traffic will involve not only the intelligent rational human decision makers, along with their irrational behavior, but also the potentially intelligent rational (and maybe irrational) behavior of the AI of the self-driving cars.
Getting a handle on the traffic aspects will allow AI developers to better shape the AI of the self-driving cars, and will aid regulators and the public at large in experiencing what hopefully will be better traffic conditions than with human-only drivers. I don’t think we want to end up with AI self-driving cars that drive like those crazy human drivers that seem to not realize they are involved in a game of life-and-death. It’s deadly out there, and we need to make sure that the AI self-driving cars know how to best play that serious and somber game. Copyright 2019 Dr. Lance Eliot. This content is originally posted on AI Trends.
- Digital Assistants Transforming Public Service
By Deborah Borfitz, Senior Science Writer, AI Trends
Digital assistants have become a major trend in government at every level and across geographies, and could soon be a mainstay in many state and federal agencies in the U.S. Recent favorable signs include an executive order launching the American AI Initiative and the Health and Human Services Department awarding 57 spots on its Intelligent Automation/Artificial Intelligence (AI) contract, according to natural language processing (NLP) expert William Meisel, president of TMA Associates. Speaking at the AI World Government conference, held last month in Washington, D.C., Meisel says digital assistants (aka “intelligent” or “virtual” assistants) are among the most developed and least risky ways to implement AI—and “the closest to what we see in sci-fi.” Digital assistants are broadly applicable across departments and agencies looking to cut costs and boost human productivity, and have a minimum probability of failure and unintended consequences. For a citizenry looking for answers, they’re also a “nice alternative to automated systems and long hold times,” he adds. Juniper Research reports that, by 2023, one-quarter of the populace will be using digital voice assistants daily, says Meisel. By the end of this year, the global install base for smart speakers is projected to exceed 200 million units, and they’re doing as well as humans in understanding speech, he adds. NLP is the core technology, matching user text input to executable commands. Digital assistants that recognize the voice typically first convert speech to text, meaning speech recognition can be tacked onto a text-only (chatbot) solution, Meisel says. Either way, the technology generates a lot of data that can be used to personalize conversations and fix flaws in websites.
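The pipeline just described (speech recognition producing text, then NLP matching the text to an executable command) can be sketched in miniature. This toy keyword matcher is purely illustrative: the intent names and keyword sets are hypothetical, and production assistants use trained statistical models rather than simple keyword overlap.

```python
import re

# Toy sketch of the chatbot pipeline: a speech-recognition front end
# (stubbed out here) produces text, and an NLP step matches that text to
# an executable command. Intent names and keywords are hypothetical.
INTENTS = {
    "office_hours": {"hours", "open", "closed"},
    "check_status": {"status", "application", "case"},
    "find_form":    {"form", "document", "download"},
}

def match_intent(text):
    """Return the intent sharing the most keywords with the text, or None."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else None

print(match_intent("What are your office hours?"))    # office_hours
print(match_intent("Where can I download the form"))  # find_form
print(match_intent("Hello there"))                    # None
```

Logging which user inputs fall through to None is one way such a system generates the data, mentioned above, that can be used to fix flaws in the assistant and the website behind it.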
Use Cases
Among the smorgasbord of intelligent assistants in the public sector are: Emma, used by U.S. Citizenship and Immigration Services (USCIS) to help website visitors get answers and find information in English or Spanish; and Mrs. Landingham, a chatbot of the U.S. General Services Administration that works with the Slack app and guides new-hire onboarding, says Meisel. In the UK, the National Health Service has a digital assistant to help residents determine if their medical condition warrants a trip to the emergency room, he continues. The Medical Concierge Group in Uganda has built a digital assistant to advise people on their treatment options and when to see a doctor. And a chatbot based on the Haptik platform is allowing officials in Maharashtra, India, to provide conversational access to information on 1,400 public services managed by the state government. Virtual assistants that give health advice over the phone are expected to be major players as the United Nations works to meet its 2030 Sustainable Development Goals, says Meisel. The MAXIMUS Intelligent Assistant has already enhanced the customer service experience of government citizenries around the globe. In Mississippi, the MISSI chatbot assists with public services and suggests good places to visit, says Meisel. The City of Los Angeles is particularly fond of bots. Residents can turn to CHIP (City Hall Internet Personality), based on Microsoft’s Azure bot framework, if they need help filling out forms. They can also opt to receive local daily news and information via Amazon’s Alexa. Up in San Francisco, PAIGE—built atop Facebook’s wit.ai NLP engine—is assisting workers with questions about the city’s confusing procurement process, Meisel says. OpenData KC, the Facebook Messenger chatbot used by Kansas City’s open data portal, has enabled users to quickly find relevant information and datasets on a crowded online website, says Meisel.
In the Carolinas, the not-for-profit hospital network Atrium Health has a HIPAA-compliant, Alexa-based digital assistant people are using to reserve a spot at one of the system’s 31 urgent care locations.
By the Numbers
Commercial applications of digital assistants are likewise varied and widespread, he says. Last year, Bank of America launched a mobile app called Erica that customers can use to check their account balance and make transactions. Erica is gaining new users at the rate of half a million per month and has doubled (to 4,000) the ways in which clients can ask her questions. Telecommunications conglomerate Vodafone Group has TOBi to handle customer transactions from start to finish and plans to increase the number of contacts reached by chatbot six-fold over the next few years. Adobe Analytics finds 91% of 400 surveyed business decision-makers have made significant investments in voice interaction and 99% are increasing those investments, says Meisel. Close to a quarter of companies have already released a voice app, while 44% plan to do so this year, he adds. Most of those apps are for defined channels. Failures are common when companies and governments try to build a specific AI tool, says Meisel, but there is no shortage of companies standing by to help—including TMA Associates as well as Nuance Communications, Verint, and Microsoft. The biggest challenge with digital assistants, he adds, is that “you don’t know what people are going to say when they call in. You will always have customers say something you don’t expect.” The solution is to deploy slowly, using the technology to augment the human system while you learn what you don’t know—precisely what Amazon did with Alexa.
Privacy By Design
Detecting and resolving misunderstandings between humans and machines is the specialty of Ian Beaver, lead research engineer at Verint, helping to ensure intelligent virtual assistants deliver tangible productivity gains.
Interactive voice response technology and website FAQs are not enough for government agencies where “funding is pulled in multiple directions and customer service is typically not high on the priority list,” he says. Digital assistants can also better accommodate fluctuations and surges in user demand, says Beaver. They can deal with unforeseen use cases, circumstantial information, and changing user demographics and requirements, and focus on where improvements are most needed. In the public sector, agencies have a captive audience because people have no choice in their service provider, says Beaver, and “people don’t trust what they do not choose.” Users are more willing to provide identifiable data if they know there are guardrails around how it can be used—think privacy protection laws like GDPR and CCPA. Virtual assistants that offer “privacy by design” can likewise give users a greater sense of freedom to talk about sensitive topics that carry perceived repercussions, he adds. Beaver presented the U.S. Army’s SGT STAR virtual recruiter to demonstrate his points. The chatbot went live in 2006, integrated with Facebook four years later, and hit app stores in 2012, he says. It understands about 1,100 distinct user intentions, talks to 900 unique people a day, and took over the work of 55 cyber-recruiters in the Army’s live chat facility. “After a while, only 3% of conversations hit a live human,” he says. People talk to SGT STAR longer than they would to a human recruiter, and all that data quickly painted a portrait of users, says Beaver. They’re influenced by movies, don’t want to waste recruiters’ time, and family members care about what will happen to their loved one.
They will ask hard questions of the digital assistant before talking to a human recruiter, “like a test run.” Users also have a lot of practical questions, such as “How do I pay my bills when I’m deployed?” and “Am I going to have to cut my hair?” The information was used to redesign the Army’s website, which now includes answers to 400 common questions, says Beaver. Unexpected uses of SGT STAR were also discovered—notably, to disclose embarrassing, illegal, and other personal issues that could affect enrollment or Army life. “We went into the healthcare space because of how open people were.” Veterans and active-duty personnel were both looking for resources on post-traumatic stress disorder, insomnia, and other service-related mental health issues, he continues. Enlisted soldiers did not want to risk being judged or discharged by going to their supervisor for answers. A second case study looked at the now-retired SpectreScout, a chatbot built to scan all internet relay chat channels on behalf of the resource-limited cyber-crimes division of U.S. Immigration and Customs Enforcement (ICE). “We predicted the channels where the bad boys would hang out and pretended to be looking for goods [e.g., a child to exploit],” says Beaver. “We’d spin up a bunch of users and have conversations with ourselves. When humans joined in, we’d arrange a meeting with a suspect and an ICE agent would take over.” It was so successful at generating leads that it finally had to be shut down; ICE agents couldn’t keep up. So Verint built Emma to answer questions, says Beaver, including sensitive inquiries about how to enter the U.S. as a refugee or apply for asylum. The virtual assistant launched in 2015 to handle large swings in call volume triggered by mere mentions of policy changes. Emma succeeded in reducing those call spikes, without the need to retrain a bunch of ICE representatives, he notes. In fact, Emma became a go-to resource for immigration attorneys on policy matters.
Smarter Systems

In healthcare, one role of digital assistants is to create a “frictionless experience” between doctors and patients, says Eduardo Olvera, director of user experience at Nuance Communications. The company has an Alexa-like virtual assistant that extracts insights from routine exam-room dialogue and automatically uploads the information to the right spots in the electronic health record—meaning doctors can be fully present with patients rather than focused on a computer screen. In call centers, intelligent assistants can likewise be fed transcripts of phone conversations and return recommendations for improving citizen engagement, he adds.

Nuance’s new Pathfinder project is using machine learning and AI innovation to increase the conversational intelligence of virtual assistants and chatbots, says Olvera. Project Pathfinder reads existing chat logs and transcripts of conversations and uses the data to build effective dialog models, adding in missing pieces of information and modifying the flow. The company has also come up with another, yet-to-be-named conversational AI tool that will manage the work of testing for biases in the data, he says.

Systems get smarter, and interactions improve, when there is “common ground,” Olvera says. “We went from ‘please tell me in my own words’ to ‘please tell me in your own words,’ but that’s still not very grounded. What we want to see is ‘I already know. Here, it’s done.’” If-then rules only work about 35% of the time, Olvera says, while using machine learning to populate a record with everything known about a customer gets close to 80% accuracy in predicting their need. Additional AI innovation can take governments to the 90% mark that is truly transformative. Learn more about Pathfinder from Nuance Communications and about nextIT from Verint.
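Olvera's 35%-versus-80% comparison contrasts brittle hand-written routing rules with a model that predicts a customer's need from everything known about them. A minimal sketch of that contrast (the record fields, need labels, and history data are all invented for illustration):

```python
from collections import Counter

# Illustrative contrast between a hand-written if-then router and a
# data-driven predictor of a caller's likely need, in the spirit of
# Olvera's 35%-vs-80% comparison. All record fields, needs, and
# history entries are invented for illustration.

def rule_based_need(record):
    # Brittle: only handles the cases someone thought to write down.
    if record.get("bill_overdue"):
        return "payment_plan"
    if record.get("moved_recently"):
        return "address_change"
    return "unknown"

def history_based_need(record, history):
    # Data-driven: vote for the needs of past customers, weighting each
    # past case by how many attributes it shares with this record.
    scores = Counter()
    for past in history:
        overlap = sum(1 for key, value in record.items()
                      if past["record"].get(key) == value)
        scores[past["need"]] += overlap
    return scores.most_common(1)[0][0] if scores else "unknown"
```

The rule-based router answers "unknown" for any case nobody anticipated, while the history-based predictor degrades gracefully by borrowing the answer from the most similar past customers, which is why populating a record with known customer data lifts prediction accuracy.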
- Encourage Immigration of Skilled AI Technologists, Suggests Former IARPA Director
To stay on the cutting edge of AI development, the U.S. needs to better encourage immigration of skilled technologists, dedicate higher levels of education funding and maintain international alliances, according to the former head of the Intelligence Advanced Research Projects Agency, speaking recently at AI World Government in Washington, DC. China, Russia and the U.S. are all vying for top spots in AI development, said Jason Matheny, director of the Center for Security and Emerging Technology at Georgetown University and former IARPA director, in an account reported in fedscoop. The new technology could usher in a radical change to data analytics and military applications, giving technological advantage to whoever reaches broad-scale AI implementation first. Reports that China is massively outdoing the U.S. in AI development can be misleading, Matheny suggested. “We see lots of current estimates… but I have not seen really good empirics yet justifying those estimates,” he said during a fireside chat at the conference. On top of that, questions of how China is spending its money — be it on quantitative research or human development — are still unanswered. At IARPA, Matheny led investments in AI for applications in the intelligence community. How money is invested is as important as how much of it is spent, he said. “Our ability to attract and retain the world’s best and brightest computer scientists and electrical engineers is something we have greatly benefited from,” he said. That attraction comes mainly from the quality of higher education available in the U.S. Read the source article in fedscoop.
- How NASA Wants to Explore AI
NASA is working to overcome barriers that once blocked it from a full pursuit of innovations in artificial intelligence and machine learning technology, according to an account in the Federal Times. NASA has previously used AI in human spaceflight, scientific analysis and autonomous systems. It currently has multiple programs that use AI/ML: CIMON, an “Alexa for space”; ExMC, AI assistance for medical care; ASE, autonomy for scheduling in space; Multi-temporal Anomaly Detection for SAR Earth Observations; FDL, a partnership between industry and NASA through AIMs; and robots and rovers. In a June 26 presentation at the AI World Government Conference in Washington, D.C., Brian Thomas, a NASA agency data scientist and program manager for Open Innovation, spoke about how to get the best results from AI/ML while considering important policy and culture implications at the agency. Machine learning has been with us for 60 years, Thomas said, “so really the question is, why haven’t we been using this all along?” “We’ve already seen the value in these technologies,” Thomas said. “They are enabling NASA’s mission now. The problem is that we’re being held back, believe it or not, and so how can we do better.” Read the source article in the Federal Times.
- Putting AI into Practice is a Test, Say Analysts
Making sense of how to employ AI and machine learning can be difficult, said two analysts at a session at the AI World Government conference held in Washington, DC from June 24-26. Ron Schmelzer and Kathleen Walch, both managing partners and principal analysts at Cognilytica, an AI-focused analyst and advisory firm, shared insights from their work reviewing and interpreting the AI marketplace—which includes their weekly podcast, AI Today. “AI is transforming the way we work, live and interact together, but putting it into practice is more difficult than it may seem,” said Schmelzer, in a report from Signal, a publication of AFCEA International. AI will support future enterprise systems that are both intelligent and autonomous for an “always-on workforce,” the managing partners stressed. AI has been through two winters, or periods of falling interest and investment, and “now it is back in vogue… and we hope this time around it sticks,” Walch added. While industries and organizations are already experiencing changes from employing the advanced computer systems, AI is not the right fit for every use case, the experts said. According to Schmelzer and Walch, AI and other cognitive or intelligent technologies are best suited for tasks or problems involving classification and identification, such as object identification and clustering, or for conversational interfaces that use text or voice chatbots. AI also does well at performing predictive analytics on big data, or using structured data to make inferences; pattern discovery to find hidden patterns in big data; or in autonomous systems—robotic or other systems that run independently, without human interaction.
In addition, AI is well-suited for game and scenario playing, or letting computer systems discover hidden rules; or for providing hyper-personalization and recommendations, connecting pieces of information to make a larger conclusion to serve customers or users more effectively. See the source article in Signal.
- Use of AI in Government Still Lags
Adoption of AI and automated software tools has been sluggish, especially by the US government, according to a report from SearchEnterpriseAI, based on a session at the AI World Government conference held recently in Washington, DC. While companies are beginning to benefit from the use of AI tools, users are low on the learning curve about AI technologies, data scientists are in short supply and employees are sometimes unwilling to use the new technologies. Still, private sector adoption of AI is faster and more efficient than that of the federal government. “The federal government is not ready for the new world of AI,” said Bill Valdez, president of the Senior Executives Association, during a panel at the conference. Valdez cited a 2017 survey of the Senior Executive Service (SES), a class of executives specified by U.S. government civil service regulations, that indicated the executives’ awareness of AI technologies was low. Read the source article at SearchEnterpriseAI.
- AI in Government: Ethical Considerations and Educational Needs
By Deborah Borfitz, Senior Science Writer, AI Trends
In the public sector, adoption of artificial intelligence (AI) appears to have reached a tipping point with nearly a quarter of government agencies now having some sort of AI system in production—and making AI a digital transformation priority, according to research conducted by International Data Corporation (IDC). In the U.S., a chatbot named George Washington has already taken over routine tasks in the NASA Shared Services Center and the Truman bot is on duty at the General Services Administration to help new vendors work through the agency’s detailed review process, according to Adelaide O’Brien, research director, Government Insights at IDC, speaking at the recent AI World Government conference in Washington, D.C. The Bureau of Labor Statistics is using AI to reduce tedious manual labor associated with processing survey results, says conference speaker Dan Chenok, executive director of the IBM Center for The Business of Government. And one county in Kansas is using AI to augment decision-making about how to deliver services to inmates to reduce recidivism. If Phil Komarny, vice president for innovation at Salesforce, has his way, students across 14 campuses at the University of Texas will soon be able to take ownership of their academic record with a platform that combines AI with blockchain technology. He is a staunch proponent of the “lead from behind” approach to AI adoption. The federal government intends to provide more of its data to the American public for personal and commercial use, O’Brien points out, as signaled by the newly enacted OPEN Government Data Act requiring that information be in a machine-readable format. But AI in the U.S. still evokes a lot of generalized fear because people don’t understand it and the ethical framework has yet to take shape.
In the absence of education, the dystopian view served up by books such as The Big Nine and The Age of Surveillance Capitalism tends to prevail, says Lord Tim Clement-Jones, former chair of the UK’s House of Lords Select Committee for Artificial Intelligence and Chair of Council at Queen Mary University of London. The European Union is “off to a good start” with the General Data Protection Regulation (GDPR), he notes. The consensus of panelists participating in AI World Government’s AI Governance, Big Data & Ethics Summit is that the U.S. lags behind even China and Russia on the AI front. But those countries plan to use AI in ways the U.S. likely never would, says Thomas Patterson, Bradlee Professor of Government and the Press at Harvard University. Patterson’s vision for the future includes a social value recognition system that government would have no role in or access to. “We don’t want China’s social credit system or a surveillance system that decides who gets high-speed internet or gets on a plane,” Patterson says.

Risks and Unknowns

The promise of AI to improve human health and quality of life comes with risks—including new ways to undermine governments and pit organizations against one another, says Thomas Creely, director of the Ethics and Emerging Military Technology Graduate Program at the U.S. Naval War College. That adds a sense of urgency to correcting the deficit of ethics education in the U.S. Big data is too big without AI, says Anthony Scriffignano, senior vice president and chief data scientist at Dun & Bradstreet. “We’re looking for needles in a stack of needles. It’s getting geometrically harder day to day.” The risk of becoming a surveillance state is also real, adds his co-presenter David Bray, executive director of the People-Centered Coalition and senior fellow of the Institute for Human-Machine Cognition. The number of network devices will soon reach nearly 80 billion, roughly 10 times the human population, he says.
Presently, it’s a one-way conversation, says Scriffignano, noting “you can’t talk back to the internet.” In fact, only 4% of the net is even searchable, and search engines like Google and Yahoo are deciding what people should care about. Terms like artificial intelligence and privacy are also poorly defined, he adds. The U.S. needs a strategy for AI and data, says Bray, voicing concern about the “virtue signaling and posturing” that defines the space. No one wants to be a first mover, particularly in rural America where many people didn’t benefit from the last industrial revolution, but “in the private sector you’d go broke behaving this way.” Meanwhile, AI decision-making continues to grow in opaqueness and machine learning is replicating biases, according to Marc Rotenberg, president and executive director of the Electronic Privacy Information Center. After Google acquired YouTube in 2006, and switched to a proprietary ranking algorithm, EPIC’s top-rated privacy videos mysteriously fell off the top-10 list, he says. EPIC’s national campaign to advance algorithmic transparency has slogans to match its objectives: End Secret Profiling, Open the Code, Stop Discrimination by Computer, and Bayesian Determinations are Not Justice. A secret algorithm assigning personally identifiable numeric scores to young tennis players is now the subject of a complaint EPIC filed with the Federal Trade Commission, claiming it impacts opportunities for scholarship, education, and employment, says Rotenberg. Part of its argument is that the ratings system could in the future provide the basis for government rating of citizens. Replicating an outcome remains problematic, even as numerous states have begun experimenting with AI tools to predict the risk of recidivism for criminal defendants and to consider that assessment at sentencing, says Rotenberg. The fairness of these point systems is also under FTC scrutiny. 
Matters of Debate

The views of AI experts about how to move forward are not entirely united. Clement-Jones is adamant that biotech should be the model for AI because it did a good job building public trust. Michael R. Nelson, former professor of Internet studies at Georgetown University, reflected positively on the dawn of the internet age, when government and businesses worked together to launch pilot projects and had a consistent story to tell. Chenok prefers allowing the market to work—”what is 98% right with the internet”—along with industry collaboration to work through the issues and learn over time. Clement-Jones also believes the term “ethics” helps keep the private sector focused on the right principles and duties, including diversity. Nelson likes the idea of talking instead about “human rights,” which would apply more broadly. Chenok was again the centrist, favoring “ethical principles that are user-centered.” Whether or not the public sector should be leading AI education and skills development was also a matter of debate. Panelist Bob Gourley, co-founder and chief technology officer for startup OODA LLC, says government’s role should be limited to setting AI standards and laws. Clement-Jones, on the other hand, wants to see government at the helm and the focus on developing creativity across a diversity of people. His views were more closely aligned with those of former Massachusetts governor and presidential candidate Michael Dukakis, now chairman of The Michael Dukakis Institute for Leadership and Innovation. The U.S. needs to play a major and constructive role in bringing the international community together and out of the Wild West era, he says, noting that the U.S. recently succeeded in hacking the Russian electric grid.

Finding Courage

Moving forward, governments need to be “willing to do dangerous things,” says Bray, pointing to project CORONA as a case in point.
Launched in 1958 to take photos over the Soviet Union, the program lost its first 13 rockets trying to get the imaging reconnaissance satellite into orbit but eventually captured the film that helped end the Cold War—and later became the basis of Google Earth. Organizations may need a “chief courage officer,” agrees Komarny. “The proof-of-concept work takes a lot of courage.” Pilot projects are a good idea, as was done in the early days of the internet, and need to cover a lot of territory, says Krigsman. “AI affects every part of government, including how citizens interact with government.” “Multidisciplinary pilot projects are how to reap benefits and get adoption of AI for diversity and skills development,” says Sabine Gerdon, fellow in AI and machine learning with the World Economic Forum’s Centre for the Fourth Industrial Revolution. She advises government agencies to think strategically about opportunities in their country. Government also has a big role to play in ensuring the adoption of standards within different agencies and areas, Gerdon says. The World Economic Forum has an AI global consensus platform for the public and private sectors that is closing gaps between different jurisdictions. The international organization is already solving some of the challenges, says O’Brien. For example, it has convened stakeholders to co-design guidelines on responsible use of facial recognition technology. It also encourages regulators to certify algorithms fit for purpose rather than issuing a fine after something goes wrong, which could help reduce the risks of AI specific to children.

Practical Strides

Canada has an ongoing, open-source Algorithmic Impact Assessment project that could serve as a model for how to establish policies around automated decision-making, says Chenok. Multiple European countries have already established ethical guidelines for AI, says Creely. Even China recently issued the Beijing AI Principles.
The Defense Innovation Board is reportedly also talking about AI ethics, he adds, but corporations are still “all over the place.” Public-private collaboration in the UK has established some high-level principles for building an ethical framework for artificial intelligence, says Clement-Jones. AI codes of conduct now must be operationalized, and a public procurement policy developed. It would help if more legislators understood AI, he adds. Japan, to its credit, is urging the industrialized nations composing the G10 to work on an agreement regarding data governance to head off the “race to the bottom with AI use of data,” Clement-Jones continues. And in June, the nonprofit Institute of Business Ethics published Corporate Ethics in a Digital Age with practical advice on addressing the challenges of AI from the boardroom. The cybersecurity framework of the National Institute of Standards and Technology (NIST) could be used by governments around the world, says Chenok. The AI Executive Order issued earlier this year in the U.S. tasked NIST with developing a plan for federal engagement in the development of standards and tools to make AI technologies dependable and trustworthy. IEEE has a document to address the vocabulary problem and create a family of standards that are context-specific—ranging from the data privacy process to automated facial analysis technology, says Sara Mattingly-Jordan, assistant professor for public administration and policy at Virginia Tech who is also part of the IEEE Global Initiative for Ethical AI. The standards development work (P7000) is part of a broader collaboration between business, academia, and policymakers to publish a comprehensive Ethically Aligned Design text offering guidance for putting principles into practice. Work is underway on the third edition, she reports.
The Organization for Economic Co-operation and Development (OECD) has guidelines based on eight principles—including being transparent and explainable—that could serve as a basis for international policy, says Rotenberg. The guidelines have been endorsed by 42 countries, including the U.S., where some of the same goals are being pursued via the executive order.

Food for Thought

“We may need to consider restricting or prohibiting AI systems where you can’t prove results,” continues Rotenberg. Tighter regulation will be needed for systems used in decision-making about criminal justice than for areas such as climate change, where agencies worry less about the impact on individuals. Government can best serve as a conduit for “human-centered design thinking,” says Bray, and help map personal paths to skills retraining. “People need to know they’re not being replaced but augmented.” Citizens will ideally have access to retraining throughout their lifetime and have a “personal learning account” where credits accumulate over time rather than over four years, says Clement-Jones. People will be able to send themselves for retraining instead of relying on their employer. With AI, “education through doing” is a pattern that can be scaled, suggests Komarny. “That distributes the opportunity.” AI ethics and cultural perspectives are central to the curriculum of a newly established college of computing at the Massachusetts Institute of Technology (MIT), says Nazli Choucri, professor of political science at the university. That’s the sort of intelligence governments will need as they work to agree on AI activities that are unacceptable. Choucri also believes closing the gap between AI and global policy communities requires separate focus groups of potential users—e.g., climate change, sustainability and strategies for urban development. Improving AI literacy and encouraging diversity is important, agrees Devin Krotman, director of prize operations at IBM Watson AI XPRIZE.
So are efforts to “bridge the gap between the owners [trusted partners] of data and those who use data.” Team composition also matters, says O’Brien. “Data scientists are the rock stars, but you need the line-of-business folks as well.” Additionally, government needs to do what it can to foster free-market competition, says Krigsman, noting that consolidation is squeezing out smaller players—particularly in developing countries. Public representatives at the same time need to be “skeptical” about what commercial players are saying. “We need to focus on transparency before we focus on regulation.” For more information, visit AI World Government.
- Driver Traffic Guardians And Human Behavior Quirks Impacting AI Autonomous Cars
By Lance Eliot, the AI Trends Insider
Holiday traffic can be difficult to navigate. I had just gotten done doing some shopping at a popular mall and was pulling out of the mall parking lot, trying to enter into traffic on a rather busy street that had lots of cars zipping along. Though you might think that people would be in a good mood because the holidays were just around the corner, it seemed that most drivers were crazed and driving as though it was their last day on Earth. Drivers were snarling at each other, cutting each other off, and otherwise driving as if it were a dog-eat-dog occasion. Suddenly, a car came to a stop just before the mall exit that I was waiting at and offered to let me into the stream of traffic. I assumed at first that the driver was probably trying to come into the mall at this juncture and so was letting me go first, getting my car out of the way, so that they could then come into the mall. But the driver didn’t have a turn signal on, which would have indicated they were going to turn into the mall. Furthermore, I knew this was considered an exit from the mall, and though you could drive into the exit area, it tended to confuse drivers in the mall, who were usually all jockeying at the exit to get out. I smiled at the driver and did a quick wave with my hand, suggesting that they should go ahead and continue forward on the street and not continue to hold up traffic behind them.
For more about head nods and drivers, see article: https://aitrends.com/selfdrivingcars/head-nod-problem-ai-self-driving-cars/
For why greed is a key motivator in driving, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/
For the tit-for-tat of driving approaches, see my article: https://aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/
For my article about road rage, see: https://aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/
I also did a bit of a head shake and a hand movement warning that this was not a proper and safe means to enter the mall. I hoped that my gestures would be taken in a positive manner. I was trying to help out this other driver. I figured too that perhaps the driver was trying to help me by holding back traffic; doing so was a quite kind and thoughtful act, and I thought that my waving and head nodding might be a quid-pro-quo in return (aiding them by warning that coming into the exit area might be dangerous for them, out of consideration for their safety). The driver of the car made no particular head motion or arm waving in return. Meanwhile, the driver kept their car in this holding position. I could see that the eyes of the driver were turned in my direction and it seemed hard to believe that they had not seen my waving and head gestures. Just in case, I rolled down my window and put my arm outside, and did a large waving motion to make it abundantly clear that they should move ahead and not continue to keep the street blocked. I was somewhat leery, though, of prolonging the matter. First, it could erupt into road rage. The driver that was holding back the traffic might misinterpret my gestures and think that I was perhaps holding up the middle finger, as though insulting them. The traffic behind me was getting exasperated because they wanted to get out of the mall parking lot and could see that I had an opportunity to move forward.
They probably were having rather foul thoughts about me. Why wasn’t that dolt pulling out into the street? You might even be wondering why I would not go ahead and pull out into the street, taking advantage of the car that was holding back traffic. I suppose that I could have done so, and there have been times that I have made use of such a situation to proceed ahead. In this case, my main hesitation was twofold. My first point of hesitation was that the driver that was holding back traffic did not look particularly trustworthy to me. This is a judgement call, I realize. I was somehow slightly suspicious that I might move forward into traffic and the driver might suddenly speed up. I’ve seen this happen before. A driver that was intending to hold back traffic suddenly opts not to do so, and moves forward unexpectedly, catching the other driver off-guard, which can at times lead to confusion and, worse still, a car accident. My second point of hesitation was that the traffic on this particular street was moving along at a fast clip and there was a tremendous volume of traffic. As such, some cars were opting to go past the stopped driver and then cut into the lane that the driver was blocking. If I went into the spot that the driver was blocking for me to get into the street, there was also a chance that another car might go around that driver and then try to whip into “my” lane at that moment, and we’d end up in a potential collision. All in all, I was willing to let the driver proceed and wait for a moment when the traffic had slowed down sufficiently that I could snake my way into the lane. I didn’t really want someone holding up traffic for me. It was a presumably kind gesture to do so, but it actually was making the situation dicey. They probably had no clue as to why I was hesitating and just assumed that I was unaware of what a nice gesture was being offered to me. I doubt the driver had contemplated and calculated the risks involved in this maneuver.
The Nature Of Human Driver Traffic Guardians

I’m guessing that you’ve likely had similar kinds of driving circumstances. Another driver opts to try and do something for you that allows you to get your car into traffic, and you need to decide whether to proceed or not. The other driver is actually ceding the right-of-way to you. Of course, it is questionable whether a driver can cede the right-of-way in this manner. If you were to get into a car accident due to taking the right-of-way baton, I’d bet that by and large you’d be held responsible. The judge would certainly want to know who had the legal right-of-way, and though it was perhaps nice that another driver seemed to grant you the informal right-of-way, strictly speaking the rules-of-the-road would be against you since you had illegally “failed to yield the right-of-way” (in spite of your claim and clamoring that the right-of-way had been presumably handed to you). I’ve seen novice teenage drivers fall for a kind of right-of-way ceding trap. These novice drivers are just learning how to best navigate the streets safely. They often assume that the more seasoned drivers on the road know what they are doing. I don’t make that assumption. I’d readily claim that there are many seasoned drivers that are rather clueless about driving and make all sorts of judgement mistakes as they drive (hence too the massive number of car accidents and fender benders). A driver that appears to grant you the right-of-way might opt to suddenly retract the offer and catch you midstream trying to take the right-of-way. Or, other drivers that don’t realize what is happening might make moves that end up clobbering you and/or the right-of-way granting driver. There can also be pedestrians that can get dragged into the matter too. If there are nearby pedestrians, they might also be making predictions about what is going to happen next and become befuddled when the situation does not necessarily play out as it might seem to be.
For pedestrian aspects, see my article: https://aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/
For my article about rational and irrational driving behavior: https://aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/
For the human foibles of drivers, see my article: https://aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/
For the importance of defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
There are also traffic downstream issues that can arise. I’ve often noticed that cars that are further back from the stoppage point are at times caught unawares that traffic has stopped for seemingly no valid reason. In this case of the mall exit, there was a nearby intersection that had a green light. The cars coming down the street could see that the light was green. There would have been no expectation that traffic would come to a halt. Cars at the end of the pack that weren’t paying close attention could have rammed into a car that was also at the back of the pack, simply because they had expected traffic to be zipping along and had no presence-of-mind to look ahead and realize that traffic had come to a stop. A novice driver tends to consider these situations to be a godsend. They are happy to have someone block traffic for them and not have to figure out on their own how to make their way into a busy street. These naïve drivers are bound to suggest that they do not want to look a gift horse in the mouth. If some other driver is willing to grant them right-of-way, well, whether a generous gesture or maybe because the other driver is confused, it doesn’t matter “why” and instead just go ahead and take the opportunity. No need to think about it.
I’m sure that some might have considered me to be a scared mouse that was too timid a driver to realize the moment existed for me to take advantage of it. As mentioned, I’ve indeed used these moments on some occasions, but I don’t think it is a wise driver that always and automatically will use the largesse of another driver. You need to assess each such circumstance and decide whether it is a safe gamble or not. If you’ve ever tried to turn down the offer from a traffic “guardian” such as my mall exit situation, you know that sometimes the guardian driver is insistent that you take their offer. The driver often assumes that for sure you need to take their offer. It even becomes a kind of game of insistence. Even though I was waving on the driver, he was still sitting there and seemingly ignoring my gestures to move along. The moment can be one of wills. The guardian driver won’t back down from their offer. The other driver won’t back down from not taking the offer. Both cars then are at a standstill, like gunslingers facing off in the Wild West. One time, this happened with such severity that the cars behind the guardian driver began to honk their horns, and the cars behind the driver that was not taking the opening were honking their horns too. It was a horn orchestra. This seemed to actually make the two confronting drivers even more intractable. Invariably, one of the two drivers will give in and traffic will flow again, but it will not have been without a great deal of stress and consternation all around. Seasoned drivers will frequently make judgements about these situations and do so without much conscious thought that they are indeed mulling it over. If you’ve encountered these moments before, you by now have formulated an unspoken mental method to gauge the efficacy of the circumstance. Without any noticeable hesitation, you’ll likely instantly opt to either take the right-of-way that was provided by the guardian or you will turn it down.
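That unspoken gauge, which an AI self-driving system would need to make explicit, can be framed as a risk-weighted accept-or-decline decision. A toy sketch of the idea (the factors, weights, and threshold here are invented for illustration, not a real autonomous-vehicle planning model):

```python
# A toy version of the gap-acceptance judgement described above: weigh a
# few risk factors and take the offered right-of-way only when total risk
# stays under a safety threshold. The factors, weights, and threshold are
# invented for illustration; real AV planners model far more than this.
def accept_ceded_gap(guardian_trust, traffic_speed_mph, adjacent_lane_open,
                     risk_threshold=0.5):
    """guardian_trust: 0..1 confidence the stopped driver will stay put."""
    risk = 0.0
    # An untrustworthy guardian might lurch forward mid-maneuver.
    risk += (1.0 - guardian_trust) * 0.4
    # Faster surrounding traffic makes any misjudgement more costly.
    risk += min(traffic_speed_mph / 60.0, 1.0) * 0.4
    # Another car could swing around the guardian into the target lane.
    if adjacent_lane_open:
        risk += 0.2
    return risk < risk_threshold
```

With a trustworthy-looking guardian and slow traffic this sketch accepts the gap; with a doubtful guardian, fast traffic, and an open adjacent lane (the pass-around hazard described above), it declines, much like the mall-exit judgement.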
You instinctively come to know what smells right and what stinks. If the situation has a potential stench to it, such as pushing past your own safe-driving threshold, you’ll likely try to find a means to get out of it.

I don’t want you to confuse this situation with others that are not quite the same, even though they might seem to be. For example, suppose I had already started to move my car into the street before the guardian driver came up to the spot of the mall exit. If I was already protruding into the street, it could be that I was blocking their lane and it was safest for the other driver to come to a stop. Or, even if I wasn’t actually partially into the lane, if it appeared that I intended to be, such as my car actively moving forward, it could also spur the other driver to come to a halt. I’ve done this same thing many times upon noticing a car coming out into the street while seemingly not realizing that traffic is already coming down it. I’ve had to stop to let the intruder into traffic; otherwise the odds were that the driver was going to get clobbered by me or some other car. These are drivers that don’t judge the traffic situation at hand well. They have blundered by moving into the street prematurely.

When I say it is a blunder, that’s actually a bit of an understatement. Some drivers live in their own driving world and don’t care about other cars. These drivers will take the right-of-way whether it is legal or not. For them, they are all-important. If they want into the street, of course all other traffic should come to a halt. They don’t calculate whether other cars might need to dangerously come to a sudden stop. All they know is that they want to get wherever they are going, and come hell or high water they will do so.
If you are wondering what happened in my case of the mall exit, I waited to see what the guardian driver was going to do, and he finally gave up and proceeded forward. The whole matter took only a few seconds, though my having walked you through it step-by-step might make it seem like an eternity. It is like analyzing a football play in which the quarterback tosses a pass: via a rewind, you inspect the player movements step-by-step. The play itself might have taken just a few seconds, but the analysis might take several minutes of careful inspection.

My mall exit example is merely one exemplar of these kinds of traffic guardian moments. There are lots of driving situations in which a driver decides to become a traffic guardian, taking on the role of directing traffic. In some cases, their efforts are laudable. In other instances, their efforts are misguided. I’d like to think that most of the time the driver is acting selflessly and genuinely wants to help other drivers. Either way, whether the driver traffic guardian has heavenly motives or less than lofty ones, it is a driving circumstance that other drivers need to watch for and be able to contend with.

Driver Traffic Guardians Are Both Good And Bad

Another example that illustrates how dangerous these driver guardian moments can become involves the other day when I was driving down a busy street with a median dividing the traffic. It turns out that a pedestrian was standing on the median, hopeful of crossing the traffic lanes. They were jaywalking; in other words, the pedestrian was not crossing at a valid crosswalk. Clearly the pedestrian was in the wrong and was potentially creating a dangerous situation for the cars streaming down the street and for themselves. Most of the cars were moving along and ignoring the pedestrian, which seemed like the prudent action.
All of a sudden, one car closest to the median decided to come to a stop. This was completely contrary to the existing flow of traffic; there was no reason to halt. Apparently, the driver noticed the “stranded” pedestrian and decided to make their lane available for the pedestrian to cross. Unfortunately, the lane to the right was still open, with traffic moving at a fast speed. The traffic in this still-moving lane could not readily see the pedestrian. It turns out the stopped car was a large one, and it pretty much hid the pedestrian from view.

So, you now had traffic zipping along in the lane the pedestrian would next need to use to get across the street, and the traffic in that lane had no idea why the other lane had come to a halt. I’d guess that most of the time we might assume the stopped car had car troubles. Perhaps a stalled engine. Maybe debris on the ground in that lane. Who knows? The drivers streaming along in the other lane didn’t seem to care and just kept going.

Meanwhile, the pedestrian had stepped into the street in front of the stopped car, and thus the stopped car no longer had any viable option other than to remain stopped. If the pedestrian tried to cross the remaining lane, it was going to be a really dicey effort. Cars were not expecting a pedestrian to miraculously appear in the lane. The pedestrian might also misjudge a momentary gap in traffic and try to run across the lane to the sidewalk, only to get plowed over by a fast-moving car that had no idea the pedestrian wanted to cross. Imagine if the pedestrian leapt into the lane, and an oncoming car tried to brake but slid into the pedestrian, injuring or killing them, and maybe at the same time the swerving car hit other nearby cars.
The whole cascading nightmare could have been triggered by the driver guardian that opted to halt in the middle of traffic. You might say that the pedestrian is at fault for standing on the median and looking like they wanted to cross the street. I’d wager that in spite of the pedestrian being in the wrong place at the wrong time, most traffic courts and traffic judges would come down hard on the driver that decided to be the traffic guardian. No matter whether the traffic guardian was trying to right the situation and be helpful, if they create a hazardous traffic situation, it is their doing.

AI Autonomous Cars And Driver Traffic Guardians

What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect of the AI is that it needs to be able to contend with driver traffic guardians.

Allow me to elaborate. I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is driven by the AI with no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not driven by a human, nor is there an expectation that a human driver will be present. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it.
I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. Some pundits of AI self-driving cars continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point, since it means that the AI of self-driving cars needs to contend not just with other AI self-driving cars but also with human-driven cars.
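As a toy illustration only, and not any real autonomous-driving stack, the five driving-task steps listed above can be sketched as a single sense-plan-act loop. Every class, function, and numeric value here is invented for the example:

```python
# Hypothetical sketch of the five-step AI driving task; all names and
# thresholds are illustrative, not from any production system.

class WorldModel:
    """Step 3: the AI's current picture of nearby objects."""
    def __init__(self):
        self.objects = {}

    def update(self, fused_objects):
        self.objects.update(fused_objects)

def interpret(reading):
    # Step 1: turn a raw sensor reading into a labeled detection.
    return {"id": reading["id"], "kind": reading["kind"],
            "speed": reading["speed"]}

def fuse(detections):
    # Step 2: merge detections of the same object from multiple sensors
    # (in this toy version, the last reading per object id wins).
    fused = {}
    for d in detections:
        fused[d["id"]] = d
    return fused

def plan_action(world):
    # Step 4: a car stopped in a flowing lane warrants extra caution.
    stopped = [o for o in world.objects.values()
               if o["kind"] == "car" and o["speed"] == 0]
    return "proceed_cautiously" if stopped else "proceed"

def issue_commands(plan):
    # Step 5: translate the plan into car-control commands.
    return {"proceed": {"throttle": 0.4, "brake": 0.0},
            "proceed_cautiously": {"throttle": 0.1, "brake": 0.0}}[plan]

def driving_cycle(sensor_readings, world):
    """One iteration of the sense-plan-act loop."""
    detections = [interpret(r) for r in sensor_readings]  # step 1
    world.update(fuse(detections))                        # steps 2-3
    return issue_commands(plan_action(world))             # steps 4-5

world = WorldModel()
commands = driving_cycle(
    [{"id": "car-7", "kind": "car", "speed": 0}], world)
print(commands)  # {'throttle': 0.1, 'brake': 0.0}
```

The point of the sketch is merely the shape of the cycle: each tick flows from raw readings through fusion and world-model updating to planning and commands.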
It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to happen for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the matter of the driver traffic guardians, let’s consider how these kinds of drivers and driving situations can impact the AI of a self-driving car. Some AI developers might say that they don’t need to figure out whether someone is a driver traffic guardian; all the AI needs to do is react to whatever driving situation happens to arise. I consider this rather narrow thinking on the part of the AI developers, and I assert it puts the AI into (at best) a novice teenage-driver mindset, meaning that the chances of a car accident, and the danger of injuring or killing humans, go up if the AI cannot realize what is happening.

Complexities Of Coping With Driver Traffic Guardians

Let’s take the mall exit example. What would the AI do? You might suggest that since the AI was intending to leave the mall, and the stopped traffic allows the AI self-driving car to get out, it should go ahead and proceed.
No fuss, no particular complexity involved. Suppose, though, that the other traffic in the street opts to suddenly swerve into the stopped lane and collides with the AI self-driving car as it enters that lane? I’m sure some AI developers would claim that’s the fault of those other drivers: the AI was “in the right,” and thus nothing else matters. But if the AI allows the self-driving car to get into a situation for which the odds of a collision are relatively high, wouldn’t we all agree this is something the AI ought to be trying to anticipate? I would say so.

There are AI developers that cling to the notion that as long as the “fault” is on the heads of the other drivers, it is somehow okay for the AI to make rather hasty decisions that get the AI self-driving car into bad or even dire circumstances. These AI developers also want the rest of the world to accommodate the arrival of AI self-driving cars. They want other drivers to provide a wide berth for an AI self-driving car. They want other drivers and even pedestrians to tiptoe around a self-driving car when it comes down the street.

That’s not the real world. If AI self-driving cars are going to be on our real-world streets, you cannot expect that the rest of the world is going to gingerly and respectfully give AI self-driving cars their own special safety cushion. Rather than trying to change the rest of the world to accommodate AI self-driving cars, I emphasize that AI self-driving cars need to be designed and built to fit the nature of the real world. Let’s not put things upside down, wherein the world has to make new roads, new rules, etc., just on behalf of AI self-driving cars.

I’ve previously discussed the so-called pedestrian-on-a-pogo-stick problem. This is a situation involving a human that happens to be on a pogo stick and opts to pogo into lanes of traffic.
Some AI developers indicate that this is a ridiculous idea and would never happen. That’s the first wrong answer; it could certainly happen. Secondly, the AI developers say that if it did happen, the pogoing person is clearly at fault, and thus if the AI self-driving car hits the person, it’s no big deal in the sense that the stupid human led to the incident. That’s another wrong answer. The AI should be able to respond to situations as they emerge. Believe it or not, some AI developers would even say that if we are going to have humans pogoing onto roadways, the world should put up barriers to prevent any humans from getting into the roadway wherever an AI self-driving car is going to be. Can you imagine the astronomical costs of putting up barriers all across the country for this purpose? Ridiculous.

For more about the pogoing human issue, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For why AI developers aren’t necessarily focusing on these matters, see my article: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For the aspect of edge problems, see my article: https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

For resiliency in driving, see my article: https://aitrends.com/selfdrivingcars/self-adapting-resiliency-for-ai-self-driving-cars/

For idealism about AI, see my article: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

How AI Autonomous Cars Need To Cope

Overall, I would argue that the AI needs to be able to contend with the driver traffic guardian. The first element involves the AI being able to detect that a driver traffic guardian situation is arising. Detection is key. Once the matter is detected, the rest involves predicting what might happen and preparing an AI action plan to deal with the matter.
In the case of the mall exit, the AI via its sensors would likely have been able to detect that the car in the traffic lane had come to a stop. The question arises, or should arise, as to why the car has stopped there. If the traffic was flowing unabated, and there is nothing apparent in front of the stopped car, what else might account for the car having come to a stop? If the AI can rule out the most common other reasons a car would suddenly stop in traffic, this allows the AI to consider that the car might be stopped due to a driver traffic guardian.

There is no certainty that the situation entails a driver traffic guardian. Just as humans can only guess when they experience such a moment, the AI is likewise going to be “guessing” that the situation involves a traffic guardian. I mention this aspect about “guessing” since some conventional systems developers are not particularly used to dealing with uncertainties and probabilities in their coding practices. They write programs that they expect will repeatedly and always do whatever they do, without any notion of the “chances” of things occurring or not occurring. In AI development, and especially for a human endeavor such as driving a car, you have to alter your programming mindset to include the use of uncertainties and probabilities.

Also, in terms of identifying traffic situations that might include the act of a traffic guardian, it is possible via analyzing large-scale datasets of traffic data to more readily spot such moments. Using Machine Learning (ML) and deep learning, usually consisting of Artificial Neural Networks (ANN), you can deeply analyze thousands upon thousands of traffic situations and train the neural network to identify the driver traffic guardian aspects. When a driving situation begins to arise, the ML can potentially participate in making the detection.
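As a loose illustration of this rule-out style of “guessing,” one could imagine a crude check that accumulates evidence toward a traffic-guardian hypothesis. The prior, the boolean evidence flags, and the weights below are all invented for the sketch; in practice they would come from trained models and real traffic data rather than hand-picked constants:

```python
# Hypothetical evidence-accumulation sketch; the prior and weights are
# invented for illustration and are not from any real system.

def guardian_probability(evidence):
    """Estimate P(stopped car is a traffic guardian) by ruling out
    the more mundane explanations for a car stopping in traffic."""
    p = 0.05  # low prior: stopped cars usually have ordinary causes
    if evidence["lane_ahead_clear"]:       # nothing blocking the stopped car
        p += 0.30
    if evidence["no_hazard_lights"]:       # not signaling a breakdown
        p += 0.15
    if evidence["adjacent_exit_waiting"]:  # a car waits to merge nearby
        p += 0.35
    return min(p, 1.0)

evidence = {"lane_ahead_clear": True,
            "no_hazard_lights": True,
            "adjacent_exit_waiting": True}
p = guardian_probability(evidence)
print(round(p, 2))  # 0.85
```

A trained neural network would replace the hand-set weights, but the output is the same in spirit: a probability, not a certainty, which is exactly the mindset shift the paragraph above describes.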
For my article about the dangers of irreproducibility, see: https://aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/

For the nature of uncertainty and probabilities in AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/probabilistic-reasoning-ai-self-driving-cars/

For Federated Machine Learning, see my article: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For Ensemble Machine Learning, see my article: https://aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

The context of the detected driving situation will then shape what action the AI self-driving car should next take. It could be that the AI will opt to proceed and make use of the traffic guardian’s actions. Or it could be that the AI will opt to wait it out and not exploit the traffic guardian’s actions. The AI might even opt to take some other action such as…
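To make the proceed-versus-wait choice concrete, here is a deliberately simple decision sketch. The 0.5 confidence threshold and the adjacent-lane check (echoing the mall-exit swerve hazard and the median-crossing example discussed earlier) are hypothetical, not a real planner:

```python
# Hypothetical action-selection sketch; the threshold and the inputs
# are invented for illustration.

def choose_action(p_guardian, adjacent_lane_moving):
    """Decide whether to accept a traffic guardian's apparent offer."""
    if p_guardian < 0.5:
        return "wait"  # not confident this is a deliberate offer
    if adjacent_lane_moving:
        return "wait"  # the mall-exit trap: another lane may not stop
    return "accept_right_of_way"

print(choose_action(0.85, adjacent_lane_moving=True))   # wait
print(choose_action(0.85, adjacent_lane_moving=False))  # accept_right_of_way
```

The design point is that detection alone is not enough: even a confidently detected guardian offer should be declined when surrounding traffic makes accepting it risky.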
- Threat Analytics, Privileged Access Management Leveraging AI, Machine Learning for Better Cybersecurity

By AI Trends Staff

AI is combining with cybersecurity to create a new genre of tools called threat analytics. Machine learning is enabling threat analytics to deliver greater precision around risk context, especially involving the behavior of privileged users, details a recent account in Forbes. This approach can be used to create notifications in real time and to actively respond to incidents by cutting off sessions or flagging them for follow-up.

The commonly held belief that millions of hackers have gone to the dark side and are orchestrating massive attacks on vulnerable businesses is a misconception. The more brutal truth is that businesses are not protecting their privileged access credentials from easy hacks. Cybercriminals are looking for ways to steal privileged access credentials and walk in the front door. According to Verizon’s 2019 Data Breach Investigations Report, “phishing” (as a precursor to credential misuse), “stolen credentials,” and “privilege abuse” account for the majority of threat actions in breaches.

Identities and the trust placed in them have become the Achilles’ heel of cybersecurity practices, according to a survey entitled “Privileged Access Management in the Modern Threatscape,” from Centrify, a company offering a cloud service to secure modern enterprises from attacks. Some 74% of respondents acknowledged a breach at their organization resulting from access to a privileged account. While the threat actors might vary, according to Verizon’s 2019 Data Breach Investigations Report, the cyber adversaries’ tactics, techniques, and procedures are the same across the board. Verizon found that the fastest-growing source of threats is internal actors.
Internal actors are able to obtain privileged access credentials with minimal effort, often through legitimate access requests to internal systems or by perusing sticky notes in the cubicles of co-workers. Privileged credential abuse is a challenge to detect, since legacy approaches to cybersecurity trust the identity of the person using the privileged credentials. In effect, the hacker is camouflaged by the trust assigned to the privileged credentials.

A cohesive Privileged Access Management (PAM) strategy would include machine learning-based threat analytics, providing a layer of security that goes beyond passwords, multi-factor authentication, or privilege elevation. Machine learning algorithms enable threat analytics to immediately detect anomalies and abnormal behavior by tracking login behavioral patterns, geolocation, time of login, and many more variables to calculate a risk score. Risk scores are calculated in real time and define whether access is approved, whether additional authentication is needed, or whether the request is blocked entirely.

Threat analytics applications with machine learning-based engines are said to be effective at profiling normal behavior patterns for any user, or for any privileged activity including commands. This identifies anomalies in real time to enable risk-based access control. High-risk events are immediately flagged and elevated to the attention of IT, which in theory speeds analysis. Effective threat management applications may include support for Security Information and Event Management (SIEM) tools, such as Micro Focus ArcSight, IBM QRadar, and Splunk.

Primer on Cybersecurity Analytics Available

A primer on the basics in this area, entitled “Security Analytics in the Age of AI: 2019 Update,” is available from AIMultiple, an organization tracking AI product and service options.
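Stepping back to the risk-scoring flow described above (login patterns, geolocation, and time of login feeding a real-time score that approves access, requires step-up authentication, or blocks the request), a toy version might look like the following. The signals, weights, and thresholds are invented for the sketch and are not drawn from any vendor’s product:

```python
# Hypothetical risk-scoring sketch; weights and thresholds are invented.

def risk_score(event, profile):
    """Combine login-behavior signals into a single risk score."""
    score = 0.0
    if event["country"] not in profile["usual_countries"]:
        score += 0.4  # unusual geolocation
    if not (profile["login_hours"][0] <= event["hour"]
            <= profile["login_hours"][1]):
        score += 0.3  # outside the user's normal login window
    if event["failed_attempts"] >= 3:
        score += 0.3  # repeated failures before this attempt
    return score

def access_decision(score):
    """Map the real-time score to an access outcome."""
    if score < 0.3:
        return "approve"
    if score < 0.7:
        return "require_mfa"  # step-up authentication
    return "block"

profile = {"usual_countries": {"US"}, "login_hours": (8, 18)}
event = {"country": "RO", "hour": 3, "failed_attempts": 4}
print(access_decision(risk_score(event, profile)))  # block
```

A production engine would learn these weights from behavioral baselines rather than hard-coding them, but the decision shape (score, then approve / step up / block) matches the description above.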
Cybersecurity analytics is defined as the study of the digital trail left behind by cyber criminals, helping to better understand weaknesses and how to prevent similar breaches in the future. While the terms cybersecurity and information security may be used interchangeably, they don’t mean exactly the same thing. In information security, the biggest concern is to safeguard data from illegal access of any kind. In cybersecurity, the biggest concern is to safeguard data from illegal digital access. In other words, cybersecurity works to protect digital information, whereas information security works to protect all information, regardless of whether it is kept digitally or not.

Benefits of cybersecurity analytics can include:

- a more visual analytics process, usable by business users;
- a more holistic view of security considerations, such as how an attack fits in context with existing systems;
- enhanced data enrichment capacity, making data elements more useful;
- an aid to IT departments; and
- a look at ignored data sources that may be important to understanding security threats.

The addition of AI tools into the cybersecurity mix adds more horsepower to existing technologies, leading to more effective practice. AI knowledge graphs can act as repositories for the enormous amount of data being constantly produced, helping to identify patterns and relationships that matter. This can enable more effective predictive analytics. Machine learning has also demonstrated value in behavior analysis and in deploying countermeasures.

See the source posts at Forbes and at AIMultiple.