AI Trends – AI News and Events

  • AI and the Case of the Disappearing Textbooks; Maybe the Writers are Next
    By AI Trends Staff

    Pearson, one of the largest textbook publishing companies in the world, is getting out of the print business, according to a recent account in Forbes. This is very much along the lines of Ford Motor Company announcing recently that it will stop producing cars. While the jury is still out on whether the latter is a good idea, in many respects the decision by Pearson has been inevitable for a while. It is a matter of economics.

    Traditionally, when a publisher commissions a book, it is making a bet that the book will return its total investment costs by a significant margin. Textbooks are an interesting quandary in the publishing world. The cost of creating the original content is a comparatively small percentage of the overall costs, but for a textbook, those other costs – securing the rights for images or commissioning them outright, editing the content, indexing, prepress, printing, promotion and distribution – mean that most textbooks cost between $50,000 and $100,000 to make, and some can end up costing more than a million dollars.

    What makes this even more of a risk is that a given textbook's primary audience is students. For secondary education and below, this cost is ameliorated by a school district buying the books for use by all of the schools within the district. For college textbooks, on the other hand, the publisher is reliant upon individual instructors deciding to adopt a particular book for a class. Either way, the audience is comparatively small by publishing standards, which is one of the reasons textbooks tend to cost more than general entertainment content.

    The rise of digital publishing, the Internet, and increasingly AI has completely upended that equation. Until comparatively recently, those students represented a captive audience: if they wanted to take the class, they had to buy (or have someone subsidize the buying of) the textbooks.
    Because this created a (larger) market, the cost per book, including profit, was lower, though still high by book-cost standards. The Internet (and most notably Amazon) ate away at the distribution side, initially by making it easier to sell slightly used books at a considerably lower price point – money that, from the publisher's perspective, was not coming to them. Publishers were forced into raising the prices of books to eke out ever-smaller margins. This pushed the cost of textbooks into the stratosphere, which is where the second whammy hit publishers like Pearson. Professors, faced with uprisings from students already carrying crippling student loans, began to use more and more material from the Internet (or to publish their own works to the Internet). Not only was it far less expensive, but the professor could teach students what was important to the professor, not what was important to the publishers.

    Automating the Writing – The Writer is a Robot

    Meanwhile, while the print publisher is ceasing printing, other publishers are looking to automate the writing – to, in effect, make the writer a robot. Robot reporters are generating lots of copy at Bloomberg. (Credit: Cam Cottrill) AI has made forays into print media, starting with the more rote aspects of journalism, according to an account in What's New in Publishing. Cyborg, as Bloomberg's system is named, accounts for an estimated one-third of the content published by Bloomberg News. It is able to examine financial reports as soon as they are available and create news stories featuring the most pertinent information. And it can do this faster and more accurately than a human reporter – most of whom would find this type of work sleep-inducing. Layoffs of reporters and editors have resulted. Elsewhere, the Associated Press and The Washington Post have been using AI to produce articles on minor league baseball and high school football, respectively.
    The Los Angeles Times has reported using robot reporters to write about earthquakes. Journalism executives are quick to point out that this does not spell the end of human journalism. Publishers leading the way in this new era say AI will allow journalists to spend more time on more substantive work; AI should be seen as part of the toolbox. Lisa Gibbs, director of news partnerships for The Associated Press, says, "The work of journalism is creative, it's about curiosity, it's about storytelling, it's about digging and holding governments accountable, it's critical thinking, it's judgment — and that is where we want our journalists spending their energy."

    We seem to be on the verge of significant advances in AI research in the area of writing thoughtfully, like a human. New systems for analyzing and generating text from Google and OpenAI are attracting interest. OpenAI's algorithm, called GPT-2, is currently the most extraordinary example. It excels at a task referred to as language modeling, which tests a program's capacity to predict the next words in a sentence. The ability of the algorithm is truly mind-boggling: give GPT-2 a headline, and it can write the remainder of the article, complete with bogus quotes and statistics. Recently, OpenAI rather dramatically withheld the full release of GPT-2, instead releasing a small, simplified version along with its sampling code and research paper so that researchers could conduct further experimentation. OpenAI feared that a full release of GPT-2 could see it used to automate the mass production of misinformation. The decision also accelerated the AI community's ongoing discussion about how to detect this kind of fake news. Experiments are ongoing to build systems that determine whether written material was generated by a human or by a language model.
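    Language modeling, as described above, is the task of predicting the next word given the words so far. GPT-2 does this with a massive neural network; purely as a toy illustration (not OpenAI's method), the same idea can be shown with a simple bigram model that predicts the most frequent follower of each word:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the most likely next word, or None if the word is unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = (
    "the publisher commissions a book and the publisher "
    "prints the book and the publisher sells the book"
)
model = train_bigram_model(corpus)
print(predict_next(model, "the"))   # -> publisher (its most frequent follower)
print(predict_next(model, "book"))  # -> and
```

    Where this toy model picks one word by raw frequency, GPT-2 assigns probabilities over an entire vocabulary using context far beyond the previous word, which is what lets it continue a headline into a whole article.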
    Read the source articles in Forbes and in What's New in Publishing.
  • For Driverless Car Test Tracks, It Helps to Make the Signs Clear
    Testing sites in Pittsburgh, Boston, Washington, D.C. and other prominent American cities have already put driverless cars on the road – all with the cooperation and endorsement of local officials. State and national governments are also hastening to welcome autonomous vehicles, but ahead of their widespread adoption, safety is priority No. 1 for governments. "It really is rarely – if ever – safe enough," Karina Ricks, Director of Mobility and Infrastructure for the city of Pittsburgh, said in a session at the AI World Government conference held recently in Washington, D.C., as quoted in a piece in govloop.

    In Pittsburgh, autonomous vehicles operate and are tested on a now-famous stretch of roadway right in the heart of the booming technology sector downtown. The driverless cars need two testers aboard at all times, per regulations.

    Pedestrians along the driverless cars' route have become familiar with the presence of the vehicles. Some pedestrians, Ricks said, know that the vehicles drive slowly enough and are responsive enough to stop in close proximity, so they will cross the road ahead of autonomous vehicles regardless of walkways and traffic laws. "What keeps us on the sidewalk is really a fear of bad drivers," Ricks said, remarking that people's response to driverless cars could create new urban planning issues and legislation around roads. Governments are preparing policies and practices to help manage societal responses to self-driving vehicles, such as pedestrian disregard for traffic laws. Policy questions regarding underground roads, pedestrian bridges over roads, mandatory painted bike lanes, and other infrastructure surrounding the operation of driverless cars need to be considered.

    Autonomous vehicles use available maps to guide their journey, sensors to develop a picture of nearby surroundings, and hard-coded rules to maintain a standard of driving.
    Software then computes all of the inputs and, using predetermined formulas, calculates the correct speed, direction and course of action. While the formulas are exact, the car's read of the external road can be much more mercurial. Because of misinterpretations that lead to the wrong programmed response, a car could swerve into traffic to avoid a pothole or fail to halt at an obscured stop sign on the side of the road. "It's all of those components that are supporting getting AVs [autonomous vehicles] to an actual place," Jeff Marootian, Director of the District Department of Transportation (DDOT) in Washington, D.C., said at the event.

    As for the infrastructure that governments will have to build out to allow for driverless vehicles, signs will have to be visible, lanes will have to be clear, and roads would ideally not confuse the self-driving cars, he suggested. Moreover, governments will have to bolster their networks to allow for the necessary information-sharing, Marootian said. Wireless 4G networks paved the way for ride-sharing services, and now 5G will allow driverless cars to communicate with other cars and process the split-second decisions they will have to make in unpredictable, real-world environments.

    Read the source article in govloop.
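    The pipeline this article describes (sensor inputs run through hard-coded rules to yield a speed and course of action) can be sketched in miniature. The thresholds, field names, and actions below are invented for illustration and are not drawn from any real autonomous-vehicle stack; the point is that the rules are exact, so a mispredicted input deterministically produces the wrong decision:

```python
def decide(sensor_reading):
    """Toy rule-based controller: map a perceived obstacle distance
    and a detected sign to a (speed_mph, action) decision."""
    distance_m = sensor_reading["obstacle_distance_m"]
    sign = sensor_reading.get("sign")  # e.g., "stop", "yield", or None

    if sign == "stop" or distance_m < 5:
        return (0, "halt")
    if sign == "yield" or distance_m < 20:
        return (10, "slow")
    return (30, "cruise")

print(decide({"obstacle_distance_m": 3}))                   # (0, 'halt')
print(decide({"obstacle_distance_m": 50, "sign": "stop"}))  # (0, 'halt')
# An obscured stop sign the perception layer misses is simply absent
# from the input, so the rules happily cruise through:
print(decide({"obstacle_distance_m": 50}))                  # (30, 'cruise')
```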
  • Motorcycles Are On-The-Road To Becoming Semi-Autonomous And Fully Autonomous
    By Lance Eliot, the AI Trends Insider

    At last week's TechCrunch TC Sessions: Mobility summit in San Jose, California on July 10, 2019, at one point the main stage was adorned with two awesome-looking motorcycles (shall I dare say "gnarly" or "coolest," as they were tech-laden, feature-packed prototypes). Jay Giraud, founder and CEO of Damon, a high-tech startup seeking to reinvent personal mobility, came onto the stage and shared his vision of how smarter, safer, electric motorcycles are the wave of the future, showcasing his assertion by demonstrating the two working prototypes he had brought with him. He's got e-motorcycles on his mind, and besides showcasing the tech his firm has infused into the motorcycles, he also pointed out that the global motorcycle market is around $90 billion annually and growing at a serious clip, making this a sizable market worthy of tackling and transforming.

    For more about TechCrunch's TC Mobility summit, see: https://techcrunch.com/events/tc-sessions-mobility-2019/

    For more about Damon, see: https://damonxlabs.com/

    We all know that riding a motorcycle in today's car-crazed, dog-eat-dog traffic morass can be extremely hair-raising and outright dangerous. Car drivers often neglect to notice motorcyclists, or see a motorcycle but don't seem to give the motorcyclist the same roadway respect they might give another car. Motorcyclists have to be extraordinarily vigilant. It doesn't take much of a momentary lapse in attention or judgment to find yourself hurtling toward a dicey moment, leading to a possible crash or to becoming another motorcycle injury or fatality statistic.
    Generally, those trying to advance the tech associated with the actual driving of a motorcycle can be categorized into these buckets:

    a) Advanced Driver-Assistance Systems (ADAS) – features that aid the motorcyclist by detecting traffic situations and offering a warning or minor assist to the human driver of the motorcycle

    b) Semi-Autonomous Systems – features having the AI drive the motorcycle in some notable respect, yet only as a co-sharing "driver" with the human driving the motorcycle

    c) Fully Autonomous – a motorcycle entirely driven by the AI, on which the human on-board is a passenger and no longer considered a motorcycle driver per se

    Let's consider some facets of the challenges involved in trying to achieve heightened levels of autonomy for a motorcycle. As readers of my column know, seeking to create autonomous cars is pretty much a "given," in that everyone assumes that's where AI systems are going to be developed and applied for roadway self-driving. Few realize that the same kind of autonomous driving is being pursued for motorcycles. If you ponder the topic for a moment, it becomes apparent that this is a daunting task, for a myriad of reasons (I'll be covering many herein).

    For my article about self-driving cars as a moonshot, see: https://www.aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

    For the affordability of self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/

    For the cognitive aspects involved in the AI of self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

    For my article about Machine Learning and self-driving, see: https://www.aitrends.com/selfdrivingcars/machine-learning-ultra-brittleness-and-object-orientation-poses-the-case-of-ai-self-driving-cars/

    Motorcycles Are Alluring

    When I was in my late teens, I wanted to ride motorcycles.
    It was considered cool, and still is by many. There was this maverick image of being a motorcycle rider. You were free. You went with the wind. You were different. You were someone who bucked the system. A good friend of mine was a dirt bike rider, and he offered to show me how to drive a motorcycle.

    Notice that I just used the word "drive" when mentioning a motorcycle. There are motorcyclists who don't like the use of the word drive when referring to being on a motorcycle. They say that you "drive a car" but that you "ride a motorcycle." I certainly don't want to quibble on this point, but I would say that when you "ride" on something, it could be that you are merely a passenger, while when you "drive" something, it implies you are actually directing the activities of the machine. This will be an important distinction here, which is why I bring up the semantics of "drive" versus "ride."

    Anyway, my parents were completely opposed to my riding or driving motorcycles. They felt strongly that a motorcycle was a dangerous form of transportation. They emphasized that those on a motorcycle could easily be thrown off and would then be vulnerable to being run over by cars, or that your body would certainly get torn apart if you hit the pavement at, say, 70 miles per hour. They also pointed out that a lot of car drivers don't pay attention to motorcycles, and thus even if I was really good at motorcycle riding or driving, a car could plow into me at any time. They showed me pictures of people who had been involved in motorcycle accidents and deaths (rather gruesome photos to look at), along with sharing the sobering statistics about the number of motorcycle incidents annually (a chilling number). This was a quite rational, sensible, logic-based way of explaining why I should neither ride on nor drive a motorcycle.
    So, of course, being a rebellious teenager, I completely disregarded their advice and sneaked out to the desert with my friends so that I could learn how to drive a motorcycle. Yes, that's the maverick in me. In the end, I admittedly never personally took to motorcycles that much. I would go riding with my friends in the desert, but I never pursued getting a motorcycle license for the public roadways and did not drive or ride a motorcycle on city streets. To each their own. Nowadays, though, I am quite interested in motorcycles, because of their potential for being self-driving vehicles.

    How Motorcycles Can Be Self-Driving Vehicles

    When I bring up the topic of self-driving motorcycles at my various presentations on driverless cars and autonomous vehicles, I usually get some pretty quizzical looks. People will often say to me that it makes no sense to have a self-driving motorcycle. In their view, the whole purpose of a motorcycle is the close relationship between the human rider (driver) and the bike itself. A motorcycle is intended to provide that freedom of driving that allows the maverick to go where they want and how they want. This seems completely antithetical to the concept of having a self-driving capability.

    I know that at first glance it might seem like an odd combination: motorcycle and AI combined to make a self-driving motorcycle. Where does the human fit into that equation, you might wonder? In a sense, you could look at this combination in a different light and see that a self-driving motorcycle could open the door to more humans riding motorcycles than we have today. When you consider that driving a conventional motorcycle requires a sufficient skillset, and that you technically need to get a motorcycle driving license, there is a bit of a barrier to entry in deciding you want to be a motorcycle driver.
    Even if you are a motorcycle driver today and think that getting a motorcycle driver's license is a bit of a joke (some would say it is extremely easy), it is indeed a perceived barrier to a lot of people who have contemplated getting a motorcycle. Also, the mechanical aspects of using a motorcycle just seem overly complicated to many – how to use the clutch, how to use the throttle, and so on. Voila, imagine if there was a fully autonomous self-driving motorcycle such that all a human needed to do was sit on it, and the rest of the driving was done by the on-board AI. No driver's license needed for the human. No awareness of how to start, drive, or stop the motorcycle needed by the human. Just jump on the motorcycle and away you go. I assure you that this would attract a whole lot of people who otherwise would not have ventured onto a motorcycle. It could blast wide open the sales of motorcycles.

    Motorcycle Consumer Trends And Self-Driving Cars

    You might find it of interest that the current market for consumer use of conventional motorcycles in the United States is somewhat in the doldrums. Sales have been flat and tend to be falling. Baby boomers are aging out of the motorcycle industry. Millennials aren't as attracted to motorcycles as the baby boomers were. Most buyers of motorcycles continue to be predominantly men. If you are a motorcycle dealer, these are rough times, for sure, and right now the future of conventional motorcycles looks gloomy.

    Some critics are predicting an utter meltdown in the motorcycle industry once the self-driving car comes along. The thinking is that people will not care anymore about using motorcycles and will be very happy to just ride along in self-driving cars. And some predict we will change the nature of our roads to accommodate self-driving cars, making special lanes for them and ultimately forcing human-driven cars into slow lanes to discourage humans from driving cars.
    Likewise, they claim that motorcycles won't be allowed around the self-driving cars, since a human-driven motorcycle would mess up the coordinated dance of self-driving cars trying to make traffic flow maximally efficient. There was even a recent accident in San Francisco involving a human-driven motorcycle and a self-driving car – so far, the motorcyclist was considered at fault. Supposedly, the motorcyclist made a lane change and the motorcycle and the self-driving car bumped into each other; the police faulted the motorcyclist for not making a safe lane change. This kind of circumstance will likely give rise to self-driving car proponents, including politicians, seeking some kind of ban on motorcycles on roadways that have self-driving cars.

    I don't agree that we as a society will entirely banish human motorcyclists to a second-class-citizen position on our highways, nor do I see that the motorcycle messes up the traffic smarts of self-driving cars per se (in my view, self-driving cars should be AI-savvy enough to deal with this and not place the burden onto the shoulders of the motorcyclists).

    You can divide the marketplace into consumer-use motorcycle efforts, for everyday and occasional riding, and a significant chunk of motorcycle riding involving fleet efforts. By fleets, I'm referring to police and other authorities or workers that use motorcycles as an integral part of their job. So, you could have fleets of self-driving motorcycles being used by workers who need a motorcycle as an element of their job. And you could attract gobs of people who might never have considered riding on a motorcycle, allowing them to finally taste the joys and adventure of being on one. Just as being an occupant in a self-driving car won't take any skill by a human, nor would the human need any true skill to ride on a self-driving motorcycle.
    In theory, it would be like sitting on a motorcycle behind a top professional motorcycle driver, just hanging on and enjoying the ride.

    Big Question About Driving Of A Motorcycle

    This does bring up one knotty question that I know you motorcycle-driving fans have. Will a self-driving motorcycle allow the human rider to drive the motorcycle, or will the motorcycle be driven only by the AI? For those who aren't into motorcycles, you might think this question is not important, and your view might be that of course we wouldn't let humans drive the motorcycles any longer once we have AI-proficient self-driving motorcycles. Certainly, it would be better and safer to always have the AI driving the motorcycles. The humans would be passengers, that's it. Some motorcyclists will tell you that you will only pry their "cold dead hands" from the throttle of their motorcycle, and that never shall it be that humans cannot actually drive a motorcycle.

    As a relevant aside, the same question has yet to be resolved for self-driving cars. Will we as a society decree that a fully autonomous car can never be a human-driven car? In other words, suppose you get into a fully driverless car and decide that you want to be the driver, so you flip a switch that says the AI is no longer the driver and instead you are. Some automakers are saying that their Level 5 fully autonomous cars won't have any driving controls for humans; thus, presumably a licensed human driver in such a car would not be able to drive it even if they wanted to. It's still an unresolved matter.

    Let's put to the side for now the debate about whether humans will be allowed to drive a motorcycle, and focus on the topic of self-driving motorcycles overall.

    Aspects Of Self-Driving Motorcycles

    A fully autonomous motorcycle would allow a human passenger or passengers to glide along and enjoy the wind and freedom of being on a motorcycle. The humans would not need to know how to drive the motorcycle.
    Presumably, they would not manipulate or utilize any of the controls. The humans would not steer, nor brake, nor accelerate. The human is strictly a rider. The human does, though, need to actively participate in the riding process. The human rider would need to lean properly into the ride for the physics of it, and would need to make sure they stay on the motorcycle. In other words, the human does have some duties, but the duties are all about being a good passenger. This doesn't require much of any skill and is really just about doing the right thing when on the motorcycle. You could say somewhat the same about being an occupant in a self-driving car, in that you are expected not to roll down the windows and climb out onto the hood. There are proper things to do when being a passenger in a self-driving car, and likewise for a self-driving motorcycle. Nonetheless, I think we'd all agree that a motorcycle rider is likely to be more involved in the maneuvering and actions of a self-driving motorcycle than they will be as an occupant inside a self-driving car.

    Some of you might be wondering whether the human passenger would have to be on the motorcycle in order to keep it balanced while it is being driven by the AI. In essence, would it be a requirement that a self-driving motorcycle could only move along if there was a human passenger on it? The answer is no. I say this because Honda, for example, has developed a new concept motorcycle that can balance itself, doing so not only at regular speeds but also at a low-speed crawl and even when the motorcycle comes to a complete stop (other motorcycle makers are doing likewise). So, interestingly, we could have motorcycles that drive along our roads all by themselves. No human needed. Admittedly, this would at first be a bit eerie.
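    Honda has not published how its self-balancing concept works; as a rough, hypothetical illustration only, balance-keeping can be framed as a feedback loop that issues corrections to drive the measured lean angle back to upright – the classic PID control pattern. The function and gains below are invented for this sketch and are not tuned for any real motorcycle:

```python
def pid_balance(lean_angles_deg, kp=2.0, ki=0.1, kd=0.5, dt=0.01):
    """Return a correction per sample that pushes the measured lean
    angle toward zero (upright). Gains are illustrative only."""
    integral, prev_error, corrections = 0.0, 0.0, []
    for angle in lean_angles_deg:
        error = 0.0 - angle                 # target is upright (0 degrees)
        integral += error * dt              # accumulated error (I term)
        derivative = (error - prev_error) / dt  # rate of change (D term)
        corrections.append(kp * error + ki * integral + kd * derivative)
        prev_error = error
    return corrections

# A growing rightward lean (positive angles) yields increasingly
# negative (leftward) corrections; a leftward lean yields positive ones.
print(pid_balance([0.0, 1.0, 2.0]))
print(pid_balance([-1.0]))
```

    A real self-balancing bike would close this loop through steering and mass-shifting actuators fed by gyroscope readings, but the feedback structure is the core idea.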
    Though, keep in mind that we are going to have self-driving cars that drive around all by themselves, and there won't necessarily be any human passengers in them either. It's a new reality that we'll need to come to accept. The envisioned future of self-driving cars is that your AI self-driving car will be able to drive on its own here and there based on the needs of the human owner. For example, it takes you to work; then maybe you opt to send it home, where it can be used by the kids to get over to school; later in the day, it picks them up after school and drops them back at home; and finally, at the end of the workday, it comes to pick you up. All the while, the self-driving car is without any human driver, and for some of the time there aren't any humans inside it at all. You could do the same with a self-driving motorcycle. Maybe ride it to work, then send it on its own over to a friend who needs to use it for some errands, and then the friend routes it back over to your office.

    Some motorcycle futurists envision that the motorcycle might be redesigned as a result of the self-driving capability. For example, one concept is the Exocycle, a self-driving motorcycle that looks somewhat like the ones used in the movie Tron. It is an enclosed, cockpit-like contraption. The human occupants sit inside a frame and are protected from the environment, similar to sitting inside a car. A distinction of the Exocycle over a self-driving car is that the Exocycle remains relatively slim in comparison. That being the case, the designers also came up with the idea that two Exocycles could connect to each other, side-by-side, and drive together, doing so at a width somewhat the same as that of a car. It seems doubtful that the Exocycle would give the same kind of thrill ride that a conventionally shaped motorcycle would provide.
    As such, I think the concept of an enclosed self-driving motorcycle is perhaps closer to a self-driving car than it is akin to a self-driving motorcycle per se. BMW has a futuristic concept motorcycle called the Motorrad Vision Next 100, which pretty much has a Speed Racer look-and-feel. BMW suggests that the design eliminates the need for the rider to wear any protective gear (this seems somewhat questionable), but in any case the design of the Motorrad assumes that the rider is the driver, and so it is not envisioned as a self-driving motorcycle. If you are a believer that motorcycles will always (or should always) have a human driver, the BMW approach might be rather appealing to you.

    Robot That Rides A Motorcycle

    An interesting alternative to a built-in self-driving motorcycle consists of having a robot that can drive a motorcycle. This is clever because it means that all existing motorcycles could (in theory) become "self-driving" by simply having the robot sit on the motorcycle and drive it along. You could then sit behind the robot and hang on as a passenger, or you could send your robot-driven motorcycle to go do errands for you. Yamaha pitted an early version of such a robot against a professional motorcycle driver, and the company admits that trying to develop such a "humanoid" robot is like a moonshot. I'd speculate that having a fully functional humanoid-like robot that could also drive a motorcycle is a lot further off in the future than the formulation of a self-driving motorcycle. But, anyway, it's an intriguing notion that posits that rather than trying to reformulate the motorcycle itself, you could just create a human-shaped robot that drives a motorcycle.

    Let's take a moment and consider what makes a self-driving motorcycle a difficult proposition.

    Why Self-Driving Motorcycles Are A Moonshot Plus

    I'll start by considering the hardware side of things, making a comparison to self-driving cars.
    When you look at a self-driving car, you'll notice that there are numerous sensors on it. The added bulk of the sensors is not especially overwhelming, and so a car can bear the added weight and size. For a motorcycle, putting those same kinds of sensors onto the slim frame is not going to be easy, especially if we want to include multiple cameras, radar units, sonar units, LIDAR, etc. In that sense, one must question the viability of a self-driving motorcycle simply due to where to put all the sensory devices while keeping the motorcycle slim and trim. You would either need to accept that the shape and size of a motorcycle will blossom a bit, or the sensors themselves will need to become increasingly miniaturized. Fortunately, the tech industry is continually trying to reduce the size of such sensory devices, which bodes well for the self-driving motorcycle.

    The next aspect to keep in mind is that a lot of computer processing is needed to undertake the AI part of the self-driving vehicle. Once again, for a self-driving car, we can hide the computers in the underbody or in the trunk, so they don't particularly bulk up the size of the car. For a motorcycle, we are faced with the sizing constraints of a slim and trim bike, and the question arises as to where we'll fit the computers needed to sustain the self-driving capabilities.

    There is also a significant cost factor involved in a self-driving vehicle. The sensor devices and the computer systems all add quite a bit to the cost. We tend to think of motorcycles as a less expensive alternative to cars, but a self-driving motorcycle could skyrocket in price due to all the added hardware and software.
    The same can be said of a self-driving car, though we are all pretty much used to seeing expensive cars on the road, so adding another chunk of cost might not be as off-putting for the self-driving car buyer as it would be for the self-driving motorcycle buyer. Hopefully, the costs of the hardware and software for a self-driving vehicle will gradually come down, which makes sense as the technology becomes more commoditized and as the volume of such vehicles rises in the marketplace.

    Stretching Beyond A Self-Driving Car For The AI

    There are ways in which a self-driving motorcycle needs perhaps more sophisticated capabilities than a self-driving car does, and this will make it harder to develop a truly autonomous motorcycle. A self-driving car would usually abide by the lanes of the roadway and stay within a lane. Indeed, most self-driving cars do a rather simple process of lane following, achieved by scanning for the lane markings painted on the road and following them along. Lane changes by a self-driving car are intended to take place only when needed; it is not an especially continuous practice. For the self-driving motorcycle, we can assume that motorcyclists will want it to do what human drivers do, such as lane splitting. This is a harder problem than what the self-driving car faces. Also, a motorcycle can go places a conventional car cannot readily go, such as narrow spaces or even off the roadway onto other areas. This again ups the ante in terms of the needed sophistication of the AI for the self-driving motorcycle.

    This brings up an ethics-related question that somewhat confronts self-driving cars, but perhaps even more so self-driving motorcycles. Namely, should a self-driving motorcycle be able to undertake illegal maneuvers, or even merely ill-advised maneuvers? Or should the AI be shaped in such a way that it won't allow any kind of ill-advised or illegal driving?
For existing human riders of motorcycles, I am guessing that if a self-driving motorcycle has severe restrictions to keep it legal and prevent ill-advised driving, some of the value and joy of motorcycle riding will be driven out of the self-driving motorcycle for those human riders. Would a self-driving motorcycle be allowed to exceed the speed limit? Would a self-driving motorcycle be allowed to street race against another self-driving motorcycle? Could you “burn rubber” with a self-driving motorcycle? These questions aren’t technological, since any of those aspects could be undertaken by the AI, and instead fall into the realm of societal and political dimensions. Other interesting aspects of self-driving motorcycles also arise that aren’t necessarily raised with self-driving cars. For example, suppose a self-driving motorcycle comes up to a red light at an intersection and properly comes to a stop. It has been sent on an errand by its owner and there is no human riding on the motorcycle. Meanwhile, while it is stopped at the red light, a pedestrian decides to run over and hop onto the self-driving motorcycle. What happens now? Should the self-driving motorcycle proceed along and allow the interloper a free ride? Presumably, we wouldn’t want that. So, somehow the self-driving motorcycle needs to be able to sense the presence of a rider, and also then determine what to do when a stranger decides to jump on. Another similar kind of question involves a self-driving motorcycle that has a human rider: suppose the human rider falls off the bike. Then what? Should the self-driving motorcycle come to a halt? Suppose though that coming to a halt carries the self-driving motorcycle some number of yards past the spot. Would we expect the self-driving motorcycle to turn around and come back to where the human rider fell off? I think you get the overall gist. 
Conclusion There are going to be lots of situations involving self-driving motorcycles that differ from those of a self-driving car. Therefore, developing a self-driving motorcycle does not imply that we can simply port over the AI of a self-driving car and, voila, have ourselves a functional self-driving motorcycle. The particular aspects of a motorcycle will require a specialized AI capability. This then brings up an economic question: will there be enough money to be made from a self-driving motorcycle market to warrant motorcycle makers, tech firms, or automakers investing in developing self-driving motorcycles? Some would say that the self-driving car will be sufficient and that fewer and fewer people will want or see the need for a self-driving motorcycle. Others say the self-driving car will gradually become so commonplace that there will be a rising desire for something else, and that the “else” will be self-driving motorcycles. We’ll have to wait and see what happens as the future of the open road, and how we all want to drive on it, reveals itself. Copyright 2019 Dr. Lance Eliot This content is originally posted on AI Trends.
  • US Tech Giants Reported Helping to Build the Chinese Surveillance State
By AI Trends Staff An organization founded by technology giants Google and IBM is working with a company helping China’s government build a mass surveillance system for use on its citizens, as described in a recent story in The Intercept. The OpenPower Foundation — a nonprofit led by Google and IBM executives with the aim of trying to “drive innovation” — has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently. Semptian, based in Shenzhen, is using the chips to enhance the capabilities of internet surveillance and censorship technology it provides to security agencies in China, according to The Intercept’s sources and documents. A company employee said its technology is being used to monitor the internet activity of 200 million people. Semptian, Google, and Xilinx did not respond to requests from The Intercept for comment. The OpenPower Foundation said in a statement that it “does not become involved, or seek to be informed about the individual business strategies, goals or activities of its members,” due to antitrust and competition laws. An IBM spokesperson said that his company “has not worked with Semptian on joint technology development,” but declined to answer further questions. A source familiar with Semptian’s operations said that Semptian had worked with IBM through a collaborative cloud platform called SuperVessel, which is maintained by an IBM research unit in China. Asked to comment by The Intercept, US Sen. Mark Warner, D-Va., vice chair of the Senate Intelligence Committee, said he was alarmed by the revelations. “It’s disturbing to see that China has successfully recruited Western companies and researchers to assist them in their information control efforts,” Warner said. 
Anna Bacciarelli, a researcher at Amnesty International, was quoted as saying that the OpenPower Foundation’s decision to work with Semptian raises questions about its adherence to international human rights standards. “All companies have a responsibility to conduct human rights due diligence throughout their operations and supply chains,” Bacciarelli said, “including through partnerships and collaborations.” The Chinese mass surveillance system is named Aegis, The Intercept reported. Aegis equipment has been embedded in the Chinese internet and phone networks. This enables the country’s government to collect people’s email records, phone calls, text messages, cell phone locations, and web browsing histories, according to two sources familiar with Semptian’s work. Documents sent to The Intercept by a Semptian employee say Aegis can provide “a full view of the virtual world,” enabling the government to see “the connections of everyone,” including “location information for everyone in the country.” The Intercept posted a video on its site showing the system at work, tracking a citizen’s location over time. The documents also showed that the system can “block certain information [on the] internet from being visited,” thereby censoring content that the government does not want citizens to see. Sizing the amount of data being processed is a challenge. Of the estimated 800 million internet users in China, the system could be monitoring one quarter of them, if the information provided by the company to The Intercept is accurate. This translates to a volume of thousands of terabits per second. An internet connection that is 1,000 terabits per second could transfer 3.75 million hours of high-definition video every minute. Joss Wright, a senior research fellow at the University of Oxford’s Internet Institute, was quoted as saying, “There can’t be many systems in the world with that kind of reach and access.” He said it was technically feasible. 
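The throughput figure above can be sanity-checked with simple arithmetic. The following is a back-of-envelope sketch; the high-definition bitrate of 5 Mbit/s is my assumption for illustration, not a figure from the article.

```python
# Back-of-envelope check of the "1,000 terabits per second" claim.
# Assumption (not from the article): HD video streams at ~5 Mbit/s.

link_bits_per_second = 1_000e12   # 1,000 terabits per second
hd_bits_per_second = 5e6          # assumed HD video bitrate

bits_per_minute = link_bits_per_second * 60
bits_per_hd_hour = hd_bits_per_second * 3600

hd_hours_per_minute = bits_per_minute / bits_per_hd_hour
print(f"{hd_hours_per_minute / 1e6:.2f} million HD hours per minute")
```

At an assumed 5 Mbit/s this comes to roughly 3.3 million HD hours per minute; the article’s 3.75 million figure implies a slightly lower assumed bitrate, around 4.4 Mbit/s, so the claim is at least in the right ballpark.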
However, “There are questions about how much processing [of people’s data] goes on,” Wright said, “but by any meaningful definition, this is a vast surveillance effort.” Muslim Surveillance Effort in China In related news, Princeton University and the largest public pension plan in the US are among those funding technology behind the Chinese government’s surveillance of some 11 million people of Muslim ethnic minorities, according to a report in BuzzFeed News. This relates to the Chinese government’s effort since 2017 to move more than a million Uighur Muslims and other ethnic minorities into “reeducation camps” in the Chinese northwest region of Xinjiang. They are identified in part with facial recognition software created by two companies: SenseTime, based in Hong Kong, and Megvii of Beijing. Xu Li, chief executive officer of SenseTime Group, is identified by the company’s facial recognition system as he poses for a photograph at SenseTime’s showroom in Beijing on June 15, 2018. (Bloomberg / Getty Images) BuzzFeed News found that US universities, private foundations and retirement funds have supported investors that have plowed hundreds of millions of dollars into the two startups over the last three years. SenseTime and Megvii have grown into billion-dollar industry leaders, partnering with government agencies and other private companies to develop tools for the Chinese government’s efforts to monitor its citizens. The investors include the Alaska Retirement Management Board, the Massachusetts Institute of Technology and the Rockefeller Foundation, all limited partners in private equity funds that invested in SenseTime or Megvii. While Sen. Marco Rubio of Florida has backed a bill to condemn human rights abuses in Xinjiang, the Florida employee pension fund is also invested in the system for tracking Uighurs, BuzzFeed News reported. 
In statements to BuzzFeed News, SenseTime and Megvii distanced their technologies from what’s happening in the region and downplayed the significance of US funding. “SenseTime’s success in original AI research and commercialization has attracted top-tier investors from around the world,” the company said in a statement, which also noted that it welcomed regulation of facial recognition tools. “We have always been committed to fair and responsible applications of AI technology and we take this duty of care seriously.” Megvii told BuzzFeed News its “solutions are not designed or customized to target or label ethnic groups. We are concerned about the well-being and safety of individuals, not about monitoring any particular demographic groups.” When asked if they were aware their money was indirectly funding the surveillance of Uighurs, most institutional investors did not respond to BuzzFeed News’ requests for comment. Some declined to comment. Only one, the Los Angeles County Employees Retirement Association (LACERA), responded on the record, saying it would evaluate its investment. BuzzFeed News reports that authorities have used software and devices from those firms to effectively create a police state in the region, which is home to more than 11 million Uighurs. Experts and US government officials estimate that at least a million Muslim and religious minorities are currently detained in internment camps. The US is considering cutting off the flow of American technology to Chinese companies involved in the surveillance efforts, including Megvii, according to a recent account in Bloomberg. The Trump administration is said to be concerned about their role in helping Beijing repress minority Uighurs in China’s west. Read the source accounts in The Intercept, BuzzFeed News, and Bloomberg.
  • AI in Manufacturing, in Use by Less than 8% Today, Seen Growing to 50% by End of 2021
By AI Trends Staff Using AI and the Internet of Things (IoT) should help manufacturers restructure supply chains, improve efficiency, address skills shortages, transform operations – and create entirely new revenue streams and business models. Today, less than 8 percent of manufacturers are using AI, according to the Manufacturing Leadership Council’s Factories of the Future Survey. Within two years, 50 percent expect to deploy AI. Manufacturers, therefore, are in the early-adopter phase. Here are some suggestions for rolling out AI, from a report in The Manufacturer. Make Sense of Data Manufacturers are faced with a virtual tsunami of data – amplified by the IoT. It is now estimated that more than 75 billion connected devices will be in operation by 2025, the majority of which will be in the manufacturing sector. In a recent McKinsey survey, 60% of executives confirmed that IoT data yielded significant insights, yet 54% admitted that they used less than 10% of that IoT information. To collect, process and use this mass of data to gain a competitive advantage, manufacturers should focus on an Enterprise Information Management (EIM) system. An EIM system supports decision-making processes and day-to-day operations that require the availability of knowledge. Once a manufacturer has an EIM system in place, AI comes into play. Combined with advanced analytics, AI can bring together information from a wide variety of data sources, identify trends and provide recommendations for future actions – from changes to business process automation to supporting employees in their daily decision-making. Making Customers Central Manufacturers are moving from making a product, selling it and servicing it to a different business model, with many offering product-as-a-service or data-as-a-service. Using new business models to monetize data is now as important to generating profits as the traditional product line has been. 
As one example, Knorr-Bremse Group – the leading manufacturer of braking systems for rail and commercial vehicles – is now using business intelligence and AI-powered data analytics software. The company provides embedded dynamic dashboards and reporting to help its customers reduce maintenance costs and ensure better diagnostics. Taking this data-centric approach ensures Knorr-Bremse Group can give its customers the flexibility to record and review data from across various IoT subsystems and to build their own reports and dashboards as required. More Adoption Predictions More than 50 percent of all manufacturers will be using AI in some form by the end of 2021, according to an account in IndustryWeek. The move to digitize business processes that has been underway will provide a platform for AI in manufacturing. According to an IFS survey of 750 manufacturing companies across 16 countries, 81% of manufacturers have already embraced some type of transformational technology to digitize business processes. AI will help to interpret the massive amounts of data that IoT devices provide, allowing more accurate forecasting and better predictive maintenance routines. A big stumbling block for AI has always been the term ‘AI’ itself. It misleads many manufacturers, suggesting a large end-to-end system. In reality, ‘AI’ is more often a collection of targeted technologies, from natural language processing to vision identification to chatbots to analytics to automation – each with its own strengths and applications. AI is already used, for instance, to support enterprise-wide business decisions in what-if planning scenarios. What these technologies all share is the intelligence factor: a high degree of accuracy and an incredibly fast ability to learn from their mistakes. 
When thinking of AI, it’s important to remember that you can’t “implement AI” any more than you can “implement the internet.” Before you initiate any project, you must figure out your “why.” What exact business goal or target are you aiming for? What exactly do you want to improve or enhance? The more targeted your objectives are, the more competitive and transformative your results will be. Talking to the Systems More than 25 percent of manufacturing planners will be talking to their systems – using voice recognition – by the end of 2020, according to the IndustryWeek account. This percentage is based on the rate of adoption of digital transformation in general within business, and the advantages provided by robots (lights-out manufacturing, no lunch breaks). AI solutions are smarter and more eloquent than most of us realize. A year ago, a major AI customer survey found that two-thirds of people who said they had never used AI actually had, through chatbots. The quality was so high that the chatbots had been indistinguishable from human speech. The same survey found 84% of respondents were comfortable using voice-activated AI at home, in the form of Alexa, Siri or Google Home. And if simplicity, speed and accuracy are crucial consumer benefits, imagine what they could do on a manufacturing line. BMW’s smart integration of Alexa into its models in March 2018 was widely applauded. The integrated voice activation added layers of service and performance capability to the driving experience. What’s less well-known is that voice-activated solutions are already being used on the production side of the automotive sector. In Japan, NEC, which manufactures servers and mainframe computers, is using voice-activated solutions in its order-picking process, where line personnel simply give spoken instructions and their order is instantly created. Read the source posts in The Manufacturer and IndustryWeek.
  • Five Principles to Advance AI at the Air Force
The Air Force has been on an almost three-year journey to integrate AI into operations, and that effort will soon be more apparent as the service declassifies its strategy, Capt. Michael Kanaan, the service’s co-chair for artificial intelligence, said June 26 at the AI World Government conference in Washington, D.C. “We had to find a way to get us to a place where we could talk about AI in a pragmatic, principled, meaningful way,” said Kanaan, in an account in C4ISRNet. During his speech, Kanaan laid out five principles that have guided the Air Force’s work with artificial intelligence so far. They are: Technological barriers will be a significant hurdle. While the service has made it a point to limit technological obstacles, contractors may face higher-priced products geared toward security-driven government programs compared to less expensive commercial programs. A new attitude toward commercial off-the-shelf technology within the service can help, he said. Data needs to be treated like a strategic asset. “We used to ask the question: if a tree falls in the forest, does it make a sound? Well, in the 21st century the real question to ask is, was something there to measure it,” he said. He explained this involves looking at when and how to digitize workflows. The Air Force must be able to democratize access to AI. “This is an opportunity now to say, machine learning as our end state, if done right, should be readable to everyone else,” Kanaan said. This will involve balancing support and operations and taking into consideration the reality that the demographics of the traditional workforce are going to shift, Kanaan explained. “Not looking at the top one percent, but focusing on the 99 percent of our workforce,” he said. 
“The Air Force, of those 450,000 people, 88 percent are millennials [adults under 40].” Looking to digital natives in the integration process will be valuable because this younger slice of the workforce already has insights into how the technology works, he suggested. Read the source post at C4ISRNet.
  • Game Theory and AI Systems: Use Case For Autonomous Cars
By Lance Eliot, the AI Trends Insider When you get onto the freeway, you are essentially entering into the matrix. For those of you familiar with the movie of the same name, you’ll realize that I am suggesting you are entering into a kind of simulated world as your car proceeds up the freeway onramp and into the flow of traffic. Whether you know it or not, you are indeed opting into playing a game, though one much more serious than an amusement park bumper cars arena. On the freeway, you are playing a game of life-and-death. It might seem like you are merely driving to work or trying to get to the ballgame, but the reality is that for every moment you are on the freeway you are risking your life. Your car can go awry, say it suddenly loses a tire, and you swerve across the lanes, possibly ramming into other cars or going off a freeway embankment. Or, you might be driving perfectly well, and all of a sudden, a truck ahead of you unexpectedly slams on its brakes and you crash into the truck. I hope this doesn’t seem morbid. Nor do I want to appear to be an alarmist. But, you have to admit, these scenarios are all possible and you are in fact risking your life while on the freeway. For a novice driver, such as a teenager starting to drive, you can usually see on their face an expression that sums up the freeway driving circumstance – abject fear. They know that one wrong move can be fatal. They are usually somewhat surprised that anyone would trust a teenager to be in such a situation of great responsibility. Most teenagers are held in contempt by adults for not taking responsibility seriously, and yet we let them get behind the wheel of a multi-ton car and drive amongst the rest of us. That’s not to suggest that it’s only teenage drivers who understand this matter. There are many everyday drivers that know how serious being on the freeway is. 
They grip the steering wheel with both hands, arch their backs, and pay close attention to every moment while on the freeway. Meanwhile, there are drivers that have gotten so used to driving on the freeway that they act as though they are on a bumper car ride and don’t care whether they cut off other drivers or nearly cause accidents. They zoom along and seem to not have a care in the world. One always wonders whether those drivers are the ones that get into the accidents that you see while on the freeway. Are they more prone to accidents, or are they actually more able to skirt around accidents, which maybe they indirectly caused but managed to avoid getting entangled in themselves? For my article about how greed motivates drivers and its impacts on self-driving cars, see: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/ Leveraging Game Theory Anyway, if you are willing to concede that we can think of freeway driving as a game, you might then also be willing to accept the idea that we can potentially use game theory to help understand and model driving behavior. With game theory, we can consider freeway driving and its traffic to be something that can be mathematically modeled. This mathematical model can take into account conflict. A car cuts off another car. One car is desperately trying to get ahead of another car. And so on. The mathematical model can also take into account cooperation. As you enter onto the freeway, perhaps other cars let you in by purposely slowing down and making an open space for you. Or, you are in the fast lane and want to get over to the slow lane, so you turn on your blinker and other cars let you make your way from one lane to the next. There is at times cooperative behavior on the freeway, and likewise at times there is behavior involving conflict. 
If this topic generally interests you, there’s key work by John Glen Wardrop that produced what are considered the core principles of equilibrium in traffic assignment. Traffic assignment is the formal name given to modeling traffic situations. He developed mathematical models that showcase how we seek to minimize our cost of travel, and how we can potentially reach various points of equilibrium in doing so. At times, traffic suffers and can be modeled as doing so due to the “price of anarchy,” which is based on presumably selfish behavior. See my article about the prisoner’s dilemma and tit-for-tat driving: https://aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/ See my article about bounded irrationality and driving behavior: https://aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/ For those of you that are into computer science, you are likely familiar with the work of John von Neumann. Among his many contributions to the field of computing and mathematics, he’s also known for his work involving zero-sum games. Indeed, he made use of Brouwer’s fixed-point theorem in topology, observing that when you stir sugar into a cup of coffee there’s always a point without motion. We’ll come back to this later on in this exploration of game theory and freeway traffic. Let’s first define what a zero-sum game consists of. In a zero-sum game, the choices by the players will neither decrease nor increase the amount of available resources, and thus they are competing against a bounded set of resources. Each player wants their piece of the pie, and in so doing is keeping that piece away from the other players. The pie is not going to expand or contract; it stays the same size. Meanwhile, the players are fighting over the slices, and when someone else takes a slice it means there’s one less for the other players to have. 
A non-zero-sum game allows for the pie to be increased, and thus one player doesn’t necessarily benefit at the expense of the other players. When you are on the freeway, you at times experience a zero-sum game, while at other times it is a non-zero-sum game. Suppose you come upon a bunch of stopped traffic up ahead of you. You realize that there’s an accident and it has led to the traffic halting. You are going to get stuck behind the traffic and be late to work. Looking around, you see that there’s a freeway offramp that you could use to get off the freeway and take side streets to get around the snarl. It turns out that the freeway traffic is slowly moving forward up toward the blockage, and meanwhile other cars are also realizing that maybe they should try to get to the offramp. You are in the fast lane, which is the furthest lane from the exit ramp. The cars in the other closer lanes are all vying to make the exit. They don’t care about you. They care about themselves making the exit. If they were to let you into their lane, it would increase your chances of getting to the offramp, but simultaneously decrease their chances. This is due to the aspect that the traffic is slowly moving forward and will gradually push past the offramp. There’s a short time window involved and it’s a dog-eat-dog situation. Zero-sum game. But suppose instead that all the cars behind the snarl shared access to the offramp with each other. Politely and with civility, the cars each allowed other cars around them to make the offramp. Furthermore, there was an emergency lane that the cars opted to use, which otherwise wasn’t supposed to be used, opening up more available resources to allow the traffic to flow over to the exit. Non-zero-sum game (of sorts). Game theory attempts to use applied mathematics to model the behavior of humans and animals, and in so doing explain how games are played. 
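The offramp scenario above can be made concrete with a toy payoff model. This is a minimal sketch; all the numbers are invented for illustration and do not come from the column. Payoffs represent each driver’s chance of making the exit, so in the zero-sum version the payoffs always sum to a constant, while opening the emergency lane adds capacity and lets the "pie" grow.

```python
# Toy payoff model of the offramp scenario: two drivers compete for
# one gap ahead of the exit. Payoff = chance of making the offramp.
# All numbers are invented for illustration.

# Strategies: "push" (cut in aggressively) or "yield" (let the other go).
# payoffs[(a, b)] = (driver A's chance, driver B's chance).
zero_sum_payoffs = {
    ("push", "push"):   (0.5, 0.5),  # they fight over the gap; coin flip
    ("push", "yield"):  (1.0, 0.0),  # A takes the exit, B misses it
    ("yield", "push"):  (0.0, 1.0),
    ("yield", "yield"): (0.5, 0.5),
}

# Constant-sum: one exit slot exists, so my gain is exactly your loss.
for (pa, pb) in zero_sum_payoffs.values():
    assert pa + pb == 1.0

# Opening the emergency lane adds capacity, so mutual cooperation can
# raise the total -- the game becomes non-zero-sum.
non_zero_sum_payoffs = dict(zero_sum_payoffs)
non_zero_sum_payoffs[("yield", "yield")] = (0.9, 0.9)  # both likely make it

totals = {profile: sum(chances) for profile, chances in non_zero_sum_payoffs.items()}
print(totals[("yield", "yield")])  # exceeds 1.0: the pie grew
```

The design point is simply that the same two strategies produce a fixed-sum game or an expandable-sum game depending on whether the shared resource (exit capacity) is fixed, which mirrors the column’s zero-sum versus non-zero-sum distinction.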
This can be done in a purely descriptive manner, meaning that game theory will only describe what is going on in a game. This can also be done in a prescriptive manner, meaning that game theory can advise about what should be done when playing a game. Applying Game Theory To Autonomous Cars What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are using game theory to aid in modeling the traffic that will occur with the advent of AI self-driving cars. There are some that believe in a nirvana world whereby all cars on the roadways will be exclusively AI self-driving cars. This provides a grand opportunity to essentially control all cars, and do so in a macroscopic manner. Presumably, either by government efforts or by other means, we could set up some master system that would direct the traffic on our roads. Imagine that when you got onto the freeway, all of the cars on the freeway were under the control of a master traffic flow system. Each car was to strictly obey the master traffic flow system. It alone would determine which lane each car would be in, what the speed of the car would be, when it would change lanes, etc. In this scenario, it is assumed that there would never be traffic snarls again. Somehow the master traffic flow system would prevent traffic snarls from occurring. All traffic would magically flow along at maximum speeds and we could increase the speed limit to, say, 120 miles per hour. Pretty exciting! But this is something that seems less based on mathematics and more so on a hunch and a dream. It’s also somewhat hard to believe that humans are going to be willing to give up the control of their cars to a master traffic flow system. 
I realize you might immediately point out that if people are willing to cede control of the driving task to an AI-based self-driving car, it’s a simple next step to then accept that their particular AI self-driving car must obey some master traffic control system. We’ll have to wait and see whether people will want their AI self-driving car to be an individualized driver, or whether they’ll accept that their individualized driver will become a robot Borg of the larger collective. Anyway, even if all of this is interesting to postulate, it still omits the real-world aspect that we are not going to have all and only AI self-driving cars for a very long time. In the United States alone, there are 200+ million conventional cars. Those conventional cars are not going to disappear overnight and be replaced with AI self-driving cars. It’s just not economically feasible. As such, we’re going to have a mixture of AI self-driving cars and conventional cars for quite some time. Let’s make that even longer, too, due to the aspect that there are different levels of AI self-driving cars. A true self-driving car is considered at Level 5. That’s a self-driving car for which the AI does all of the driving. There is no need for a human driver. There is indeed usually no provision for a human driver, and the driving controls such as the steering wheel and pedals are not even present. For self-driving cars less than Level 5, the driving task is co-shared between the human driver and the AI. We might as well then say that the self-driving cars that are less than a Level 5 are pretty much in the same boat as the conventional cars. This is due to the aspect that the human driver can still take over the driving task (though, for Level 4, this is not yet quite settled; some view that a Level 4 would still have car controls for humans, while others insist it will not). 
If we have even one ounce of human driving, we’re back to the situation that it’s going to be problematic to have a master traffic flow system that commands all cars to obey. You might argue that maybe when a less-than-Level 5 self-driving car gets onto the freeway we could jury-rig those cars to obey the master traffic flow system, but this seems like a credibility stretch of how this would play out. You could even stand this topic on its ear by making the following proposal. Forget about AI self-driving cars per se. Instead, let’s make all cars have some kind of remote driving capability. We add this into all cars, conventional or otherwise. When any car gets onto the freeway, it must have this remote control feature included, otherwise it is banned from getting onto the freeway. So, we’ve now reduced all such cars to follow-along automata that the master traffic flow system can control. We would somehow lock out the human driving controls and only allow the use of the remote control during the time that the car is on the freeway. See my article about the levels of AI self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/ See my article about swarms and AI self-driving cars: https://aitrends.com/selfdrivingcars/swarm-intelligence-ai-self-driving-cars-stigmergy-boids/ If we did this, it might give us the nirvana traffic flow advantages that some claim they see in the future. And it would still allow for human driving, but just not on the freeways, or maybe only on the freeways at off-hours, with the master flow system taking over all such cars on the freeways during the morning and evening traffic times. We then wouldn’t need to be in a rush to perfect AI self-driving cars, since instead we’ve just outfitted cars with this remote control feature. It would be a lot easier than trying to get a car to drive like a human does, which is what the AI self-driving car efforts are trying to achieve. 
Well, I really doubt we’ll all accept the notion of having a remote control driving feature placed into our conventional cars. This seems like something that society at large would have severe heartburn over. It has too much of a Big Brother smell to it.

That’s actually why there seems to be such overall support for AI self-driving cars so far. Most people tend to assume that an AI self-driving car will obey whoever the human occupant is. It’s like having your own electronic chauffeur that is always at your beck and call. If instead it were being pitched that AI self-driving cars would allow for governmental control of all car traffic, and that wherever you wanted your AI self-driving car to go would first need to be cleared by the government, I’d bet we’d have a lot of people saying let’s put the brakes on this AI self-driving car thing.

Now, it could be that at first we have AI self-driving cars that are all about individual needs. You are able to use your AI self-driving car to drive you wherever you want to go, and however you want to get there. But then there’s a gradual realization that it might be prudent to collectively guide those AI self-driving cars. And so, via V2I (vehicle-to-infrastructure) communication, we creep down that path by having the roads tell your AI self-driving car which route to take and how fast to go. This then expands and eventually reaches a point whereby all AI self-driving cars are doing this. The next step becomes master control. Ouch, we got back to that. It’s just that it might happen by happenstance over time, rather than as part of a large-scale master plan.
You might enjoy reading my piece about conspiracy theories and AI self-driving cars: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

Predicting A Point Of Equilibrium

Returning to game theory, we can at least try to run traffic simulations and see what might happen as more and more cars become AI self-driving cars, especially those at the vaunted Level 5. These simulations use various payoff matrices to gauge what will happen as an AI self-driving car drives alongside human-driven cars. A symmetric payoff is one that depends only upon the strategy being deployed, not upon the AI or person deploying it, while an asymmetric payoff also depends on who is deploying the strategy. We also include varying degrees of cooperative versus non-cooperative behavior.

See my article about human and AI driving styles: https://aitrends.com/selfdrivingcars/driving-styles-and-ai-self-driving-cars/

See my article about simulations for AI self-driving cars: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

John Nash made crucial contributions to game theory and was ultimately awarded the Nobel Memorial Prize in Economic Sciences for them. His mathematical formulation showed that when there are two or more players in a game, there exists an equilibrium state at which no player can do any better by unilaterally changing strategy. The sad thing is that we cannot yet readily predict when that equilibrium point will be reached; let’s say it is very hard to do. This is still an open research problem, and if you can solve it, good for you: it might get you your very own Nobel Prize too.

Why would we want to be able to predict that point of equilibrium? Because we could then potentially guide the players toward it. On the freeway, imagine that you have a hundred cars all moving along. Some are not doing so well and are behind in terms of trying to get to work on time.
Others are doing really well, ahead of schedule, and will get to work with plenty of time to spare. All else being equal, if we had a master traffic flow system, it could reposition and guide the cars so that they would all be at their best possible state.

But if we aren’t able to figure out that best possible state, we have no means to guide everyone toward it. We instead have to use a hit-and-miss approach (not literally hit, just metaphorically). In more formal terms, Nash proved that every game with a finite number of players and moves has at least one equilibrium, provided the players are allowed to randomly choose among their moves (a mixed strategy), and at that equilibrium no player can further improve their situation by changing strategy alone.

You might say that everyone has reached the happiest point they can arrive at, given the status of everyone else involved too. When I earlier said it was hard to calculate the point of equilibrium, I was suggesting that it can be found but that it is computationally expensive to do so. Some of you might be familiar with classes of mathematical problems that are considered computable in polynomial time (P), and others that are NP (non-deterministic polynomial time). Computing a Nash equilibrium is not known to be doable in polynomial time; formally, the problem has been shown to be complete for the class PPAD, whose relationship to P remains open. Right now, it seems hard to calculate, that we can say for sure.

By the way, for those of you looking for a Nobel Prize, please let us know whether P = NP.

Conclusion

Game theory will become increasingly important to designing and shaping traffic flow on our roads, particularly once we begin to see the advent of true Level 5 AI self-driving cars. The effort to mathematically model conflict and cooperation in our traffic will involve not only the intelligent rational human decision makers, along with their irrational behavior, but also the potentially rational (and maybe irrational) behavior of the AI of the self-driving cars.
Getting a handle on the traffic aspects will allow AI developers to better shape the AI of self-driving cars, and will aid regulators and the public at large in experiencing what hopefully will be better traffic conditions than with human-only drivers. I don’t think we want to end up with AI self-driving cars that drive like those crazy human drivers who seem not to realize they are involved in a game of life-and-death. It’s deadly out there, and we need to make sure that the AI self-driving cars know how to best play that serious and somber game.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

Read more »
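The payoff-matrix and equilibrium ideas discussed above can be made concrete with a small sketch. The two-player "merge vs. yield" game below, including its strategy names and numeric payoffs, is invented purely for illustration; it is not drawn from any actual traffic simulation or AI self-driving car system. It simply checks each strategy profile against Nash's condition: no player can improve by unilaterally switching.

```python
import itertools

# Hypothetical payoff matrix for two drivers approaching a freeway merge.
# Keys: (strategy of A, strategy of B); values: (payoff to A, payoff to B).
# The numbers are illustrative only.
STRATEGIES = ["merge", "yield"]
PAYOFFS = {
    ("merge", "merge"): (-5, -5),  # both push in: near-collision, big loss
    ("merge", "yield"): (2, 0),    # A gets ahead, B loses a little time
    ("yield", "merge"): (0, 2),
    ("yield", "yield"): (-1, -1),  # both hesitate: everyone is delayed
}

def is_nash(a, b):
    """A profile is a pure-strategy Nash equilibrium if neither player can
    unilaterally switch strategies and improve their own payoff."""
    pa, pb = PAYOFFS[(a, b)]
    a_can_improve = any(PAYOFFS[(a2, b)][0] > pa for a2 in STRATEGIES)
    b_can_improve = any(PAYOFFS[(a, b2)][1] > pb for b2 in STRATEGIES)
    return not a_can_improve and not b_can_improve

equilibria = [p for p in itertools.product(STRATEGIES, STRATEGIES) if is_nash(*p)]
print(equilibria)  # [('merge', 'yield'), ('yield', 'merge')]
```

Note that "both merge" is not an equilibrium (either driver would rather back off than collide), and neither is "both yield"; the stable outcomes are the coordinated ones where exactly one car goes first, which is the kind of state a master traffic flow system would try to steer cars toward.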
  • Digital Assistants Transforming Public Service
    By Deborah Borfitz, Senior Science Writer, AI Trends

Digital assistants have become a major trend in government at every level and across geographies, and could soon be a mainstay in many state and federal agencies in the U.S. Recent favorable signs include an executive order launching the American AI Initiative and the Health and Human Services Department awarding 57 spots on its Intelligent Automation/Artificial Intelligence (AI) contract, according to natural language processing (NLP) expert William Meisel, president of TMA Associates. Speaking at the AI World Government conference, held last month in Washington, D.C., Meisel said digital assistants (aka “intelligent” or “virtual” assistants) are among the most developed and least risky ways to implement AI, and “the closest to what we see in sci-fi.” Digital assistants are broadly applicable across departments and agencies looking to cut costs and boost human productivity, and have a minimal probability of failure and unintended consequences. For a citizenry looking for answers, they’re also a “nice alternative to automated systems and long hold times,” he adds.

William Meisel, president, TMA Associates

Juniper Research reports that, by 2023, one-quarter of the populace will be using digital voice assistants daily, says Meisel. By the end of this year, the global installed base for smart speakers is projected to exceed 200 million units, and they’re doing as well as humans in understanding speech, he adds.

NLP is the core technology, matching user text input to executable commands. Digital assistants that recognize the voice typically first convert speech to text, meaning speech recognition can be tacked onto a text-only (chatbot) solution, Meisel says. Either way, the technology generates a lot of data that can be used to personalize conversations and fix flaws in websites.

Use Cases

Among the smorgasbord of intelligent assistants in the public sector are: Emma, used by U.S.
Citizenship and Immigration Services (USCIS) to help website visitors get answers and find information in English or Spanish; and Mrs. Landingham, a chatbot of the U.S. General Services Administration that works with the Slack app and guides new-hire onboarding, says Meisel. In the UK, the National Health Service has a digital assistant to help residents determine if their medical condition warrants a trip to the emergency room, he continues. The Medical Concierge Group in Uganda has built a digital assistant to advise people on their treatment options and when to see a doctor. And a chatbot based on the Haptik platform is allowing officials in Maharashtra, India, to provide conversational access to information on 1,400 public services managed by the state government.

Virtual assistants that give health advice over the phone are expected to be major players as the United Nations works to meet its 2030 Sustainable Development Goals, says Meisel. The MAXIMUS Intelligent Assistant has already enhanced the customer service experience for government constituencies around the globe.

In Mississippi, the MISSI chatbot assists with public services and suggests good places to visit, says Meisel. The City of Los Angeles is particularly fond of bots. Residents can turn to CHIP (City Hall Internet Personality), based on Microsoft’s Azure bot framework, if they need help filling out forms. They can also opt to receive local daily news and information via Amazon’s Alexa. Up in San Francisco, PAIGE, built atop Facebook’s wit.ai NLP engine, is assisting workers with questions about the city’s confusing procurement process, Meisel says. OpenData KC, the Facebook Messenger chatbot used by Kansas City’s open data portal, has enabled users to quickly find relevant information and datasets on a crowded online website, says Meisel.
In the Carolinas, the not-for-profit hospital network Atrium Health has a HIPAA-compliant, Alexa-based digital assistant that people are using to reserve a spot at one of the system’s 31 urgent care locations.

By the Numbers

Commercial applications of digital assistants are likewise varied and widespread, he says. Last year, Bank of America launched a mobile app called Erica that customers can use to check their account balance and make transactions. Erica is gaining new users at the rate of half a million per month and has doubled (to 4,000) the ways in which clients can ask her questions. Telecommunications conglomerate Vodafone Group has TOBi to handle customer transactions from start to finish, and plans to increase the number of contacts reached by chatbot six-fold over the next few years. Adobe Analytics finds 91% of 400 surveyed business decision-makers have made significant investments in voice interaction and 99% are increasing those investments, says Meisel. Close to a quarter of companies have already released a voice app, while 44% plan to do so this year, he adds. Most of those apps are for defined channels.

Failures are common when companies and governments try to build a specific AI tool, says Meisel, but there is no shortage of companies standing by to help, including TMA Associates as well as Nuance Communications, Verint, and Microsoft. The biggest challenge with digital assistants, he adds, is that “you don’t know what people are going to say when they call in. You will always have customers say something you don’t expect.” The solution is to deploy slowly, using the technology to augment the human system while you learn what you don’t know, which is precisely what Amazon did with Alexa.

Privacy By Design

Detecting and resolving misunderstandings between humans and machines is the specialty of Ian Beaver, lead research engineer at Verint, helping to ensure intelligent virtual assistants deliver tangible productivity gains.
Interactive voice response technology and website FAQs are not enough for government agencies where “funding is pulled in multiple directions and customer service is typically not high on the priority list,” he says.

Ian Beaver, research engineer, Verint

Digital assistants can also better accommodate fluctuations and surges in user demand, says Beaver. They can deal with unforeseen use cases, circumstantial information, and changing user demographics and requirements, and focus on where improvements are most needed. In the public sector, agencies have a captive audience because people have no choice in their service provider, says Beaver, and “people don’t trust what they do not choose.” Users are more willing to provide identifiable data if they know there are guardrails around how it can be used; think privacy protection laws like GDPR and CCPA. Virtual assistants that offer “privacy by design” can likewise give users a greater sense of freedom to talk about sensitive topics that carry perceived repercussions, he adds.

Beaver presented the U.S. Army’s SGT STAR virtual recruiter to demonstrate his points. The chatbot went live in 2006, integrated with Facebook four years later, and hit app stores in 2012, he says. It understands about 1,100 distinct user intentions, talks to 900 unique people a day, and took over the work of 55 cyber-recruiters in the Army’s live chat facility. “After a while, only 3% of conversations hit a live human,” he says. People talk to SGT STAR longer than they would to a human recruiter, and all that data quickly painted a portrait of users, says Beaver. They’re influenced by movies, don’t want to waste recruiters’ time, and family members care about what will happen to their loved one.
They will ask hard questions of the digital assistant before talking to a human recruiter, “like a test run.” Users also have a lot of practical questions, such as “How do I pay my bills when I’m deployed?” and “Am I going to have to cut my hair?” The information was used to redesign the Army’s website, which now includes answers to 400 common questions, says Beaver. Unexpected uses of SGT STAR were also discovered, notably to disclose embarrassing, illegal, and other personal issues that could affect enrollment or Army life. “We went into the healthcare space because of how open people were.” Veterans and active duty personnel were both looking for resources on post-traumatic stress disorder, insomnia, and other service-related mental health issues, he continues. Enlisted soldiers did not want to risk being judged or discharged by going to their supervisor for answers.

A second case study looked at the now-retired SpectreScout, a chatbot built to scan internet relay chat channels on behalf of the resource-limited cyber-crimes division of U.S. Immigration and Customs Enforcement (ICE). “We predicted the channels where the bad boys would hang out and pretended to be looking for goods [e.g., a child to exploit],” says Beaver. “We’d spin up a bunch of users and have conversations with ourselves. When humans joined in, we’d arrange a meeting with a suspect and an ICE agent would take over.” It was so successful at generating leads that it finally had to be shut down; ICE agents couldn’t keep up.

So Verint built Emma to answer questions, says Beaver, including sensitive inquiries about how to enter the U.S. as a refugee or apply for asylum. The virtual assistant launched in 2015 to handle large swings in call volume triggered by mere mentions of policy changes. Emma succeeded in reducing those call spikes, without the need to retrain a bunch of ICE representatives, he notes. In fact, Emma became a go-to resource for immigration attorneys on policy matters.
Smarter Systems

In healthcare, one role of digital assistants is to create a “frictionless experience” between doctors and patients, says Eduardo Olvera, director of user experience at Nuance Communications. The company has an Alexa-like virtual assistant that extracts insights from routine exam-room dialogue and automatically uploads the information to the right spots in the electronic health record, meaning doctors can be fully present with patients rather than focused on a computer screen. In call centers, intelligent assistants can likewise be fed transcripts of phone conversations and return recommendations for improving citizen engagement, he adds.

The new Pathfinder project of Nuance Communications is using machine learning and AI innovation to increase the conversational intelligence of virtual assistants and chatbots, says Olvera. Project Pathfinder reads existing chat logs and transcripts of conversations and uses the data to build effective dialog models by adding in missing pieces of information and modifying the flow. The company has also come up with another, yet-to-be-named conversational AI tool that will manage the work of testing for biases in the data, he says.

Systems get smarter, and interactions improve, when there is “common ground,” Olvera says. “We went from ‘please tell me in my own words’ to ‘please tell me in your own words,’ but that’s still not very grounded. What we want to see is ‘I already know. Here, it’s done.’” If-then rules only work about 35% of the time, Olvera says, and using machine learning to populate a record with everything known about a customer gets close to 80% accuracy in predicting their need. Additional AI innovation can take governments to the 90% mark that is truly transformative.

Learn more about Pathfinder from Nuance Communications and about nextIT from Verint.

Read more »
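The core step Meisel describes, matching user text input to executable commands, can be sketched minimally. The intents, keywords, threshold, and fallback name below are all hypothetical and invented for this illustration; production assistants from vendors such as Nuance or Verint use trained NLP models rather than simple keyword overlap.

```python
# Minimal bag-of-words intent matcher: route free-form user text to the
# best-scoring intent by counting overlapping keywords. This only
# illustrates the idea of mapping text to commands; real digital
# assistants use trained statistical NLP models.
INTENTS = {
    "check_status": {"status", "application", "case", "pending"},
    "office_hours": {"hours", "open", "close", "office", "location"},
    "renew_document": {"renew", "renewal", "expired", "passport", "license"},
}

def match_intent(text, threshold=1):
    words = set(text.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a human (or a clarifying question) when nothing matches,
    # since users will always say something you don't expect.
    return best if scores[best] >= threshold else "escalate_to_human"

print(match_intent("Is my application still pending?"))  # check_status
print(match_intent("what are your office hours"))        # office_hours
print(match_intent("tell me a joke"))                    # escalate_to_human
```

The escalation fallback mirrors the deployment advice above: start with the automated path only where confidence is high, and let humans absorb the unexpected inputs while the system learns.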
  • Encourage Immigration of Skilled AI Technologists, Suggests Former IARPA Director
    To stay on the cutting edge of AI development, the U.S. needs to better encourage immigration of skilled technologists, dedicate higher levels of education funding, and maintain international alliances, according to the former head of the Intelligence Advanced Research Projects Agency (IARPA), speaking recently at AI World Government in Washington, DC. China, Russia, and the U.S. are all vying for top spots in AI development, said Jason Matheny, director of the Center for Security and Emerging Technology at Georgetown University and former IARPA director, in an account reported in fedscoop.

Jason Matheny, director of the Center for Security and Emerging Technology at Georgetown University

The new technology could usher in a radical change to data analytics and military applications, giving a technological advantage to whoever reaches broad-scale AI implementation first. Reports that China is massively outdoing the U.S. in AI development can be misleading, Matheny suggested. “We see lots of current estimates … but I have not seen really good empirics yet justifying those estimates,” he said during a fireside chat at the conference. On top of that, questions of how China is spending its money, be it on quantitative research or human development, are still unanswered.

At IARPA, Matheny led investments in AI for applications in the intelligence community. How money is invested is as important as how much of it is spent, he said. “Our ability to attract and retain the world’s best and brightest computer scientists and electrical engineers is something we have greatly benefited from,” he said. That attraction comes mainly from the quality of higher education available in the U.S.

Read the source article in fedscoop.

Read more »
  • How NASA Wants to Explore AI
    NASA is working to overcome barriers that once blocked it from a full pursuit of innovations in artificial intelligence and machine learning technology, according to an account in the Federal Times. NASA has previously used AI in human spaceflight, scientific analysis, and autonomous systems. It currently has multiple programs that use AI/ML: CIMON, which is “Alexa for space”; ExMC, AI assistance for medical care; ASE, autonomy for scheduling in space; Multi-temporal Anomaly Detection for SAR Earth Observations; FDL, a partnership between industry and NASA through AIMs; and robots and rovers.

Brian Thomas, NASA data scientist

In a June 26 presentation at the AI World Government Conference in Washington, D.C., Brian Thomas, a NASA agency data scientist and program manager for Open Innovation, spoke about how to get the best results from AI/ML while considering important policy and culture implications at the agency. Machine learning has been with us for 60 years, Thomas said, “so really the question is, why haven’t we been using this all along?” “We’ve already seen the value in these technologies,” Thomas said. “They are enabling NASA’s mission now. The problem is that we’re being held back, believe it or not, and so how can we do better.”

Read the source article in the Federal Times.

Read more »

Author: hits1k
