AI Trends – AI News and Events

  • Boeing 737 MAX 8 and Lessons for AI: The Case of AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider

The Boeing 737 MAX 8 aircraft has been in the news recently, sadly as a result of a fatal crash that occurred on March 10, 2019 involving Ethiopian Airlines flight #302. News reports suggest that an earlier fatal crash of the Boeing 737 MAX 8, Lion Air flight #610 on October 29, 2018, might be similar to the March 10, 2019 crash. It is noteworthy that the Lion Air crash is still under investigation, with a final report possibly being released later this year, and the Ethiopian Airlines crash investigation is just now starting (at the time of this writing). At this stage of understanding about the crashes, I’d like to consider whether we can tentatively identify aspects of the matter that could be instructive toward the design, development, testing, and fielding of Artificial Intelligence (AI) systems. Though the Boeing 737 MAX 8 does not include elements that might be considered in the AI bailiwick per se, it seems relatively apparent that systems underlying the aircraft can be likened to how advanced automation is utilized. Perhaps the Boeing 737 MAX 8 incidents can reveal vital and relevant characteristics that offer valuable insights for AI systems, especially AI systems of a real-time nature. A modern-day aircraft is outfitted with a variety of complex automated systems that need to operate on a real-time basis. During the course of a flight, starting even when the aircraft is on the ground and getting ready for flight, there are a myriad of systems that must each play a part in the motion and safety of the plane. Furthermore, these systems are at times either under the control of the human pilots or are in a sense co-sharing the flying operations with the human pilots. The Human Machine Interface (HMI) is a key matter in this co-sharing arrangement.
I’m going to concentrate my relevancy depiction on a particular type of real-time AI system, namely AI self-driving cars. Please do not assume, though, that the insights or lessons mentioned herein are only applicable to AI self-driving cars. I would assert that the points made are equally important for other real-time AI systems, such as robots working in a factory or warehouse, and of course other AI autonomous vehicles such as drones and submersibles. You can even take the real-time aspects out of the equation and consider that these points still readily apply to AI systems that are less-than real-time in their activities. One overarching aspect that I’d like to put clearly onto the table is that this discussion is not about the actual legal underpinnings of the Boeing 737 MAX 8 and the crashes. I am not trying to solve the question of what happened in those crashes. I am not trying to analyze the details of the Boeing 737 MAX 8. Those kinds of analyses are still underway, by experts who are versed in the particulars of airplanes and who are closely examining the incidents. That’s not what this is about herein. I am instead going to try to surface, from the various media reporting, what some seem to believe might have taken place. Those media guesses might be right, they might be wrong. Time will tell. What I want to do is see whether we can turn the murkiness into something that might provide helpful tips and suggestions about what can happen, might someday happen, or is already happening in AI systems. I realize that some of you might argue that it is premature to be “unpacking” the incidents. Shouldn’t we wait until the final reports are released? Again, I am not wanting to make assertions about what did or did not actually happen.
Among the many and varied theories and postulations, I believe there is a richness of insights that can be applied right now to how we approach the design, development, testing, and fielding of AI systems. I’d also claim that time is of the essence, meaning it would behoove those AI efforts already underway to be thinking about the points I’ll be bringing up. Allow me to fervently clarify that the points I’ll raise are not dependent on how the investigations of the Boeing 737 MAX 8 incidents bear out. Instead, my points are at a level of abstraction where they are useful for AI systems efforts regardless of what the final reporting says about the flight crashes. That being said, it could very well be that the flight crash investigations uncover other and additional useful points, all of which could further be applied to how we think about and approach AI systems. As you read the brief recap herein about the flight crashes and the aircraft, allow yourself the latitude that we don’t yet know what really happened. Therefore, the discussion is by-and-large of a tentative nature. New facts are likely to emerge. Viewpoints might change over time. In any case, I’ll try to repeatedly state that the aspects being described are tentative, and you should refrain from judging those aspects, allowing your mind to focus on how the points can be used for enhancing AI systems. Even something that turns out not to have been true in the flight crashes can nonetheless present a possibility of something that could have happened, and we can leverage that understanding to the advantage of AI systems adoption. So, do not trample on this discussion because you find something amiss about a characterization of the aircraft and/or the incidents. Look past any such transgression. Consider whether the points surfaced can be helpful to AI developers and to those organizations embarking upon crafting AI systems. That’s what this is about.
For those of you who are particularly interested in the Boeing 737 MAX 8 story, it has been widely covered in the media, including by Bloomberg, the Seattle Times, the LA Times, and the Wall Street Journal.

Background About the Boeing 737 MAX 8

The Boeing 737 was first flown in the late 1960s and spawned a multitude of variants over the years, including, in the 1990s, the Boeing 737 NG (Next Generation) series. Considered the best-selling aircraft for commercial flight, the Boeing 737 model surpassed 10,000 units sold last year. It is a twin-jet aircraft with a relatively narrow body, intended for a flight range of short to medium distances. The successor to the NG series is the Boeing 737 MAX series. As part of the family of Boeing 737s, the MAX series is based on the prior 737 designs and was purposely re-engined by Boeing, along with changes to the aerodynamics and the airframe, to make key improvements including a lowered fuel burn rate and other aspects that would make the plane more efficient and give it a longer range than its prior versions. The initial approval to proceed with the Boeing 737 MAX series was given by the Boeing board of directors in August 2011. Per many news reports, there were discussions within Boeing about whether to start anew and craft a brand-new design for the Boeing 737 MAX series or whether to continue and retrofit the existing design. The decision was made to retrofit the prior design. Of the changes made to prior designs, perhaps the most notable consisted of mounting the engines further forward and higher than had been done for prior models. This design change tended to give the plane an upward pitching effect. It was more prone to this than prior versions, as a result of the more powerful engines being used (having greater thrust capacity) and their positioning at a higher and more pronounced forward position on the aircraft.
To address the possibility of the Boeing 737 MAX entering a potential stall during flight due to this retrofitted approach, particularly in a situation where the flaps are retracted, at low speed, and with a nose-up condition, the retrofit design added a new system called the MCAS (Maneuvering Characteristics Augmentation System). The MCAS is essentially software that receives sensor data and, based on the readings, will attempt to trim down the nose in an effort to avoid having the plane get into a dangerous nose-up stall during flight. This is considered a stall prevention system. The primary sensor used by the MCAS is an AOA (Angle of Attack) sensor, a hardware device mounted on the plane that transmits data within the plane, including feeding the data to the MCAS. In many respects, the AOA is a relatively simple kind of sensor, and variants of AOAs in terms of brands, models, and designs exist on most modern-day airplanes. This is to point out that there is nothing unusual per se about the use of AOA sensors; it is a common practice to use them. Algorithms used in the MCAS were intended to ascertain whether the plane might be in a dangerous condition based on the AOA data being reported, in conjunction with the airspeed and altitude. If the MCAS software calculated what was considered a dangerous condition, the MCAS would then activate to fly the plane so that the nose would be brought downward, trying to obviate the dangerous upward-nose potential-stall condition. The MCAS was devised such that it would automatically activate to fly the plane based on the AOA readings and its own calculations about a potentially dangerous condition. This activation occurs without notifying the human pilot and is considered an automatic engagement.
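To make the automatic-engagement idea concrete, here is a deliberately simplified sketch of an always-on envelope-protection monitor. This is emphatically not the actual MCAS algorithm, which is proprietary and still under investigation; every function name, threshold, and unit below is invented purely for illustration of the general pattern.

```python
# Toy illustration of an always-on automated trim monitor.
# NOT the real MCAS logic; all names and thresholds are invented.

def should_engage(aoa_deg, airspeed_kts, flaps_retracted):
    """Decide whether automated nose-down trim should activate."""
    AOA_LIMIT_DEG = 14.0   # hypothetical stall-risk threshold
    LOW_SPEED_KTS = 180.0  # hypothetical low-speed boundary
    # Engage only in the regime the system targets:
    # flaps up, low speed, and an unusually high nose-up angle.
    return flaps_retracted and airspeed_kts < LOW_SPEED_KTS and aoa_deg > AOA_LIMIT_DEG

def control_loop(sensor_aoa, airspeed, flaps_retracted, pilot_trim_input):
    # The monitor runs continuously; the pilot never switches it on.
    if pilot_trim_input != 0:
        return pilot_trim_input  # pilot trim temporarily overrides the automation
    if should_engage(sensor_aoa, airspeed, flaps_retracted):
        return -1                # automatic nose-down trim command
    return 0                     # no trim change
```

Notice that a single faulty `sensor_aoa` reading is enough to drive the trim decision, which is exactly the single-sensor concern this article turns to with respect to AI systems.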
Note that the human pilot does not overtly act to engage the MCAS per se; instead, the MCAS is essentially always on and detecting whether it should engage or not (unless the human pilot opts to entirely turn it off). During an MCAS engagement, if a human pilot tries to trim the plane and uses a switch on the yoke to do so, the MCAS becomes temporarily disengaged. In a sense, the human pilot and the MCAS automated system are co-sharing the flight controls. This is an important point since the MCAS is still considered active and ready to re-engage on its own. A human pilot can entirely disengage the MCAS and turn it off, if the human pilot believes that doing so is warranted. It is not difficult to turn off the MCAS, though it presumably would rarely if ever be turned off and might be considered an extraordinary and seldom-taken action by a pilot. Since the MCAS is considered an essential element of the plane, turning it off would be a serious act, presumably not done without the human pilot considering the tradeoffs in doing so. In the case of the Lion Air crash, one theory is that shortly after takeoff the MCAS might have attempted to push down the nose while the human pilots were simultaneously trying to pull up the nose, perhaps unaware that the MCAS was trying to push the nose down. This appears to account for the roller-coaster up-and-down motion that the plane seemed to experience. Some have pointed out that a human pilot might believe they have a stabilizer trim issue, referred to as a runaway stabilizer or runaway trim, and misconstrue a situation in which the MCAS is engaged and acting on the stabilizer trim.
Speculation based on that theory is that the human pilots did not realize they were in a sense fighting with the MCAS to control the plane, and had they realized what was actually happening, it would have been relatively easy to turn off the MCAS and take over control of the plane, no longer being in a co-sharing mode. There have been documented cases of other pilots turning off the MCAS when they believed it was fighting against their efforts to control the Boeing 737 MAX 8. One aspect that according to news reports is somewhat murky involves the AOA sensors in the case of the Lion Air incident. Some suggest that there was only one AOA sensor on the airplane and that it fed the MCAS faulty data, leading the MCAS to push the nose down, even though apparently or presumably a nose-down effort was not actually warranted. Other reports say that there were two AOA sensors, one on the Captain’s side of the plane and one on the other side, and that the AOA on the Captain’s side generated faulty readings while the one on the other side was generating proper readings, and that the MCAS apparently ignored the properly functioning AOA and instead accepted the faulty readings coming from the Captain’s side. There are documented cases of AOA sensors at times becoming faulty. Environmental conditions can also impact the AOA sensor; a build-up of water or ice on the sensor can affect its readings. Keep in mind that there are a variety of AOA sensors in terms of brands and models; thus, not all AOA sensors are necessarily going to have the same capabilities and limitations. The first commercial flights of the Boeing 737 MAX 8 took place in May 2017. There are other models of the Boeing 737 MAX series, both existing and envisioned, including the MAX 7, the MAX 8, the MAX 9, etc. The Lion Air incident, which occurred in October 2018, was the first fatal incident of the Boeing 737 MAX series.
There are a slew of other aspects about the Boeing 737 MAX 8 and the incidents, and if interested you can readily find such information online. The recap that I’ve provided does not cover all facets; I have focused on key elements that I’d like to next discuss with regard to AI systems.

Shifting Hats to the AI Self-Driving Cars Topic

Let’s shift hats for a moment and discuss some background about AI self-driving cars. Once I’ve done so, I’ll then dovetail together the insights that might be gleaned from the Boeing 737 MAX 8 aspects and how this can potentially be useful when designing, building, testing, and fielding AI self-driving cars. At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As such, we are quite interested in whatever lessons can be learned from other advanced automation development efforts and seek to apply those lessons to our efforts, and I’m sure that the auto makers and tech firms also developing AI self-driving car systems are keenly interested too. I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.
In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results. Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion. Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point since it means that the AI of self-driving cars needs to contend not just with other AI self-driving cars, but also with human-driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions.
That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other. Period. Returning to the matter of the Boeing 737 MAX 8, let’s consider some potential insights that can be gleaned from what the news has been reporting. Here’s a list of the points I’m going to cover:

  1. Retrofit versus start anew
  2. Single sensor versus multiple sensors reliance
  3. Sensor fusion calculations
  4. Human Machine Interface (HMI) designs
  5. Education/training of human operators
  6. Cognitive dissonance and Theory of Mind
  7. Testing of complex systems
  8. Firms and their development teams
  9. Safety considerations for advanced systems

I’ll cover each of the points, doing so by first reminding you of my recap about the Boeing 737 MAX 8 as it relates to the point being made, and then shifting into a focus on AI systems and especially AI self-driving cars for that point. I’ve opted to number the points to make them easier to refer to as a sequence, but the sequence number does not denote any kind of priority of one point being more or less important than another. They are all worthy points. Take a look at Figure 1.

Key Point #1: Retrofit versus start anew

Recall that the Boeing 737 MAX 8 is a retrofit of prior designs of the Boeing 737.
Some have suggested that the “problem” being solved by the MCAS is a problem that should never have existed at all; namely, rather than creating an issue by adding the more powerful engines and putting them further forward and higher, perhaps the plane ought to have been redesigned entirely anew. Those who make this suggestion are then assuming that the stall prevention capability of the MCAS would not have been needed, which then would not have been built into the planes, which then would never have led to a human pilot essentially co-sharing and battling with it to fly the plane. Don’t know. Might there have been a need for an MCAS anyway? In any case, let’s not get mired in that aspect of the Boeing 737 MAX 8 herein. Instead, think about AI systems and the question of whether to retrofit an existing AI system or start anew. You might be tempted to believe that AI self-driving cars are so new that they are entirely a new design anyway. This is not quite correct. There are some AI self-driving car efforts that have built upon prior designs and are continually “retrofitting” a prior design, doing so by extending, enhancing, and otherwise leveraging the prior foundation. This makes sense in that starting from scratch is going to be quite an endeavor. If you have something that already seems to work, and if you can adjust it to make it better, you would likely be able to do so at a lower cost and at a faster pace of development. One consideration is whether the prior design might have issues that you are not aware of, which you are perhaps carrying into the retrofitted version. That’s not good. Another consideration is whether the effort to retrofit requires changes that introduce new problems that were not previously in the prior design. This emphasizes that the retrofit changes are not necessarily always of an upbeat nature.
You can make alterations that lead to new issues, which then require you to craft new solutions, and those new solutions are “new” and therefore not already well-tested via prior designs. I routinely forewarn AI self-driving car auto makers and tech firms to be cautious as they continue to build upon prior designs. It is not necessarily pain free.

Key Point #2: Single sensor versus multiple sensors reliance

For the Boeing 737 MAX 8, I’ve mentioned that the AOA (Angle of Attack) sensors play a crucial role in the MCAS system. It’s not entirely clear whether there is just one AOA sensor or two involved in the matter, but in any case, it seems like the AOA is the only type of sensor involved for that particular purpose, though presumably there must be other sensors, such as those registering the altitude and speed of the plane, encompassed by the data feed going into the MCAS. Let’s though assume for the moment that the AOA is the only sensor for what it does on the plane, namely ascertaining the angle of attack. Go with me on this assumption, though I don’t know for sure if it is true. The reason I bring up this aspect is that if you have an advanced system that is dependent upon only one kind of sensor to provide a crucial indication of the physical aspects of the system, you might be painting yourself into an uncomfortable corner. In the case of AI self-driving cars, suppose that we used only cameras for detecting the surroundings of the self-driving car.
It means that the rest of the AI self-driving car system is solely dependent upon whether the cameras are working properly and whether the vision processing system is working correctly. If we add to the AI self-driving car another capability, such as radar sensors, we now have a means to double-check the cameras. We could add another capability such as LIDAR, and we’d have a triple check involved. We could add ultrasonic sensors too. And so on. Now, we must realize that the more sensors you add, the more the cost goes up, along with the complexity of the system rising too. For each added sensor type, you need to craft an entire capability around it, including where to position the sensors, how to connect them into the rest of the system, and having the software that can collect the sensor data and interpret it. There is added weight to the self-driving car, there is added power consumption, there is more heat generated by the sensors, etc. Also, the amount of computer processing required goes up, including the number of processors, the memory needed, and the like. You cannot just start including more sensors because you think it will be handy to have them on the self-driving car. Each added sensor involves a lot of added effort and costs. There is an ROI (Return on Investment) involved in making such decisions. I’ve questioned many times in my writings and presentations whether Elon Musk and Tesla’s decision not to use LIDAR is going to ultimately backfire on them, and even Elon Musk himself has said it might. I’d like to use the AOA matter as a wake-up call about the kinds of sensors that the auto makers and tech firms are putting onto their AI self-driving cars. Do you have a type of sensor for which no other sensor can obtain something similar? If so, are you ready to handle the possibility that if the sensor goes bad, your AI system is going to be in the blind about what is happening, or perhaps worse still, that it will get faulty readings?
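One way to see the value of redundant sensor modalities is a simple cross-check: if two independent modalities disagree beyond a tolerance, the system at least knows something is wrong. The following is a minimal sketch with invented names and an invented tolerance, not any production self-driving stack:

```python
# Minimal sketch of cross-checking two independent distance estimates
# (e.g., camera vs. radar) for the same tracked object.
# Names and the tolerance value are hypothetical.

def cross_check(camera_dist_m, radar_dist_m, tolerance_m=2.0):
    """Return (agreed_distance, healthy) for one tracked object."""
    if abs(camera_dist_m - radar_dist_m) <= tolerance_m:
        # Modalities agree: average them for a better estimate.
        return (camera_dist_m + radar_dist_m) / 2.0, True
    # Modalities disagree: flag a possible sensor fault rather than
    # silently trusting either reading.
    return None, False
```

With only one modality there is nothing to compare against, so a faulty reading is indistinguishable from a good one.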
This does bring up another handy point, specifically how to cope with a sensor that is being faulty. The AI system cannot assume that a sensor is always going to be working properly. The “easiest” kind of problem is when the sensor fails entirely, and the AI system gets no readings from it at all. I say this is easiest in that the AI then can pretty much make a reasonable assumption that the sensor is then dead and no longer to be relied upon. This doesn’t mean that handling the self-driving car is “easy” and it only means that at least the AI kind of knows that the sensor is not working. The tricky part is when a sensor becomes faulty but has not entirely failed. This is a scary gray area. The AI might not realize that the sensor is faulty and therefore assume that everything the sensor is reporting must be correct and accurate. Suppose a camera is having problems and it is occasionally ghosting images, meaning that an image sent to the AI system has shown perhaps cars that aren’t really there or pedestrians that aren’t really there. This could be disastrous. The rest of the AI might suddenly jam on the brakes to avoid a pedestrian, someone that’s not actually there in front of the self-driving car. Or, maybe the self-driving car is unable to detect a pedestrian in the street because the camera is faulting and sending images that have omissions. The sensor and the AI system must have a means to try and ascertain whether the sensor is faulting or not. It could be that the sensor itself is having a physical issue, maybe by wear-and-tear or maybe it was hit or bumped by some other matter such as the self-driving car nudging another car. Another strong possibility for most sensors is the chance of it getting covered up by dirt, mud, snow, and other environmental aspects. The sensor itself is still functioning but it cannot get solid readings due to the obstruction. 
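Detecting a degraded-but-not-dead sensor generally comes down to plausibility checks over time: readings that vanish, leave the physically possible range, or jump implausibly fast are suspect. Here is a hedged sketch of that idea; the class, its fields, and the thresholds are all invented for illustration:

```python
# Sketch of a per-sensor health monitor: a dead sensor is easy to spot
# (no data at all), while a degraded one needs plausibility checks.
# All names and thresholds are hypothetical.

class SensorHealthMonitor:
    def __init__(self, min_valid, max_valid, max_jump):
        self.min_valid = min_valid  # physically possible minimum reading
        self.max_valid = max_valid  # physically possible maximum reading
        self.max_jump = max_jump    # largest believable change per tick
        self.last = None

    def check(self, reading):
        """Classify the latest reading as 'dead', 'suspect', or 'ok'."""
        if reading is None:
            return "dead"       # the "easy" case: no readings at all
        if not (self.min_valid <= reading <= self.max_valid):
            return "suspect"    # outside the physically possible range
        if self.last is not None and abs(reading - self.last) > self.max_jump:
            return "suspect"    # implausibly fast change (e.g., ghosting)
        self.last = reading     # only trusted readings update the baseline
        return "ok"
```

A "suspect" verdict does not tell the AI what the true value is; it only lets the rest of the system downgrade its trust in that sensor rather than consuming the faulty data as if it were correct.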
AI self-driving car makers need to be thoughtfully and carefully considering how their sensors operate and what they can do to detect faulty conditions, along with either trying to correct for the faulty readings or at least informing and alerting the rest of the AI system that faultiness is happening. This is serious stuff. Unfortunately, it is sometimes given short shrift.

Key Point #3: Sensor fusion calculations

As mentioned earlier, one theory was that the Boeing 737 MAX 8 in the Lion Air incident had two AOA sensors, one of which was faulting while the other was still good, and yet the MCAS supposedly opted to ignore the good sensor and instead rely upon the faulty one. In the case of AI self-driving cars, an important aspect involves undertaking a kind of sensor fusion to figure out a larger overall notion of what is happening with the self-driving car. The sensor fusion subsystem needs to collect together the sensory data, or perhaps the sensory interpretations, from the myriad of sensors and try to reconcile them. Doing so is handy because each type of sensor might be seeing the world from a particular viewpoint, and by “triangulating” the various sensors, the AI system can derive a more holistic understanding of the traffic around the self-driving car. Would it be possible for an AI self-driving car to opt to rely upon a faulting sensor and simultaneously ignore or downplay a fully functioning sensor? Yes, absolutely, it could happen. It all depends upon how the sensor fusion was designed and developed to work.
If the AI developers thought that the forward camera is more reliable overall than the forward radar, they might have developed the software such that it tends to weight the camera more so than the radar. This can mean that when the sensor fusion is trying to decide which sensor to choose as providing the right indication at the time, it might default to the camera, rather than the radar, even if the camera is in a faulting mode. Perhaps the sensor fusion is unaware that the camera is faulting, and so it… Read more »
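The failure mode described here, a fusion scheme whose hard-coded preference for one modality lets a faulty reading win, can be sketched in a few lines. Everything below is a hypothetical illustration with invented weights, not how any particular vendor's fusion actually works:

```python
# Sketch of weighted sensor fusion in which a built-in preference for
# the camera can cause a faulty camera reading to dominate.
# Weights and names are hypothetical.

def fuse(camera_dist_m, radar_dist_m, camera_weight=0.8):
    """Weighted average of two distance estimates for one object."""
    radar_weight = 1.0 - camera_weight
    return camera_weight * camera_dist_m + radar_weight * radar_dist_m

# If the camera ghosts an object at 5 m while the radar correctly
# reports 50 m, the fused estimate is dragged toward the faulty sensor:
fused = fuse(5.0, 50.0)  # 0.8*5 + 0.2*50 = 14.0 m
```

Unless each sensor's weight is adjusted by an independent health estimate, the fusion will faithfully propagate a faulty reading, which mirrors the theory of the MCAS preferring the faulting AOA sensor.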
  • Government is Reskilling Workforce to Handle Disruptions from Spread of AI
The future of work will be dramatically transformed by artificial intelligence, so Americans should be prepared to engage in lifelong learning, tech experts said at a recent AI event in Washington. “New technologies are changing so quickly that any of us who are even experts in the field get out of date very quickly,” Lynne Parker, the White House Office of Science and Technology Policy’s assistant director for artificial intelligence, told attendees of The Economist’s “The AI Agenda” event in Washington. “So nationally, we need to foster an environment where we are used to the idea that we’ll have lifelong learning. It’s no longer that you go through K-12, or you go through college, and you’re done.” Parker highlighted some of the government’s attempts at reskilling the workforce to keep up with the changing landscape and encouraged all industries to emulate such efforts. She also said jobs are being affected across the board and “to a wide range of extents,” but it is important to embrace new technologies, such as AI, to alleviate some of the impending challenges workers will face. “Regardless of what kind of job you have—whether it’s in manufacturing, transportation, healthcare, or law—there are ways that AI can help you. It can provide tools to get rid of some of the mundane kinds of tasks,” Parker said. “However, if we are not comfortable with those tools, […] then we will feel like we will have our hands tied behind our back.” National Science Foundation Director France Córdova agreed that AI is slowly becoming embedded into everything Americans do, including driving, shopping, flying, and especially in regard to how people do work. She said the technology is also creating innovative new opportunities, particularly within the federal government. “Part of what the government is doing now is renewing itself to all of its agencies,” Córdova said.
“And AI is playing an important role in that.” Within the NSF, for example, Córdova said AI is being used to approach inoperable databases, “of which [they] have a lot.” She said the foundation is also looking at AI methods that can help them better identify more diverse reviewer pools for funding research. Daniel Weitzner, founding director of MIT’s Internet Policy Research Initiative and principal research scientist at MIT CSAIL, said AI’s evolution and impacts on society will be “incremental” and “lumpy.” He said, as best as anyone can predict, the changes that AI brings to the workforce will be much more complicated than simply eliminating jobs. Instead, it is more likely that it will change the way people work. Read the source article in Nextgov. Read more »
  • Keeping Up In An AI-Driven Workforce
By AI Trends Staff

AI will change the future of work, experts say. To keep up, we’ll need a culture of lifelong learning. “New technologies are changing so quickly that any of us who are even experts in the field get out of date very quickly,” Lynne Parker, the White House Office of Science and Technology Policy’s assistant director for artificial intelligence, told attendees of The Economist’s “The AI Agenda” event in Washington. “So nationally, we need to foster an environment where we are used to the idea that we’ll have lifelong learning. It’s no longer that you go through K-12, or you go through college, and you’re done.” Nextgov reported Parker’s comments and her view that the government’s attempts to reskill the workforce to keep up with the changing landscape are worth emulating. New technologies—including AI—are changing jobs across all industries, she said. We must help make our workforce comfortable with these new tools. National Science Foundation Director France Córdova highlighted the opportunities AI is creating within government in particular. Within the NSF, for example, Córdova said AI is being used to approach inoperable databases, “of which [they] have a lot.” She said the foundation is also looking at AI methods that can help them better identify more diverse reviewer pools for funding research, Nextgov reports. But it was Daniel Weitzner, founding director of MIT’s Internet Policy Research Initiative and principal research scientist at MIT CSAIL, who noted that AI will bring more changes than simply the elimination of jobs. Instead, he said, it is more likely that it will change the way people work. Read the source article at Nextgov. Read more »
  • LillyWorks Adds Predictive Analytics to its Manufacturing Software
LillyWorks has announced the addition of predictive analytics to its Protected Flow Manufacturing software. Protected Flow Manufacturing provides the user with an execution plan – what to run and when to run it – based on the mix of products and the resources required to make them, aiming for the best timing of output and minimal delays. The Predictor, the predictive analytics tool of Protected Flow Manufacturing that incorporates AI techniques, takes work order and production requirement information and combines it with capacity and capabilities information to predict how the shop floor will look in the future. An early user is Graphicast Inc. of Jaffrey, NH. “We’ve spent 40 years innovating the metal casting process through the combination of Zinc-Aluminum (ZA-12) Alloy, our proprietary low-turbulence, auto-fill casting process, and permanent graphite molds ideal for production volumes ranging from 100 to 20,000 parts,” said Val Zanchuk, President of Graphicast. If, for example, the Predictor sees that a shipment to a customer depending on critical parts will be late three weeks out, an adjustment can be made to meet the schedule requirements. Or, for a customer request to expedite an order, the Predictor is used to see the impact of that expedite on the entire operation. A decision can then be made on whether the factory can absorb the impact or whether a change needs to be made in the completion date. “Being able to accurately predict shop floor activity three weeks, or three months, from now is a benefit of artificial intelligence, and it helps us respond to customers and be able to deliver on our promises,” Zanchuk said. Mark Lilly, the CEO of LillyWorks and the son of the founder, said in an interview with AI Trends, “We focus on prioritization on the shop floor primarily. We have built a framework that dynamically estimates time for the work setup and run times. 
With that, we are always able to see which work order is in danger of being late.” Most American manufacturing is custom or made to order, so the ability to adjust schedules is critical. The Predictor runs a simulation based on the priorities of the manufacturer. “It shows where the bottlenecks are going to be, and it shows for each work order what operations remain to be finished,” Lilly said. Next, the Predictor taps into Amazon’s machine learning service to determine the chances of a job being late. It takes into account past performance and other relevant variables. “It might see something the Predictor misses,” Lilly said. The company is also researching the use of “gamification” techniques to motivate users to engage more with the software to achieve business objectives. LillyWorks is working with gamification software supplier Funifier on this effort, Lilly said. “We want people to select the top priority, the first job in danger of being late, and use gamification techniques to help view the impact of alternative schedule adjustments, to model different responses.” For more information, go to LillyWorks. Read more »
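As a rough illustration of the kind of forward projection described above, consider a minimal sketch that flags work orders whose remaining setup and run time exceeds the time left before their due date. The names, time estimates, and single-pass simplification here are hypothetical for illustration only, not LillyWorks’ actual method:

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    work_center: str
    setup_hours: float
    run_hours: float

@dataclass
class WorkOrder:
    order_id: str
    due_in_hours: float           # hours until the promised due date
    remaining_ops: list = field(default_factory=list)

def hours_remaining(order):
    """Total estimated setup + run time left on the order."""
    return sum(op.setup_hours + op.run_hours for op in order.remaining_ops)

def flag_late_orders(orders, buffer_hours=0.0):
    """Return orders whose remaining work exceeds the time until their due date."""
    return [o for o in orders
            if hours_remaining(o) + buffer_hours > o.due_in_hours]

orders = [
    WorkOrder("WO-101", due_in_hours=40,
              remaining_ops=[Operation("mill", 2, 30), Operation("inspect", 1, 4)]),
    WorkOrder("WO-102", due_in_hours=20,
              remaining_ops=[Operation("cast", 4, 25)]),
]
late = flag_late_orders(orders)
# WO-102 needs 29 hours of remaining work but is due in 20,
# so it is flagged as in danger of being late
```

A production system would of course simulate queueing and contention at each work center rather than simply summing hours, which is where the bottleneck visibility Lilly describes comes from.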
  • Inspur Unveils Specialized AI Servers to Support Edge Computing in 5G Era
By AI Trends Staff
Inspur has announced a specialized AI server for edge computing applications. The new Inspur NE5250M5 includes two NVIDIA V100 Tensor Core GPUs or six NVIDIA T4 GPUs for compute-intensive AI applications, including autonomous vehicles, smart cities and smart homes. It also can enable 5G edge applications such as the Internet of Things, multi-access edge computing (MEC) and network function virtualization (NFV), and its design is optimized for harsh deployment environments on the edge. “A lot of people in the US probably don’t know anything about us,” Alan Chang, senior director of the server product line for Inspur, told AI Trends. Yet IDC ranked the China-based server supplier as the number one AI server provider in its First Half 2018 China AI Infrastructure Market Survey report, with a 51% share. Worldwide in the x86 server market, IDC has ranked Inspur at No. 3. Inspur also announced an AI server supporting eight NVIDIA V100 Tensor Core GPUs, with an ultra-high-bandwidth NVSwitch, for demanding AI and high-performance computing (HPC) applications. The Inspur NF5488M5 is designed to facilitate a variety of deep-learning and high-performance computing applications, including voice recognition, video analysis and intelligent customer service. The announcements were made at NVIDIA’s GPU Technology Conference held in March in San Jose.
Looking For A Niche
Some 85% of the company’s servers so far have been sold in China and 15% into Europe and the US, Chang told AI Trends. “We are really focused on cloud server providers,” he said. Inspur is interested in further penetrating western markets. “We need a niche,” Chang said. The niche is high-performance AI servers that work in conjunction with NVIDIA GPUs and other chips. Inspur is a member of the Open Compute Project in China that includes Alibaba and Tencent, which has helped the company to differentiate on software. “Our value proposition is to bring a mature, high-value motherboard. 
We can create a reliable, high-performance system that other companies cannot produce,” Chang said. Inspur positions itself to complement, not compete with, NVIDIA; its competition in the US is instead with server shipment leaders Dell and HP. The innovations in switch design are leading to performance increases that can range from 20% to 60% in early experience, Chang said. Another differentiation for Inspur is lower power requirements. Markets in China, Japan, India, Taiwan and much of Europe average 8kW per server rack. The US averages 20kW to 30kW. The new product supporting edge computing is well-timed. “5G is exploding,” Chang said, with drivers coming from AI in autonomous driving, retail, manufacturing and the IoT (Internet of Things). “The AI inferencing will be happening at the edge,” Chang said. “That’s why we designed this new box.” Read more »
  • The Impact Of Algorithmic-Enhanced Care
Algorithms are changing clinical care, says Sandy Aronson, Executive Director of IT, Partners HealthCare Personalized Medicine. True AI—neural networks—will play a role, but less sophisticated algorithms are already powering dramatic improvements. “We are seeing the benefits of introducing algorithms into the care delivery process to enable us to really harness that data and help the way we make decisions,” Aronson told AI Trends. “It’s logic, often complex Boolean logic, that enables us to get started, improve the care process, increase the amount of data that flows through the care process,” he says. “That should, in turn, set us up for more and more use of AI.” On behalf of AI Trends, Gemma Smith spoke with Aronson about the impact of algorithmic-enhanced care—real world examples he’s seen at Partners and the progress he expects to come. These aren’t just “last mile” technologies, he believes. Algorithm-enhanced care will have broad benefits for healthcare. Editor’s note: Gemma Smith, a conference producer at Cambridge Healthtech Institute, is helping plan a track dedicated to AI in Healthcare at the Bio-IT World Conference & Expo in Boston, April 16-18. Aronson is speaking on the program. Their conversation has been edited for length and clarity. AI Trends: Can you give me examples of where algorithmic-enhanced care has had a significant impact, and the benefits you’ve seen from that? Sandy Aronson: Sure. One example would be the way that we allocate platelets within the hospital system. At Brigham and Women’s Hospital, the way that we obtain platelets is you have these donors who come in and, again and again, sit for up to two hours while their blood is cycled outside of their body to obtain a bag of platelets. That altruistic action is critical to giving us the ability to perform bone marrow transplants. Once a patient gets a bone marrow transplant, we take their platelet count to zero. 
They reach the hospital floor after they’ve received their transplant, and we start transfusing them with platelets, and about 15% of the time, the patient immediately rejects the platelets that we gave to them because of a lack of a match between the patient’s HLA type (Human leukocyte antigen) and the donor’s HLA type. In that scenario, you not only have not gotten any value out of this altruistic act from the donor, but it’s also an expensive process. You’ve incurred costs, you haven’t given the patient the platelet bump that we were looking for, so they remain at risk for bleeding, and you potentially introduce new antibodies into the patient that could make them harder to match in the future. The reality is that there’s a relatively small number of altruists who consistently come in and donate most of these platelets, so they can be HLA typed. We have to HLA type the recipient in order to match the bone marrow donor. So the information is available to do a much better job at matching platelets to patients. Instead of the oldest bag of platelets being taken automatically and given to the patients when platelets are ordered, what we do now is we provide an application that uses an algorithm to sort the bags of platelets in the inventory so that the blood bank technician can see which bags of platelets are most likely to be accepted by the patient, so we can prioritize those. We’re still gathering data on the impact of this, but the initial data we’ve gathered is promising relative to using platelets much more efficiently and getting much better platelet count bumps when we transfuse platelets into patients. So, we’re looking forward to collecting more data and then processing that data to assess the true impact of this program. That’s one example. Another example is the way that we care for patients with heart failure, hypertension, or high lipid values. 
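The platelet-sorting idea Aronson describes can be illustrated with a deliberately simplified sketch: rank the inventory by a compatibility score against the recipient, tie-breaking toward older bags to limit wastage. Real HLA matching is far more involved (antibody screens, antigen specificities, clinical judgment), and the scoring below is a hypothetical stand-in, not the actual Partners algorithm:

```python
def hla_mismatch_count(patient_hla, donor_hla):
    """Count donor HLA antigens absent from the patient's type
    (a crude proxy for rejection risk)."""
    return len(set(donor_hla) - set(patient_hla))

def rank_platelet_bags(patient_hla, bags):
    """Sort inventory so the most compatible bags (fewest mismatches) come
    first; break ties by preferring the oldest bag to reduce wastage."""
    return sorted(bags, key=lambda b: (hla_mismatch_count(patient_hla, b["hla"]),
                                       -b["age_days"]))

patient = ["A1", "A2", "B8", "B44"]
bags = [
    {"id": "bag-1", "hla": ["A1", "A3", "B7"],  "age_days": 3},
    {"id": "bag-2", "hla": ["A1", "A2", "B8"],  "age_days": 1},
    {"id": "bag-3", "hla": ["A2", "B44"],       "age_days": 4},
]
ranked = rank_platelet_bags(patient, bags)
# bag-3 and bag-2 have no mismatched antigens; bag-3 is older, so it ranks
# first, and bag-1 (two mismatches) ranks last
```

The design point mirrors the article: the technician still chooses, but the sorted view replaces the default of simply issuing the oldest bag.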
There are guidelines for treating these patients, but in typical clinical care it is extremely difficult to bring patients into compliance with those guidelines at scale. These are conditions that affect a lot of patients. The process of implementing the guidelines requires a level of iteration that is difficult to achieve within the traditional clinical workflow. The patient needs to see a clinician, the clinician needs to prescribe medication, we need to see what the effect of that medication is, and then you need to make adjustments either to the medications being given or to the doses of the medications being given to optimize the patient’s care. The problem is, we all know that doctors are incredibly busy, patients lead busy lives as well, and scheduling these visits where this optimization can happen often takes a lot of time, and therefore the time to get the patient to the optimal treatment is longer than we’d like. We’ve instituted a program where we built the guidelines into an algorithm that’s contained within an App. Navigators are assigned to work with patients, and they contact the patients at the time that is optimal for the patient and their care, and they don’t have to consider the scheduling of the busy clinician. They can contact the patient, work with that patient to gather information that assists in determining what tests ideally should be ordered when, track their progress, and then they work with a pharmacist and overseeing clinicians to adjust medications. We find through this process that we’re really able to bring down lipid and blood pressure values in a way that’s really pretty gratifying to see. So those are two examples of where algorithms are entering care and making a difference. What is the single biggest challenge that you faced when implementing algorithmically-enhanced care like you describe here? 
We initially thought we could just surgically interject these algorithms into the existing care delivery process so that they could help someone make a better decision. What we really found though, and both of these are examples of this, is that as you’re introducing algorithms into care, it gives you the ability to rethink the care delivery process as a whole. And that’s hard; it takes a great deal of effort from both clinicians and IT folks—and sometimes business folks and others—to really figure out how to reformulate the care delivery process. But, that’s also where the power is. That’s where you can really think deeply about the optimal experience from a patient care perspective and how to deliver that. That often involves bigger changes than were first anticipated. What makes you most excited about the use of AI and algorithms in the healthcare industry? I truly believe that we are on the cusp of very, very significant changes and improvements to the healthcare system. Traditional care delivery pathways have evolved over a long time, and been incrementally improved over a long time, but what we have now is the ability to really look at how we fundamentally change these processes to make them better. That could be in the context of new technologies, new forms of data coming online, new ways we can interact with patients becoming available, and new algorithmic capabilities. And it’s not just that. When you move to algorithmically-based care, it forces you to collect clean data to drive the algorithm, and that’s something that the healthcare system hasn’t traditionally been very good at. By introducing a process that collects that type of clean data, that data then has the potential to become the fuel for machine learning to improve the process. When you implement algorithmically-based care, what you’ve really done is systematized part of the care delivery process. 
As a result of doing that, you can feed back improvements into that process far faster than you could in a traditional care delivery setting where decision making is so distributed. This starts to set us up for continuous learning processes that have the potential to make clinicians far, far more powerful in terms of being able to diagnose, monitor, and treat patients in ways that constantly improve. Folks have talked about the continuous learning healthcare system for a long time, but I do think we are seeing the beginning of the process that can truly make that real. I really think in the best case scenario—and we’ve all got to try to deliver the best case scenario—it could deliver improvements in human health on a scale that we’ve never seen before. Read more »
  • CMU Hosts Discussion of Ethics of AI Use by Department of Defense
As artificial intelligence moves closer and closer to inevitable integration into nearly every aspect of national security, the U.S. Department of Defense tasked the Defense Innovation Board with drafting a set of guiding principles for the ethical use of AI in such cases. The Defense Innovation Board (DIB) is an organization set up in 2016 to bring the technological innovation and best practice of Silicon Valley to the U.S. Military. The DIB’s subcommittee on science and technology recently hosted a public listening session at Carnegie Mellon University focused on “The Ethical and Responsible Use of Artificial Intelligence for the Department of Defense.” It’s one of three DIB listening sessions scheduled across the U.S. to collect public thoughts and concerns. Using the ideas collected, the DIB will put together its guidelines in the coming months and announce a full recommendation for the DoD later this year. Participants in the listening session included Michael McQuade, vice president for research at CMU; Milo Medin, vice president of wireless services at Google; Missy Cummings, director of the Humans and Autonomy Lab at Duke University; and Richard Murray, professor of control and dynamical systems and bioengineering at Caltech. McQuade introduced the session by saying that AI is not merely a technology but a social element that carries both obligations and responsibilities. In a press conference following the public session, Murray said the U.S. would be remiss if it did not recognize its responsibility to declare a moral and ethical stance on the use of AI. Medin added that the work done by the DIB will only be the first step. The rapidly changing nature of technology will require the policy to continue to iterate going forward. “In this day and age, and with a technology as broadly implicative as AI, it is absolutely necessary to encourage broad public dialogue on the topic,” said McQuade. 
At the listening session, the public brought forth the following concerns and comments:
  • The human is not the ideal decision maker. AI should be implemented into military systems to minimize civilian casualties, to enhance national security and to protect people in the military. AI and sensor tech can be used to determine if a target is adult-sized or carrying a weapon before a remote explosive detonates, or on a missile to determine whether civilian or military personnel are aboard an aircraft.
  • “Machinewashing.” Similar to the “greenwashing” that took place when the fossil fuel industry came under fire and began to launch ad campaigns promoting its companies as earth-friendly, tech companies may be doing the same thing when it comes to AI. AI has been linked to issues like racial bias and election fraud, notably in the case of Facebook’s Cambridge Analytica scandal. Tech companies are spending millions to treat these issues like public relations challenges. As with climate change, if AI is controlled only by the tech giants, the public will see more crises like this in the coming decades.
  • Laws may not express our social values. A lot of emphasis is put on adhering to the legal framework around conducting military operations and war, but laws do not necessarily adequately express the values and morals of a society. As we deploy AI into our lives and into national security, it’s important to remember this distinction.
  • A system of checks and balances. Any powerful AI system should come with three other equally powerful systems to provide a system of checks and balances. If one begins to act strangely, the other two can override it. One can never override the other two. Humans should have a kill switch or back door code that allows them to delete all three.
  • It won’t be perfect. No AI system will ever be perfect, but no military system is perfect. One of the most unpredictable is the individual soldier, who is subject to hunger, exhaustion and fear. We only need to determine whether AI can be as good as or better than what humans can perform by themselves.
Read the source article at the Pittsburgh Business Times. Read more »
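One reading of the “checks and balances” suggestion above resembles classic triple modular redundancy, in which three independent systems vote and a two-of-three majority wins. A minimal sketch of that voting scheme, with all names assumed for illustration:

```python
from collections import Counter

def majority_vote(outputs):
    """2-of-3 voting: accept the answer at least two of three redundant
    systems agree on, and flag any dissenter for human review."""
    assert len(outputs) == 3, "scheme assumes exactly three redundant systems"
    winner, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        # all three disagree: no safe majority, escalate to a human
        raise RuntimeError("no majority: all three systems disagree")
    dissenters = [i for i, out in enumerate(outputs) if out != winner]
    return winner, dissenters

decision, outliers = majority_vote(["engage", "engage", "abort"])
# decision is "engage"; system index 2 is flagged as acting strangely
```

The commenter’s further conditions, that no single system can override the other two and that humans retain a kill switch over all three, sit outside this sketch and would be enforced at the architecture level.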
  • Chief Safety Officers Needed in AI: The Case of AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider
Many firms think of a Chief Safety Officer (CSO) in a somewhat narrow manner, as someone who deals with in-house occupational health and safety aspects occurring solely in the workplace. Though adherence to proper safety matters within a company is certainly paramount, there is an even larger role for CSOs that has been sparked by the advent of Artificial Intelligence (AI) systems. Emerging AI systems being embedded into a company’s products and services have stoked the realization that a new kind of Chief Safety Officer is needed, one with wider duties and requiring a dual internal/external persona and focus. In some cases, especially for life-or-death kinds of AI-based products such as AI self-driving cars, it is crucial that there be a Chief Safety Officer at the highest levels of a company. The CSO needs to be provided with the kind of breadth and depth of capability required to carry out this fuller charge. By being at or within the top executive leadership, they can aid in shaping the design, development, and fielding of these crucial life-determining AI systems. Gradually, auto makers and tech firms in the AI self-driving car realm are bringing on-board a Chief Safety Officer or equivalent. It’s not happening fast enough, I assert, yet at least it is a promising trend and one that needs to speed along. Without a prominent position of Chief Safety Officer, it is doubtful that auto makers and tech firms will give the requisite attention and due care toward the safety of AI self-driving cars. I worry too that those firms not putting in place an appropriate Chief Safety Officer are risking not only the lives of those that will use their AI self-driving cars, but also putting into jeopardy the advent of AI self-driving cars all told. 
In essence, firms that give lip service to the safety of AI self-driving car systems, or that inadvertently fail to provide the utmost attention to safety, are likely to bring forth adverse safety events on our roadways. The public and regulators will react not just toward the offending firm; such incidents will become an outcry and an overarching barrier to any furtherance of AI self-driving cars. Simply stated, for AI self-driving cars, the chances of a bad apple spoiling the barrel are quite high, something that all of us in this industry live on the edge of each day. In speaking with Mark Rosekind, Chief Safety Innovation Officer at Zoox, at a recent Autonomous Vehicle event in Silicon Valley, he emphasized how safety considerations are vital in the AI self-driving car arena. His years as an administrator for the National Highway Traffic Safety Administration (NHTSA) and his service on the board of the National Transportation Safety Board (NTSB) provide a quite on-target skillset and base of experience for his role. For those of you interested in the overall approach to safety that Zoox is pursuing, you can take a look at their posted report. Those of you that follow my postings closely will remember that I previously mentioned the efforts of Chris Hart in the safety aspects of AI self-driving cars. As a former chairman of the NTSB, he brings key insights to what the auto makers and tech firms need to be doing about safety, along with offering important views that can help shape regulations and regulatory actions (see his website). You might find of interest his recent blog post about the differences between aviation automation and AI self-driving cars, which dovetails too into my remarks about the same topic. 
For Chris Hart’s recent blog post, see: For my prior posting about AI self-driving car safety and Chris Hart’s remarks on the matter, see: For my posting about how airplane automation is not the same as what is needed for AI self-driving cars, see: Waymo, Google/Alphabet’s entity well-known for its prominence in the AI self-driving car industry, has also brought on-board a Chief Safety Officer, namely Debbie Hersman. Besides having served on the NTSB and having been its chairman, she also was the CEO and President of the National Safety Council. It was with welcome relief that she came on-board at Waymo, since it also sends a signal to the rest of the AI self-driving car makers that this is a crucial role and one they too need to embrace if they aren’t already doing so. Uber recently brought on-board Nat Beuse to head their safety efforts. He had been with the U.S. Department of Transportation, where he oversaw vehicle safety efforts for many years. For those of you interested in the safety report that Uber produced last year, coming after their internal review of the Uber self-driving car incident, you can find the report posted here: I’d also like to mention the efforts of Alex Epstein, Director of Transportation at the National Safety Council (NSC). We met at an inaugural conference on the safety of AI self-driving cars, and his insights and remarks were spot-on about where the industry is and where it needs to go. At the NSC he is leading their Advanced Automotive Safety Technology initiative. His efforts at public outreach are notable, and the public campaign MyCarDoesWhat is an example of how we need to aid the public in understanding the facets of car automation:
Defining the Chief Safety Officer Role
I have found it useful to clarify what I mean by the role of a Chief Safety Officer in the context of a firm that has an AI-based product or service, particularly in the AI self-driving car industry. 
Take a look at my Figure 1. As shown, the Chief Safety Officer has a number of important role elements. These elements all intertwine with each other and should not be construed as independent of each other. They are an integrated mesh of the space of safety elements needing to be fostered and led by the Chief Safety Officer. Allowing one of the elements to languish or be undervalued is likely to undermine the integrity of any safety-related programs or approaches undertaken by a firm. The nine core elements for a Chief Safety Officer consist of:
  • Safety Strategy
  • Safety Company Culture
  • Safety Policies
  • Safety Education
  • Safety Awareness
  • Safety External
  • Safety SDLC
  • Safety Reporting
  • Safety Crisis Management
I’ll next describe each of the elements. I’m going to focus on the AI self-driving car industry, but you can hopefully see how these can be applied to other areas of AI that involve safety-related AI-based products or services. Perhaps you make AI-based robots that will be working in warehouses or factories, etc., to which these elements would pertain equally. I am also going to omit the other kinds of non-AI safety matters that the Chief Safety Officer would likely encompass, which are well documented already in numerous online Chief Safety Officer descriptions and specifications. Here’s a brief indication about each element.
Safety Strategy: The Chief Safety Officer establishes the overall strategy of how safety will be incorporated into the AI systems and works hand-in-hand with the other top executives in doing so. This must be done collaboratively, since the rest of the executive team must “buy into” the safety strategy and be willing and able to carry it out. Safety is not an island of itself. Each of the functions of the firm must have a stake in the safety strategy and will be required to ensure it is being implemented.
Safety Company Culture: The Chief Safety Officer needs to help shape the culture of the company toward a safety-first mindset. Oftentimes, AI developers and other tech personnel are not versed in safety and might have come from a university setting wherein AI systems were done as prototypes and safety was not a particularly pressing topic. Some will even potentially believe that “safety is the enemy of innovation,” a false belief that is at times rampant. Shaping the company culture might require some heavy lifting; it has to be done in conjunction with the top leadership team and in a meaningful way rather than a light-hearted or surface-level manner.
Safety Policies: The Chief Safety Officer should put together a set of safety policies indicating how the AI systems need to be conceived of, designed, built, tested, and fielded to embody key principles of safety. These policies need to be readily comprehensible, and there needs to be a clear-cut means to abide by them. If the policies are overly abstract or obtuse, or if they are impractical, they will likely foster a sense of “it’s just CYA” and the rest of the firm will tend to disregard them.
Safety Education: The Chief Safety Officer should identify the kinds of educational means that can be made available throughout the firm to increase an understanding of what safety means in the context of developing and fielding AI systems. This can be a combination of internally prepared AI safety classes and externally provided ones. The top executives should also participate in the educational programs to showcase their belief in and support for the educational aspects, and they should work with the Chief Safety Officer in scheduling and ensuring that the teams and staff undertake the classes, along with follow-up to ascertain that the education is being put into active use.
Safety Awareness: The Chief Safety Officer should undertake to have safety awareness become an ongoing activity, often fostered by posting AI safety related aspects on the corporate Intranet, along with providing other avenues in which AI safety is discussed and encouraged, such as brown bag lunch sessions, sharing of AI safety tips and suggestions from within the firm, and so on. This needs to be an ongoing effort, not a one-time push of safety that then decays or becomes forgotten.
Safety External: The Chief Safety Officer should be proactive in representing the company and its AI safety efforts to external stakeholders. This includes doing so with regulators, possibly participating in regulatory efforts or reviews when appropriate, along with speaking at industry events about the safety related work being undertaken and conferring with the media. As the external face of the company, the CSO will also likely get feedback from the external stakeholders, which should then be fed back into the company and especially discussed with the top leadership team.
Safety SDLC: The Chief Safety Officer should help ensure that the Systems Development Life Cycle (SDLC) includes safety throughout each of its stages, whether the SDLC is agile-oriented, waterfall, or whatever method is being undertaken. Checkpoints and reviews need to include the safety aspects and have teeth, meaning that if safety is either not being included or being shortchanged, this becomes an effort-stopping criterion that cannot be swept under the rug. It is easy during the pressures of development to shove aside the safety portions and coding, under the guise of “getting on with the real coding,” but that’s not going to cut it in AI systems involving life-or-death consequences.
Safety Reporting: The Chief Safety Officer needs to put in place a means to keep track of safety aspects that are being considered and included in the AI systems.
This is typically an online tracking and reporting system. Out of the tracking system, reporting needs to be made available on an ongoing basis. This includes dashboards and flash reporting, which is vital since if the reporting is overly delayed or difficult to obtain or interpret, it will be considered “too late to deal with” and the cost or effort to make safety related corrections or additions will be subordinated.
Safety Crisis Management: The Chief Safety Officer should establish a crisis management approach to deal with any AI safety related faults or issues that arise. Firms often seem to scramble when their AI self-driving car has injured someone, yet this is something that could have been anticipated as a possibility, and preparations could have been made beforehand. The response to an AI safety adverse act needs to be carefully coordinated, and the company will likely be seen as either making sincere efforts about the incident or, if ill-prepared, worsening matters and undermining its own efforts and those of other AI self-driving car makers.
In Figure 1, I’ve also included my framework of AI self-driving cars. Each of the nine elements that I’ve just described can be applied to each of the aspects of the framework. For example, how is safety being included into the sensors design, development, testing, and fielding? How is safety being included into the sensor fusion design, development, testing, and fielding? How is safety being included into the virtual world model design, development, testing, and fielding? You are unlikely to have many safety related considerations in, say, the sensors if there isn’t an overarching belief at the firm that safety is important, which is showcased by having a Chief Safety Officer, by having a company culture that embraces safety, by educating the teams that are doing the development about AI safety, etc. 
This highlights my earlier point that each of the elements must work as an integrative whole. Suppose the firm actually does eight of the elements but doesn’t do anything about how to incorporate AI safety into the SDLC. What then? This means that the AI developers are left on their own to try to devise how to incorporate safety into their development efforts. They might fumble around doing so, or take bona fide stabs at it, though it remains fragmented and disconnected from the rest of the development methodology. Furthermore, worse still, the odds are that the SDLC has no particular place for safety, which means no metrics about safety, and therefore the pressure to not do anything related to safety is heightened, due to the metrics measuring the AI developers in other ways that don’t necessarily have much to do with safety. The point being that each of the nine elements needs to work collectively.
Resources on Baking AI Safety Into AI Self-Driving Car Efforts
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We consider AI safety aspects as essential to our efforts and urge auto makers and tech firms to do likewise. I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car. 
The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I've repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results. For my overall framework about AI self-driving cars, see my article: For the levels of self-driving cars, see my article: For why AI Level 5 self-driving cars are like a moonshot, see my article: For the dangers of co-sharing the driving task, see my article: Though I often tend to focus more so on the true Level 5 self-driving car, the safety aspects of the less-than Level 5 cars are especially crucial right now. I've repeatedly cautioned that as Level 3 advanced automation becomes more prevalent, which we're just now witnessing coming into the marketplace, we are upping the dangers associated with the interfacing between AI systems and humans. This includes issues associated with cognitive disconnects between AI and humans and the human mindset dissonance, all of which can be disastrous from a safety perspective. Co-sharing and hand-offs of the driving task, done in real-time at freeway speeds, nearly pokes a stick in the eye of safety. Auto makers and tech firms must get ahead of the AI safety curve, rather than wait until the horse is already out of the barn and it becomes belated to act. Here are the usual steps involved in the AI driving task:

• Sensor data collection and interpretation
• Sensor fusion
• Virtual world model updating
• AI action planning
• Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too.
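The five steps above can be sketched as a simple processing loop. This is a minimal, hypothetical illustration of the cycle, not any actual self-driving stack; all class and function names, and the toy fusion and planning rules, are my own assumptions:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SensorReading:
    source: str    # e.g., "camera", "lidar", "radar"
    value: float   # toy stand-in for an interpreted detection confidence

def collect_and_interpret() -> List[SensorReading]:
    """Step 1: sensor data collection and interpretation (stubbed)."""
    return [SensorReading("camera", 0.9), SensorReading("lidar", 0.8)]

def sensor_fusion(readings: List[SensorReading]) -> Dict[str, float]:
    """Step 2: fuse per-sensor interpretations into one estimate (toy average)."""
    return {"obstacle_confidence": sum(r.value for r in readings) / len(readings)}

def update_world_model(model: Dict[str, float], fused: Dict[str, float]) -> Dict[str, float]:
    """Step 3: update the virtual world model with the fused estimate."""
    model.update(fused)
    return model

def plan_action(model: Dict[str, float]) -> str:
    """Step 4: AI action planning based on the world model (toy rule)."""
    return "brake" if model.get("obstacle_confidence", 0.0) > 0.5 else "cruise"

def issue_controls(action: str) -> str:
    """Step 5: car controls command issuance."""
    return f"command:{action}"

def driving_cycle(model: Dict[str, float]) -> str:
    """One pass through the five-step AI driving task."""
    readings = collect_and_interpret()
    fused = sensor_fusion(readings)
    model = update_world_model(model, fused)
    action = plan_action(model)
    return issue_controls(action)
```

Each real stage is of course vastly more complex (perception models, probabilistic fusion, trajectory planning), but the loop structure, where each step feeds the next on every cycle in real-time, is the key architectural point.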
There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since it means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That's not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period. For my article about the grand convergence that has led us to this moment in time, see: See my article about the ethical dilemmas facing AI self-driving cars: For potential regulations about AI self-driving cars, see my article: For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: Returning to the safety topic, let's consider some additional facets. Take a look at Figure 2. I've listed some of the publicly available documents that are a useful cornerstone to getting up to speed about AI self-driving car safety. The U.S. Department of Transportation (DOT) NHTSA has provided two reports that I find especially helpful about the foundations of safety related to AI self-driving cars. Besides providing background context, these documents also indicate the regulatory considerations that any auto maker or tech firm will need to be incorporating into their efforts.
Both of these reports have been promulgated under the auspices of DOT Secretary Elaine Chao. The version 2.0 report is here: The version 3.0 report is here: I had earlier mentioned the Uber safety report, which is here: I also had mentioned the Zoox safety report, which is here: You would also likely find of use the Waymo safety report, which is here: I'd also like to give a shout out to Dr. Philip Koopman, a professor at CMU who has done extensive AI safety related research, which you can find at his CMU web site or at his company web site: As a former university professor, I too used to do research while at my university and also did so via an outside company. It's a great way to try to infuse the core foundational research that you typically do in a university setting with the more applied kind of efforts that you do while in industry. I found it a handy combination. Philip and I seem to end up at many of the same AI self-driving car conferences, doing so as speakers, panelists, or interested participants.

Conclusion

For those Chief Safety Officers of AI self-driving car firms that I've not mentioned herein, you are welcome to let me know that you'd like to be included in future updates that I do on this topic. Plus, if you have safety reports akin to the ones I've listed, I welcome taking a look at those reports and will be glad to mention those too. One concern being expressed about the AI self-driving car industry is whether the matter of safety is being undertaken in a secretive manner that tends to keep each of the auto makers or tech firms in the dark about what the other firms are doing. When you look at the car industry, it is apparent that the auto makers have traditionally competed on their safety records and used that to their advantage in advertising and selling their wares.
Critics have voiced that if the AI self-driving car industry perceives itself to be competing on safety, naturally there would be a basis to purposely avoid sharing safety aspects with each other. You can't seemingly have it both ways: if you are competing on safety, it is presumed to be a zero-sum game, those that do better on safety will sell more than those that do not, so why help a competitor get ahead? This mindset needs to be overcome. As mentioned earlier, it won't take much in terms of a few safety related bad outcomes to potentially stifle the entire AI self-driving car realm. If there is a public outcry, you can expect that this will push back at the auto makers and tech firms. The odds are that regulators would opt to come into the industry with a much heavier hand. Funding for AI self-driving car efforts might dry up. The engine driving the AI self-driving car pursuits could grind to a halt. I've described the factors that can aid or impede the field: Existing disengagement reporting is weak and quite insufficient: A few foul incidents will be perceived as a contagion, see my article: For my Top 10 predictions, see: There are efforts popping up to try to see if AI safety can become a more widespread and overt topic in the AI self-driving car industry. It's tough though to overcome all of those NDAs (Non-Disclosure Agreements) and concerns that proprietary matters might be disclosed. Regrettably, it might take a calamity to generate enough heat to make things percolate, but I hope it doesn't come down to that. The adoption of Chief Safety Officers at the myriad of auto makers and tech firms that are pursuing AI self-driving cars is a healthy sign that safety is rising in importance. These positions have to be taken seriously, with a realization at the firms that they cannot just put the role in place to somehow checkmark that they did so.
For Chief Safety Officers to do their job, they need to be at the top executive table and be considered part-and-parcel of the leadership team. I am also hoping that these Chief Safety Officers will band together and become an across-the-industry "club" that can embrace a safety sharing mantra and use their positions and weight to get us further along on permeating safety throughout all aspects of AI self-driving cars. Let's make that a reality. Copyright 2019 Dr. Lance Eliot This content is originally posted on AI Trends.
  • The Cognitive Intersect of Human and Artificial Intelligence – Symbiotic Nature of AI and Neuroscience
    Neuroscience and artificial intelligence (AI) are two very different scientific disciplines. Neuroscience traces back to ancient civilizations, and AI is a decidedly modern phenomenon. Neuroscience branches from biology, whereas AI branches from computer science. At a cursory glance, it would seem that a branch of science of living systems would have little in common with one that springs from inanimate machines wholly created by humans. Yet discoveries in one field may result in breakthroughs in the other; the two fields share a significant problem, and future opportunities. The origins of modern neuroscience are rooted in ancient human civilizations. One of the first descriptions of the brain's structure and of neurosurgery can be traced back to 3000–2500 B.C., known largely due to the efforts of the American Egyptologist Edwin Smith. In 1862 Smith purchased an ancient scroll in Luxor, Egypt. In 1930 James H. Breasted translated the Egyptian scroll, following a 1906 request from the New York Historical Society via Edwin Smith's daughter. The Edwin Smith Surgical Papyrus is an Egyptian neuroscience handbook circa 1700 B.C. that summarizes a 3000–2500 B.C. ancient Egyptian treatise describing the brain's external surfaces, cerebrospinal fluid, intracranial pulsations, the meninges, the cranial sutures, surgical stitching, brain injuries, and more. In contrast, the roots of artificial intelligence sit squarely in the middle of the twentieth century. American computer scientist John McCarthy is credited with coining the term "artificial intelligence" in a 1955 written proposal for a summer research project that he co-authored with Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. The field of artificial intelligence was subsequently launched at a 1956 conference held at Dartmouth College. The history of artificial intelligence is a modern one.
In 1969 Marvin Minsky and Seymour Papert published "Perceptrons: An Introduction to Computational Geometry," which examined the computational limits of simple perceptrons and left open the question of what networks with more than two artificial neural layers could achieve. During the 1970s and 1980s, AI machine learning was in relative dormancy. In 1986 Geoffrey Hinton, David E. Rumelhart, and Ronald J. Williams published "Learning representations by back-propagating errors," which illustrated how deep neural networks consisting of more than two layers could be trained via backpropagation. From the 1980s to the early 2000s, the graphics processing unit (GPU) evolved from gaming purposes toward general computing, enabling parallel processing for faster computing. In the 1990s, the internet spawned entire new industries such as cloud-computing-based Software-as-a-Service (SaaS). These trends enabled faster, cheaper, and more powerful computing. In the 2000s, big data sets emerged along with the rise and proliferation of internet-based social media sites. Training deep learning models requires large data sets, and the emergence of big data accelerated machine learning. In 2012, a major milestone in AI deep learning was achieved when Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever trained a deep convolutional neural network with 60 million parameters, 650,000 neurons, and five convolutional layers to classify 1.2 million high-resolution images into 1,000 different classes. The team made AI history through their demonstration of backpropagation in a GPU implementation at such an impressive scale of complexity. Since then, there has been a worldwide gold rush to deploy state-of-the-art deep learning techniques across nearly all industries and sectors. In the future, the opportunities that neuroscience and AI offer are significant. Global spending on cognitive and AI systems is expected to reach $57.6 billion by 2021, according to IDC estimates.
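To make the backpropagation idea concrete, here is a minimal sketch of training a small two-layer network on a toy task (XOR) by back-propagating errors, in the spirit of the 1986 paper. The network size, learning rate, and task are illustrative assumptions of mine; real systems such as the 2012 ImageNet model are vastly larger:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data: a classic task that needs a hidden layer to solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the output error back to each layer's weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The essential insight is the backward pass: the output error is propagated through the hidden layer via the chain rule, which is what makes training networks of more than two layers feasible.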
The current AI renaissance, largely due to deep learning, is a global movement with worldwide investment from corporations, universities, and governments. The global neuroscience market is projected to reach $30.8 billion by 2020, according to figures from Grand View Research. Venture capitalists, angel investors, and pharmaceutical companies are making significant investments in neuroscience startups. Today's wellspring of global commercial, financial, and geopolitical investment in artificial intelligence owes, in some part, to the human brain. Deep learning, a subset of AI machine learning, pays homage to the biological brain structure. Deep neural networks (DNNs) consist of two or more "neural" processing layers with artificial neurons (nodes). A DNN will have an input layer, an output layer, and many layers in between; the more artificial neural layers, the deeper the network. The human brain and its associated functions are complex. Neuroscientists do not know many of the exact mechanisms of how the human brain works. For example, scientists do not know the neurological mechanisms of exactly how general anesthesia works on the brain, or why we sleep or dream. Similarly, computer scientists do not know exactly how deep learning arrives at its conclusions, due to complexity. An artificial neural network may have billions or more parameters based on the intricate connections between the nodes; the exact path is a black box. Read the source article in Psychology Today.
  • Avoiding a Society on Autopilot with Artificial Intelligence
    By Katherine Maher, Executive Director, Wikimedia Foundation The year 1989 is often remembered for events that challenged the Cold War world order, from the protests in Tiananmen Square to the fall of the Berlin Wall. It is less well remembered for what is considered the birth of the World Wide Web. In March of 1989, the British researcher Tim Berners-Lee shared the protocols, including HTML, URL and HTTP, that enabled the internet to become a place of communication and collaboration across the globe. As the World Wide Web marked its 30th birthday on March 12, public discourse is dominated by alarm about Big Tech, data privacy and viral disinformation. Tech executives have been called to testify before Congress, a popular campaign dissuaded Amazon from opening a second headquarters in New York, and the United Kingdom is going after social media companies that it calls "digital gangsters." Implicit in this tech-lash is nostalgia for a more innocent online era. But longing for a return to the internet's yesteryear isn't constructive. In the early days, access to the web was expensive and exclusive, and it was not reflective or inclusive of society as a whole. What is worth revisiting is less how it felt or operated than what the early web stood for. Those first principles of creativity, connection and collaboration are worth reconsidering today as we reflect on the past and the future promise of our digitized society. The early days of the internet were febrile with dreams about how it might transform our world, connecting the planet and democratizing access to knowledge and power. It has certainly effected great change, if not always what its founders anticipated. If a new democratic global commons didn't quite emerge, a new demos certainly did: an internet of people who created it, shared it and reciprocated in its use.
People have always been the best part of the internet, and to that end, we have good news. New data from the Pew Research Center show that more than 5 billion people now have a mobile device, and more than half of those can connect to the internet. We have passed a tipping point where more people are now connected to the internet than not. In low- and middle-income countries, however, a new report shows women are 23 percent less likely than men to use the mobile internet. Closing that gender gap would create a $700 billion economic opportunity. The web's 30th anniversary gives us a much-needed chance to examine what is working well on the internet, and what isn't. It is clear that people are the common denominator. Indeed, many of the internet's current problems stem from misguided efforts to take the internet away from people, or vice versa. Sometimes this happens for geopolitical reasons. Nearly two years ago, Turkey fully blocked Wikipedia, making it only the second country after China to do so. Reports suggest a Russian proposal to unplug briefly from the internet to test its cyber defenses could actually be an effort to set up a mass censorship program. And now there is news that Prime Minister Narendra Modi of India is trying to implement government controls that some worry will lead to Chinese-style censorship. But people get taken out of the equation in more opaque ways as well. When you browse social media, the content you see is curated not by a human editor but by an algorithm that puts you in a box. Increasingly, algorithms can help decide what we read, who we date, what we buy and, more worryingly, the services, credit or even liberties for which we're eligible. Too often, artificial intelligence is presented as an all-powerful solution to our problems, a scalable replacement for people. Companies are automating nearly every aspect of their social interfaces, from creating to moderating to personalizing content. At its worst, A.I.
can put society on an autopilot that may not consider our dearest values. Without humans, A.I. can wreak havoc. A glaring example was Amazon's A.I.-driven human resources software that was supposed to surface the best job candidates but ended up being biased against women. Built using past resumes submitted to Amazon, most of which came from men, the program concluded that men were preferable to women. Rather than replacing humans, A.I. is best used to support our capacity for creativity and discernment. Wikipedia is creating A.I. that will flag potentially problematic edits, like a prankster vandalizing a celebrity's page, to a human who can then step in. The system can also help our volunteer editors evaluate a newly created page or suggest superb pages for featuring. In short, A.I. that is deployed by and for humans can improve the experience of both people consuming information and those producing it. Read the source article in The New York Times.
