What would you do with unlimited traffic?

There are five elements essential to succeeding in business:

  1. Traffic
  2. Product
  3. Price
  4. Presentation
  5. Closing


Our proprietary technology allows us to offer you unprecedented services:

  1. Unlimited traffic
  2. Fast turnaround: weeks instead of months or years
  3. And best of all, we deliver or it is free

The advantage of using technology to generate traffic is that it frees you to work on the other four factors that determine your success.

So even if you fail more than once and even alienate your prospects, our technology keeps bringing you more people, so you can keep correcting your mistakes until you get it right and succeed in your business. This is what we do for you; you do the rest.

We provide you with unlimited traffic, as much as you want!

Just contact us for a free consultation.


Product

Your products or services are of paramount importance: your visitors must want what you have to offer. Obvious, right? But even if you fall short here, you can correct the issue, because we keep bringing you people. What good is the best product if no one ever comes to see it?


Price

All these visitors found you online, so if your price is not competitive, they can just as easily find your competitors.


Presentation

You must be able to explain quickly and clearly what you offer.


Closing

How will you deliver? How will you get paid? This is probably the easiest factor, but it is just as important.

Latest News

  • The Rules Governing AI Are Being Shaped by Tech Firms – Here’s How
    IN EARLY APRIL, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.” One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI. Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says. When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. 
“We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.” The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit. Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.” Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and will retain a right to royalty-free license to any intellectual property developed. Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says. Read the source article in Wired.
  • System Load Balancing for AI Systems: The Case Of AI Autonomous Cars
    By Lance Eliot, the AI Trends Insider I recall an occasion when my children had decided to cook a meal in our kitchen and went whole hog into the matter (so to speak). I’m not much of a cook and tend to enjoy eating a meal more so than the labor involved in preparing a meal. In this case, it was exciting to see the joy of the kids as they went about putting together a rather amazing dinner. Perhaps partially due to watching the various chef competitions on TV and cable, and due to their own solo cooking efforts, when they joined together it was a miraculous sight to see them bustling about in the kitchen in a relatively professional manner. I mainly aided by asking questions and serving as a taste tester. From their perspective, I was more of an interloper than someone actually helping to progress the meal making process. One aspect that caught my attention was the use of our stove top. The stove top has four burner positions. On an everyday cooking process, I believe that four heating positions is sufficient. I could see that with the extravagant dinner that was being put together, the fact that there were only four available was a constraint. Indeed, seemingly a quite difficult constraint. During the cooking process, there were quite a number of pots and pans containing food that needed to be heated-up. I’d wager that at one point there were at least a dozen of such pots and pans in the midst of containing food and requiring some amount of heating. Towards the start of the cooking, it was somewhat manageable because they only were using three of the available heating spots. By using just three, it allowed them to then allocate one spot, the fourth one, as an “extra” for round robin needs. For this fourth spot, they were using it to do quick warm-ups and meanwhile the other three spots were for truly doing a thorough cooking job that required a substantive amount of dedicated cooking time. 
Pots and pans were sliding on and off that fourth spot like a hockey puck on ice. The other three spots had large pots that were gradually each coming to a bubbling and high-heat condition. When one of the three pots had cooked well enough, the enterprising cooks took it off the burner almost immediately and placed it onto a countertop waiting area they had established for super-heated pots and pans that could simmer for a bit. The moment that one pot came off of any of the three spots, another one was instantly put into its place. Around and around this went, in a dizzying manner as they contended with only having four available heating spots. They kept one spot in reserve and used it for more of a quick paced warm-up and had opted to use the other three for deep heated cooking. As they neared the end of the cooking process for this meal, they began to use nearly all of the spots for the quick paced warm-up needs, apparently because they had by then done the needed cooking already and no longer needed to devote any of the pots to a prolonged period on a heating spot. As a computer scientist at heart, I was delighted to see them performing a delicate dance of load balancing.

System Load Balancing Is Unheralded But Crucial

You’ve probably had situations involving multiple processors or maybe multiple web sites wherein you had to balance the load across them. In the case of web sites, it’s not uncommon for some popular web sites to be replicated at multiple geographic sites around the world, allowing for faster responses to users in that part of the world. It also can help when one part of the world starts to bombard one of your sites and you need to flatten out the load, or else that particular web site might choke due to the volume. In the cooking situation, the kids realized that having just four burner stove top positions was insufficient for the true amount of cooking that needed to take place for the dinner.
If they had opted to place pots of food onto the burners serially, in a one-at-a-time manner, they would have had some parts of the meal cooked much earlier than other parts of the meal. In the end, when trying to serve the meal, it would have been a nightmarish result of some food that had been cooked earlier and was now cold, and perhaps other parts of the meal that were superhot and would need to wait to be eaten. If the meal had been one involving much less preparation, such as if they had only three items to be cooked, they would have readily been able to use the stove top without any of the shenanigans of having to float around the pots and pans. They could have just put on the three pots and then waited until the food was cooked. But, since they had more needs for cooking than just the available heating spots, they needed to devise a means to make use of the constrained resources in a manner that would still allow for the cooking process to proceed properly. This is what load balancing is all about. There are situations wherein there is a limited supply of resources, and the number of requests to utilize those resources might exceed the supply. The load balancer is a means or technique or algorithm or automation that can try to balance out the load. Another valuable aspect of a load balancer is that it can try to even out the workload, which might help in various other ways. Suppose that one of the burners was known to sometimes get a bit cantankerous when it is on high-heat for a long time. One approach for a load balancer might be to try to keep that resource from peaking and so purposely adjust to use some other resource for a while. We can also consider the aspect of resiliency. You might have a situation wherein one of the resources might unexpectedly go bad or otherwise not be usable. Suppose that one of the burners broke down during the cooking process.
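The rotating use of that fourth burner — slide one pot off, slide the next one on — is classic round-robin balancing. Here is a minimal, purely illustrative sketch (the burner and pot names are placeholders, not any real system):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand each incoming request to the next resource in a fixed
    pool, cycling back to the first resource after the last one."""

    def __init__(self, resources):
        self._pool = cycle(resources)

    def assign(self):
        # Pick the next resource in rotation, wrapping around at the end.
        return next(self._pool)

# Four burners, six pots: the fifth and sixth pots wrap back around.
balancer = RoundRobinBalancer(["burner-1", "burner-2", "burner-3", "burner-4"])
assignments = [balancer.assign() for _ in range(6)]
```

Round-robin is the simplest possible policy: it spreads requests evenly but ignores how long each request actually occupies its resource, which is exactly why the kids reserved the slow-cooking burners separately.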
A load balancer would try to ascertain that a resource is no longer functioning, and then see if it might be possible to shift the request or consumption over to another resource instead.

Load Balancing Difficulties And Challenges

Being a load balancer can be a tricky task. Suppose the kids had decided that they would keep one of the stove top burners in reserve and not use it unless it was absolutely necessary. In that case, they might have opted to use the three other burners in a manner of allocating two for the deep heating and one for the warming up. All during this time, the other fourth burner would remain unused, being held in reserve. Is that a good idea? It depends. I’d bet that the cooking with just the three burners would have stretched out the time required to cook the dinner. I can imagine that someone waiting to eat the dinner might become disturbed if they saw that there was a fourth burner that could be used for cooking, and yet it was not, and the implication being that the hungry person had to wait longer to eat the dinner. This person might go ballistic that a resource sat unused for that entire time. What a waste of a resource, it would seem to that person. Imagine further if at the start of the cooking process we were to agree that there should be an idle back-up for each of the stove burners being used. In other words, since we only have four, we might say that two of the burners will be active and the other two are the respective back-up for each of them. Let’s number the burners as 1, 2, 3, and 4. We might decide that burner 1 will be active and its back-up is burner 2, and burner 3 will be active and its back-up is burner 4. While the cooking is taking place, we won’t place anything onto burners 2 and 4, until or unless primary burner 1 or burner 3 goes out.
We might decide to keep the back-up burners entirely turned off, in which case as a back-up they would be starting at a cold condition if we needed to suddenly switch over to one of them. We might instead agree that we’ll go ahead and put the two back-ups onto a low-heat position, without actually heating anything per se, and generally be ready then to rapidly go to high-heat if they are needed in their back-up failover mode. I had just now said that burner 2 would be the back-up for primary burner 1. Suppose I adhered to that aspect and would not budge. If burner 3 went suddenly out and I reverted to using burner 4 as the back-up, but then somehow burner 4 went out, should I go ahead and use burner 2 at that juncture? If I were insistent that burner 2 would only and always be a back-up exclusively for burner 1, presumably I would want the load balancer to refuse to now use burner 2, even though burners 3 and 4 are kaput. Maybe that’s a good idea, maybe not. These are the kinds of considerations that go into establishing an appropriate load balancer. You need to try and decide what the rules are for the load balancer. Different circumstances will dictate different aspects of how you want the load balancer to do its thing. Furthermore, you might not just set up the load balancer entirely in advance, such that it is acting in a static manner during the load balancing, but instead might have the load balancer figuring out what action to take dynamically, in real-time. When using load balancing for resiliency or redundancy purposes, there is a standard nomenclature of referring to the number of resources as N, and then appending a plus sign along with an integer value that ranges from 0 to some number M. If I say that my system is set up as N+0, I’m saying that there are no redundancy devices. If I say it is N+1, then that implies there is 1 and only 1 such redundancy device. And so on.
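The N+M idea can be illustrated with a small sketch. This is a hypothetical pool, not code from any real system: N primaries serve work while M spares sit idle, and when a primary fails the next spare is promoted in order.

```python
class FailoverPool:
    """N+M redundancy sketch: N active resources plus M idle spares.
    When an active resource fails, the next spare is promoted."""

    def __init__(self, active, spares):
        self.active = list(active)  # the N primaries
        self.spares = list(spares)  # the M back-ups, promoted in order

    def fail(self, resource):
        """Remove a failed primary and promote a spare, if any remain."""
        self.active.remove(resource)
        promoted = self.spares.pop(0) if self.spares else None
        if promoted is not None:
            self.active.append(promoted)
        return promoted

# Two active burners in an N+2 configuration: burners 2 and 4 sit idle.
pool = FailoverPool(active=["burner-1", "burner-3"], spares=["burner-2", "burner-4"])
```

The stricter pairing discussed above, where burner 2 backs up burner 1 exclusively, would replace the ordered spare list with a fixed primary-to-spare map, at the cost of leaving a spare idle even when a different primary needs one.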
You might be thinking that I should always have a plentiful set of redundancy devices, since that would seem the safest bet. But, there’s a cost associated with the redundancy. Why was my stove top limited to just four burners? Because I wasn’t willing to shell out the bigger bucks to get the model that had eight. I had assumed that for my cooking needs, the four sized stove was sufficient, and actually ample. For computer systems, the same kind of consideration needs to come to play. How many devices do I need and how much redundancy do I need, which has to be considered in light of the costs involved. This can be a significant decision in that later on it can be harder and even costlier to adjust. In the case of my stove top, the kitchen was built in such a manner that the four burner sized stove top fits just right. If I were to now decide that I want the eight burner version, it’s not just a simple plug-and-play, instead they would need to knock out my kitchen counters, and likely some of the flooring, and so on. The choice I made at the start has somewhat locked me in, though of course if I want to have the kids doing cooking more of the time, it might be worth the dough to expand the kitchen accordingly. In computing, you can consider load balancing for just about anything. It might be the CPU processors that underlie your system. It could be the GPUs. It could be the servers. You can load balance on an actual hardware basis, and you can also do load balancing on a virtualized system. The target resource is often referred to as an endpoint, or perhaps a replica, or a device, or some other such wording. Those in computing that don’t explicitly consider the matter of load balancing are either unaware of the significance of it or are unsure of what it can achieve. For many AI software developers, they figure that it’s really a hardware issue or maybe an operating system issue, and thus they don’t put much of their own attention toward the topic. 
Instead, they hope or assume that those OS specialists or hardware experts have done whatever is required to figure out any needed load balancing. Similar to my example about my four burner stove, the problem with this kind of thinking is that if later on the AI application is not running at a suitable performance level and all of a sudden you want to do something about load balancing, the horse is already out of the barn. Just like my notion of possibly replacing the four burner stove with an eight burner, it can take a lot of effort and cost to retrofit for load balancing.

AI Autonomous Cars And Load Balancing The On-Board Systems

What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One key aspect of an AI system for a self-driving car is its ability to perform responsively in real-time. On-board the self-driving car you have numerous processors that are intended to run the AI software. This can also include various GPUs and other specialized devices. Per my overall framework of AI self-driving cars, here are some of the key driving tasks involved:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

  • For my framework, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
  • For my article about real-time performance aspects, see: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
  • For aspects about AI developers, see my article: https://aitrends.com/ai-insider/developer-burnout-and-ai-self-driving-cars/
  • For the dangers of Groupthink, see my article: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

You’ve got software that needs to run in real-time and direct the activities of a car.
The car will at times be in motion. There will be circumstances wherein the AI is relatively at ease and there’s not much happening, and there will be situations whereby the AI is having to work at a rip-roaring pace. Imagine going on a freeway at 75 miles per hour, and there’s lots of other nearby traffic, along with foul weather, the road itself has potholes, there’s debris on the roadway, and so on. A lot of things, all happening at once. The AI holds in its automation the key to whether the self-driving car safely navigates and avoids getting into a car accident. This is not just a real-time system, it is a real-time system that can spell life and death. Human occupants in the AI self-driving car can get harmed if the AI can’t operate in time to make the proper decision. Pedestrians can get harmed. Other cars can get hit, and thus the human occupants of those cars can get harmed. All in all, this is quite serious business. To achieve this, the on-board hardware generally has lots of computing power and lots of redundancy. Is it enough? That’s the zillion dollar question. Similar to my choice of a four burner stove, when the automotive engineers for the auto maker or tech firm decide to outfit the self-driving car with whatever number and type of processors and other such devices, they are making some hard choices about what the performance capability of that self-driving car will be. If the AI cannot run fast enough to make sound choices, it’s a bad situation all around. Imagine too that you are fielding your self-driving car. It seems to be running fine in the roadway trials underway. You give the green light to ramp up production of the self-driving car. These self-driving cars start to roll off the assembly line and the public at large is buying them. Suppose after this has taken place for a while, you begin to get reports that there are times that the AI seemed to not perform in time. Maybe it even froze up. Not good. 
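One way such timing misses become visible is to instrument each stage of the processing cycle and check the total against a deadline. The sketch below is purely illustrative: the stage names follow the framework listed earlier, the 100 ms deadline is an arbitrary assumption, and the no-op functions stand in for real processing.

```python
import time

def run_cycle(stages, deadline_s=0.1):
    """Run one processing cycle, timing each stage, and report
    whether the whole cycle finished within its deadline."""
    timings = {}
    start = time.perf_counter()
    for name, fn in stages:
        t0 = time.perf_counter()
        fn()  # stand-in for the real stage's processing
        timings[name] = time.perf_counter() - t0
    met_deadline = (time.perf_counter() - start) <= deadline_s
    return timings, met_deadline

stages = [
    ("sensor_collection", lambda: None),
    ("sensor_fusion", lambda: None),
    ("world_model_update", lambda: None),
    ("action_planning", lambda: None),
    ("controls_issuance", lambda: None),
]
timings, on_time = run_cycle(stages)
```

Per-stage timings like these, pulled from cars in field trials, are exactly the kind of stats that reveal which stage is exhausting the compute budget before the problem surfaces as a freeze-up.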
Some self-driving car pundits say that it’s easy to solve this. Via OTA (Over The Air) updates, you just beam down into the self-driving cars a patch for whatever issue or flaw there was in the AI software. I’ve mentioned many times that the use of OTA is handy, important, and significant, but it is not a cure-all. Let’s suppose that the AI software has no bugs or errors in this case. Instead, it’s that the AI running via the on-board processors is exhausting the computing power at certain times. Maybe this only happens once in a blue moon, but when your life and the lives of others are at stake, even once in a blue moon is too much of a problem. It could be that the computing power is just insufficient. What do you do then? Yes, you can try to optimize the AI and get it to somehow not consume so much computing power. This though is harder than it seems. If you opt to toss more hardware at this problem, sure, that’s possible, but now this means that all of those AI self-driving cars that you sold will need to come back into the auto shop and get added hardware. Costly. Logistically arduous. A mess.

  • For my article about the freezing robot problem and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/
  • For my article about bugs and errors in AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/
  • For my article about automobile recalls and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/auto-recalls/
  • For product liability claims against AI self-driving cars, see my article: https://aitrends.com/ai-insider/product-liability-self-driving-cars-looming-cloud-ahead/

Dangers Of Silos Among Autonomous Car Components

Some auto makers and tech firms find themselves confronting the classic silo mentality of the software side and the hardware side of their development groups.
The software side developing the AI is not so concerned about the details of the hardware and just expects that its AI will run in proper time. The hardware side puts in place as much computing power as it seems can be suitably provided, depending on cost considerations, physical space considerations, etc. If there is little or no load balancing that comes to play, in terms of making sure that both the software and hardware teams come together on how to load balance, it’s a recipe for disaster. Some might say that all they need to know is how much raw speed is needed, whether it is MIPS (millions of instructions per second), FLOPS (floating point operations per second), TPUs (tensor processing units), or other such metrics. This though doesn’t fully answer the performance question. The AI software side often doesn’t really know what kind of performance resources they’ll need per se. You can try to simulate the AI software to gauge how much performance it will require. You can create benchmarks. There are all sorts of “lab” kinds of ways to gauge usage. Once you’ve got AI self-driving cars in the field for trials, you should also be pulling stats about performance. Indeed, it’s quite important that there be on-board monitoring to see how the AI and the hardware are performing.

  • For my article about simulations and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/
  • For my article about benchmarks and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/
  • For my article about AI self-driving cars involved in accidents, see: https://aitrends.com/selfdrivingcars/accidents-happen-self-driving-cars/

With proper load balancing on-board the self-driving car, the load balancer is trying to keep the AI from getting starved and to ensure that the AI runs undisrupted by whatever might be happening at the hardware level.
The load balancer monitors the devices involved. When saturation approaches, it can potentially be handled via static or dynamic balancing, and this is where the load balancer comes into play. If an on-board device goes sour, the load balancer hopefully has a means to deal with the loss. Whether it’s redundancy or whether it is shifting over to have another device now do double-duty, you’ve got to have a load balancer on-board to deal with those moments. And do so in real-time. While the self-driving car is possibly in motion, on a crowded freeway, etc.

Fail-Safe Aspects To Keep In Mind

Believe it or not, I’ve had some AI developers say to me that it is ridiculous to think that any of the on-board hardware devices are going to just up and quit. They cannot fathom any reason for this to occur. I point out that the on-board devices are all prone to the same kinds of hardware failures as any piece of hardware. There’s nothing magical about being included in a self-driving car. There will be “bad” devices that will go out much sooner than their life expectancy. There will be devices that will go out due to some kind of in-car issue that arises, maybe overheating or maybe somehow a human occupant manages to bust it up. There are bound to be recalls on some of that hardware. Also, I’ve seen some of them deluded by the aspect that during the initial trials of self-driving cars, the auto maker or tech firm is pampering the AI self-driving car. After each journey or maybe at the end of the day, the tech team involved in the trials are testing to make sure that all of the hardware is still in pristine shape. They swap out equipment as needed. They act like a race car team, continually tuning and making sure that everything on-board is in top shape. There’s nearly an unlimited budget of sorts during these trials in that the view is do whatever it takes to keep the AI self-driving car running. This is not what’s going to happen once these cars reach the real world.
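Routing around a device that has gone sour, as described above, can be sketched as a least-loaded dispatcher that also tracks device health. This is an illustrative toy, not an actual on-board system; the device names are made up.

```python
class LeastLoadedBalancer:
    """Dynamic balancing sketch: send each task to the healthy device
    currently carrying the least load, skipping failed devices."""

    def __init__(self, devices):
        self.load = {d: 0 for d in devices}
        self.healthy = set(devices)

    def mark_failed(self, device):
        # A device that has gone sour no longer receives work.
        self.healthy.discard(device)

    def dispatch(self, cost=1):
        if not self.healthy:
            raise RuntimeError("no healthy devices left")
        # Ties broken by device name so the choice is deterministic.
        device = min(self.healthy, key=lambda d: (self.load[d], d))
        self.load[device] += cost
        return device

lb = LeastLoadedBalancer(["cpu-0", "cpu-1", "gpu-0"])
first = lb.dispatch()    # goes to the least-loaded healthy device
lb.mark_failed("gpu-0")  # the GPU goes sour; work shifts to the CPUs
```

When the GPU fails, the surviving devices silently absorb its share of the load, which is exactly the double-duty scenario: the system keeps running, but each remaining device is now closer to saturation.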
When those self-driving cars are being used by the average Joe or Samantha, they will not have a trained team of self-driving car specialists at the ready to tweak and replace whatever might need to be replaced. The equipment will age. It will suffer normal wear and tear. It will even be taxed beyond normal wear and tear since it is anticipated that AI self-driving cars will be running perhaps 24×7, nearly non-stop.

  • For my article about non-stop AI self-driving cars, see: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/
  • For repairs of AI self-driving cars, see my article: https://aitrends.com/ai-insider/towing-and-ai-self-driving-cars/

Conclusion

For those auto makers and tech firms that are giving short shrift right now to the importance of load balancing, I hope that this might be a wake-up call. It’s not going to do anyone any good, neither the public nor the makers of AI self-driving cars, if it turns out that the AI is unable to get the performance it needs out of the on-board devices. A load balancer is not a silver bullet, but it at least provides the kind of added layer of protection that you’d expect for any solidly devised real-time system. Presumably, there aren’t any auto makers or tech firms that opted to go with the four burner stove when an eight burner stove was needed. Copyright 2019 Dr. Lance Eliot. This content is originally posted on AI Trends.
  • $15M Global Learning XPRIZE Names Two Grand Prize Winners
    XPRIZE, the global leader in designing and operating incentive competitions to solve humanity’s grand challenges, announced two grand prize winners in the $15M Global Learning XPRIZE. The tie between Kitkit School from South Korea and the United States, and onebillion from Kenya and the United Kingdom, was revealed at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, where supporters and benefactors, including Elon Musk, celebrated all five finalist teams for their efforts. Launched in 2014, the Global Learning XPRIZE challenged innovators around the globe to develop scalable solutions that enable children to teach themselves basic reading, writing and arithmetic within 15 months. After being selected as finalists, five teams received $1M each and went on to field test their education technology solution in Swahili, reaching nearly 3,000 children across 170 villages in Tanzania. To help ensure anyone, anywhere can iterate, improve upon, and deploy the learning solutions in their own community, all five finalists’ software is open source. All five learning programs are currently available in both Swahili and English on GitHub, including instructions on how to localize into other languages. The competition offered a $10 million grand prize to the team whose solution enabled the greatest proficiency gains in reading, writing and arithmetic in the field test. After reviewing the field test data, an independent panel of judges found indiscernible results between the top two performers, and determined two grand prize winners would split the prize purse, receiving $5M each: Kitkit School (Berkeley, United States and Seoul, South Korea) developed a learning program with a game-based core and flexible learning architecture aimed at helping children independently learn, irrespective of their knowledge, skill, and environment.
onebillion (London, United Kingdom and Nairobi, Kenya) merged numeracy content with new literacy material to offer directed learning and creative activities alongside continuous monitoring to respond to different children’s needs. Currently, more than 250 million children around the world cannot read or write, and according to data from the UNESCO Institute for Statistics, about one in every five children is out of school – a figure that has barely changed over the past five years. Compounding the issue, there is a massive shortage of teachers at the primary and secondary levels, with research showing that the world must recruit 68.8 million teachers to provide every child with primary and secondary education by 2030. Before the Global Learning XPRIZE field test, 74% of the participating children were reported as never attending school, 80% reported as never being read to at home and over 90% of participating children could not read a single word in Swahili. After 15 months of learning on Pixel C tablets donated by Google and preloaded with one of the five finalists’ learning software, that number was cut in half. Additionally, in math skills, all five programs were equally effective for girls and boys. Collectively over the course of the competition, the five finalist teams invested approximately $200M in research, development, and testing for their software, a total that rises to nearly $300M when including all 198 registered teams. “Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of XPRIZE.
    “Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.” The grand prize winners and the following finalist teams were chosen from a field of 198 teams from 40 countries: CCI (New York, United States) developed structured and sequential instructional programs, in addition to a platform seeking to enable non-coders to develop engaging learning content in any language or subject area. Chimple (Bangalore, India) created a learning platform aimed at enabling children to learn reading, writing and mathematics on a tablet through more than 60 explorative games and 70 different stories. RoboTutor (Pittsburgh, United States) leveraged Carnegie Mellon’s research in reading and math tutors, speech recognition and synthesis, machine learning, educational data mining, cognitive psychology, and human-computer interaction. See the source release at XPRIZE.org.
  • Microsoft and Sony Become Partners Around Gaming and AI
    Microsoft and Sony announced an unusual partnership on May 16, allowing the two rivals to partner on cloud-based gaming services. “The two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services,” Microsoft said in a statement. Sony’s existing game and content-streaming services will also be powered by Microsoft Azure in the future. Microsoft says “these efforts will also include building better development platforms for the content creator community,” which sounds like both Sony and Microsoft are planning to partner on future services aimed at creators and the gaming community. Both companies say they will “share additional information when available,” but the partnership means Microsoft and Sony will collaborate on cloud gaming. That’s a pretty big deal, and it’s a big loss for Microsoft’s main cloud rival, Amazon. It also means Google, a new gaming rival to Microsoft and Sony, will miss out on hosting Sony’s cloud services. Google unveiled its Stadia game streaming service earlier this year, and the company will use YouTube to push it to the masses. Stadia is a threat to both Microsoft and Sony, and it looks like the companies are teaming up so Sony has some underlying infrastructure assistance to fight back. Stadia will stream games from the cloud to the Chrome browser, Chromecast, and Pixel devices. Sony already has a cloud gaming service, but Microsoft is promising trials of its own xCloud gaming service later this year. Microsoft’s gaming boss, Phil Spencer, has also promised the company will “go big” for E3 [Electronic Entertainment Expo]. As part of the partnership, Sony will use Microsoft’s AI platform in its consumer products. Read the source article in The Verge.
  • Arkansas Government Moving Aggressively to Shore Up Cybersecurity
    Arkansas will soon launch an ambitious initiative to use AI to bolster the state’s cybersecurity stance, while developing a scalable defense model that others can use in the future. Senate Bill 632, recently signed into law by Gov. Asa Hutchinson, authorizes the state’s Economic Development Commission (AEDC) to create a Cyber Initiative. This initiative will be responsible for working to mitigate the cyber-risks to Arkansas; increasing education relative to threats and defense; providing the public and private sectors with threat assessments and other intelligence; and fostering growth and development around tech including AI, IT and defense. The initiative will also create a “cyber alliance” made up of partnerships with a variety of institutions like “universities, colleges, government agencies and the private business sector,” all of which will work in a unified fashion toward realizing the initiative’s priorities. The bill also gives the program a potentially extensive financing framework, establishing a special fund that will consist of all money appropriated by the General Assembly, as well as “gifts, contributions, grants, or bequests received from federal, private, or other sources,” according to the text of the legislation. That money will go toward a wide variety of activities conducted through its myriad partnerships — including research, training officials at public and private institutions in defensive best practices, and developing business and academic opportunities. The initiative will also have a considerable privacy component, as it will be exempt from the Freedom of Information Act (FOIA) if a request is deemed a “security risk,” according to the bill text. Much of the initiative’s work will be centered around finding more effective methods to ferret out bad actors and identifying where and what those actors are looking to target within the state, said retired Col. Rob Ator, who serves as the director of Military Affairs for the AEDC. 
Arkansas, Ator said, is an attractive target to potential hackers because — as the bill notes — it is “home to national and global private sector companies that are considerable targets in the financial services, food and supply chain and electric grid sectors.” “For the first time in our nation’s history, the outward-facing defense for our critical infrastructure is no longer the folks in uniform and it’s no longer the government — it’s our private industry,” Ator said, adding that, as potential targets for cyberattacks, companies are now responsible for their own defense like never before. Read the source article in Government Technology.
  • S.F. Passes Facial-Recognition Ban; Capitalizing on AI’s Opportunities
    San Francisco passes facial-recognition surveillance ban. San Francisco on May 14 became the first U.S. city to pass a ban on the use of facial recognition by local agencies, reported WSJ’s Asa Fitch. The move comes amid a broad push to regulate the technology, which critics contend perpetuates police bias and provides excessive surveillance powers, although San Francisco’s own police force doesn’t use it. San Francisco isn’t alone. Similar bans have been proposed in Oakland, Calif., and Somerville, Mass. Facial recognition proponents cite the technology’s benefits. Law-enforcement groups say bans are an overreaction, adding that the technology can assist in catching criminals and locating missing people when used with police investigative techniques. Dozens of police forces around the country use the technology to analyze mug shots and driver’s-license photos to identify suspects. Opponents troubled by facial recognition’s possible flaws. Researchers at the Massachusetts Institute of Technology found that facial-recognition tools created by Amazon.com Inc. and others had significantly higher error rates when identifying darker-skinned people and women, according to the WSJ. Amazon disputed the findings. Critics say the system’s flaws raise concerns when the technology is used in decisions that affect people’s liberty. What’s needed to capitalize on AI’s opportunities. AI has the potential to alter economic growth, commerce and trade. But for AI to develop, there need to be new regulations for AI ethics and data access as well as a revisiting of existing regulations and laws around privacy and intellectual property, according to a report from the Brookings Institution. There also needs to be an international AI development agenda to avoid having a variety of unnecessary regulations that impede the technology’s adoption. 
The Brookings Institution offers a number of suggestions for maximizing AI’s benefits, among them:
  • Strengthen AI diffusion within and across countries. A lack of AI diffusion is producing a widening gap between companies in business sectors. “Policies are needed to increase the rates and depth of technology diffusion across the economy,” according to the report.
  • Develop education and skills. AI will require science, technology, engineering and math education, and training. And those skills need to be developed globally.
  • Establish sound cybersecurity. Governments need to develop national cybersecurity strategies. However, international cybersecurity standards could create unneeded barriers to trade in AI, the report says.
  • Protect the privacy of personal data. Strong rules are needed, but, at the same time, those rules shouldn’t put unnecessary restrictions on cross-border data flows.
  • Domestic agenda. A “consistent, transparent, and standardized framework” is needed for sharing data sets across government agencies, researchers and the private sector, the report states.
  • Develop a balanced intellectual-property framework. AI technology needs a supportive intellectual-property rule base that includes fair-use exceptions to copyright, which would allow duplicating data for training purposes.
  • Develop AI ethical principles. In order to develop trust in artificial intelligence, AI processes and outcomes need to be ethical, according to the report.
Read the source article in the Wall Street Journal.
  • Human-To-Machine Depersonalization Considerations and AI Systems: The Case of Autonomous Cars
    By Lance Eliot, the AI Trends Insider

Are automation, and in particular AI, leading us toward a service society that depersonalizes us? In a prior column, I described how AI can provide deep personalization via Machine Learning. Readers responded by asking about the concerns expressed in the mass media that advanced automation might depersonalize us as humans, and what I had to say about those qualms. Thanks to that feedback, I’m covering the depersonalization topic herein. For my prior column on deep personalization via Machine Learning, see: https://www.aitrends.com/selfdrivingcars/ai-deep-personalization-the-case-of-ai-self-driving-cars/ Some pundits say yes, arguing that the human touch in providing services is becoming scarcer and scarcer, and that eventually we’ll all be served by AI systems that treat us humans as though we are non-human. More and more we’ll see and experience that humans will lose their jobs to AI and be replaced by automation that is less costly, and notably less caring, eschewing us as individuals and abandoning personalized service. Those uncaring and heartless AI systems will simply stamp out whatever service is being sought by a human and there won’t be any soul in it, there won’t be any spark of humanity; it will be push-button automata only. In my view, those pundits are seeing the glass as only half empty. They seem not to notice, or not to want to observe, that the glass is also half full. Let me share with you some examples of what I mean. Before I do so, please be aware that the word “depersonalization” can have a multitude of meanings. In the clinical or psychology field, it refers to feeling detached or disconnected from your body and would be considered a type of mental disorder. I’m not using the word in that manner herein. 
Instead, the theme I’m invoking involves the humanization or dehumanization around us, floating the idea of being personalized to human needs or being depersonalized to them. That’s a societal frame rather than a purely psychological portrayal. With that said, let’s use an example to get at my depersonalization and personalization theme.

Banking ATM As An Example Of Alleged Depersonalization

Banking is an area ripe with prior exhortations of how bad things would become once ATMs took over, as there would no longer be in-branch banking with human tellers (that’s what was predicted). We would all be slaves to ATMs. You were going to stand in front of the ATM and yell out “where have all the humans gone?” as you fought with the banking system to perform a desired transaction. Recently, I went to my local bank branch to make a deposit. I was in a hurry and opted to squeeze in this errand on my way to a business meeting. The deposit was somewhat sizable, so I opted to go and perform the transaction with the human teller, rather than feed my deposit into the “impersonal” ATM. Upon my coming up to the teller window, the teller provided a big smile and welcomed me to the bank. How’s my day going, the teller asked. The teller proceeded to mention that it had been a busy morning at the branch. Looking outside the branch window, the teller remarked that it looked like rain was on its way. I wanted to make the deposit and get going, yet could see that the chitchat, though friendly and warm, would keep dribbling along and wasn’t seemingly in pursuit of my desired transaction. I presented my check to be deposited and was asked to first run my banking card through the pad at the teller window. I did so. The teller looked at my info on their screen and noted that I had not talked with one of their bankers for quite a while, offering to bring over a bank manager to say hello. I declined the gracious offer. 
The teller then noted that one of my CDs was coming due in a month and suggested that I might want to consider various renewal options. Not just now, I demurred. The teller finally proceeded with the deposit and then, just as I was stepping away to head out, called me back to mention that they were having a special this week on opening new accounts. Would I be interested in doing so? As you can imagine, in my haste to get going, I quickly said no thanks and tried to make my way to the door. Turns out that the teller had already signaled to the bank manager and I was met at the door with a thanks for coming in by the pleasant manager, along with handing me a business card in case I had any follow-up needs. Let’s unpack or dissect my in-branch experience. On the one hand, you could say that I was favorably drenched in high-touch service. The teller engaged me in dialogue and tried to create a human-to-human connection. Rather than solely focusing on my transaction, I was offered a bevy of other options and possibilities. My banking history at the bank was used to identify opportunities for me to enhance my banking efforts at the bank. All in all, this would seem to be the picture-perfect example of human personalized service. Having done lots of systems work in the banking industry, I know how hard it can be to get a branch to provide personalized and friendly service. One bank that I helped fix had a lousy reputation when I first was brought in, known for having branches that were terribly run. Whenever you went into a branch it was like going to a gulag. There were long lines, the tellers were ogres, and you felt as though you were a mere cog in a gigantic wheel of their banking procedures, often making the simplest banking acts into a torturous affair. Thus, my recent experience of making my deposit at my local branch should be a shining example of what a properly run bank branch is all about. 
If I were to have to choose between the somewhat over-friendly experience versus going to a branch that was like descending into Dante’s inferno, I certainly would choose the overly friendly case. Nonetheless, I’d like to explore more deeply the enriched banking experience. I was in a hurry. The friendly dialogue and attempts to upsell me were getting in the way of a quick in-and-out banking transaction. In theory, the teller should have judged that I was in a hurry (I assure you that I offered numerous overt signals as such) and toned down the ultra-service effort. It is hard perhaps to fault the teller and one might point at whatever pressures there are on the teller to do the banking jingle, perhaps drummed into the teller as part of the training efforts at the bank and likely baked into performance metrics and bonuses. In any case, I walked out of the branch grumbling and vowed that in the future I would use the ATM. Unfair, you say? Maybe. Am I being a whiner by “complaining” about too much personalized service? Maybe. But it shouldn’t be that I have to make a choice between the rampant personalized service versus the utterly depersonalized gulag service. I should be able to choose which suits my service needs at the time of consuming the service. About a week later, I had to make another deposit and this time used the drive-thru ATM. After entering my PIN, the screen popped-up asking if I was there to make a deposit, and if so, there was a one-click offered to immediately shift into deposit taking mode. I used the one-click, slipped my check into the ATM, it then scanned and asked me to confirm the amount, which I did, and the ATM then indicated that I usually don’t get a printed receipt and unless I wanted one this time, it wasn’t going to print one out. I was satisfied that the deposit seemed to have been made and so I put my car into gear and drove on. 
The entire transaction time must have been around 30 seconds at most, making it many times faster than when I had made a deposit via the teller. I did not have to endure any chitchat about the weather. I was not bombarded with efforts to upsell me. In-and-out, the effort was completed, readily, without fanfare. Notice that the ATM had predicted that I was there to make a deposit. That was handy. Based on my last several transactions with the bank, the banking system had analyzed my pattern and logically deduced that I was most likely there to make a deposit. And, I was offered a one-click option to proceed with making my deposit, which showcased that not only was my behavior anticipated, the ATM tailored its actions to enable my transaction to proceed smoothly. Would you say that my ATM experience was one of a personalized nature or a depersonalized nature?

Deciding On Whether There Is Depersonalization Or Personalization

We always tend to assume that whenever something is “depersonalized” that it must be bad. The word has a connotation that suggests something untoward. Nobody wants to be depersonalized. In the case of the ATM, I wasn’t asked about the weather and there wasn’t a smiling human that spoke with me. I interacted solely with the automation. If that’s the case, ergo I must be getting “depersonalized” service, one would assume. Yet, my ATM experience was actually personalized. The system had anticipated that I wanted to make a deposit. This had been followed-up by making the act of depositing easy. Once I had made the deposit, the ATM did not just spit out a receipt, which often is what happens (and I frequently see deposit receipts lying on the ground near the ATM, presumably leftover by hurried humans). The ATM knew via my prior history that I tended to not get a receipt and therefore the default was going to be that it would not produce one in this instance. 
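The kind of anticipation the ATM displayed can be pictured as little more than a frequency model over a customer’s recent history. Here is a minimal, purely hypothetical sketch; the function name and transaction labels are mine, not the bank’s actual system:

```python
from collections import Counter

def predict_next_transaction(history):
    """Guess the customer's most likely next transaction type by
    taking the most frequent type in their recent history."""
    if not history:
        return None  # no history, no prediction
    return Counter(history).most_common(1)[0][0]

# A customer whose last several visits were mostly deposits:
recent = ["deposit", "deposit", "withdrawal", "deposit", "deposit"]
print(predict_next_transaction(recent))  # "deposit"
```

A real banking system would weigh far richer signals (the check in hand, time of day, account activity), but even this toy version captures why the one-click deposit prompt felt personalized rather than canned.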
Given the other kinds of more sophisticated patterns in my banking behavior that could be found by using AI capabilities, I thought that this ATM experience was illustrative of how even simple automation can provide a personalized service experience. Imagine what more could be done if we added some hefty Machine Learning or Deep Learning into this. I’ve used the case of the banking effort to help illuminate the notion of what constitutes personalization versus depersonalization. Many seem to assume that if you remove the human service provider, you are axiomatically creating a depersonalized service. I don’t agree. Take a look at Figure 1. As shown, the performance of a service act consists of the service provider and the receiver of the service, the customer. Generally, when considering depersonalized service, we think about the service provider as being perfunctory, dry, lacking in emotion, unfeeling, aloof, and otherwise without an expression of caring for the customer. We also then think about the receiver of the service, the customer, and their reaction of presumably becoming upset at the lack of empathy to their plight as they are trying to obtain or consume the service. I argue that the service provider can provide a personalized or depersonalized service, either one, even if it is a human providing the service. The mere act of having a human provide a service does not make it personalized. I’m sure you’ve encountered humans that treated you as though you were inconsequential, as though you were on an assembly line, and they had very little if any personalization, likely bordering on or perhaps fully enmeshed into depersonalization. A month ago, I ventured to use the Department of Motor Vehicles (DMV) office and was amazed at how depersonalized they were able to make things. Each of the human workers in the DMV office had that look as though they would prefer to be anyplace but working in the DMV. 
The people flowing into the DMV were admittedly rancorous and often difficult to contend with. I’m sure these DMV workers had their fill each day of people that were grotesquely impolite and cantankerous. In any case, there were signs telling you to stand here, move there, wait for this, wait for that. Similar to getting through an airport screening, this was a giant mechanism to move the masses through the process. I’m sure it was as numbing for the DMV workers as it was for those of us there to get a driver’s license transaction undertaken. Let’s all agree then that you can have a human that provides a personalized or a depersonalized service, which will be contingent on a variety of factors, such as the processes involved, the incentives for the human service provider, and the like. I’d like to next assert that automation can also provide either a personalized service or a depersonalized service. Those are both viable possibilities.

Depends On How The Automation Is Devised

It all depends upon how the automation has been established. In my view, if you add AI to providing a service, and do it well, you are going to have a solid chance of making that service personalized. This won’t happen by chance alone. In fact, by chance alone, you are probably going to have AI service that seems depersonalized. We might at first assume that the automation is going to be providing a depersonalized service, likewise we might at first assume that a human will provide a personalized service. That’s our usual bias. Either of those assumptions can be upended. Furthermore, it can be tricky to ascertain what personalized versus depersonalized service consists of. In my example about the bank branch and the teller, everything about the setup would seem to suggest a high-touch personalized service. I’m sure the bank spent a lot of money to try and arrive at the high-touch level of service. 
Yet, in my case, in the instance of wanting to hurriedly do a transaction, the high-touch personalized service actually defeated itself. That’s a problem with having personalized service that is ironically inflexible. It is ironic in that the assumption is that personalized means that you will be incessantly presented with seeming personalization. Instead, the personalization should be based on the human receiving the service and what makes most sense for them. Had the teller picked-up on the aspect that I was in a hurry, it would have been relatively easy to switch into a mode of aiding my transaction and getting me out of the bank, doing so without undermining the overarching notion of personalization. For those of you that are AI developers, I hope that you keep in mind these facets about depersonalization and personalization. Namely, via AI, you have a chance at making a service providing system more responsive and able to provide personalization, yet if you don’t seek that possibility, the odds are that your AI system will appear to be the furtherance of depersonalization. Humans interacting with your AI system are more likely to be predisposed to the belief that your AI will be depersonalizing. In that sense, you have a larger hurdle to jump over. In the case of a human providing a service, by-and-large we all tend to assume that it is likely to be imbued with personalization, though for circumstances like the DMV and airport screening we’ve all gotten used to the idea that you are unlikely to get personalized service in those situations (when it happens, we are often surprised and make special note of it). You also need to take into account that there is personalization of an inflexible nature, which can then undermine the personalization being delivered. 
As indicated via the bank branch example, using that as an analogy, consider that if you do have AI that seems to provide personalization, don’t go overboard and force whatever monolithic personalization that you came up with onto all cases of providing the service. Truly, personalized service should be personalized to the needs of the customer in-hand.

AI Self-Driving Cars As An Example

What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are numerous ways in which the AI can either come across as personalized or depersonalized, and it is important for auto makers and tech firms to realize this and devise their AI systems accordingly. Allow me to elaborate. I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results. 
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion. Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. 
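The usual driving-task steps can be pictured as a simple pipeline, each stage handing a cleaner representation to the next. This is a purely illustrative toy; every stage function below is a hypothetical placeholder, not any real self-driving stack:

```python
# A minimal sketch of the driving-task steps as a pipeline.
# Every stage below is a hypothetical placeholder, not a real stack.

def interpret(reading):
    # Sensor data collection and interpretation
    return {"object": reading["object"], "distance_m": reading["distance_m"]}

def sensor_fusion(detections):
    # Sensor fusion: merge per-sensor detections into one consistent view
    return {d["object"]: d["distance_m"] for d in detections}

def update_world_model(fused):
    # Virtual world model updating
    return {"tracked_objects": fused}

def plan_actions(world_model):
    # AI action planning: a toy rule, brake if anything is within 10 meters
    nearest = min(world_model["tracked_objects"].values(), default=float("inf"))
    return "brake" if nearest < 10 else "cruise"

def issue_car_controls(plan):
    # Car controls command issuance
    return {"command": plan}

def drive_cycle(raw_sensor_data):
    """One pass through the usual steps of the AI driving task."""
    detections = [interpret(r) for r in raw_sensor_data]
    fused = sensor_fusion(detections)
    world_model = update_world_model(fused)
    plan = plan_actions(world_model)
    return issue_car_controls(plan)

readings = [{"object": "bicycle", "distance_m": 8.0},
            {"object": "car", "distance_m": 25.0}]
print(drive_cycle(readings))  # {'command': 'brake'}
```

The point of the sketch is the structure, not the stubs: a missed detection early in the pipeline propagates all the way through to the action planning, which is why the quality of each stage matters.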
It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of depersonalization and personalization, let’s consider how AI self-driving cars can get involved in and perhaps mired into these facets.

Bike Riders And AI Self-Driving Cars

I was speaking at a recent conference on AI self-driving cars and during the Q&A there was an interesting question or point made by an audience member. The audience member stood-up and derided human drivers that often cut-off bike riders. She indicated that to get to the conference, she had ridden her bike, which she also rides when going to work (this event was in the Silicon Valley, where bike riding for getting to work is relatively popular). While riding to the convention, she had narrowly gotten hit at an intersection when a car took a right turn and seemed to have little regard for her presence as she rode in the bike lane. You might assume that the car driver was not aware that she had been in the bike lane and therefore mistakenly cut her off. 
If that was the case, her point could be that an AI self-driving car would presumably not make that same kind of human error. The AI sensors would hopefully detect a bike rider and then appropriately the AI action planner would attempt to avoid cutting off the bike rider. It seemed though that she believed the human driver did see her. The act of cutting her off was actually deliberate. The driver was apparently of a mind that the car had higher priority over the bike rider, and thus the car could dictate what was going to happen, namely cut-off the bike rider so that the car could proceed to make the right turn. I’m sure we’ve all had situations of a car driver that wanted to demand the right-of-way and figured that a multi-ton car has more heft to decide the matter than does a fragile human on a bicycle. What would an AI self-driving car do? Right now, assuming that the AI sensors detected the bike rider, and assuming that the virtual world model was updated with the path of the bike rider, and assuming that the AI action planner portion of the system was able to anticipate a potential collision, presumably the AI would opt to brake and allow the bike rider to proceed. We must also consider the traffic situation at the time, since we don’t know what else might have been happening. Suppose a car was on the tail of the AI self-driving car and there was a risk that if the AI self-driving car abruptly halted, allowing the bike rider to proceed, the car behind the AI self-driving car might smack into the rear of the AI self-driving car. In that case, perhaps the risk of being hit from behind might lead the AI to determine that the risk of cutting off the bike rider is less overall and therefore proceed to cut-off the bike rider. I mention this nuance about the AI self-driving car and its choice of what to do because of the oft times assumption by many that an AI self-driving car is always going to do “the right thing” in terms of making car driving decisions. 
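The trade-off just described, braking for the bike rider versus risking a rear-end collision, can be caricatured as an expected-harm comparison. A minimal sketch, with made-up probabilities and severity weights; real AI action planners are vastly more elaborate than this:

```python
def choose_maneuver(p_rear_end, rear_end_severity, p_hit_biker, biker_severity):
    """Pick the maneuver with the lower expected harm:
    'brake' risks being rear-ended by the tailing car;
    'proceed' risks cutting off the bike rider.
    All probabilities and severity weights are made up for illustration."""
    risk_of_braking = p_rear_end * rear_end_severity
    risk_of_proceeding = p_hit_biker * biker_severity
    return "brake" if risk_of_braking <= risk_of_proceeding else "proceed"

# A fragile human on a bike weighs far more than a low-speed rear-end bump,
# so even a modest chance of hitting the rider favors braking.
print(choose_maneuver(p_rear_end=0.3, rear_end_severity=2,
                      p_hit_biker=0.1, biker_severity=50))  # brake
```

Flip the numbers, say a near-certain high-speed rear-end collision against a remote chance of grazing the rider, and the same comparison yields "proceed", which is exactly the context-dependence argued for above.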
In essence, people often tell me about situations of driving that they assume an AI system would “not make the same mistake” that a human made, and yet this assumption is often made in a vacuum. Without knowing the context of the driving situation, how can we really say what the “right thing” was to do?

For my article about the human foibles of driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/
For the use of probabilities and uncertainty in AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/probabilistic-reasoning-ai-self-driving-cars/
For the safety aspects of AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/
For my article about defensive driving for AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

In any case, you might argue that the question brought up by the audience member is related to personalization and depersonalization. If the human driver was considering the human bike rider in a depersonalized way, they might have made the cut-off decision without any sense of humanity being involved. Here’s what might have been taking place. That’s not a human on that bicycle, it is instead a thing on an object that is moving into my path and getting in my way of making that right turn, the driver might have been thinking. Furthermore, the driver might have been contemplating this: I am a human and my needs are important, and I need to make that right turn to proceed along smoothly and not be slowed down. The human driver objectifies the bike rider. The bike is an impediment. The human on the bike is meshed into the object. Now, we don’t know that’s what the human driver was contemplating, but it is somewhat likely. 
It is easy when driving a car to fall into the mental trap that you are essentially in a video game. Around you are these various objects that are there to get in your way. Using your video gaming skills, you navigate in and around those objects.

If this seems farfetched, you might consider the emergence of road rage. People driving a car will at times become emboldened while in the driver's seat. They are in command of a vehicle that can determine life or death for others. This can inflate their sense of self-importance. They can become irked by other drivers and by pedestrians and decide to take this out on those around them.

For my article about road rage, see: https://www.aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/

As I've said many times in my speeches and in my writings, it is a miracle that we don't have more road rage than we already have. It is estimated that in the United States alone we drive a combined 3.22 trillion miles. Consider those 250 million cars in the United States and the drivers of those cars, how unhinged some of them might be, or how unhinged they might become upon being slighted, or perceiving they were slighted, while driving, and it really is a miracle that we don't have more untoward driving acts.

Encountering Humans In A Myriad of Ways

Back to the bike rider that got cut off: there is a possibility that the human driver depersonalized the bike rider. This once again illustrates that humans are not necessarily going to undertake personalized acts in what they do. An AI self-driving car might or might not take a more personalized approach, depending upon how the AI has been designed, developed, and fielded.

Take a look at Figure 2. As shown, an AI self-driving car is going to encounter humans in a variety of ways. There will be human passengers inside the AI self-driving car.
There will be pedestrians outside of the AI self-driving car that it comes across. There will be human drivers in other cars, which the AI self-driving car will encounter while driving on the roadways. There will be human bike riders, along with other humans on motorcycles, scooters, and so on.

If you buy into the notion that the AI is by necessity a depersonalizing mechanism, meaning that in comparison to human drivers the AI driver will act toward humans in a depersonalized manner, more so than other human drivers presumably would, this seems to spell possible disaster for humans. Are all of these humans that might be encountered going to be treated as mere objects and not as humans?

The counter-argument is that the AI can embody a form of personalization that would make the AI driver better than the at-times depersonalizing human driver. The AI system might have a calculus that assesses the value of the bike based on the human riding the bike. Unlike the human driver of earlier mention, presumably the AI is going to take into account that a human is riding the bike.

In the case of interacting with human passengers, there is a possibility of having the AI make use of sophisticated Natural Language Processing (NLP) and socio-behavioral conversational computing. In some ways, this could be done such that the personalization of interaction is on par with a typical human driver, perhaps even better.

Have you been in a cab or taxi in which the human driver was lacking in conversational ability, unable to respond when you asked where's a good place to eat in town? Or, at the opposite extreme, have you been in a ridesharing car in which the human driver was overly responsive, chattering the entire time and quizzing you about who you are, where you work, and what you do? That's akin to my bank teller example earlier.
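The "calculus that assesses the value of the bike based on the human riding the bike" could, in a highly simplified form, attach a protection weight to each detected object that reflects whether a human is present. The object classes and weights below are entirely hypothetical, invented for illustration.

```python
# Hypothetical sketch: weighting detected objects by human presence, so
# that a bicycle with a rider is protected like a person rather than
# treated as a mere object. All classes and weights are invented.

BASE_WEIGHT = {"debris": 0.1, "parked_car": 0.5, "bicycle": 0.6, "pedestrian": 1.0}

def protection_weight(object_class: str, human_present: bool) -> float:
    """Raise an object's protection weight to at least pedestrian level
    whenever a human is detected on or in it."""
    base = BASE_WEIGHT.get(object_class, 0.5)
    return max(base, BASE_WEIGHT["pedestrian"]) if human_present else base

print(protection_weight("bicycle", human_present=True))   # ridden: person-level
print(protection_weight("bicycle", human_present=False))  # unattended bike
```

Under this kind of scheme, the AI's planner would never "mesh the human into the object" the way the human driver in the anecdote apparently did.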
Goldilocks Approach Is Best

AI developers ought to be aiming for the Goldilocks version of interaction with human passengers: not too little conversation, and not too much. On some occasions, the human passenger will just want to say where they wish to go and not want any further discussion. In other cases, the human passenger might be seeking a vigorous dialogue. One size does not fit all.

For socio-behavioral computing, see my article: https://www.aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/
For the use of ESP2, see my article: https://www.aitrends.com/selfdrivingcars/extra-scenery-perception-esp2-self-driving-cars-beyond-norm/
For how the AI might interact during family trips, see: https://www.aitrends.com/selfdrivingcars/family-road-trip-and-ai-self-driving-cars/
For the use of NLP for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

In terms of interacting with humans that are outside of the AI self-driving car, there is definitely a bit of a problem on that end of things. Just the other day, I drove up to a four-way stop. There was another car already stopped, sitting on the other side of the intersection, apparently waiting. I wasn't sure why the other driver wasn't moving forward. They had the right-of-way. Were they worried that I wasn't going to come to a stop? Maybe they feared that I was going to run through the stop signs and so they were waiting to make sure I came to… Read more »
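The Goldilocks principle of not too little and not too much conversation can be sketched as a simple passenger-selectable verbosity setting. The verbosity levels and replies below are hypothetical, purely to illustrate the design idea that one size does not fit all.

```python
# Hypothetical sketch of the "Goldilocks" interaction principle: the
# passenger chooses how chatty the in-car assistant is. Levels and
# replies are invented for illustration.

REPLIES = {
    "minimal": "Destination set.",
    "moderate": "Destination set. Traffic looks light; about 15 minutes.",
    "chatty": ("Destination set. Traffic looks light; about 15 minutes. "
               "By the way, there's a well-reviewed cafe along the route."),
}

def respond(verbosity: str) -> str:
    """Return a confirmation matched to the passenger's preference,
    defaulting to the middle ground rather than too little or too much."""
    return REPLIES.get(verbosity, REPLIES["moderate"])

print(respond("minimal"))
print(respond("chatty"))
```

A passenger who just wants to state a destination gets silence afterward; one seeking dialogue gets more, with the moderate setting as the default.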
  • How AI in the Energy Sector Can Help Solve the Climate Crisis
    Artificial intelligence conjures fears of job loss and privacy concerns, not to mention sci-fi dystopias. But machine learning can also help us save energy and make renewables better.

    Artificial intelligence (AI) is infiltrating every corner of our lives. Video streaming services use it to learn our tastes and suggest what we might like to watch next. AIs have beaten the world's best players in complex board games like chess and Go. Some scientists even believe AI could one day achieve superhuman intelligence, resulting in apocalyptic scenarios reminiscent of films like "The Matrix."

    As if to dispel such fears, the UN AI For Good Global Summit in Geneva later this month highlights AI applications to address the pressing problems of our time, including climate change. Most countries aren't cutting emissions nearly fast enough. AI could help speed things up. In particular, a field called machine learning can process colossal amounts of data to make energy systems more efficient.

    To fulfil the Paris Agreement, we will have to virtually eliminate fossil-fueled energy from all sectors of the economy. This will mean networking decentralized, fluctuating renewable power generation with consumers that automatically adjust to minimize waste and balance the entire system. Hendrik Zimmermann, a researcher into digitalization and sustainability at environmental NGO Germanwatch, says efficiently managing data on this scale is only possible with AI. "To be able to design this system, we need digital technologies and a lot of data that have to be quickly collected and analyzed," Zimmermann told DW. "AI or machine learning algorithms can help us manage this complexity and get to zero emissions."

    Cutting energy consumption

    But digitalization comes with a host of problems, too, not least the huge amount of energy all this data processing itself consumes.
    Sims Witherspoon is a program manager at DeepMind, the British AI firm owned by Google's parent company Alphabet that developed the Go-playing bot. She told DW that data centers, the huge "server farms" around the world that store users' data, now consume 3% of global energy. That is why DeepMind decided to use its "general purpose learning algorithms" to reduce the energy needed to cool Google data centers by up to 40%. Read the source article in DW. Read more »
  • AI System Tries to Match Accuracy of 101 Radiologists on Mammograms
    A commercial artificial intelligence (AI) system matched the accuracy of over 28,000 interpretations of breast cancer screening mammograms by 101 radiologists. Although the most accurate mammographers outperformed the AI system, it achieved a higher performance than the majority of radiologists.

    With the addition of deep-learning convolutional neural networks, new AI systems for breast cancer screening improve upon the computer-aided detection (CAD) systems that radiologists have used since the 1990s. The AI system evaluated in this study, conducted by radiologists and medical physicists at Radboud University Medical Centre, has a feature classifier and image analysis algorithms to detect soft-tissue lesions and calcifications, and generates a "cancer suspicion" ranking of 1 to 10.

    The researchers examined unrelated datasets of images from nine previous clinical studies. The images were acquired from women living in seven countries using four different vendors' digital mammography systems. Every dataset included diagnostic images, radiologists' scores of each exam, and the actual patient diagnosis. The 2,652 cases, of which 653 were malignant, incorporated a total of 28,296 individual single-reading interpretations by 101 radiologists participating in previous multi-reader, multi-case observer studies. The readers included 53 radiologists from the United States, who represented an equal mix of breast imagers and general radiologists, plus 48 European radiologists who were all breast specialists.

    Principal investigator Ioannis Sechopoulos and colleagues reported that the performance of the AI tool (ScreenPoint Medical's Transpara) was statistically non-inferior to that of the radiologists, with an AUC (area under the ROC curve) of 0.840, compared with 0.814 for the radiologists. The AI system had a higher AUC than 62 of the radiologists and higher sensitivity than 55 radiologists.
    The performance of the AI system was, however, consistently lower than that of the best-performing radiologists in all datasets. The authors suggested that this may be because the radiologists had more information available for assessment, such as prior mammograms, for the majority of cases. However, the team did not have access to the experience levels of the 101 radiologists, and therefore could not determine whether the radiologists who outperformed the AI system were also the most experienced.

    The researchers suggest several ways that an AI system designed to detect breast cancer could be used. One possibility is its use as an independent first or second reader in regions with a shortage of radiologists to interpret screening mammograms. It also could be employed in the same manner as CAD systems, as a clinical decision support tool to aid an interpreting radiologist. Sechopoulos also thinks that AI will be useful for identifying normal mammograms that do not need to be read by a screening radiologist.

    "With the right developments, it could also be used to identify cases that can be read by only one radiologist to confirm that recalling the patient is necessary," he tells Physics World. "These strategies could give radiologists more time to focus on more complex cases, and eventually could be part of the solution needed to implement digital tomosynthesis in screening programs. This is important because tomosynthesis takes considerably longer to read than mammography." Read the source article in physicsworld. Read more »
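The AUC figures that drive this comparison (0.840 for the AI versus 0.814 for the readers) come from the area under the ROC curve, which equals the probability that a randomly chosen malignant case receives a higher suspicion score than a randomly chosen benign one. The sketch below computes that rank-based AUC on toy data; the labels and scores are made up, not taken from the Transpara study.

```python
# Toy illustration of the AUC (area under the ROC curve) metric used to
# compare the AI system with the radiologists. AUC is the probability
# that a malignant case outscores a benign one; ties count as half.

def auc(y_true, scores):
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [0, 0, 0, 1, 1, 1, 0, 1]   # 1 = malignant case (invented data)
scores = [1, 2, 5, 8, 9, 4, 3, 6]   # "cancer suspicion" ranking, 1 to 10

print(auc(y_true, scores))  # 0.9375
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which is why the study's 0.840 versus 0.814 gap, while small, favors the AI over the average reader.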
  • AI at Mastercard: Here Are Current Projects and Services
    Banks and financial institutions are particularly opaque when it comes to how they implement and leverage AI for their business. Mastercard is a key example of this: the company uses most of its AI applications internally and has only recently begun to make its technology more transparent to the greater financial industry. Since its initial adoption of AI and machine learning in 2016, Mastercard acquired Brighterion in 2017 and has continually expanded its AI capabilities.

    In this article, we give an overview of three AI initiatives from Mastercard. We detail the use cases for each one and highlight other possibilities in those areas. Mastercard offers the following AI services:

    • Credit card fraud detection
    • AI consulting services
    • Biometric authentication for purchases and account access

    Before we start our overview, we discuss the most important insights from our research into Mastercard's AI projects.

    AI at Mastercard – Key Insights

    It is clear that Mastercard's most refined application of artificial intelligence is in its fraud detection solutions. Using predictive analytics technology to detect and score transactions on how likely they are to be fraudulent allows for continued learning of new fraud techniques. In addition, the solution allows for the detection of abnormal shopping behavior based on a customer's spending history.

    Our research led us to Mastercard's artificial intelligence consulting service, an AI development crash course called AI Express. The service is for businesses with little experience in AI that are looking to get acquainted with machine learning quickly. AI Express is intended to give businesses a closer look at the machine learning algorithms Mastercard uses for various services, so that they might learn from that work and develop their own machine learning models.
    It is also likely that Mastercard provides AI consulting with the data science and machine learning talent at the company, many of whom were hired when it acquired AI consulting firm Brighterion. Mastercard claims that through AI Express it can help companies create tailor-made machine learning models for their specific business problems, but the exact degree of specificity is unclear. It's also likely that data science expertise is required to make sense of Mastercard's machine learning, so business leaders should not expect to look "under the hood" at Mastercard's AI and easily understand how to implement something similar at their own business. Businesses will likely need to have their own in-house data scientists work with Mastercard's in order to build a model for use in business.

    Mastercard's AI-based biometric authentication software, Mastercard Identity Check, seems capable of facial recognition and analyzing live video. The software enables two-factor authentication that resembles taking a selfie. First, it verifies the face of the account owner, then prompts the user to blink. The software then detects the blink and uses it to confirm that the authenticated face is alive.

    We begin our overview of Mastercard's AI initiatives with their predictive analytics approach to detecting and preventing fraud.

    Credit Card Fraud Detection

    Mastercard's most prominent use of artificial intelligence is its fraud detection solution, Decision Intelligence. The software uses predictive analytics to analyze customer and transaction data and produce a score of how likely a transaction is to be fraudulent. These scores can help Mastercard both decline fraudulent transactions in real time and prevent false declines of legitimate transactions.
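The idea of scoring a transaction and then deciding between a real-time decline and an approval can be sketched as follows. Mastercard has not published the internals of Decision Intelligence, so the scoring rule, feature choices, and threshold here are entirely invented for illustration.

```python
# Hypothetical sketch of score-then-decide transaction screening, in the
# style described for Decision Intelligence. The scoring heuristic and
# the decline threshold are invented; real systems use learned models.

def fraud_score(amount: float, avg_amount: float, new_merchant: bool) -> float:
    """Score 0-1; higher means more suspicious. Combines how far the
    amount deviates from the customer's history with merchant novelty."""
    score = min(amount / (10 * avg_amount), 1.0) * 0.7
    if new_merchant:
        score += 0.3
    return min(score, 1.0)

def decide(score: float, decline_threshold: float = 0.8) -> str:
    """Decline in real time above the threshold, otherwise approve."""
    return "decline" if score >= decline_threshold else "approve"

# A routine purchase at a familiar merchant sails through...
print(decide(fraud_score(40.0, avg_amount=50.0, new_merchant=False)))
# ...while a large purchase at an unseen merchant is declined.
print(decide(fraud_score(2000.0, avg_amount=50.0, new_merchant=True)))
```

Keeping the threshold high is what addresses the false-decline pain point quoted below: a merely unusual purchase scores above baseline but still gets approved.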
“We are solving a major consumer pain point of being falsely declined when trying to make a purchase,” said Ajay Bhalla, President of Cyber and Intelligence Solutions at Mastercard, regarding the first implementation of Decision Intelligence. It is clear that false declines and fraud prevention were among Mastercard’s chief concerns while developing the model behind Decision Intelligence. Read the source article at emerj. Read more »