What would you do with unlimited traffic?

There are five elements you need in order to succeed in business:

  1. Traffic
  2. Product
  3. Price
  4. Presentation
  5. Closing


Our proprietary technology allows us to offer you unprecedented services:

  1. Unlimited traffic
  2. Fast results: weeks instead of months or years
  3. And best of all, we deliver or it is free

The advantage of using technology to generate traffic is that you are now free to work on the other four factors that determine your success.

So even if you fail more than once, and even alienate your prospects, our technology keeps bringing you more people, so you can keep correcting your mistakes until you get it right and succeed in your business. This is what we do for you; you do the rest.

This is what we do for you: provide unlimited traffic, as much as you want!

Just contact us for a free consultation.


Your products or services are of paramount importance; your visitors must want what you have to offer. It is obvious, right? But even if you fail here, you can correct the issue, because we keep on bringing you people. What good is the best product if no one comes?


All these visitors found you online, so if your price is not competitive, they can just as easily find your competitors.


You must be able to explain quickly and clearly what you offer.


How will you deliver, get paid, and so on? This is probably the easiest element, but it is just as important.

Latest News

  • Executive Interview: David Bray and Bob Gourley, Technology Entrepreneurs and Thought Leaders
    Risk Management of AI and Algorithms for Organizations, Societies and Our Digital Future Two technology entrepreneurs and thought leaders, Bob Gourley and Dr. David Bray, recently spoke with AI Trends Editor John Desmond about managing the risk of AI rollouts, addressing the security of the organization and realizing the benefits of new AI technologies. Gourley is an experienced CTO and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity and data analytics. He is creator and publisher of the widely read CTOvision site and co-founder of OODA LLC, a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, investment and due diligence, and risk and threat management. Among past positions, he has served as the CTO for the Defense Intelligence Agency. Bray is also a C-suite leader with experience in bioterrorism response, thinking differently on humanitarian efforts and crafting national security strategies, as well as leading a national commission focused on the U.S. Intelligence Community’s research and development and leading large-scale digital transformations. He has advised six different startups and is Executive Director for the People Centered Internet coalition, which provides support, expertise, and funding for demonstration projects that measurably improve people’s lives. Both are co-chairing and speaking at the AI World Government Conference & Expo, being produced by Cambridge Innovation Institute. The event will be held on June 24-26, 2019 at the Ronald Reagan Building and International Trade Center in Washington, DC. AI Trends: What opportunities can AI assist with now to improve the risk management of organizations? Bob Gourley: AI can contribute to mitigating risks in organizations of all sizes.
For smaller businesses that will not have their own data experts to field AI solutions, the most likely contributions of AI to risk mitigation will come from selecting security products that come with AI built in. For example, the old-fashioned anti-virus of years ago that people would put on their desktops has now been modernized into next-generation anti-virus and anti-malware solutions that leverage machine learning techniques to detect malicious code. Solutions like these are being used by businesses of all sizes today. The traditional vendors, like Symantec and McAfee, have all improved their products to leverage smarter algorithms, as have many newer firms like Cylance. Larger organizations can make use of their own data in unique ways by doing things like fielding their own enterprise data hub. That’s where you put all of your data together using a machine-learning platform capability like Cloudera’s foundational data hub, and then run machine learning on top of that yourself. Now, that requires resources, which is why I say that’s for the larger businesses. But once that’s done, you can find evidence of fraud or indications of hacking and malware much faster using machine learning and AI techniques. Many cloud-based risk mitigation capabilities also leverage AI. For example, the threat intelligence provider Recorded Future uses advanced algorithms to surface the most critical information to bring to a firm’s attention. Overall I want to make the point that organizations of all sizes can now benefit from the defensive protections of artificial intelligence. Dr. David Bray: Bob is spot-on that what is happening is the “democratization of use of AI techniques,” so that these techniques are now available even to small companies and startups that previously could not have used them without significant resources. He also is right about the scaling question.
The additional lens that I would like to add is thinking about how AI can be used both for what an organization presents externally to the world, as well as what it does internally. For example, how can you use AI to understand whether there are things on your website or in your mobile applications that can be assessed for risk vulnerabilities on an ongoing basis? Threats are always changing. That’s why having the ability to use continuous services to scan what you’re presenting externally with regards to a potential attack surface will be an advantage, for large and small companies. The other lens is to look for abnormal patterns that may be happening internal to your organization. Risk arises from the combination of humans and technologies. Smaller companies can obtain new tools through software as a service, while bigger companies use boutique tools to look for patterns of life. These tools try to establish what should be the normal patterns of life in your organization, so that if something else shows up that doesn’t match that pattern, it’s enough to raise a flag. The overarching goal is to use AI to improve the security and resilience of the organization, in how it’s presented externally and working internally. Where will AI introduce new challenges to the security of organizations? David: You can think of artificial intelligence as being like a five-year-old that gets exposed to enough language data and learns to say, “I’m going to run to school today.” And when you ask the five-year-old, “Well, why did you say it that way as opposed to, ‘To school today I’m going to run,'” which sounds kind of awkward, the five-year-old is going to say, “Well, it’s just because I never heard it said that way.” The same thing is true for this current third wave of AI, which uses artificial neural network techniques to provide security and resilience for an organization. It’s looking for things that fit patterns or that are outside of patterns.
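The "patterns of life" tooling Bray describes boils down to learning a baseline of normal behavior and flagging deviations from it. As a purely illustrative sketch (real products use far richer behavioral models), a simple z-score test over hypothetical daily login counts for one account might look like:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate from the learned baseline
    by more than `threshold` standard deviations (a z-score test)."""
    mu = mean(baseline)
    sigma = stdev(baseline) if len(baseline) > 1 else 0.0
    flags = []
    for value in observed:
        z = (value - mu) / sigma if sigma else 0.0
        flags.append(abs(z) > threshold)
    return flags

# Hypothetical daily login counts: a stable baseline, then a burst
# that might indicate credential abuse.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
observed = [14, 13, 90]   # the spike of 90 should raise a flag
print(flag_anomalies(baseline, observed))  # → [False, False, True]
```

The point of the sketch is only the shape of the approach: establish what "normal" looks like, then let anything sufficiently far outside it raise a flag for a human to investigate.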
It’s not discerning whether the patterns or things outside of the patterns are ethically correct. [Ed. Note: Learn more about the third wave of AI.] Bob: The two primary new challenges that AI gives to organizations that use it are, number one, your algorithms must be protected against manipulation by adversaries. If an adversary manipulates your AI algorithms, it will manipulate your results and that’s a problem. An additional problem is that your data used for AI must be protected. If an adversary manipulates your data, then, of course, your results are going to be wrong. Both of those require protection. Now, you can protect those the old-fashioned way, by building up security of your enterprise, but you have to monitor them while they’re being used. Additionally, in this category of new risks due to AI, there are problems with ethics around AI. We have seen example after example of AI that’s fielded, then produces results that unintentionally are biased. That includes a famous example from 2017, when a resume system used to screen Amazon job applicants taught itself to be misogynistic. After a time, the algorithm hated women and had to be terminated. That kind of bias in machine learning algorithms has to be monitored in real time to prevent it from happening. It’s a very serious security concern that increases risk. It’s the same with ethics around AI. How do you know that your AI is performing ethically over time if it’s a machine learning algorithm that changes over time? Both are serious new risks. David: With Bob’s example, a machine learning algorithm might be compromised if it is exposed to enough bad data to train it to say things that are hateful or mean. In this instance the software and the hardware are working correctly and haven’t been compromised, yet the algorithm is now doing things an organization probably doesn’t want it to do as a result of exposure to bad data.
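Gourley's point about monitoring a changing algorithm for drift into bias can be made concrete with a simple, admittedly crude fairness check. The sketch below applies the "four-fifths rule" heuristic (flag the model if any group's selection rate falls below 80% of the best-treated group's rate) to hypothetical screening outcomes. The group names and numbers are invented for illustration, and this is not how Amazon's system was audited:

```python
def selection_rates(decisions):
    """decisions: {group_name: list of 0/1 screening outcomes}.
    Returns the fraction selected per group."""
    return {g: sum(o) / len(o) for g, o in decisions.items()}

def four_fifths_check(decisions, ratio=0.8):
    """Return True if every group's selection rate is at least
    `ratio` times the best-treated group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= ratio * best for rate in rates.values())

# Illustrative resume-screening outcomes (1 = advanced to interview).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25 selected
}
print(four_fifths_check(decisions))  # 0.25 < 0.8 * 0.75, prints False
```

Run periodically against a live model's recent decisions, a check like this is one way to notice, rather than prevent, the kind of drift Gourley warns about.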
What steps can private and public sector organizations start to take now to ensure this third wave of AI benefits organizations? David: For societies that are open and pluralistic in nature, I think we need to have a conversation across both private and public sector interests about where we want to go in AI security and resilience. We have a military to protect against nation-state threats. Yet the open society forces the security responsibility onto the small business or startup. And it creates an interesting challenge. We talked a little bit about the cybersecurity threats, but we also have the challenge of dealing with misinformation; we are finding more cases where bad actors are using AI to create the appearance of uniquely scripted, uniquely edited videos read by a computer narrator. They make it appear as though lots of people are having conversations or watching videos of a certain type. As a result, the cognitive thought space of open society is being challenged. In open societies, with freedom of the press, people should be able to say whatever they want. With AI, we now have the added challenge of having to go beyond simple tests of whether an entity is a human or not. Now we need to think about who might be mass-producing a video or mass-uploading videos to try to spread misinformation, overwhelm systems, or make it look as though lots of people are having video conversations about an issue. Closed autocratic societies that don’t separate their private and their public sector can deal with misinformation simply by removing the sources or censoring it. That’s not the path you want to take in open societies. Bob: Organizations of all sizes can take advantage of AI in multiple ways. One is you can tap into what somebody else is doing. For example, every one of us with a smartphone now has access to either Amazon or Apple or Google’s AI capabilities through voice. And so as individuals, we’re starting to use that more and more frequently.
As businesses, we can use AI capabilities like that to improve our cybersecurity or improve our market understanding or shape what we need to do with our products to better serve our customers. AI is being used a lot to help with these customer 360-degree views, so I can understand everything I need to about my potential customer to better serve them and create tailored products for them. And those are solutions that are out there right now. And so as companies use those, they have choices to make. Do you outsource to a provider who’s doing it all for you, or are you big enough to in-source it yourself? And if you in-source it, do you have a data scientist who’s managing it, or do you have a vendor providing you technologies that give your average users access to the AI? A lot of planning needs to be put into what you want to do. And that’s the first step. So building your AI strategy first, your objectives, and then proceeding on that is just the way to get involved and keep moving. You mentioned the public sector also. For government use, AI also has many use cases. Governments invest in AI for counter-fraud, for law enforcement and intelligence community uses, and for the Department of Defense. In the public sector, some of the uses of AI getting a lot of traction may sound a little bit boring, but they’re making huge differences. For example, in logistics and supply in the Department of Defense, using artificial intelligence to predict where supplies are needed is extremely helpful in getting the right material to the right place. And when it comes to maintenance, predicting when an engine or a part on an aircraft is likely to fail is extremely important; you may be able to do some preventive maintenance to keep that engine running. The application of artificial intelligence to those kinds of use cases is already paying off. So there’s a lot of public sector investment in AI and we expect that will continue. Would you use any of the security software from Kaspersky?
[Ed. Note: Kaspersky is a Moscow-based security software firm banned for use by the US government by order of the President in December 2017, amid concerns it was vulnerable to Kremlin influence.] Bob: No, I would not use Kaspersky software, because the company is based in a country that can be influenced by bad actors that do not have U.S. national interests in mind. That’s just me. There are a lot of global companies that might say, “Oh okay, I wouldn’t use any U.S. software either.” Well, I hate to say it, but people are going to have to start making decisions like that, and for me, Kaspersky is on the no-buy list. We know for a fact that the company operates inside a country where the rule of law is only respected when the rulers of that country want it to be. So if they want to twist the company’s arm and say, “You got to do something for me,” they’ll do it. That also goes for software companies in China. The rule of law exists in China, it’s very important there, but when the Communist Party wants to do something, the rule of law is secondary. So I don’t believe we should be depending on software that we buy from China either. What trends do you see for societies and AI for the decade ahead? David: The overarching question that open, pluralistic societies need to ask is how they can use AI as a “force for good” in the world. Currently, I would submit that the world we’re going into for the next 10 years is better positioned for closed autocratic societies, which don’t separate their private from their public sector, to capitalize on what AI can do, compared with open pluralistic societies that do. There is a significant concern that pluralistic societies might become either more fragmented or fall behind when it comes to keeping up with the social uses of AI and related technologies, because of how they are structured. Please note, I’m not saying we shouldn’t keep our private and public sectors separate. This is a strength of what we do here in the U.S. and in Europe.
Yet I raise this concern that open, pluralistic societies might be at a structural disadvantage in the digital future ahead, because we’ve got to figure out how we improve our resilience as a society to these new challenges. No one answer will come from any one sector. Thriving in the future ahead will require collaboration across sectors to collectively up our game. Learn more about Bob Gourley’s OODA. Learn more about Dr. David Bray’s People-Centered Internet.
  • Road Racing and AI Autonomous Cars: Fast & Furious Contentions
    By Lance Eliot, the AI Trends Insider When I was in college, a friend of mine had a “hotrod” car that he doted over and treated with loving tender care. One day, we were at a red light and another souped-up car pulled alongside us. For a moment, I almost thought I was in a James Dean movie, which was well before my time, I might add; but in any case, it is generally well-known here in California that James Dean died when driving his Porsche at high speed and ran into a Ford Tudor at an intersection in Cholame, California. My friend glanced over at the other driver and made one of those glances that sends a “my car is better than your car” kind of message. The other driver looked back, slowly nodded his head as though saying prove it, and the next thing I knew the engines of both cars were being revved up. There I was, sitting in the front passenger seat of my friend’s racing-like car, and apparently I was about to become embroiled in a road race, also sometimes called a street race. You might find it of idle interest that in Los Angeles alone there are about 700 illegal and completely unsanctioned road races each year (that’s based on the latest stats collected in 2017). In some cases the road race starts just as in my situation in college, wherein one car driver challenges another car driver on a spur-of-the-moment basis. In today’s world, the use of social media has allowed illegal road races to become much larger and semi-organized affairs. There are social media sites on which you can post your intent to engage in an illegal road race, giving a heads-up to people that want to come and watch or perhaps directly participate. If you are under the assumption that only the drivers would be facing the chance of going to jail for breaking the law by undertaking an illegal road race, you might want to know that bystanders can also be arrested.
According to the Department of Motor Vehicles (DMV) here in California, anyone that aids in a speed contest, including those that are merely viewing it, observing it, watching it, or witnessing it, is violating the Speed Contest law (speed contest is another name given to these illegal road races). In California, if convicted of participating in a speed race, you can be imprisoned for up to three months, which encompasses those doing the street racing and those “aiding or abetting” a street race. Plus, you can be fined up to $1,000, have your car impounded, and have your driver’s license revoked. I remember one such illegal road race here in Los Angeles where the police broke up the event and arrested 109 people. That’s right, over one hundred people were busted for participating, of whom only a small fraction were actually racing a car. Getting back to my situation in my college days, I knew at the time that my being a passenger in a racing car would do little to prevent me from potentially being arrested, assuming that we got caught. Of the things that I might get arrested for, it did not seem like being involved in a speed contest was one of the worthy reasons (I’m not going to list what reasons would be worthy, sorry). I knew that my college buddy would consider me “a chicken” if I tried to prevent the road race from occurring. Which was more important, the off-chance that I might get arrested for participating in an illegal road race, or being called a chicken by my friend, and perhaps word spreading that I was a party pooper when it came to doing a speed contest? Dangers of Unsanctioned Road Racing Before I answer the question and tell you what I did, let’s also consider some of the other reasons why participating in an illegal road race is a bad idea. The most obvious perhaps is that you can get injured or killed.
It is relatively common that when a road race occurs, inevitably someone spins out of control or somehow loses control of their racing car and hits someone or something. Another racing car might get hit. Bystanders might get hit. Innocent pedestrians that had nothing to do with the road race might get hit. Other cars that had nothing to do with the road race might get hit. In fact, one particular criticism of these illegal road races is that the drivers are often not skilled in driving a car at high-speeds and in a racing manner. These amateurs are wanna-be high-speed race drivers. They are cocky and think they can drive fast, when in reality they lack the skills and demeanor to do so properly. If they really were serious about wanting to race cars, they’d do so on a closed track in a sanctioned manner. In proper and legal road racing on a closed track, the cars themselves are also specially prepared for sanctioned road racing purposes. These cars are outfitted with safety gear meant to protect the driver of the car. The cars might be augmented with special NOS (Nitrous Oxide System) capabilities to allow for the boosting of speed via increasing the power output of the engine. There might be special tires with extra thick tread. For the illegal road races, it is a wild west of however the racing car shows-up. It might be completely done up in a flimsy manner, and there have been many instances of these cars exploding by their own means. Another factor to keep in mind is that a sanctioned road race on a closed track is going to presumably have a proper roadway set aside for a race. The road surface is likely well prepared for a race. When the illegal road races occur, they do so wherever they can find a place to do the race. This can include quiet neighborhoods that have families and children and pets, all of which might inadvertently get dragged into or run over by the road racers. The street itself will likely get torn up by the racing cars. 
If the road racers lose control of their cars, they can damage property such as light poles, fences, and so on. Sometimes the illegal road races tempt fate in additional ways. For example, a so-called Potts Race involves the racing cars trying to drive through a multitude of successive intersections, and the “loser” is the first racing car that comes to a stop at a red traffic signal (the phrase “Potts” comes from the aspect that these kinds of races were quite popular in Pottstown, Pennsylvania in the 1980s). You can imagine that other cars not involved in the road racing are all at risk of either getting struck by these maniac racers, or those innocent and unaware drivers might accidentally run into one of the racing cars. A recipe for disaster, either way. To further bolster the case for not doing illegal road races, I’ll mention too that oftentimes drinking or drugs accompany these underground events. The drivers might opt to get themselves jacked-up for the racing and the participants might do the same. Obviously, this adds to the chances that something untoward will arise. The police also point out that often there is illegal betting that takes place, and these races are ways for gangs to congregate and add to their ranks. Plus, the gangs will at times decide to perform other illegal acts after the race, especially if they are already “lit” after drinking and taking drugs. I’ll mention another factor that I’ve seen many times about these illegal road races, though I’m not sure how much it also contributes to the negative aspects of road races. I’ll see a bunch of similar souped-up cars all going along on the freeway or a highway, likely heading to an illegal road race. They try to stick together and thus it is apparent that they somehow are linked with each other. This would not be problematic except that they often want to do a kind of mini-race before the “real” race that they are heading to.
Thus, on the freeway or highway, they will each try to outdo each other. If there are other cars around them that are somewhat blocking their progress, they often delight in zipping around those cars. They tend to cut into and out of traffic, doing so without regard for the other drivers. They have turned the normal freeway or highway into a game for them to play, while on their way to the race. I’ve seen many close calls of them ramming into other cars. In that sense, those driving to and presumably later on driving from an illegal road race are potential menaces to normal driving conditions. They are eager to showcase their own prowess. They want to do their own pretend car racing. I wonder how many car accidents happen due to this tomfoolery and horseplay. I’d wager that besides the potential for injuries, deaths, and damages while an actual road race happens, there is some similar kind of likelihood of untoward results either just before or just after the road race occurs. Why do these presumably licensed drivers do this? As mentioned, it can be gang-related. It can also be out of boredom or having nothing else to do. It can be due to a bet or a challenge from someone else. It can be a result of a kind of pride in their own car and a desire to show off what they have. There is often a sub-culture aspect to illegal road races, involving those that perhaps in their hearts love cars and racing, and maybe also like the idea of going to the edge. Some relish the lawbreaking aspects, even though they would assert that it is not much of an illegal act. I’ve heard some of these illegal road racers claim that it is unfair to stop them from their efforts. They aren’t hurting anyone, they’ll avow. They are just having a good time. “Don’t the police have better things to do, such as busting ‘real criminals’?” is another refrain. Given the lengthy list of dangers and downfalls of illegal road racing, I have little sympathy for such pleas.
Some potential advice: if you want to road race, do so legally, in the right places, using the right equipment, in the right manner. Large and Miniature Moments of Road Racing In Our Lives Now that I’ve covered some of the background about these illegal road races, let’s get back to my personal dilemma while I was in college. As mentioned, I was sitting in my friend’s car, and he was making a silent but clear-cut challenge to a car driver next to us, and they were both now revving their engines. An illegal road race was imminent. Do I participate as a passive passenger, which nonetheless means I’ve actively been involved in an illegal act, subject to prison time and other criminal penalties due to aiding and abetting? Or, do I “chicken out” and insist to my friend that we not compete, though this will surely have him tout to the world that I backed out and didn’t have “the guts” to do a road race? Imagine though that we do the illegal road race and the car hits a tree, or the car rolls over and there isn’t a roll cage to protect us. Or, suppose the other car crashes and they die because we played this game? Maybe we all hit other innocent cars that happen to be in the road ahead. Perhaps by dumb luck there is a police car that catches us, and I end up with a police record that follows me the rest of my life? All of those aspects had to be weighed against the being-a-wimp outcome. When you are in college, these things matter, though upon reflection now it is kind of obvious to me which was the right answer. Assuming you are on the edge of your seat waiting to find out what happened, I sheepishly admit that before I could take any action, the light turned green and the other car went ahead at breakneck speed, tires squealing, the smell of burning rubber in the air. My friend, giving me a big grin, merely proceeded ahead at a normal driving pace. He had tricked the other driver.
He told me that it would have been a waste of his precious car’s assets to race against some idiot that happened to be at an intersection during a red light with him. Of course, that’s not the only moment in my life involving the notion of road racing. Indeed, I would suggest that we all have our own miniature moments of road racing during our daily driving. Let me share with you an example that happened just this morning. I was at a red light and there was a lane to my left going in my same direction. A car was there. We were both at the front of the line of cars waiting for the red light to turn green. There was a lane also to my right, but it was slated to run out once you got across the intersection. You could use that lane to proceed straight ahead, though you would quickly need to merge into my lane once you passed through the intersection. By and large, most people used the lane to my right to make a right turn and did not use it to proceed ahead through the intersection. I’ve always thought that this setup of the traffic structure was begging to get someone into trouble. If there was a car in my current position and they were not looking around to realize that the lane to their right can go straight, they might inadvertently stray into that lane as they cross the intersection, perhaps cutting off a car in that lane that is trying to go straight. Likewise, a car in that lane, if not paying attention, might panic as they go through the intersection and realize at the last moment that their lane is disappearing, and therefore attempt wildly to merge into my lane. Well, a car in that rightmost lane pulled up beside me and it was apparent that the driver was not intending to make the right turn. In which case, I knew they would be desirous of going straight through the intersection once the light turned green.
This also meant that they would quickly want to get into my lane, since their lane disappeared rapidly upon reaching the other side of the intersection. Did this driver realize they were going to lose their lane? If so, would they be polite about it? Presumably, the driver should allow my car to proceed ahead and then they should come back behind me. In some cases, a driver in that rightmost lane opts instead to hit the gas and try to get ahead of the cars in my lane. They figure that they can race through the intersection once the light turns green, and get ahead of the other cars, allowing them to take over the merged lane and proceed ahead unimpeded. Notice that I alluded to the notion of racing in that last statement. Yes, there was a possibility that my car would be raced by the car to my right. This would be an unsanctioned race. Unfortunately, the roadway engineers that devised the road structure had created a circumstance that invited a kind of road race to take place. I’m sure that throughout the day, this spot has its repeated moments of miniature road races. Over and over again this would play out. Unsuspecting drivers would get dragged into a road race. It would be interesting to know how many scuffles and bumper scrapes this produced. Hopefully it wasn’t leading to injuries and deaths, though it was certainly devised to encourage such untoward results. Since I didn’t know what the other driver might do, I decided I would rapidly accelerate once the green light appeared and try to get ahead of the other car. It was my hope that doing so would make it clear to the other driver that they could simply fall in behind my car. They might not realize the need to do so until after getting across the intersection, but in any case, I’d have already cleared past the intersection and so the obvious choice for that driver at that juncture would be to merely get into my lane, being positioned behind my car which had already sped ahead. That was my plan. 
When the light turned green, I sped ahead faster than I normally would when starting from a stop at a red light. The other car in the rightmost lane, though, must have driven this stretch of road before, or perhaps detected the disappearing lane while sitting at the red light, and therefore opted to also accelerate rapidly as a means to zip forward. The driver seemed to be intending to get ahead of me and ultimately swing into my lane, rather than falling in behind me. The road race was on! I looked in my rear mirror and could see that the car behind me had decided to go at my same speed and was right on my bumper. This other driver was perhaps also thinking that the rightmost-lane driver should fall behind us all. Or, the driver behind me was just a pushy driver and wanted to get going fast, and maybe was oblivious to the road race situation happening. It’s hard to know what the driver behind me knew or was thinking about. If I slowed down mid-intersection, doing so to allow the rightmost driver to pull ahead, I might risk having the car behind me ram into my car. The driver behind me and I were both accelerating at the same pace, and an unexpected braking or slowdown might have caught them unawares. Another option was to go even faster. I already had the presumed right-of-way in my lane, and if I sped up it would prevent the rightmost driver from trying to get ahead of me and merge into my lane in front of me. I was sure that the driver behind me would welcome my going even faster. At this point in time, none of us was going faster than the speed limit. I mention this because I don’t want you to think that any of us were doing an outright race at top speeds. We were all accelerating at a rather rapid rate but still well below the posted speed limit. As I say, this was a miniature road race. Upon my accelerating even more, the rightmost driver did the same.
Was he doing so by happenstance and merely trying to get ahead of me, or was he doing so because he felt challenged by my car and thought we were somehow immersed in a personal road race? There is always the chance of sparking road rage by engaging someone in even a mild road race.

For more about road rage, see my article: https://aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/

For defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For the foibles of human drivers, see my article: https://aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For why drivers are greedy, see my article: https://aitrends.com/selfdrivingcars/selfishness-self-driving-cars-ai-greed-good/

For irrational driving aspects, see my article: https://aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/

Illegal Versus Simply Ill-Advised Road Racing

Assuming you are once again on pins and needles and wondering what happened in this miniature road race, I “prevailed” in that I got ahead of the rightmost driver, and he ended up falling in behind the car that was behind me. All in all, the situation played itself out in a matter of a few seconds. It is the kind of driving moment that most people have all the time and never give much thought to. You mentally move on. I brought up the circumstance to point out that we do road racing in our daily driving. It certainly isn’t the kind of road racing that brings together a hundred spectators and gets posted onto social media. Instead, our day-to-day driving will at times get us into a kind of road race with other drivers, whether we pay attention to it or not. Was this road race illegal? I suppose you could claim that it was perhaps ill-advised, and it could have led to an untoward outcome.
Maybe I should have motioned beforehand to let the rightmost driver know that I was going to give them passage into my lane, and been civil about the predicament. Maybe the rightmost driver should have respected the cars in my lane, not tried to get ahead of us, and waited his turn to fall in behind us. Any driving act that is considered untoward and creates a dangerous driving situation can be considered “illegal” per se, and as such I suppose you could say that none of us was showing the proper respect for the right-of-way of others and keeping safety as the top priority in our driving actions.

AI Autonomous Cars and Road Racing

What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect involves the AI being prepared for and able to contend with road racing. Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is driven by the AI with no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task.
I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Much of the commentary applies to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  1. Sensor data collection and interpretation
  2. Sensor fusion
  3. Virtual world model updating
  4. AI action planning
  5. Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. There are some pundits of AI self-driving cars who continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point, since it means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also with human-driven cars.
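The usual steps of the AI driving task can be sketched as a simple processing loop. This is a toy illustration only; every function and class name here is an invented placeholder, not any real self-driving stack:

```python
def fuse(readings):
    # Sensor fusion: here, simply average overlapping distance estimates
    # from multiple sensors into a single view (a deliberately crude stand-in).
    return sum(readings.values()) / len(readings)

class WorldModel:
    """Toy virtual world model holding one fused quantity."""
    def __init__(self):
        self.obstacle_distance_m = None

    def update(self, fused_distance):
        self.obstacle_distance_m = fused_distance

def plan(world):
    # AI action planning: brake if an obstacle is close, else cruise.
    # The 20 m threshold is arbitrary, for illustration.
    return "brake" if world.obstacle_distance_m < 20 else "cruise"

def driving_cycle(sensor_readings, world, issue_command):
    fused = fuse(sensor_readings)   # steps 1-2: collect/interpret and fuse
    world.update(fused)             # step 3: virtual world model updating
    action = plan(world)            # step 4: AI action planning
    issue_command(action)           # step 5: car controls command issuance
    return action
```

For instance, feeding `{"lidar": 15.0, "radar": 17.0}` through one cycle fuses to 16 meters and yields a "brake" command; real stacks run this loop many times per second with far richer state.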
It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of road racing, let’s explore what an AI self-driving car should know about this topic and what kinds of actions it should be able to undertake. First, many AI developers might argue that an AI self-driving car does not need to know anything about road racing at all. They would say that since road racing or street racing is considered illegal, and since in their perspective an AI self-driving car will always and only be driving in a legal manner, there is presumably no reason or basis for the AI self-driving car to be concerned about road racing. That’s when I debunk their false belief. Let’s start by acknowledging that there are going to be instances whereby an AI self-driving car will potentially be driving in an illegal manner. Some naïve AI developers consider “never go faster than the posted speed limit” an inviolable legal restriction that shall not ever be disobeyed by an AI self-driving car. Hogwash.
We all know that there are times when you will inevitably be going faster than the posted speed limit. Suppose there is an emergency and you are rushing to the hospital? If that seems overly extreme as a use case, consider that the prevailing speed on our freeways here in Southern California is typically well above the stated speed limit (when the freeways aren’t otherwise snarled). Is an AI self-driving car going to puddle along in the traffic stream, strictly going no more than the speed limit?

For my article about why AI self-driving cars will need to drive “illegally” at times, see: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/

So, I am arguing that simply because something is considered an illegal driving act, it is nonetheless still potentially a driving act that an AI self-driving car might need to undertake at some point in time. Therefore, the AI ought to know about it.

Act of Knowing Is Not the Same as Necessarily Doing

The act of knowing does not mean that the AI will necessarily undertake the illegal act. I say this because some AI developers would claim that if you give the AI system the ability to perform an illegal driving act, you are opening a Pandora’s box to the AI opting to routinely and wantonly perform illegal driving acts. I’ll say this again, hogwash. Knowing about something does not equate to doing it for the sake of doing it. Instead, it will be crucial that the AI be equally versed in when to perform such an act and when not to perform such an act.
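To make the speed-limit point concrete, a planner could treat the posted limit as one input among several rather than an absolute ceiling. This is a toy sketch of the idea only; the 15% overage cap is an arbitrary value I have invented purely for illustration, not any sanctioned rule:

```python
def target_speed(posted_limit_mph, prevailing_flow_mph, overage_cap=0.15):
    # Follow the prevailing traffic flow, but bound any excess over the
    # posted limit by overage_cap (15% here, an illustrative value only).
    ceiling = posted_limit_mph * (1 + overage_cap)
    return min(prevailing_flow_mph, ceiling)
```

With a 65 mph limit and traffic flowing at 75 mph, the car would settle near 74.75 mph rather than puddling along at 65; with traffic below the limit, it simply matches the flow.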
For the paperclip AI dilemma that relates to knowing versus doing, see my article: https://aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For the AI as Frankenstein, see my article: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

For the Turing test for AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For the potential of an AI singularity, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

Based on my remarks so far, I’m sure there are some AI developers wondering whether I am a proponent of AI self-driving cars participating in illegal road races. Am I that kind of a scofflaw, wanting AI systems to encourage and abet speed contests? No, I am not. That being said, I think it is useful for the AI system to be wary of road racing and speed contests so that it can recognize one. Imagine that you are driving your car and you come upon a situation whereby you are able to assess the scene around you and hypothesize that a road race is brewing. As a defensive driver, I am guessing you would try to get away from the area and keep from getting immersed in the matter. The AI ought to be able to do the same. Thus, it is vital that the AI be able to detect the surroundings and assess whether a road race is either brewing or maybe already underway. Furthermore, remember how I earlier mentioned that I saw the remnants of a potential road race by noticing cars that were going to one or that had come from one? Once again, the AI ought to be looking around for these kinds of telltale signs. It makes the AI more defensive, driving in a manner that aims for heightened safety. I suppose we ought to also consider another angle on the road racing topic.
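That detection ability could start out as nothing fancier than a heuristic over nearby vehicles' behavior. A crude sketch, with all thresholds invented purely for illustration and no claim that any real stack works this way:

```python
def road_race_brewing(my_accel_ms2, neighbor_accel_ms2, gap_m,
                      lanes_merging, accel_threshold=3.0, gap_threshold=5.0):
    # Flag a possible miniature road race: both cars accelerating hard,
    # close together, heading into a lane merge. Thresholds (m/s^2 and
    # meters) are arbitrary illustrative values.
    both_hard = (my_accel_ms2 > accel_threshold and
                 neighbor_accel_ms2 > accel_threshold)
    return lanes_merging and both_hard and gap_m < gap_threshold
```

A production system would of course fold in far more signals (relative speed histories, map data about the disappearing lane, driver behavior over time), but even a flag this crude gives the action planner a reason to back off.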
If we somehow restrict the AI by preventing it from ever being able to perform in a road race at all, what about sanctioned road races? Would we be preventing an AI self-driving car from participating in a legally sanctioned road race on a closed track? Even if you retort that it seems silly to think that anyone might want to see an AI self-driving car… Read more »
  • The US Navy’s Robotic Sea Hunter Warship Uses AI In Place of a Crew
    The swells in the middle of the North Pacific were reaching nine feet when one of two engines on the diesel-powered U.S. naval ship called Sea Hunter shut down. About 1,500 nautical miles from its home base in San Diego, the 132-foot-long craft, which had been cruising at 10 knots, couldn’t send a member of its crew to check out the problem—because it didn’t have a crew. Sea Hunter’s sleek, spiderlike silhouette, with a narrow hull and two outriggers, is a prototype of what could be a new class of autonomous warships for the U.S. Navy. Its artificial intelligence–based controls and navigation system, designed by Leidos Holdings, a defense contractor based in Reston, Va., were seven years in the making. And this maiden voyage—a more than 4,000-mile roundtrip to the giant Pearl Harbor naval station—was its first major proof of concept. Nothing like this had ever been attempted before. And while the A.I. systems that keep the ship on course and help it avoid collisions with other vessels were working exactly as advertised, a glitch in its mechanical systems threatened to scuttle the trip—a reminder to tech geeks that no matter how advanced the technology, mundane mechanical problems can bring a project down. A group of 14 support staff in a trailing escort ship sprang into action. Keith Crabtree, a systems engineer with Leidos, and other staff jumped into a rigid inflatable boat and zipped over to Sea Hunter. Crabtree, who had helped put the ship through its paces in the calmer waters of San Diego Bay, says he wasn’t worried about the swells as he rode across the waves to Sea Hunter. The triple-hulled design of the prototype, inspired by the Polynesian waka canoe, offered a more stable perch than the bouncing journey aboard the escort ship. “We were in for a smoother ride than what we had been enduring,” Crabtree recalls. A simple software fix corrected the problem, and after docking at Pearl Harbor, Sea Hunter completed the 10-day return trip without incident. 
Sea Hunter, it bears noting, is the first autonomous ship to make an ocean crossing and, remarkably, the first Navy ship designed from scratch by Leidos. Little known outside government contracting circles, Leidos, then dubbed Science Applications International Corp. (SAIC), was founded 50 years ago by Robert Beyster, a brilliant and entrepreneurial physicist who had worked on the hydrogen bomb at the Los Alamos National Laboratory. An avid sailor and a friend of yacht-racing captain Dennis Conner, Beyster tasked SAIC with developing software to model improved hull designs after Conner’s squad lost the America’s Cup to an Australian team led by Alan Bond in 1983—the first American loss in the race’s 132-year history. Conner regained the Cup the following year. That expertise came in handy on future projects with the Navy but didn’t publicly reemerge until 2012, when a $59 million contract win to develop an autonomous ship put the software front and center once again. For Sea Hunter, the company also drew on expertise gained from many loosely related projects, including developing underwater sensors for the Navy, performing coastline surveys for the National Oceanic and Atmospheric Administration, and conducting A.I. work to process satellite imagery. Read the source article in Fortune. Read more »
  • AI is Beating the Hype With Stunning Growth
Follow the money. It is true in politics, business and investing. Right now, there is no question that the money is headed into artificial intelligence. Gartner, a global IT research and advisory company, surveyed 3,000 CIOs operating in 89 countries in January. The Stamford, Conn., firm found that AI implementations grew 37% during 2018, and 270% over the last four years. This is a trend investors should embrace. That’s because it is going to last for a while. And it’s going to make a lot of people very rich. Investors have soured on AI recently. Self-driving cars, smart cities and robotics keep getting smacked down as idealistic hype. That’s mostly because their implementations are decades away … or because these ideas are expensive solutions looking for problems. So say the critics, anyway. They point to once-high-flying stocks like Nvidia, which just saw its share price get cut in half because of slowing demand for cutting-edge gear and software. However, that assessment is lazy. It also misses the point. AI is a digital transformation story. Corporate managers realize AI software can help automate large parts of the enterprise, increasing productivity and saving a boatload of money. It is true that machines will not be able to wholly replace complex human decision-making anytime soon. But the software is more than sufficient to process mundane, repetitive tasks. And machine learning, a type of data science, can help humans see important patterns they might otherwise miss. So companies are going all-in. They are deploying software bots online, along with customer-relationship software to help service reps assist customers. Executives are using integrated suites and data analytics to manage projects, workflows, payrolls and human resources. For chief information officers, it’s a no-brainer. In a brave new world informed by data, using AI to provide insights and streamline operations is becoming almost mandatory.
In other words, not using AI puts companies at a competitive disadvantage. Gartner surveyed CIOs whose combined companies had $15 trillion in revenue and $284 billion allocated for information technology spending. The findings are startling: Fiscal 2019 will see a 3X upgrade in AI deployment. The data jibes with research from IDC, an international market research firm. Analysts forecast spending for cognitive and AI systems will reach $77.6 billion by 2022. That’s a five-year compound growth rate of 37.3%. In a related IDC report, the company notes that 60% of global GDP should be digitized by 2022, driving almost $7 trillion in IT spending. And by 2024, AI-enabled user interfaces and process automation will replace a third of the screen-based applications on smartphones. The key for CIOs and investors alike will be finding trusted helpers. That’s the story of Globant S.A. … From its home base tucked away in La Plata, Argentina, the software and IT developer works directly with large companies to implement digital-transformation strategies. Disney, Cisco Systems, Coca-Cola, Electronic Arts, Alphabet and other large international companies have been drawn to Globant’s unique corporate culture and skill sets. From Brazil to Belarus, via outposts in 10 other countries, Globant’s regional managers have their finger on the pulse of emerging social trends. The company has the AI engineering talent to push the envelope with contextually aware, personalized customer experiences. Read the source article in Forbes. Read more »
  • The Machine Learning Life Cycle – 5 Challenges for DevOps
By Diego Oppenheimer

Machine learning is fundamentally different from traditional software development and requires its own, unique process: the ML development life cycle. More and more companies are deciding to build their own, internal ML platforms and are starting down the road of the ML development life cycle. Doing so, however, is difficult and requires much coordination and careful planning. In the end, though, companies are able to control their own ML futures and keep their data secure. After years of helping companies achieve this goal, we have identified five challenges every organization should keep in mind when building infrastructure to support ML development.

Heterogeneity

Depending on the ML use case, a data scientist might choose to build a model in Python, R, or Scala and use an entirely different language for a second model. What’s more, within a given language, there are numerous frameworks and toolkits available. TensorFlow, PyTorch, and Scikit-learn all work with Python, but each is tuned for specific types of operations, and each outputs a slightly different type of model. ML-enabled applications typically call on a pipeline of interconnected models, often written by different teams using different languages and frameworks.

Iteration

In machine learning, your code is only part of a larger ecosystem—the interaction of models with live, often unpredictable data. But interactions with new data can introduce model drift and affect accuracy, requiring constant model tuning and retraining. As such, ML iterations are typically more frequent than traditional app development workflows, requiring a greater degree of agility from DevOps tools and staff for versioning and other operational processes. This can drastically increase the work time needed to complete tasks.

Infrastructure

Machine learning is all about selecting the right tool for a given job.
But selecting infrastructure for ML is a complicated endeavor, made so by a rapidly evolving stack of data science tools, a variety of processors available for ML workloads, and the number of advances in cloud-specific scaling and management. To make an informed choice, you should first identify project scope and parameters. The model-training process, for example, typically involves multiple iterations of the following:

  - an intensive compute cycle
  - a fixed, inelastic load
  - a single user
  - concurrent experiments on a single model

After deployment and scale, ML models from several teams enter a shared production environment characterized by:

  - short, unpredictable compute bursts
  - elastic scaling
  - many users calling many models simultaneously

Operations teams must be able to support both of these very different environments on an ongoing basis. Selecting an infrastructure that can handle both workloads would be a wise choice.

Diego Oppenheimer is Founder and CEO of Algorithmia, a service that enables the creation of applications through the use of community contributed machine learning models. Read the source article at AI Business. Read more »
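The heterogeneity challenge described above is commonly tackled by wrapping each model, whatever its language or framework, behind a single predict interface. Here is a minimal sketch of that pattern; the registry and adapter names are illustrative inventions, not Algorithmia's or any platform's actual API:

```python
class ModelAdapter:
    """Uniform predict() facade over models from different frameworks."""
    def __init__(self, name, predict_fn):
        self.name = name
        self._predict_fn = predict_fn

    def predict(self, payload):
        return self._predict_fn(payload)

# A registry lets pipelines refer to models by name, regardless of
# which team, language, or toolkit produced them.
REGISTRY = {}

def register(name, predict_fn):
    REGISTRY[name] = ModelAdapter(name, predict_fn)

def run_pipeline(model_names, payload):
    # Pipeline of interconnected models: each stage's output feeds the next.
    for name in model_names:
        payload = REGISTRY[name].predict(payload)
    return payload
```

In practice, `predict_fn` might invoke a Scikit-learn model in-process for one stage and a remote service wrapping an R or Scala model for the next; the pipeline code stays the same either way.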
  • The Rules Governing AI Are Being Shaped by Tech Firms – Here’s How
In early April, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.” One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI. Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says. When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via the trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry.
“We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.” The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit. Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.” Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and will retain a right to a royalty-free license to any intellectual property developed. Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says. Read the source article in Wired. Read more »
  • System Load Balancing for AI Systems: The Case Of AI Autonomous Cars
By Lance Eliot, the AI Trends Insider

I recall an occasion when my children had decided to cook a meal in our kitchen and went whole hog into the matter (so to speak). I’m not much of a cook and tend to enjoy eating a meal more than the labor involved in preparing one. In this case, it was exciting to see the joy of the kids as they went about putting together a rather amazing dinner. Perhaps partially due to watching the various chef competitions on TV and cable, and due to their own solo cooking efforts, when they joined together it was a miraculous sight to see them bustling about in the kitchen in a relatively professional manner. I mainly aided by asking questions and serving as a taste tester. From their perspective, I was more of an interloper than someone actually helping to move the meal along.

One aspect that caught my attention was the use of our stove top. The stove top has four burner positions. For everyday cooking, I believe that four heating positions are sufficient. I could see that with the extravagant dinner being put together, having only four available was a constraint. Indeed, seemingly a quite difficult constraint. During the cooking process, there were quite a number of pots and pans of food that needed to be heated up. I’d wager that at one point there were at least a dozen such pots and pans containing food that required some amount of heating.

Towards the start of the cooking, it was somewhat manageable because they were only using three of the available heating spots. By using just three, they could allocate one spot, the fourth one, as an “extra” for round-robin needs. They were using this fourth spot for quick warm-ups, while the other three spots were for a truly thorough cooking job that required a substantive amount of dedicated cooking time.
Pots and pans were sliding on and off that fourth spot like a hockey puck on ice. The other three spots had large pots that were each gradually coming to a bubbling, high-heat condition. When one of the three pots had cooked well enough, the enterprising cooks took it off the burner almost immediately and placed it onto a countertop waiting area they had established for super-heated pots and pans that could simmer for a bit. The moment that one pot came off of any of the three spots, another one was instantly put into its place. Around and around this went, in a dizzying manner, as they contended with only having four available heating spots. They kept one spot in reserve, using it for quick-paced warm-ups, and had opted to use the other three for deep heated cooking. As they neared the end of the cooking process for this meal, they began to use nearly all of the spots for quick-paced warm-up needs, apparently because by then they had already done the needed cooking and no longer needed to devote any of the pots to a prolonged period on a heating spot. As a computer scientist at heart, I was delighted to see them performing a delicate dance of load balancing.

System Load Balancing Is Unheralded But Crucial

You’ve probably had situations involving multiple processors or maybe multiple web sites wherein you had to do a load balance across them. In the case of web sites, it’s not uncommon for popular web sites to be replicated at multiple geographic sites around the world, allowing for speedier responses to users in that part of the world. It also can help when one part of the world starts to bombard one of your sites and you need to flatten out the load, or else that particular web site might choke due to the volume. In the cooking situation, the kids realized that having just four burner stove top positions was insufficient for the true amount of cooking that needed to take place for the dinner.
If they had opted to sequentially and serially place pots of food onto the burners in a one-at-a-time manner, they would have had some parts of the meal cooked much earlier than other parts. In the end, when trying to serve the meal, it would have been a nightmarish result: some food that had been cooked earlier and was now cold, and other parts of the meal that were superhot and would need to wait to be eaten. If the meal had been one involving much less preparation, such as if they had only three items to be cooked, they would have readily been able to use the stove top without any of the shenanigans of floating around the pots and pans. They could have just put on the three pots and then waited until the food was cooked. But since they had more cooking needs than available heating spots, they needed to devise a means to make use of the constrained resources in a manner that would still allow the cooking process to proceed properly.

This is what load balancing is all about. There are situations wherein there is a limited supply of resources, and the number of requests to utilize those resources might exceed the supply. The load balancer is a means or technique or algorithm or automation that can try to balance out the load. Another valuable aspect of a load balancer is that it can try to even out the workload, which might help in various other ways. Suppose that one of the stove tops was known to sometimes get a bit cantankerous when on high-heat for a long time. One approach for a load balancer might be to try to keep that resource from peaking and so purposely adjust to use some other resource for a while.

We can also consider the aspect of resiliency. You might have a situation wherein one of the resources might unexpectedly go bad or otherwise not be usable. Suppose that one of the burners broke down during the cooking process.
A load balancer would try to ascertain that a resource is no longer functioning, and then see if it might be possible to shift the request or consumption over to another resource instead.

Load Balancing Difficulties And Challenges

Being a load balancer can be a tricky task. Suppose the kids had decided that they would keep one of the stove top burners in reserve and not use it unless absolutely necessary. In that case, they might have opted to use the three other burners by allocating two for the deep heating and one for the warming up. All during this time, the fourth burner would remain unused, held in reserve. Is that a good idea? It depends. I’d bet that cooking with just the three burners would have stretched out the time required to cook the dinner. I can imagine that someone waiting to eat the dinner might become disturbed if they saw that there was a fourth burner that could be used for cooking, and yet it was not, with the implication that the hungry person had to wait longer to eat. This person might go ballistic that a resource sat unused the entire time. What a waste of a resource, it would seem to that person.

Imagine further if at the start of the cooking process we were to agree that there should be an idle back-up for each of the stove burners being used. In other words, since we only have four, we might say that two of the burners will be active and the other two are the respective back-ups for each of them. Let’s number the burners as 1, 2, 3, and 4. We might decide that burner 1 will be active and its back-up is burner 2, and burner 3 will be active and its back-up is burner 4. While the cooking is taking place, we won’t place anything onto burners 2 and 4, until or unless one of the primary burners 1 or 3 goes out.
We might decide to keep the back-up burners entirely turned off, in which case they would be starting from a cold condition if we needed to suddenly switch over to one of them. We might instead agree to put the two back-ups onto a low-heat position, without actually heating anything per se, so that they would be ready to rapidly go to high heat if needed in their back-up failover role.

I had just said that burner 2 would be the back-up for primary burner 1. Suppose I adhered to that and would not budge. If burner 3 suddenly went out and I reverted to using burner 4 as the back-up, but then somehow burner 4 went out too, should I go ahead and use burner 2 at that juncture? If I insisted that burner 2 be only and always a back-up exclusively for burner 1, presumably I would want the load balancer to refuse to use burner 2, even though burners 3 and 4 are kaput. Maybe that's a good idea, maybe not.

These are the kinds of considerations that go into establishing an appropriate load balancer. You need to decide what the rules are for the load balancer. Different circumstances will dictate different aspects of how you want the load balancer to do its thing. Furthermore, you might not set up the load balancer entirely in advance, such that it acts in a static manner during the load balancing, but instead might have the load balancer figure out what action to take dynamically, in real-time.

When using load balancing for resiliency or redundancy purposes, there is a standard nomenclature of referring to the number of resources as N, and then appending a plus sign along with an integer value that ranges from 0 to some number M. If I say that my system is set up as N+0, I'm saying that there are zero redundancy devices. If I say it is N+1, then that implies there is one and only one such redundancy device. And so on.
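The dedicated-versus-shared back-up question above can be made concrete in a short sketch. The policy rules and burner names here are assumptions for illustration, not a prescribed design:

```python
def dedicated_failover(failed, backup_of):
    """Each primary has one fixed spare; return it, or None if it has none."""
    return backup_of.get(failed)

def pooled_failover(failed, spares):
    """Any available spare may cover any failed primary."""
    return spares.pop(0) if spares else None

# Dedicated policy: burner 2 backs up only burner 1; burner 4 only burner 3.
backup_of = {"burner1": "burner2", "burner3": "burner4"}
assert dedicated_failover("burner3", backup_of) == "burner4"
# If burner 4 then fails too, no spare is offered: burner 2 stays reserved.
assert dedicated_failover("burner4", backup_of) is None

# Pooled policy: the same double failure is survivable.
spares = ["burner2", "burner4"]
assert pooled_failover("burner3", spares) == "burner2"
assert pooled_failover("burner4", spares) == "burner4"
```

The dedicated scheme gives each primary a guaranteed spare; the pooled scheme survives more failure patterns but offers no per-primary guarantee. That trade-off is exactly the "maybe that's a good idea, maybe not" judgment call.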
You might be thinking that I should always have a plentiful set of redundancy devices, since that would seem the safest bet. But there's a cost associated with the redundancy. Why was my stove top limited to just four burners? Because I wasn't willing to shell out the bigger bucks to get the model that had eight. I had assumed that for my cooking needs the four-burner stove was sufficient, and actually ample.

For computer systems, the same kind of consideration comes into play. How many devices do I need, and how much redundancy do I need? That has to be weighed in light of the costs involved. This can be a significant decision, in that later on it can be harder and even costlier to adjust. In the case of my stove top, the kitchen was built in such a manner that the four-burner stove top fits just right. If I were to now decide that I want the eight-burner version, it's not just a simple plug-and-play; instead they would need to knock out my kitchen counters, and likely some of the flooring, and so on. The choice I made at the start has somewhat locked me in, though of course if I want to have the kids cooking more of the time, it might be worth the dough to expand the kitchen accordingly.

In computing, you can consider load balancing for just about anything. It might be the CPU processors that underlie your system. It could be the GPUs. It could be the servers. You can load balance on an actual hardware basis, and you can also do load balancing on a virtualized system. The target resource is often referred to as an endpoint, or perhaps a replica, or a device, or some other such wording.

Those in computing who don't explicitly consider the matter of load balancing are either unaware of its significance or unsure of what it can achieve. Many AI software developers figure that it's really a hardware issue or maybe an operating system issue, and thus they don't put much of their own attention toward the topic.
Instead, they hope or assume that the OS specialists or hardware experts have done whatever is required to figure out any needed load balancing. Similar to my example about my four-burner stove, the problem with this kind of thinking is that if later on the AI application is not running at a suitable performance level, and all of a sudden you want to do something about load balancing, the horse is already out of the barn. Just like my notion of possibly replacing the four-burner stove with an eight-burner one, it can take a lot of effort and cost to retrofit for load balancing.

AI Autonomous Cars And Load Balancing The On-Board Systems

What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One key aspect of an AI system for a self-driving car is its ability to perform responsively in real-time. On-board the self-driving car you have numerous processors that are intended to run the AI software. This can also include various GPUs and other specialized devices.

Per my overall framework of AI self-driving cars, here are some of the key driving tasks involved:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

For my framework, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For my article about real-time performance aspects, see: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
For aspects about AI developers, see my article: https://aitrends.com/ai-insider/developer-burnout-and-ai-self-driving-cars/
For the dangers of Groupthink, see my article: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

You've got software that needs to run in real-time and direct the activities of a car.
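As a rough illustration of spreading driving tasks like those above across a constrained processor pool, here is a greedy least-loaded sketch. The relative task costs and processor names are invented assumptions, not measurements from any actual self-driving system:

```python
DRIVING_TASKS = {               # task -> rough relative load (assumed figures)
    "sensor_data_collection": 3,
    "sensor_fusion": 2,
    "world_model_update": 2,
    "ai_action_planning": 3,
    "car_controls_command": 1,
}

def balance(tasks, processors):
    """Greedy least-loaded placement: heaviest tasks are placed first."""
    loads = {p: 0 for p in processors}
    placement = {}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        target = min(loads, key=loads.get)   # currently coolest processor
        placement[task] = target
        loads[target] += cost
    return placement, loads

placement, loads = balance(DRIVING_TASKS, ["cpu0", "cpu1", "cpu2"])
# The total load of 11 is spread so that no processor sits far above the rest.
```

A production scheduler would also account for deadlines, data dependencies between the tasks, and device affinity, none of which this toy captures.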
The car will at times be in motion. There will be circumstances wherein the AI is relatively at ease and there's not much happening, and there will be situations whereby the AI is having to work at a rip-roaring pace. Imagine going on a freeway at 75 miles per hour, with lots of other nearby traffic, foul weather, potholes in the road, debris on the roadway, and so on. A lot of things, all happening at once.

The AI holds in its automation the key to whether the self-driving car safely navigates and avoids getting into a car accident. This is not just a real-time system, it is a real-time system that can spell life and death. Human occupants in the AI self-driving car can get harmed if the AI can't operate in time to make the proper decision. Pedestrians can get harmed. Other cars can get hit, and thus the human occupants of those cars can get harmed. All in all, this is quite serious business.

To achieve this, the on-board hardware generally has lots of computing power and lots of redundancy. Is it enough? That's the zillion dollar question. Similar to my choice of a four-burner stove, when the automotive engineers for the auto maker or tech firm decide to outfit the self-driving car with whatever number and type of processors and other such devices, they are making some hard choices about what the performance capability of that self-driving car will be. If the AI cannot run fast enough to make sound choices, it's a bad situation all around.

Imagine too that you are fielding your self-driving car. It seems to be running fine in the roadway trials underway. You give the green light to ramp up production. These self-driving cars start to roll off the assembly line and the public at large is buying them. Suppose after this has taken place for a while, you begin to get reports that there are times when the AI seemed to not perform in time. Maybe it even froze up. Not good.
Some self-driving car pundits say that it's easy to solve this. Via OTA (Over-The-Air) updates, you just beam down into the self-driving cars a patch for whatever issue or flaw there was in the AI software. I've mentioned many times that the use of OTA is handy, important, and significant, but it is not a cure-all.

Let's suppose that the AI software has no bugs or errors in this case. Instead, it's that the AI running via the on-board processors is exhausting the computing power at certain times. Maybe this only happens once in a blue moon, but if your life and the lives of others depend upon it, even once in a blue moon is too much of a problem. It could be that the computing power is just insufficient.

What do you do then? Yes, you can try to optimize the AI and get it to not consume so much computing power. This though is harder than it seems. If you opt to toss more hardware at this problem, sure, that's possible, but now this means that all of those AI self-driving cars that you sold will need to come back into the auto shop and get added hardware. Costly. Logistically arduous. A mess.

For my article about the freezing robot problem and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/
For my article about bugs and errors in AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/
For my article about automobile recalls and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/auto-recalls/
For product liability claims against AI self-driving cars, see my article: https://aitrends.com/ai-insider/product-liability-self-driving-cars-looming-cloud-ahead/

Dangers Of Silos Among Autonomous Car Components

Some auto makers and tech firms find themselves confronting the classic silo mentality of the software side and the hardware side of their development groups.
The software side developing the AI is not so concerned about the details of the hardware and just expects that its AI will run in proper time. The hardware side puts in place as much computing power as can suitably be provided, depending on cost considerations, physical space considerations, etc. If there is little or no load balancing that comes into play, in terms of making sure that both the software and hardware teams come together on how to load balance, it's a recipe for disaster.

Some might say that all they need to know is how much raw speed is needed, whether it is MIPS (millions of instructions per second), FLOPS (floating point operations per second), TPUs (tensor processing units), or other such metrics. This though doesn't fully answer the performance question. The AI software side often doesn't really know what kind of performance resources they'll need per se.

You can try to simulate the AI software to gauge how much performance it will require. You can create benchmarks. There are all sorts of "lab" kinds of ways to gauge usage. Once you've got AI self-driving cars in the field for trials, you should also be pulling stats about performance. Indeed, it's quite important that there be on-board monitoring to see how the AI and the hardware are performing.

For my article about simulations and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/
For my article about benchmarks and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/
For my article about AI self-driving cars involved in accidents, see: https://aitrends.com/selfdrivingcars/accidents-happen-self-driving-cars/

With proper load balancing on-board the self-driving car, the load balancer is trying to keep the AI from getting starved; it is trying to ensure that the AI runs undisrupted by whatever might be happening at the hardware level.
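The kind of on-board monitoring mentioned above might, in its simplest form, amount to sampling per-device utilization and flagging anything running near its ceiling. A sketch, with invented device names and an assumed 90% saturation threshold:

```python
def summarize(samples, threshold=0.90):
    """samples: {device: [utilization readings, 0.0-1.0]}.
    Returns per-device averages and the set of saturated devices."""
    averages = {d: sum(r) / len(r) for d, r in samples.items()}
    saturated = {d for d, avg in averages.items() if avg >= threshold}
    return averages, saturated

# Readings collected during a trial run (invented figures).
samples = {
    "cpu0": [0.95, 0.97, 0.99],   # consistently near the ceiling
    "gpu0": [0.40, 0.55, 0.35],
}
averages, saturated = summarize(samples)
# cpu0 gets flagged; such stats could feed the load balancer or a field report.
```

Averages are only one lens; in a hard real-time setting you would care at least as much about worst-case readings and missed deadlines as about the mean.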
The load balancer monitors the devices involved. When saturation approaches, this can potentially be handled via static or dynamic balancing, and thus the load balancer needs to come into play. If an on-board device goes sour, the load balancer hopefully has a means to deal with the loss. Whether it's redundancy or shifting over to have another device do double-duty, you've got to have a load balancer on-board to deal with those moments. And do so in real-time, while the self-driving car is possibly in motion, on a crowded freeway, etc.

Fail-Safe Aspects To Keep In Mind

Believe it or not, I've had some AI developers say to me that it is ridiculous to think that any of the on-board hardware devices are going to just up and quit. They cannot fathom any reason for this to occur. I point out that the on-board devices are all prone to the same kinds of hardware failures as any piece of hardware. There's nothing magical about being included in a self-driving car. There will be "bad" devices that will go out much sooner than their life expectancy. There will be devices that will go out due to some kind of in-car issue that arises, maybe overheating, or maybe somehow a human occupant manages to bust them up. There are bound to be recalls on some of that hardware.

Also, I've seen some developers deluded by the fact that during the initial trials of self-driving cars, the auto maker or tech firm is pampering the AI self-driving car. After each journey, or maybe at the end of the day, the tech team involved in the trials tests to make sure that all of the hardware is still in pristine shape. They swap out equipment as needed. They act like a race car team, continually tuning and making sure that everything on-board is in top shape. There's a nearly unlimited budget of sorts during these trials, in that the view is to do whatever it takes to keep the AI self-driving car running. This is not what's going to happen once real-world deployment occurs.
When those self-driving cars are being used by the average Joe or Samantha, they will not have a trained team of self-driving car specialists at the ready to tweak and replace whatever might need to be replaced. The equipment will age. It will suffer normal wear and tear. It will even be taxed beyond normal wear and tear, since it is anticipated that AI self-driving cars will be running perhaps 24×7, nearly non-stop.

For my article about non-stop AI self-driving cars, see: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/
For repairs of AI self-driving cars, see my article: https://aitrends.com/ai-insider/towing-and-ai-self-driving-cars/

Conclusion

For those auto makers and tech firms that are giving short shrift right now to the importance of load balancing, I hope that this might be a wake-up call. It's not going to do anyone any good, neither the public nor the makers of AI self-driving cars, if it turns out that the AI is unable to get the performance it needs out of the on-board devices. A load balancer is not a silver bullet, but it at least provides the kind of added layer of protection that you'd expect for any solidly devised real-time system. Presumably, there aren't any auto makers or tech firms that opted to go with the four-burner stove when an eight-burner stove was needed.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.
  • $15M Global Learning XPRIZE Names Two Grand Prize Winners
    XPRIZE, the global leader in designing and operating incentive competitions to solve humanity’s grand challenges, announced two grand prize winners in the $15M Global Learning XPRIZE. The tie between Kitkit School, from South Korea and the United States, and onebillion, from Kenya and the United Kingdom, was revealed at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, where supporters and benefactors, including Elon Musk, celebrated all five finalist teams for their efforts. Launched in 2014, the Global Learning XPRIZE challenged innovators around the globe to develop scalable solutions that enable children to teach themselves basic reading, writing and arithmetic within 15 months. After being selected as finalists, five teams received $1M each and went on to field test their education technology solutions in Swahili, reaching nearly 3,000 children across 170 villages in Tanzania. To help ensure anyone, anywhere can iterate, improve upon, and deploy the learning solutions in their own community, all five finalists’ software is open source. All five learning programs are currently available in both Swahili and English on GitHub, including instructions on how to localize into other languages. The competition offered a $10 million grand prize to the team whose solution enabled the greatest proficiency gains in reading, writing and arithmetic in the field test. After reviewing the field test data, an independent panel of judges found indiscernible differences between the top two performers, and determined that two grand prize winners would split the prize purse, receiving $5M each: Kitkit School (Berkeley, United States and Seoul, South Korea) developed a learning program with a game-based core and flexible learning architecture aimed at helping children independently learn, irrespective of their knowledge, skill, and environment.
onebillion (London, United Kingdom and Nairobi, Kenya) merged numeracy content with new literacy material to offer directed learning and creative activities alongside continuous monitoring to respond to different children’s needs. Currently, more than 250 million children around the world cannot read or write, and according to data from the UNESCO Institute for Statistics, about one in every five children is out of school, a figure that has barely changed over the past five years. Compounding the issue, there is a massive shortage of teachers at the primary and secondary levels, with research showing that the world must recruit 68.8 million teachers to provide every child with primary and secondary education by 2030. Before the Global Learning XPRIZE field test, 74% of the participating children were reported as never having attended school, 80% were reported as never being read to at home, and over 90% of participating children could not read a single word in Swahili. After 15 months of learning on Pixel C tablets donated by Google and preloaded with one of the five finalists’ learning programs, that number was cut in half. Additionally, in math skills, all five programs were equally effective for girls and boys. Collectively over the course of the competition, the five finalist teams invested approximately $200M in research, development, and testing for their software, a total that rises to nearly $300M when including all 198 registered teams. “Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of XPRIZE.
“Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.” The grand prize winners and the following finalist teams were chosen from a field of 198 teams from 40 countries: CCI (New York, United States) developed structured and sequential instructional programs, in addition to a platform seeking to enable non-coders to develop engaging learning content in any language or subject area. Chimple (Bangalore, India) created a learning platform aimed at enabling children to learn reading, writing and mathematics on a tablet through more than 60 explorative games and 70 different stories. RoboTutor (Pittsburgh, United States) leveraged Carnegie Mellon’s research in reading and math tutors, speech recognition and synthesis, machine learning, educational data mining, cognitive psychology, and human-computer interaction. See the source release at XPRIZE.org.
  • Microsoft and Sony Become Partners Around Gaming and AI
    Microsoft and Sony announced an unusual partnership on May 16, allowing the two rivals to partner on cloud-based gaming services. “The two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services,” Microsoft said in a statement. Sony’s existing game and content-streaming services will also be powered by Microsoft Azure in the future. Microsoft says “these efforts will also include building better development platforms for the content creator community,” which sounds like both Sony and Microsoft are planning to partner on future services aimed at creators and the gaming community. Both companies say they will “share additional information when available,” but the partnership means Microsoft and Sony will collaborate on cloud gaming. That’s a pretty big deal, and it’s a big loss for Microsoft’s main cloud rival, Amazon. It also means Google, a new gaming rival to Microsoft and Sony, will miss out on hosting Sony’s cloud services. Google unveiled its Stadia game streaming service earlier this year, and the company will use YouTube to push it to the masses. Stadia is a threat to both Microsoft and Sony, and it looks like the companies are teaming up so Sony has some underlying infrastructure assistance to fight back. Stadia will stream games from the cloud to the Chrome browser, Chromecast, and Pixel devices. Sony already has a cloud gaming service, but Microsoft is promising trials of its own xCloud gaming service later this year. Microsoft’s gaming boss, Phil Spencer, has also promised the company will “go big” for E3 [Electronic Entertainment Expo]. As part of the partnership, Sony will use Microsoft’s AI platform in its consumer products. Read the source article in The Verge.
  • Arkansas Government Moving Aggressively to Shore Up Cybersecurity
    Arkansas will soon launch an ambitious initiative to include AI to bolster the state’s cybersecurity stance, while developing a scalable defense model that others can use in the future. Senate Bill 632, recently signed into law by Gov. Asa Hutchinson, authorizes the state’s Economic Development Commission (AEDC) to create a Cyber Initiative. This initiative will be responsible for working to mitigate the cyber-risks to Arkansas; increasing education relative to threats and defense; providing the public and private sectors with threat assessments and other intelligence; and fostering growth and development around tech including AI, IT and defense. The initiative will also create a “cyber alliance” made up of partnerships with a variety of institutions like “universities, colleges, government agencies and the private business sector,” all of which will work in a unified fashion toward realizing the initiative’s priorities. The bill also gives the program a potentially extensive financing framework, establishing a special fund that will consist of all money appropriated by the General Assembly, as well as “gifts, contributions, grants, or bequests received from federal, private, or other sources,” according to the text of the legislation. That money will go toward a wide variety of activities conducted through its myriad partnerships — including research, training officials at public and private institutions in defense best practices, and business and academic opportunities. The initiative will also have a considerable privacy component, as it will be exempt from the Freedom of Information Act (FOIA) if a request is deemed a “security risk,” according to the bill text. Much of the initiative’s work will be centered around finding more effective methods to ferret out bad actors and identifying where and what those actors are looking to target within the state, said retired Col. Rob Ator, who serves as the director of Military Affairs for the AEDC.
Arkansas, Ator said, is an attractive target to potential hackers because — as the bill notes — it is “home to national and global private sector companies that are considerable targets in the financial services, food and supply chain and electric grid sectors.” “For the first time in our nation’s history, the outward-facing defense for our critical infrastructure is no longer the folks in uniform and it’s no longer the government — it’s our private industry,” Ator said, adding that, as potential targets for cyberattacks, companies are now responsible for their own defense like never before. Read the source article in Government Technology.