What would you do with unlimited traffic?

There are five important elements you need to succeed in business:

  1. Traffic
  2. Product
  3. Price
  4. Presentation
  5. Closing

Traffic

Our proprietary technology allows us to offer you unprecedented services:

  1. Unlimited traffic
  2. Fast reaction, weeks instead of months or years
  3. And best of all, we deliver or it is free

The advantage of using technology to generate traffic is that you are now free to work on the other four factors that determine your success.

So even if you fail more than once, and even alienate your prospects, our technology keeps bringing you more and more people, so you can keep correcting your mistakes until you get it right and succeed in your business. We bring the traffic; you do the rest.

This is what we do for you: provide you with unlimited traffic, as much as you want!

Just contact us for a free consultation.

Product

Your products or services are of paramount importance; your visitors must want what you have to offer. That is obvious, right? But even if you fall short here, you can correct the issue, because we keep on bringing you people. What good is the best product if no one comes to see it?

Price

All these visitors found you online, so if your price is not competitive, they can just as easily find your competitors.

Presentation

You must be able to explain quickly and clearly what you offer.

Closing

How will you deliver, get paid, and so on? This is probably the easiest element, but it is just as important as the others.

Latest News

  • Why AI Has Yet to Reshape Most Businesses
    The art of making perfumes and colognes hasn’t changed much since the 1880s, when synthetic ingredients began to be used. Expert fragrance creators tinker with combinations of chemicals in hopes of producing compelling new scents. So Achim Daub, an executive at one of the world’s biggest makers of fragrances, Symrise, wondered what would happen if he injected artificial intelligence into the process. Would a machine suggest appealing formulas that a human might not think to try? Daub hired IBM to design a computer system that would pore over massive amounts of information—the formulas of existing fragrances, consumer data, regulatory information, on and on—and then suggest new formulations for particular markets. The system is called Philyra, after the Greek goddess of fragrance. Evocative name aside, it can’t smell a thing, so it can’t replace human perfumers. But it gives them a head start on creating something novel. Daub is pleased with progress so far. Two fragrances aimed at young customers in Brazil are due to go on sale there in June. Only a few of the company’s 70 fragrance designers have been using the system, but Daub expects to eventually roll it out to all of them. However, he’s careful to point out that getting this far took nearly two years—and it required investments that still will take a while to recoup. Philyra’s initial suggestions were horrible: it kept suggesting shampoo recipes. After all, it looked at sales data, and shampoo far outsells perfume and cologne. Getting it on track took a lot of training by Symrise’s perfumers. Plus, the company is still wrestling with costly IT upgrades that have been necessary to pump data into Philyra from disparate record-keeping systems while keeping some of the information confidential from the perfumers themselves. “It’s kind of a steep learning curve,” Daub says.
“We are nowhere near having AI firmly and completely established in our enterprise system.” The perfume business is hardly the only one to adopt machine learning without seeing rapid change. Despite what you might hear about AI sweeping the world, people in a wide range of industries say the technology is tricky to deploy. It can be costly. And the initial payoff is often modest. It’s one thing to see breakthroughs in artificial intelligence that can outplay grandmasters of Go, or even to have devices that turn on music at your command. It’s another thing to use AI to make more than incremental changes in businesses that aren’t inherently digital. AI might eventually transform the economy—by making new products and new business models possible, by predicting things humans couldn’t have foreseen, and by relieving employees of drudgery. But that could take longer than hoped or feared, depending on where you sit. Most companies aren’t generating substantially more output from the hours their employees are putting in. Such productivity gains are largest at the biggest and richest companies, which can afford to spend heavily on the talent and technology infrastructure necessary to make AI work well. This doesn’t necessarily mean that AI is overhyped. It’s just that when it comes to reshaping how business gets done, pattern-recognition algorithms are a small part of what matters. Far more important are organizational elements that ripple from the IT department all the way to the front lines of a business. Pretty much everyone has to be attuned to how AI works and where its blind spots are, especially the people who will be expected to trust its judgments. All this requires not just money but also patience, meticulousness, and other quintessentially human skills that too often are in short supply. 
Looking for unicorns

Last September, a data scientist named Peter Skomoroch tweeted: “As a rule of thumb, you can expect the transition of your enterprise company to machine learning will be about 100x harder than your transition to mobile.” It had the ring of a joke, but Skomoroch wasn’t kidding. Several people told him they were relieved to hear that their companies weren’t alone in their struggles. “I think there’s a lot of pain out there—inflated expectations,” says Skomoroch, who is CEO of SkipFlag, a business that says it can turn a company’s internal communications into a knowledge base for employees. “AI and machine learning are seen as magic fairy dust.” Among the biggest obstacles is getting disparate record-keeping systems to talk to each other. That’s a problem Richard Zane has encountered as the chief innovation officer at UC Health, a network of hospitals and medical clinics in Colorado, Wyoming, and Nebraska. It recently rolled out a conversational software agent called Livi, which uses natural-language technology from a startup called Avaamo to assist patients who call UC Health or use the website. Livi directs them to renew their prescriptions, books and confirms their appointments, and shows them information about their conditions. Zane is pleased that with Livi handling routine queries, UC Health’s staff can spend more time helping patients with complicated issues. But he acknowledges that this virtual assistant does little of what AI might eventually do in his organization. “It’s just the tip of the iceberg, or whatever the positive version of that is,” Zane says. It took a year and a half to deploy Livi, largely because of the IT headaches involved with linking the software to patient medical records, insurance-billing data, and other hospital systems. Similar setups bedevil other industries, too.
Some big retailers, for instance, save supply-chain records and consumer transactions in separate systems, neither of which is connected to broader data storehouses. If companies don’t stop and build connections between such systems, then machine learning will work on just some of their data. That explains why the most common uses of AI so far have involved business processes that are siloed but nonetheless have abundant data, such as computer security or fraud detection at banks. Even if a company gets data flowing from many sources, it takes lots of experimentation and oversight to be sure that the information is accurate and meaningful. When Genpact, an IT services company, helps businesses launch what they consider AI projects, “10% of the work is AI,” says Sanjay Srivastava, the chief digital officer. “Ninety percent of the work is actually data extraction, cleansing, normalizing, wrangling.” Those steps might look seamless for Google, Netflix, Amazon, or Facebook. But those companies exist to capture and use digital data. They’re also luxuriously staffed with PhDs in data science, computer science, and related fields. “That’s different than the rank and file of most enterprise companies,” Skomoroch says. Indeed, smaller companies often require employees to delve into several technical domains, says Anna Drummond, a data scientist at Sanchez Oil and Gas, an energy company based in Houston. Sanchez recently began streaming and analyzing production data from wells in real time. It didn’t build the capability from scratch: it bought the software from a company called MapR. But Drummond and her colleagues still had to ensure that data from the field was in formats a computer could parse. Drummond’s team also got involved in designing the software that would feed information to engineers’ screens. People adept at all those things are “not easy to find,” she says. “It’s like unicorns, basically. 
That’s what’s slowing down AI or machine-learning adoption.” Fluor, a huge engineering company, spent about four years working with IBM to develop an artificial-intelligence system to monitor massive construction projects that can cost billions of dollars and involve thousands of workers. The system inhales both numeric and natural-language data and alerts Fluor’s project managers about problems that might later cause delays or cost overruns. Data scientists at IBM and Fluor didn’t need long to mock up algorithms the system would use, says Leslie Lindgren, Fluor’s vice president of information management. What took much more time was refining the technology with the close participation of Fluor employees who would use the system. In order for them to trust its judgments, they needed to have input into how it would work, and they had to carefully validate its results, Lindgren says. To develop a system like this, “you have to bring your domain experts from the business—I mean your best people,” she says. “That means you have to pull them off other things.” Using top people was essential, she adds, because building the AI engine was “too important, too long, and too expensive” for them to do otherwise. Read the source article at MIT Technology Review.
  • DOD Unveils Its AI Strategy Following White House Executive Order
    The Defense Department launched its artificial intelligence strategy on Feb. 12 in concert with the White House executive order that created the American Artificial Intelligence Strategy. “The [executive order] is paramount for our country to remain a leader in AI, and it will not only increase the prosperity of our nation, but also enhance our national security,” Dana Deasy, DOD’s chief information officer, said in a media roundtable today. The CIO and Air Force Lt. Gen. Jack Shanahan, first director of DOD’s Joint Artificial Intelligence Center, discussed the strategy’s launch with reporters. The National Defense Strategy recognizes that the U.S. global landscape has evolved rapidly, with Russia and China making significant investments to modernize their forces, Deasy said. “That includes substantial funding for AI capabilities,” he added. “The DOD AI strategy directly supports every aspect of the NDS.” As stated in the AI strategy, he said, the United States — together with its allied partners — must adopt AI to maintain its strategic position to prevail on future battlefields and safeguard a free and open international order.

Speed and Agility Are Key

Increasing speed and agility is a central focus of the AI strategy, the CIO said, adding that those factors will be delivered to all DOD AI capabilities across every DOD mission. “The success of our AI initiatives will rely upon robust relationships with internal and external partners. Interagency, industry, our allies and the academic community will all play a vital role in executing our AI strategy,” Deasy said. “I cannot stress enough the importance that the academic community will have for the JAIC,” he noted. “Young, bright minds continue to bring fresh ideas to the table, looking at the problem set through different lenses.
Our future success not only as a department, but as a country, depends on tapping into these young minds and capturing their imagination and interest in pursuing the job within the department.”

Reforming DOD Business

The last part of the NDS focuses on reform, the CIO said, and the JAIC will spark many new opportunities to reform the department’s business processes. “Smart automation is just one such area that promises to improve both effectiveness and efficiency,” he added. AI will use an enterprise cloud foundation, which will also increase efficiencies across DOD, Deasy said. He noted that DOD will emphasize responsibility and use of AI through its guidance and vision principles for using AI in a safe, lawful and ethical way.

JAIC: A Focal Point of AI

“It’s hard to overstate the importance of operationalizing AI across the department, and to do so with the appropriate sense of urgency and alacrity,” JAIC director Shanahan told reporters. The DOD AI strategy applies to the entire department, he said, adding that the JAIC is a focal point of the strategy. The JAIC was established in response to the 2019 National Defense Authorization Act, and stood up in June 2018 “to provide a common vision, mission and focus to drive department-wide AI capability delivery.”

Mission Themes

The JAIC has several critical mission themes, Shanahan said. — First is the effort to accelerate delivery and adoption of AI capabilities across DOD, he noted. “This underscores the importance of transitioning from research and development to operational-fielded capabilities,” he said. “The JAIC will operate across the full AI application lifecycle, with emphasis on near-term execution and AI adoption.” — Second is to establish a common foundation for scaling AI’s impact, Shanahan said.
“One of the JAIC’s most-important contributions over the long term will be establishing a common foundation enabled by enterprise cloud with particular focus on shared data repositories for usable tools, frameworks and standards and cloud … services,” he explained. — Third, to synchronize DOD AI activities, related AI and machine-learning projects are ongoing across the department, and it’s important to ensure alignment with the National Defense Strategy, the director said. — Last is the effort to attract and cultivate a world-class AI team, Shanahan said. Two pilot programs that are national mission initiatives – a broad, joint cross-cutting AI challenge – comprise preventive maintenance and humanitarian assistance and disaster relief, the director said, adding that “initial capabilities [will be] delivered over the next six months.” Read the source coverage at US Department of Defense.
  • GoTo Fail and AI Brittleness: The Case of AI Self-Driving Cars
    By Lance Eliot, the AI Trends Insider

I’m guessing that you’ve likely heard or read the famous tale of the Dutch boy who plugged a hole in a leaking dam with his finger and was able to save the entire country by doing so. I used to read this fictional story to my children when they were quite young. They delighted in my reading of it, often asking me to read it over and over. One aspect that puzzled my young children was how a hole so small that it could be plugged by a finger could potentially jeopardize the integrity of the entire dam. Rather astute of them to ask. I read them the story to impart a lesson of life that I had myself learned over the years, namely that sometimes the weakest link in a chain can undermine an entire system, and incredibly the weakest link can be relatively small and surprisingly catastrophic in spite of its size. I guess that’s maybe two lessons rolled into one. The first part is that the weakest link in a chain can become broken or severed and thus the whole chain no longer exists as a continuous chain. By saying it is the weakest link, we’re not necessarily saying anything about its size; it could be a link of the same size as the rest of the chain. It could even be a larger link or perhaps the largest link of the chain. Or, it could be a smaller link or possibly the smallest link of the chain. The point is that size alone is not necessarily the basis for why a link might be the weakest. There could be a myriad of other reasons why the link is subject to being considered “the weakest,” and for those reasons size might or might not particularly matter. Another perhaps obvious corollary regarding the weakest link is that it is just one link involved. That’s what catches our attention and underlies the surprise about the notion. We might not be quite so taken aback if a multitude of links broke and therefore the chain itself came into ruin.
The second part of the lesson involves the cascading impact and how severe it can be as a consequence of the weakest link giving way. In the case of the tiny hole in the dam, presumably the water could rush through that hole and the build-up of pressure would tend to crack and undermine the dam at that initial weakest point. As the water pushes and pushes to get through, the finger-sized hole is bound to grow and grow, until inexorably the hole becomes a gap, and the gap then becomes a breach, and the breach then leads to the entire dam crumbling and being overtaken by the madly and punishingly flowing water. If you are not convinced that a single weakest link could undermine a much larger overall system, I’d like to enchant you with the now-famous account of the so-called “goto fail goto fail” saga that played out in February 2014. This is a true story. The crux of the story is that one line of code, a single “Go To” statement in a software routine, led to the undermining of a vital aspect of computer security regarding Apple related devices. I assert that the one line of code is the equivalent of a tiny finger-sized hole in a dam. Via that one hole, a torrent of security gaffes could have flowed. At the time, and still to this day, there were reverberations that this single “Go To” statement could have been so significant. For those outside of the computer field, it seemed shocking. What, one line of code can be that crucial? For those within the computer field, there was for some a sense of embarrassment, namely that the incident laid bare the brittleness of computer programs and software, along with being an eye opener to the nature of software development. I realize that there were pundits who said it was freakish and one-of-a-kind, but at the time I concurred with those who said this is actually just the tip of the iceberg. Little do most people know or understand how software is often built on a house of cards.
Depending upon how much actual care and attention you devote to your software efforts, which can be costly in terms of time, labor, and resources, you can make it hard to have a weakest link or you can make it relatively easy to have one. All told, you cannot assume that all software developers and all software development efforts are undertaking the harder route of trying to either prevent weakest links or at least catch the weakest link when it breaks. As such, as you walk and talk today, and are either interacting with various computer systems or reliant upon those computer systems, you have no immediate way to know whether there is or is not a weakest link ready to be encountered. In the case of the “Go To” line of code that I’m about to show you, it turns out that the inadvertent use of a somewhat errant “Go To” statement created an unreachable part of the program, which is often referred to as an area of code known as dead code. It is dead code because it will never be brought to life, in the sense that it will never be executed during the course of the program being run. Why would you have any dead code in your program? Normally, you would not. A programmer ought to be making sure that their code is reachable in one manner or another. Having code that is unreachable is essentially unwise since it is sitting in the program but won’t ever do anything. Furthermore, it can be quite confusing to any other programmer who comes along to take a look at the code. There are times at which a programmer might purposely put dead code into their program and have in mind that at some future time they will come back to the code and change things so that the dead code then becomes reachable. It is a placeholder.
Another possibility is that the code was earlier being used, and for some reason the programmer decided they no longer wanted it to be executed, so they purposely put it into a spot where it became dead code, or routed the execution around the code so that it would no longer be reachable and thus be dead code. They might for the moment want to keep the code inside the program, in case they decide to use it again later on. Generally, dead code is a human programmer consideration in that if a programmer has purposely included dead code, it raises questions about why it is there, since it won’t be executed. There is a strong possibility that the programmer goofed up and didn’t intend to have dead code. Our inspection of the code won’t immediately tell us whether the programmer put the dead code there for a purposeful reason, or whether they accidentally created a circumstance of dead code and don’t even realize they did so. That’s going to be bad because the programmer presumably assumed that the dead code would get executed at some juncture while the program was running, but it won’t.

Infamous Dead Code Example

You are now ready to see the infamous code (it’s an excerpt; the entire program is available as open source online at many code repositories).
Here it is:

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,       /* plaintext */
                       dataToSignLen,    /* plaintext length */
                       signature,
                       signatureLen);
    if (err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

    fail:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;

Observe that there appear to be five IF statements, one after another. Each of the IF statements seems to be somewhat the same, namely each tests a condition and if the condition is true then the code is going to jump to the label “fail” that is further down in the code. All of this would otherwise not be especially worth discussing, except for the fact that there is a “goto fail” hidden amongst that series of five IF statements. It is actually on its own and not part of any of those IF statements. It is sitting in there, among those IF statements, and will be executed unconditionally, meaning that once it is reached, the program will do as instructed and jump to the label “fail” that appears further down in the code. Can you see the extra “goto fail” that has found its way into that series of IF statements? It might take a bit of an eagle eye for you to spot it.
In case you don’t readily see it, I’ll include the excerpt again here and show you just the few statements I want you to focus on for now:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

What you have in a more abstract way is these three statements:

    IF (condition) goto fail;
    goto fail;
    IF (condition) goto fail;

There is an IF statement, the first of those above three lines, that has its own indication of jumping to the label “fail” when the assessed condition is true. Immediately after that IF statement, there is a statement that says “goto fail” and it is all on its own; that’s the second line of the three lines. The IF statement that follows that “goto fail” which is on its own, the third line, won’t ever be executed. Why? Because the “goto fail” in front of it will branch away and the sad and lonely IF statement won’t get executed. In fact, all of the lines of code following that “goto fail” are going to be skipped during execution. They are in essence unreachable code. They are dead code. Because of the indentation, it becomes somewhat harder to discern that the unconditional GO TO statement exists within the sequence of those IF statements. One line of code, a seemingly extraneous GO TO statement, placed in a manner that creates a chunk of unreachable code. This is the weakest link in this chain. And it creates a lot of troubles. By the way, most people tend to refer to this as the “goto fail goto fail” because it has two such statements together. There were T-shirts, bumper stickers, coffee mugs, and the like, all quickly put into the marketplace at the time of this incident, allowing the populace to relish the matter and showcase what it was about. Some of the versions said “goto fail; goto fail;” and included the proper semi-colons, while others omitted the semi-colons.
What was the overall purpose of this program, you might be wondering? It was an essential part of the software that does security verification for various Apple devices like their smartphones, iPads, etc. You might be aware that when you try to access a web site, there is a kind of handshake that allows a secure connection to be potentially established. The standard used for this is referred to as SSL/TLS, the Secure Sockets Layer / Transport Layer Security. When your device tries to connect with a web site and SSL/TLS is being used, the device starts to make the connection, the web site presents a cryptographic certificate for verification purposes, and your device then tries to verify that the certificate is genuine (along with other validations that occur). In the excerpt that I’ve shown you, you are looking at the software that would be sitting in your Apple device and trying to undertake that SSL/TLS verification. Unfortunately, regrettably, the dead code is quite important to the act of validating the SSL/TLS certificate and other factors. Essentially, by bypassing an important part of the code, this program is going to be falsely reporting that the certificate is OK, under circumstances when it is not. You might find of interest this official vendor declaration about the code when it was initially realized what was happening, and a quick fix was put in place: “Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.” Basically, you could potentially exploit the bug by tricking a device that was connecting to a web site and place yourself into the middle, doing so to surreptitiously watch and read the traffic going back-and-forth, grabbing up private info which you might use for nefarious purposes. This is commonly known as the Man-in-the-Middle security attack (MITM). I’ve now provided you with an example of a hole in the dam.
It is a seemingly small hole, yet it undermined a much larger dam. Among a lengthy chain of things that need to occur for the security aspects of SSL/TLS, this one weak link undermined a lot of it. I do want to make sure that you know that it was not completely undermined, since some parts of the code were working as intended and it was this particular slice that had the issue. There are an estimated 2,000 lines of code in this one program. Out of the 2,000 lines of code, one line, the infamous extra “goto fail,” had caused the overall program to falter in terms of what it was intended to achieve. That means that only 0.05% of the code was “wrong” and yet it undermined the entire program. Some would describe this as an exemplar of being brittle. Presumably, we don’t want most things in our lives to be brittle. We want them to be robust. We want them to be resilient. The placement of just one line of code in the wrong spot, undermining a significant overall intent, is seemingly not something we would agree to be properly robust or resilient. Fortunately, this instance did not seem to cause any known security breaches, and no lives were lost. Imagine though that this were to happen inside a real-time system that is controlling a robotic arm in a manufacturing plant. Suppose the code worked most of the time, but on a rare occasion it reached a spot of this same kind of unconditional GO TO, and perhaps jumped past code that checks to make sure that a human is not in the way of the moving robotic arm. By bypassing that verification code, the consequences could be dire. For the story of the Dutch boy who plugged the hole in the dam, we are never told how the hole got there in the first place. It is a mystery, though most people who read the story just take it at face value that there was a hole.
I’d like to take a moment and speculate about the infamous GO TO of the “goto fail” and see if we can learn any additional lessons by doing so, including possibly how it got there. Nobody seems to know how it actually happened; well, I’m sure someone who was involved in the code does (they aren’t saying). Anyway, let’s start with the theories that I think are most entertaining but seem farfetched, in my opinion. One theory is that it was purposely planted into the code, doing so at the request of someone such as perhaps the NSA. It’s a nifty theory because you can couple with it the fact that the use of a single GO TO statement makes the matter seem as though it was an innocent mistake. What better way to plant a backdoor, and yet if it is later discovered you can say that it was merely an accident all along. Sweet! Of course, the conspiracy theorists say that’s what they want us to think, namely that it was just a pure accident. Sorry, I’m not buying into the conspiracy theory on this. Yes, I realize it means that maybe I’ve been bamboozled. For conspiracy theories in the AI field, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/ Another theory is that the programmer or programmers (we don’t know for sure if it was one programmer, and so maybe it was several that got together on this) opted to plant the GO TO statement and keep it in their back pocket. This is the kind of thing you might try to sell on the dark web. There are a slew of zero-day exploits that untoward hackers trade and sell, so why not do the same with this? Once again, this seems to almost make sense because the beauty is that the hole is based on just one GO TO statement. This might provide plausible deniability if the code is tracked to whomever put the GO TO statement in there.
For my article about security backdoor holes, see: https://www.aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/ For my article about stealing of software code aspects, see: https://www.aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/ For aspects of reverse engineering code, see my article: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/ I’m going to vote against this purposeful hacking theory. I realize that I might be falling for someone’s scam and they are laughing all the way to the bank about it. I don’t think so. In any case, now let’s dispense with those theories and move toward something that I think has a much higher chance of approaching what really did happen.

‘Mistakenly Done’ Theories

First, we’ll divide the remaining options into something that was mistakenly done versus something intentionally done. I’ll cover the “mistakenly done” theories first. You are a harried programmer. You are churning out gobs of code. While writing those IF statements, you accidentally fat-finger an extra “goto fail” into the code. At the time, you’ve indented it and so it appears to be in the right spot. By mistake, you have placed that line into your code. It becomes part of the landscape of the code. That’s one theory about the mistaken-basis angle. Another theory is that the programmer had intended to put another IF statement into that segment of the code and had typed the “goto fail” portion, but then somehow got distracted or interrupted and neglected to put in the first part, the IF statement itself. Yet another variation is that there was an IF statement there, but the programmer for some reason opted to delete it, and when the programmer did the delete, they mistakenly did not remove the “goto fail,” which would have been easy to miss because it was on the next physical line.
We can also play with the idea that there might have been multiple programmers involved. Suppose one programmer wrote part of that portion with the IF statements, and another programmer was also working on the code, using another instance, and when the two instances got merged together, the merging led to the extra GO TO statement. On a similar front, there are a bunch of IF statements earlier in the code. Maybe those IF statements were copied and used for this set of IF statements, and when the programmer or programmers were cleaning up the copied IF statements, they inadvertently added the unconditional GO TO statement.

Let’s shift our attention to the “intentional” theories of how the line got in there. The programmer was writing the code and, after having written that series of IF statements, took another look and thought they had forgotten to put a “goto fail” on the IF statement that precedes the GO TO statement we now know to be wrong. In their mind, they were putting in the line because it needed to go there. Or maybe the programmer had been doing some testing of the code and opted to temporarily put the GO TO into the series of IF statements, wanting to momentarily short-circuit the rest of the routine. This was handy at the time; unfortunately, the programmer forgot to remove it later on. Or another programmer was inspecting the code and, being rushed or distracted, thought that a GO TO ought to be in the mix of those IF statements. We know now that this isn’t a logical thing to do, but perhaps at the time, in the mind of the programmer, the GO TO was conceived as having some other positive effect, and so they put it into the code.

Programmers are human beings. They make mistakes. They can have one thing in mind about the code, and yet the code might actually end up doing something other than what they thought.
Some people were quick to judge that the programmer must have been a rookie to have let this happen. I’m not so sure that we can make such a judgment. I’ve known and managed many programmers and software engineers who were top-notch, seasoned by many years of complex systems projects, and yet they too made mistakes, at first insisting to the extreme that they must be right, and then chagrined when proven to be wrong.

This takes us to another perspective: if any of those aforementioned theories about the mistaken action or the intentional action are true, how come it wasn’t caught? Typically, many software teams do code reviews. This might involve merely having another developer eyeball your code, or it might be more exhaustive and involve you walking them through it, with each of you trying to prove or disprove that the code is proper and complete.

Would this error have been caught by a code review? Maybe yes, maybe no. This one is somewhat insidious because it is only one line, and it was indented to fall into line with the other lines, helping to mask it, or at least camouflage it, by appearing to be nicely woven into the code. Suppose the code review was surface level and involved simply eyeballing the code. That kind of code review could easily miss catching this GO TO statement issue. Or suppose it was noticed during the code review, but it was put to the side for a future look-see, and then, because the programmers were doing a thousand things at once, oops, it got left in the code. That’s another real possibility.
For my article about burned out developers, see: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/ For the egocentric aspects of programmers, see my article: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/ For my article about the dangers of groupthink and developers, see: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

You also need to consider the human aspects of trust and belief in the skills of the other programmers on a programming team. Suppose the programmer who wrote this code was considered top-notch. Time after time, their code was flawless. On this particular occasion, the code review was slimmer because of the trust placed in that programmer.

When I manage software engineers, they sometimes get huffy at me when I have them do code reviews. Some will say they are professionals and don’t need a code review, or that any code review should be quick and light because of how good they are. I respect their skill sets but try to point out that any of us can have something mar our work. One notion that is very hard to get across is egoless coding and code reviews: you try to separate the person from the code, so that critiquing the code does not become an attack on the person. When a code review spirals downward into a hate fest, no one wants to do code reviews anymore; they become an unseemly quagmire of accusations and anger, spilling out based not only on the code but perhaps on other personal animosity too.

Besides code reviews, one could say that this GO TO statement should have been found during testing of the code.
Certainly, it would seem that at the unit level of testing you could have set up a suite of test cases that fed into this routine, and you would have discovered that sometimes the verification was passing when it should not have been. Perhaps the unit testing was done in a shallow way. We might also wonder what happened during system testing. Normally, you put together the various units or multiple pieces and do a test across the whole system or subsystem. If they did so, how did this get missed? Again, it could be that the test cases used at the system level did not encompass anything that ultimately rolled down into this particular routine and would have showcased the erroneous result.

You might wonder how the compiler itself missed this aspect. Some compilers can do a kind of static analysis, trying to find things that might be awry, such as dead code. Apparently, at the time, there was speculation that the compiler could have helped, but it had options that were either confusing to use or, when used, were often mistaken in what they found.

We can take a different perspective and question how the code itself is written and structured overall. One thing that is often done, but should typically be reconsidered, is that the “err” value that gets used in this routine and sent back to the rest of the software was initially set to Okay, and only once something untoward was found did it get set to a Not Okay signal. This meant that when the verification code was skipped, the flag defaulted to everything being Okay. One might argue that this is the opposite of the right way to do things. Maybe you ought to assume that the verification is Not Okay, and the routine has to essentially go through all the verifications to set the value to Okay. In this manner, if somehow the routine short-circuits early, at least the verification is stated as Not Okay. This would seem like a safer default in such a case.

Another aspect would be the use of curly braces or brackets.
Remember that I had earlier stated you can use those on an IF statement. Besides being needed for multiple statements in an IF body, the braces also act as a visual indicator for a human programmer of where the body of statements starts and ends. Some believe that if the programmer had used the curly braces, the odds are that the extra “goto fail” would have stuck out like a sore thumb. We can also question the use of the multiple IF’s in a series. This is often done by programmers, and it is a kind of easy (some say sloppy or lazy) way to do things, but there are other programming techniques and constructs that can be used instead.

Ongoing Debate on Dangers of GO TO Statements

There are some who have attacked the use of the GO TO statements throughout the code passage. You might be aware that there has been an ongoing debate about the “dangers” of using GO TO statements. Some have said it is a construct that should be banned entirely. Perhaps the debate was most vividly started when Edsger Dijkstra had his letter published in the Communications of the ACM in March of 1968. The debate about the merits versus the downsides of the GO TO has continued since then. You could restructure this code to eliminate the GO TO statements, in which case the extra GO TO would presumably never have gotten into the mix.

Another aspect involves the notion that the “goto fail” is repeated in the offending portion, which some would say should actually have made it stand out visually. Would your eye tend to catch the same line of code repeated twice like this, especially a somewhat naked GO TO statement? Apparently not. Some say the compiler should have issued a warning about a seemingly repeated line, even if it wasn’t set to detect dead code.

You might also point out that this code doesn’t seem to have much built-in self-checking going on. You can write your code to “just get the job done” and have it then provide its result.
Another approach involves adding additional layers of code that do various double-checks. If that had been built into this code, maybe it would have detected that the verification was not being done to a full extent, and whatever error handling should take place would then have gotten invoked.

In the software field, we often speak of the smell of a piece of code. Code smell means that the code might be poorly written or suspect in one manner or another, and upon taking a sniff or a whiff of it (by looking at the code), one might detect a foul odor, possibly even a stench. Software developers also refer to technical debt. This means that when you write somewhat foul code, you’re creating a kind of debt that will someday come due. It’s like taking out a loan; eventually the loan will need to be paid back. Bad code will almost always boomerang and eventually come back to haunt you. I try to impart to my software developers that we ought to be creating technical credit, meaning that we’ve structured and written the code for future ease of maintenance and growth. We have planted the seed for this, even if at the time we developed the code we didn’t necessarily need to do so. As a long-time programmer and software engineer, I am admittedly sympathetic to…
  • Despite Concerns, AI Making Inroads in Human Resources
It’s no longer shocking that human resources departments use artificial intelligence. In fact, according to Littler’s 2018 Annual Employer Survey, 49 percent said they use AI and advanced data analytics for recruiting and hiring. They also deploy AI into HR-related activities such as making strategic and employee management decisions (31 percent), analyzing workplace policies (24 percent), and automating certain tasks that were previously done by an employee (22 percent). So, where can HR leaders expect to see significant gains in how AI will support HR-driven use cases? The experts weigh in.

AI Risk of Bias in HR

There are some caveats to consider with AI-infused human resources initiatives. For starters, companies should keep a close eye on how these AI tools perform, as they risk inadvertently introducing bias, according to Armen Berjikly, head of AI at Ultimate Software. Last year at this time, researchers from MIT and Stanford University found that three commercially released facial-analysis programs from major technology companies demonstrated both skin-type and gender biases. “The most significant risk of AI-enabled recruiting is that AI doesn’t take risks,” Berjikly said. “An AI-enabled hiring process gets extremely good at finding the types of candidates you train it to find, which leaves out many potentially amazing applicants who don’t fit the proverbial mold.”

Moving Forward with AI Despite Job-Loss Concerns

And surely, there are natural worries about AI being so efficient for departments like HR that it will eliminate jobs. Those worries have some validation: robots are already conducting job interviews. Rohit Chawla, co-founder of Bridging Gaps, said he strongly feels AI will take the load off at least 25 to 30 percent of mundane HR jobs. While that may produce fear of humans losing jobs, it’s not time for companies to back away from time-saving AI initiatives for HR.
“HR should [embrace the technology] as currently a lot of customer-facing aspects are being taken care by AI. It’s high time HR takes up the challenge without any fear,” he added. Chawla, who raised questions about using AI in HR scenarios, sees these common areas where AI is helping human resources: searching for right-fit candidates, especially for junior-level positions; conducting AI-based interviews, both behavioral and functional; sharing regret notifications with rejected candidates, including some explanation of the reason, at a scale not possible manually; and using chatbots to resolve employee queries.

Workforce Data Leads to Predictive Advantage

Where else is AI winning in HR? Jayson Saba, senior director of product marketing at Kronos, said AI advancements in HR are helping organizations leverage transactional workforce data to predict employee potential, fatigue, flight risk and even overall engagement. This enables more productive conversations to improve the employee experience, retention and performance. “It’s now possible to leverage AI to build smarter, personalized schedules and to leverage AI to review time-off and shift-swap requests in real-time based on predetermined business rules,” he said. This empowers employees, especially those in front-line/hourly positions, to take more control of their work/life balance. “Using AI for these important but repetitive administrative requests also unburdens managers, allowing them to spend more time on the floor, working with customers and training teams,” Saba added.

Intelligent Shift-Swapping

Real-time analytics can show managers the impact that absences, open shifts and unplanned schedule changes will have on key performance indicators, allowing them to make more informed decisions that avoid issues before they arise.
Similarly, Saba said, using an intelligent solution to automate shift-swapping without manager intervention reduces the number of last-minute call-outs, no-shows and vacant shifts and effectively removes the need to schedule additional labor to cover for anticipated absences. “The future of work in any industry is going to rely heavily on advances in AI for HR,” Saba added, “but it’s important to keep in mind that AI will never replace the manager. Instead, its true value is analyzing the massive amounts of workforce data to provide managers with better informed options to guide their decisions.” Read the source article in CMSwire.com.
  • How AI and Machine Learning Are Improving Manufacturing Productivity
Engineers at the Advanced Manufacturing Research Centre’s Factory 2050 in Sheffield, UK are using Artificial Intelligence (AI) to learn what machine utilization looks like on the workshop floor. The aim is to create a demonstrator to show just how accessible Industry 4.0 technologies are, and how they can potentially revolutionize shop-floor productivity. The demonstrator will be the first created under an emerging AI strategy being produced at Factory 2050, which seeks to harness the innovative work being done with AI and machine learning techniques across the Advanced Manufacturing Research Centre (AMRC) and provide real use-cases for these techniques in industrial environments.

“Using edge computing devices retrofitted to CNC machines, we have collected power consumption data during the production of automotive suspension components,” said Rikki Coles, AI Project Engineer for the AMRC’s Integrated Manufacturing Group at Factory 2050. “It isn’t a complicated parameter to measure on a CNC machine, but using AI and machine learning, we can actually do a lot with such simple data.”

Data from the edge computing devices at partner Tinsley Bridge was sent to the AMRC’s cloud computing services and, using the latest data science techniques, run through an AI algorithm to provide new insights for the control and monitoring of manufacturing processes. Analyzing the power signatures in the data, the algorithm looked for repeating patterns or anomalies, worked out how many components were machined, and deduced that three different types of components were manufactured.
Rikki said: “The project demonstrates to industry that with a low cost device collating quite simple data, AI and machine learning can be used to create valuable insights from this data for the manufacturer.” Director of Engineering at Tinsley Bridge, Russell Crow, said: “Interrogating our machine utilization rates means we have better visibility of what was being manufactured and when, and the ability to assess if we are scheduling effectively.  This data will allow us to look at boosting our productivity on the shop floor.” “Rather than investing in significant cost and time for new digitally integrated smart machining centres, we were able to work with the AMRC to retrofit our existing capabilities to achieve the same results and enhance what data we were collecting by fitting a simple current clamp to our machines; an unobtrusive solution that caused no disruption or downtime.” Aiden Lockwood, Chief Enterprise Architect for the AMRC Group said the project demonstrator will show other SMEs how easily and cheaply Industry 4.0 technologies can be accessed: “Traditionally these tools were built into commercial packages which could be out of the reach for some SMEs, so there is a misunderstanding that Industry 4.0 manufacturing techniques are for the big players who handle incredibly complex data collected over a long period of time.” “But AI is evolving and these techniques now give smaller businesses the ability to do so much more with their data. 
In this project we are using a simple data set, collected over a short period of time to provide real benefits for the company.” Aiden said the AMRC want to show what using AI in manufacturing looks like for small businesses: “The formation of our AI strategy will allow us to lead the way in developing new capabilities and bringing the academic, tech and business communities of the region together to educate and demonstrate AI technologies for manufacturing industries; learning from developments in retail, finance and marketing.”

The next phase of the project will see the engineers at the AMRC train the system further so the algorithm can detect non-conforming components while in production, or identify when a machine requires intervention, such as inconsistent tool wear that affects component quality. “Alongside the power consumption data, the plan is to feed the algorithm with available data about which of the manufactured components were non-conforming. So as well as providing clarity around machine utilization, the algorithm will essentially learn what a ‘good’ manufacturing process looks like and be able to actively monitor on-going manufacturing processes,” said Rikki. Read the source article in Metrology.news.
  • How AI Can Help Solve Some of Humanity’s Greatest Challenges
    By Marshall Lincoln and Keyur Patel, cofounders of the Lucid Analytics Project In 2015, all 193 member countries of the United Nations ratified the 2030 “Sustainable Development Goals” (SDG): a call to action to “end poverty, protect the planet and ensure that all people enjoy peace and prosperity.” The 17 goals – shown in the chart below – are measured against 169 targets, set on a purposefully aggressive timeline. The first of these targets, for example, is: “by 2030, [to] eradicate extreme poverty for all people everywhere, currently measured as people living on less than $1.25 a day”. The UN emphasizes that Science, Technology and Innovation (STI) will be critical in the pursuit of these ambitious targets. Rapid advances in technologies which have only really emerged in the past decade – such as the internet of things (IoT), blockchain, and advanced network connectivity – have exciting SDG applications. No innovation is expected to be more pervasive and transformative, however, than artificial intelligence (AI) and machine learning (ML). A recent study by the McKinsey Global Institute found that AI could add around 16 per cent to global output by 2030 – or about $13 trillion. McKinsey calculates that the annual increase in productivity growth it engenders could substantially surpass the impact of earlier technologies that have fundamentally transformed our world – including the steam engine, computers, and broadband internet. AI/ML is not only revolutionary in its own right, but also increasingly central to the foundation upon which the next generation of technologies are being built. But the pace and scale of the change it will bring about also creates risks that humanity must take very seriously. Our research has led us to conclude that AI/ML will directly contribute to at least 12 of the 17 SDGs – likely more than any other emerging technology. 
In this piece, we explore potential use cases in three areas which are central to the Global Goals: financial inclusion, healthcare and disaster relief, and transportation.

FINANCIAL INCLUSION

Access to basic financial services – including tools to store savings, make and receive payments, and obtain credit and insurance – is often a prerequisite to alleviating poverty. Around 2 billion people around the world have limited or no access to these services. AI/ML is increasingly helping financial institutions create business models to serve the unbanked. For example, one of the biggest barriers to issuing loans is that many individuals and micro businesses have no formal credit history. Start-ups are increasingly running ML algorithms on non-traditional sources of data to establish their creditworthiness – from shopkeepers’ orders and payments history to psychometric testing. Analysis of data on crop yields and climate patterns can be used to help farmers use their land more effectively – reducing risks for lenders and insurance providers. AI/ML is also being used to help service providers keep their costs down in markets where revenue per customer is often very small. These include automated personal finance management, customer service chat-bots, and fraud detection mechanisms.

HEALTHCARE AND DISASTER RELIEF

The inequality between urban and rural healthcare services is an urgent problem in many developing countries. Rural areas with poor infrastructure often suffer from severe shortages of qualified medical professionals and facilities. Smart phones and portable health devices with biometric sensors bring the tools of a doctor’s office to patients’ homes – or a communal location in a village center for shared use. AI then automates much of the diagnostic and prescriptive work traditionally performed by doctors. This can reduce costs, enable faster and more accurate diagnoses, and ease the burden on overworked healthcare workers.
AI is also being used to get medical supplies where they are needed. A start-up called Zipline, for example, is using AI to schedule and coordinate drones to deliver blood and equipment to rural areas in Rwanda (and soon other countries in Africa) which are difficult to access by road. Doctors order what they need via a text messaging system, and AI handles delivery. This dramatically reduces the time it takes to obtain blood in an emergency and eliminates wastage. When it comes to disaster relief, predictive models – based on data from news sources, social media, etc. – can help streamline crisis operations and humanitarian assistance. For example, AI-powered real-time predictions about where earthquakes or floods will cause the most damage can help emergency crews decide where to focus their efforts. Read the source article at KDNuggets.
  • Machine Learning Engineer vs. Data Scientist—Who Does What?
The roles of machine learning engineer and data scientist are both relatively new and can seem to blur. However, if you parse things out and examine the semantics, the distinctions become clear. At a high level, we’re talking about scientists and engineers. While a scientist needs to fully understand the, well, science behind their work, an engineer is tasked with building something. But before we go any further, let’s address the difference between machine learning and data science.

It starts with having a solid definition of artificial intelligence. This term was first coined by John McCarthy in 1956 to discuss and develop the concept of “thinking machines,” which included automata theory, complex information processing, and cybernetics. Approximately six decades later, artificial intelligence is now perceived to be a sub-field of computer science where computer systems are developed to perform tasks that would typically demand human intervention, such as decision-making, speech recognition, translation between languages, and visual perception.

Machine learning is a branch of artificial intelligence where a class of data-driven algorithms enables software applications to become highly accurate in predicting outcomes without any need for explicit programming. The basic premise here is to develop algorithms that can receive input data and leverage statistical models to predict an output, updating outputs as new data becomes available. The processes involved have a lot in common with predictive modeling and data mining, because both approaches demand searching through the data to identify patterns and adjusting the program accordingly. Most of us have experienced machine learning in action in one form or another: if you have shopped on Amazon or watched something on Netflix, those personalized (product or movie) recommendations are machine learning in action.
Data science can be described as the description, prediction, and causal inference from both structured and unstructured data. This discipline helps individuals and enterprises make better business decisions. It’s also a study of where data originates, what it represents, and how it can be transformed into a valuable resource. To achieve the latter, a massive amount of data has to be mined to identify patterns that help businesses gain a competitive advantage, identify new market opportunities, increase efficiencies, and rein in costs. The field of data science draws on disciplines like mathematics, statistics, and computer science, and incorporates techniques like data mining, cluster analysis, visualization, and—yes—machine learning.

Having said all of that, this post aims to answer the following questions about machine learning engineers versus data scientists: what degree do they need, what do they actually do, and what’s the average salary?

Machine Learning Engineer vs. Data Scientist: What They Do

As mentioned above, there are some similarities when it comes to the roles of machine learning engineers and data scientists. However, if you look at the two roles as members of the same team, a data scientist does the statistical analysis required to determine which machine learning approach to use, then models the algorithm and prototypes it for testing. At that point, a machine learning engineer takes the prototyped model and makes it work in a production environment at scale. Going back to the scientist vs. engineer split, a machine learning engineer isn’t necessarily expected to understand the predictive models and their underlying mathematics the way a data scientist is. A machine learning engineer is, however, expected to master the software tools that make these models usable.

What Does a Machine Learning Engineer Do?
Machine learning engineers sit at the intersection of software engineering and data science. They leverage big data tools and programming frameworks to ensure that the raw data gathered from data pipelines is redefined as data science models that are ready to scale as needed. Machine learning engineers feed data into models defined by data scientists. They’re also responsible for taking theoretical data science models and helping scale them out to production-level models that can handle terabytes of real-time data. Machine learning engineers also build programs that control computers and robots. The algorithms developed by machine learning engineers enable a machine to identify patterns in its own programming data and teach itself to understand commands and even think for itself.

What Does a Data Scientist Do?

When a business needs to answer a question or solve a problem, they turn to a data scientist to gather, process, and derive valuable insights from the data. Whenever data scientists are hired by an organization, they will explore all aspects of the business and develop programs using programming languages like Java to perform robust analytics. They will also use online experiments along with other methods to help businesses achieve sustainable growth. Additionally, they can develop personalized data products to help companies better understand themselves and their customers and make better business decisions. As previously mentioned, data scientists focus on the statistical analysis and research needed to determine which machine learning approach to use, then they model the algorithm and prototype it for testing.

What Do the Experts Say?

Springboard recently asked two working professionals for their definitions of machine learning engineer vs. data scientist.
Mansha Mahtani, a data scientist at Instagram, said: “Given both professions are relatively new, there tends to be a little bit of fluidity on how you define what a machine learning engineer is and what a data scientist is. My experience has been that machine learning engineers tend to write production-level code. For example, if you were a machine learning engineer creating a product to give recommendations to the user, you’d be actually writing live code that would eventually reach your user. The data scientist would be probably part of that process—maybe helping the machine learning engineer determine what are the features that go into that model—but usually data scientists tend to be a little bit more ad hoc to drive a business decision as opposed to writing production-level code.”

Shubhankar Jain, a machine learning engineer at SurveyMonkey, said: “A data scientist today would primarily be responsible for translating this business problem of, for example, we want to figure out what product we should sell next to our customers if they’ve already bought a product from us. And translating that business problem into more of a technical model and being able to then output a model that can take in a certain set of attributes about a customer and then spit out some sort of result. An ML engineer would probably then take that model that this data scientist developed and integrate it in with the rest of the company’s platform—and that could involve building, say, an API around this model so that it can be served and consumed, and then being able to maintain the integrity and quality of this model so that it continues to serve really accurate predictions.”

Read the source post on the Springboard Blog.
  • Boxes-on-Wheels and AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider

Watch out, the rolling boxes are on their way. Many call them a box-on-wheels, referring to the use of AI self-driving car technology in a vehicle that would be driverless and would deliver goods to you. At the Cybernetic AI Self-Driving Car Lab, we are developing AI software for self-driving cars, and we are also including in our scope the use of AI systems for boxes-on-wheels. I offer next some salient aspects about the emerging niche of boxes-on-wheels.

For my framework on AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/ Some falsely believe autonomous shuttles are the same as box-on-wheels designs; not so, see my article: https://www.aitrends.com/selfdrivingcars/brainless-self-driving-shuttles-not-ai-self-driving-cars/ For the grand convergence that’s leading to these AI autonomously driven vehicles, see my article: https://www.aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

Let’s start with a typical use case for a box-on-wheels. You could order your groceries online from your neighborhood grocer, and a little while later those groceries pull up in front of your house, contained in a so-called box-on-wheels. You walk outside to the vehicle, enter a special PIN code or some other form of recognition, and inside are your groceries. You happily carry the grocery bags up to your apartment or house, and you do so without ever having to drive your car. The vehicle drives off to deliver groceries to others that have also made recent orders from that grocery store.

Notice that I mentioned that this is considered a use of AI self-driving car technology. It is not the same as what most people think of as an AI self-driving car per se. I say that because the vehicle itself does not necessarily need to look like a passenger car.
A box-on-wheels can be a different shape and size than a normal passenger car, since it is not intended to carry humans. It is intended to carry goods. If you ponder this aspect of carrying goods, you'd likely realize that it would be best to design the vehicle in a manner intended for carrying goods rather than carrying humans. Consider first what it's like to carry goods inside a passenger car. I'm sure you've tried to pile your own grocery bags into the backseat of your car, or maybe on the floor just ahead of the front passenger seat. The odds are that at some point you had those bags flop over and spill their contents. If you made a quick stop by hitting the brakes, it could be that you've had groceries littered throughout your car, and maybe broken glass from a smashed milk bottle as a result. Not good. Don't blame it on the passenger car! The passenger car is optimized to carry people. There are seats for people. There are armrests for people. There are areas for people to put their feet. All in all, the typical passenger car is not particularly suited to carrying goods. Sure, you might place the goods into your trunk or maybe some other baggage-carrying spaces of the car, but then you'd be unable to use the passenger seats in any sensible way to carry goods. Nope, don't try to make a hammer into a screwdriver. If you need a hammer, get yourself a hammer. If you need a screwdriver, get yourself a screwdriver. Thus, I think you can understand the great value and importance of developing a vehicle optimized for carrying goods, one that is not bound to the design of a passenger-carrying car. There are a wide variety of these designs, all vying to see which will be the best, or at least the most enduring, in meeting the needs of delivering goods. Some of these vehicles are the same size as a passenger car.
Some of these vehicles are much smaller than a passenger car, and some of those are envisioned to go on sidewalks rather than solely on the streets. The ones that go on the sidewalks need to be especially honed to cope with pedestrians and other aspects of driving on a sidewalk, plus there often is the need to get regulatory approval in a particular area to allow a motorized vehicle on sidewalks. Having such a vehicle on a sidewalk can be a dicey proposition. If you are wondering why even try, the notion is that it can more readily get to harder-to-reach places due to its smaller size and overall footprint, and in neighborhoods that restrict the use of full-sized cars (such as retirement communities) it could potentially do the delivery, even perhaps right up to the door of someone's abode. Some designers are going to the opposite extreme and considering boxes-on-wheels that are the size of a limo or larger. The logic is that you could store even more groceries or other goods in one that is larger in size. This could cut down on the number of trips needed to deliver some N number of goods to Y number of delivery spots. Suppose a "conventional" box-on-wheels allowed for up to six distinct deliveries, while the limo version could do, say, twelve. The box-on-wheels with the six distinct deliveries would need to come all the way back to the grocery store to fill up with the next set of six, while the limo version would have had all twelve put into it at the start of its journey and could deliver them more efficiently without having to come back midway through the twelve. The downside of the limo-sized box-on-wheels is whether it can readily navigate the roads needed for its delivery journey. With a larger size, it might not be able to make some tight corners or other narrow passages to reach the intended recipient of the goods. There's a trade-off between the size of the box-on-wheels and where it can potentially go.
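The capacity trade-off described above can be put into a quick back-of-the-envelope sketch. The order count here is hypothetical; the six- and twelve-compartment capacities are the ones used in the example.

```python
import math

def round_trips(num_orders: int, capacity: int) -> int:
    """Trips from the store needed to deliver all orders, assuming the
    vehicle fills to capacity each time it departs."""
    return math.ceil(num_orders / capacity)

# A hypothetical day of 24 grocery orders.
print(round_trips(24, 6))   # conventional box-on-wheels: 4 trips
print(round_trips(24, 12))  # limo-sized version: 2 trips
```

The larger vehicle halves the number of return trips here, at the cost of the maneuverability issues discussed above.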
Indeed, let's be clear that there is no one-size-fits-all solution here. There are arguments about which of the sizes will win out in the end of this evolving tryout of varying sizes and shapes of boxes-on-wheels. I am doubtful there will be only one "right size and shape" that will accommodate the myriad needs for boxes-on-wheels. Just as today we have varying sizes of cars and trucks, the same is likely to be true for the boxes-on-wheels. For my article about safety aspects of AI self-driving vehicles, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/ For various AI self-driving vehicle design aspects, see my article: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/ For the myth of these vehicles becoming economic commodities, see my myth-busting article: https://www.aitrends.com/selfdrivingcars/economic-commodity-debate-the-case-of-ai-self-driving-cars/
Box-on-Wheels Free-for-All Today
That doesn't, though, suggest that all of the variants being tried today will survive. I'm sure that many of the designs of today will either morph and be revised based on what seems to function well in the real world, or some designs will be dropped entirely, or other new designs will emerge once we see what seems to work and what does not. It's a free-for-all right now: large-sized, mid-sized, small-sized, along with doors that open upward, downward, or swing to the side, and some with windows and others without, etc. Let's consider an example of a variant being tried out today. Kroger, a major grocer, has teamed up with Nuro, an AI self-driving vehicle company, for the development and testing of delivery vehicles that would carry groceries. The squat-looking vehicle has various separated compartments to put groceries into.
There are special doors that can be opened to allow humans to access the compartments, presumably for the purposes of putting in groceries at the grocery store and then taking out the groceries when the vehicle reaches the consumer who bought them. This kind of design makes a lot of sense for the stated purpose of transporting groceries. You want to have separated compartments so that you can accommodate multiple separate orders. Maybe you ordered some groceries, and Sam, who lives two blocks away, also ordered groceries. Naturally, you'd not want Sam to mess around with your groceries, and likewise you shouldn't mess around with Sam's groceries. Imagine if you could indeed access other people's groceries – it could be a nightmare of accidentally taking the wrong items (intended for someone else), or accidentally crushing someone else's items (oops, flattened that loaf of bread), and maybe even intentionally doing so (you've never liked Sam, so you make sure all the eggs he ordered are smashed). There also has to be some relatively easy way to access the compartments. Having a lockable door would be essential. The door has to swing or hinge in a manner that is simple to deal with and allows you to access the compartment readily and fully. You of course don't want humans to get confused trying to open or close the doors. You don't want humans to hurt themselves when opening or closing a door. The locking mechanism has to allow for an easy means of identifying the person that is rightfully going to open the door. And so on. The locking mechanism might involve you entering a PIN code to open the door. The PIN would perhaps have been provided to you when you placed your grocery order. Or, it might be that your smartphone can activate and unlock the compartment door, using NFC or other kinds of ways to convey a special code to the box-on-wheels.
It could even be facial recognition, or via your eye or fingerprint recognition, though this means that only you can open the door. I say this because you might be unable to physically get to the box-on-wheels and instead have someone else aiding you. Maybe you are bedridden with some ailment and have an aide in your home, and so if the lock only responds to you, it would prevent you from allowing someone else to open it instead (possibly, you could instruct the lock via online means as to how you want it to respond). I mention these aspects because the conventional notion is that the box-on-wheels will most likely be unattended by a human. If you had a human attendant inside the vehicle, they could presumably get out of the vehicle when it reaches your home, open the door to the compartment that contains your groceries, and either hand them to you or walk them up to your door. But, if the vehicle is unattended by a human, this means that the everyday person receiving the delivery is going to have to figure out how to open the compartment door, take out the groceries, and then close the compartment door. This seems like a simple task, but do not underestimate the ability of humans to get confused at tasks that might seem simple on the surface, and also be sympathetic towards those that might have more limited physical capabilities and cannot readily perform those physical tasks. Presumably, the compartment doors will have an automated way to open and close, rather than you needing to physically push the compartment doors open and closed (though not all designs are using an automated door open/close scheme). This does bring up some facets about these boxes-on-wheels that you need to consider. First, there's the aspect of having a human on-board versus not having a human on-board:
  • Human attendant
  • No human attendant
I've carefully phrased this to say human attendant.
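A minimal sketch of such a lock, assuming a primary PIN issued with the order plus delegate codes the customer can register online (the class, method names, and PIN values here are hypothetical, for illustration only):

```python
class CompartmentLock:
    """Toy model of a compartment lock: a primary PIN issued with the
    order, plus optional delegate codes (e.g., for an in-home aide)."""

    def __init__(self, primary_pin: str):
        self._codes = {primary_pin}
        self.is_open = False

    def add_delegate(self, code: str) -> None:
        # The customer could register this remotely via the grocer's site,
        # so the lock no longer responds only to the customer themselves.
        self._codes.add(code)

    def try_unlock(self, code: str) -> bool:
        if code in self._codes:
            self.is_open = True
        return self.is_open

lock = CompartmentLock("4921")
lock.add_delegate("7305")        # the aide's code
print(lock.try_unlock("0000"))   # wrong code: stays locked (False)
print(lock.try_unlock("7305"))   # delegate code: opens (True)
```

A real deployment would of course need rate limiting, expiry, and secure code delivery; the point here is only the delegation idea raised above.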
We don't need to have a human driver in these vehicles since the AI is supposed to be doing the driving. This though does not imply that the vehicle has to be empty of any human being. You might want to have a human attendant in the vehicle. The human attendant would not need to know how to drive. Indeed, even if they knew how to drive, the vehicle would most likely have no provision for a human to drive it (there'd not be any pedals or steering wheel). Why have a human attendant, you might ask? Aren't we trying to take the human out of the equation by using the AI self-driving car technology? Well, you might want to have a human attendant for the purposes of attending to the vehicle when needed. For example, suppose the grocery-carrying vehicle comes up to my house and parks at the curb in front of my house. Darned if I didn't break my leg in a skiing incident a few weeks ago, and I cannot make my way out to the curb. Even if I could hobble to the curb, I certainly couldn't carry the grocery bags back into the house and hobble at the same time. The friendly attendant instead leaps out of the vehicle when it reaches my curb. They come up to my door, ring the doorbell, and provide me with my grocery bags. I'm so happy that I got my groceries brought to my door and did not have to hassle with going out to the vehicle. This could be true too if you were in your pajamas, or maybe drunk from that wild party taking place in your home. The AI self-driving car system isn't going to bridge the "last mile" gap between the vehicle pulling up to your curb, or perhaps parking in your driveway, and the goods reaching your door; having a human attendant would. Think too that the human attendant does not need to know how to drive a car and doesn't need a driver's license. Therefore, the skill set of the human attendant is quite a bit less than if you had to hire a driver.
Also, the AI is doing the driving, and so you don't need to worry about whether the human attendant got enough sleep last night to properly drive the box-on-wheels. Essentially, this human attendant is the equivalent of the "box boy" (or "box girl") that boxes up your groceries in the store (well, in stores that still do so). Having a human attendant can be a handy "customer service" aspect. They can aid those getting a delivery, they can serve to showcase the humanness of the grocer, they can answer potential questions that the human recipient might have about the delivery, and so on. The downside is that by including the human attendant, you are adding cost to the delivery process, and you'll also need to deal with the whole aspect of hiring (and firing) the attendants. Having a human attendant can make deliveries a positive experience, but it can also be a negative. If the human attendant is surly to the person receiving the goods, the humanness of things could backfire on the grocery store. Some say that the box-on-wheels should have a provision to include a human attendant, but then it would be up to the grocer to decide when to use human attendants or not. In other words, if the vehicle has no provision for a human attendant to ride on-board, the grocer then has no viable option to have the human attendant go along on the delivery. If you have the provision, you can then decide whether to deploy the human attendant or not, perhaps having the attendant go along during certain hours of the day and not at other times. Or, maybe for an added fee your grocery delivery will include an attendant, and otherwise not. So, why not go ahead and include a space in the box-on-wheels to accommodate a human attendant? We're back to the question of how to best design the vehicle.
If you need to include an area of the vehicle that accommodates a human attendant, you are then sacrificing some of the space that could otherwise be used for storing the groceries. You also need to consider what the requirements of this space must consist of. For example, should it be at the front of the vehicle, akin to if the human were in the driver's seat, or can it be in the back or someplace else? You would likely need to have a window for the person to see out of. There are various environmental conditions that the vehicle design would need to incorporate for the needs of a human. For future job roles as a result of the advent of AI self-driving vehicles, see my article: https://www.aitrends.com/selfdrivingcars/future-jobs-and-ai-self-driving-cars/ For my article on how Gen Z is going to shape the timeline of the advent of AI self-driving vehicles, see: https://www.aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/ For the potential of pranking AI self-driving vehicles, see my article: https://www.aitrends.com/selfdrivingcars/pranking-of-ai-self-driving-cars/ For my article about the public shaming of AI self-driving vehicles, see: https://www.aitrends.com/selfdrivingcars/public-shaming-of-ai-systems-the-case-of-ai-self-driving-cars/ This brings up another aspect about the box-on-wheels design, namely whether it can potentially drive in a manner that would be beyond what a human would normally do. Assuming that the groceries are well secured and packaged into the compartments, the box-on-wheels could make sharp turns and brake suddenly, if it wanted or needed to do so. If there's a human attendant on-board, those kinds of rapid maneuvers could harm the human, including perhaps some kind of whiplash or other injuries.
Also, if the box-on-wheels somehow crashes or gets into an accident, if you have a human attendant on-board there need to be protective mechanisms for them, such as air bags and seat belts, while otherwise the only danger is to the groceries. I think we'd all agree that some bumped or smashed groceries are not of much concern, while a human attendant getting injured or maybe killed is a serious matter. Thus, another reason not to have a human attendant involves the risks of injury or death to the human, which, if you are simply doing grocery delivery, adds a lot of risk for the attendant and the grocer. Let's shift attention now to the nature of the compartments that will be housing the goods.
Grocery Bags in Compartments of the Box-on-Wheels
For the delivery of groceries, it is so far assumed that the groceries will be placed into grocery bags and that in turn those grocery bags will be placed into the compartments of the box-on-wheels. This convention of using grocery bags goes back many years (some say that the Deubner Shopping Bag invented in 1912 was the first modernized version) and seems to be a suitable way to allow humans to cart around their groceries (rather than perhaps cardboard boxes or other such containers). The grocery bags are quite handy in that they are something we all accept as a means of grouping together our groceries. It has a familiar look to it. Assuming that the grocery bag has some kind of straps, you can either carry it by the straps or carry the whole bag by picking it up from the bottom or grasping the bag in a bear hug. In that sense, the grocery bag is a simple design allowing for multiple options as to how to carry it. This is mainly important for the purposes of the human recipient and how they are to remove their groceries and then transport them into their abode. For the moment, assume that the grocery store will indeed use a grocery bag for these purposes.
You would want the grocery bag to be sturdy and not readily tear or fall apart – imagine if the box-on-wheels has no human attendant, arrives at the destination, and the human recipient pulls out their bag of groceries and it rips apart, and all of their tangerines and other goods spill to the ground. The human recipient will be irked and likely will not order from that grocer again. Therefore, the odds are that the grocery bag being used for this purpose has to be as sturdy as, if not sturdier than, the simple plastic bag or brown bag you get at your local grocery store. The odds are that the grocery store will use some kind of special cloth bag or equivalent, which is durable and can safely hold the groceries and be transported. Likely the grocery store would brand the bags so that it is apparent they came from the XYZ grocery store. The twist to all of this is the cost of those bags and also what happens to them. The cost is likely high enough that it adds to the cost of the delivery overall. Also, if every time you receive a delivery you get and presumably keep the bags, it means that the grocer is going to be handing out a lot of these bags over time. Suppose I get about four bags of groceries every week, and I keep the bags; by the end of a year I've accumulated around 200 of these grocery bags! That's a lot of grocery bags. You might say that the human recipient should put the grocery bags back into the box-on-wheels after emptying them of their goods. That's a keen idea. But, you probably don't want the box-on-wheels to be sitting at the curb while the human recipient goes into their home, takes the groceries out of the bags, and then comes back out to the box-on-wheels to place the empty grocery bags into it. This would be a huge delay to the box-on-wheels moving onward to deliver goods to the next person.
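The bag-accumulation arithmetic above is simple enough to check directly:

```python
bags_per_week = 4
weeks_per_year = 52
print(bags_per_week * weeks_per_year)  # 208 bags, roughly the 200 cited
```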
So, this notion of the empty bag return would more likely need to be done when the human recipient gets their groceries, in that perhaps they might have leftover empty bags from a prior delivery and place those into the compartment when they remove their latest set of groceries. Then, when the box-on-wheels gets back to the grocery store, a clerk there would take out the empty grocery bags and perhaps credit the person with having returned them. This shifts our attention then to another important facet of the box-on-wheels, namely the use of the compartments. We’ve concentrated so far herein on the approach of delivering goods to someone. That’s a one-way view of things. The one-way that we’ve assumed in this discussion is that the grocery store is delivering something to the person that ordered the groceries. The human recipient removes their groceries from the compartment and the compartment then remains empty the rest of the journey of the box-on-wheels for the deliveries it is making in this round. Suppose though that the compartments were to be used for taking something from the person that received delivery goods. Or, maybe the compartment never had anything in it at all and arrived at the person’s home to pick-up something. The pick-up might be intended to then be delivered to the grocery store. Or, it could be that the pick-up is then delivered to someone else, like say Sam. As mentioned earlier, Sam lives some blocks away from you, and perhaps you have no easy means to send over something to him, and thus you use the grocery store box-on-wheels to do so. The possibilities seem endless. They also raise concerns. Do you really want people to put things into the compartments of the box-on-wheels? Suppose someone puts into a compartment a super stinky pair of old shoes, and it is so pungent that it mars the rest of the groceries in the other compartments? 
Or, suppose someone puts a can of paint in the compartment, fails to secure the lid of the paint can, and while the box-on-wheels continues its journey the paint spills all over the inside of the compartment. As you can see, allowing the recipient to put something into the compartment will be fraught with issues. Some grocers are indicating that the recipients will not be allowed to put anything into the compartments. This is perhaps the safest rule, but it also opens the question of how to enforce it. A person might put something into a compartment anyway. They might try to trick the system into carrying something for them. Ways to try and prevent this include the use of sensors in the compartment to try and detect whether anything is in the compartment, such as by weight or by movement. This does bring up an even more serious concern. There are some that are worried that these human unattended box-on-wheels could become a kind of joy ride for some. Imagine a teenager that “for fun” climbs into the compartment to go along for a ride. Or, maybe a jokester puts a dog into a compartment. Or, worse still, suppose someone puts their baby down into the compartment to lift out the grocery bag, and somehow forgets that they left their baby in the compartment (I know this seems inconceivable, but keep in mind there are a number of hot-car baby deaths each year, which illustrates that people can do these kinds of horrifying absent minded things). Besides having sensors in the compartments, another possibility involves the use of cameras on the box-on-wheels. There could be a camera inside each of the compartments, thus allowing for visual inspection of the compartment by someone remotely monitoring the box-on-wheels. You can think of this like the cameras these days that are in state-of-the-art refrigerators. 
Those cameras point inward into the refrigerator, and while at work you can see via your smartphone what's in your refrigerator (time to buy some groceries when the only thing left is a few cans of beer!). We can enlarge the idea of using cameras and utilize the cameras on the box-on-wheels that are there for the AI self-driving car aspects. Thus, once the box-on-wheels comes to a stop at the curb, it might be handy to still watch and see what happens after stopping. Presumably, you could see that someone is trying to put a dog into a compartment. The box-on-wheels might be outfitted with speakers, and a remote operator could tell the person not to put a dog into the compartment. The use of remote operators raises added issues to the whole concept of the delivery of the goods. You are now adding labor into the process. How many remote operators do you need? Will you allow them to actually operate the box-on-wheels, or are they solely for purposes of acting like a human attendant? There are costs involved and other facets that make this a somewhat less desirable addition to the process. On the topic of remote operators, here's another twist for you. Suppose the box-on-wheels arrives at the destination address. It turns out that the curb is painted red, and presumably the box-on-wheels cannot legally stop there. The street is jam-packed with parked cars. There is no place to come to a legal stop. What should the AI of the box-on-wheels do? We all know that a human driver would likely park temporarily at the red curb or might double-park the delivery vehicle. But, do we want the AI to act in an illegal manner? How else though will it solve the problem? You might say it needs to find a legal place to park, but that might be blocks away.
You might say that people receiving the delivery will need to arrange for a legal place for the box-on-wheels to stop, but that's a pretty tall order in terms of having to change the infrastructure of the street parking and dealing with local parking regulations, etc. For my article about the illegal driving aspects of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/ For the parking of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/parallel-parking-mindless-ai-task-self-driving-cars-time-step/ Some believe that with a remote human operator you might be able to deal with this parking issue by having the remote operator decide what to do. The remote operator, using the cameras of the AI self-driving vehicle, might be able to see and discern where to park the box-on-wheels. Would the remote operator directly control the vehicle? Some say yes, but if that's the case then the question arises as to whether they need to be licensed to drive, which opens another can of worms. Some therefore would say no, and that all the remote operator can do is make suggestions to the AI of where to park ("move over to that space two cars ahead"). This though can be a kind of splitting of hairs, since it might be argued that a remote operator giving parking instructions is no different from the operator actually driving the vehicle. For my article about remote operators of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/remote-piloting-is-a-self-driving-car-crutch/ Here's another facet to consider. How long will the box-on-wheels be at a stopped position and allow for the removal of the goods? From the grocer's viewpoint, you would want the stopped time to be the shortest possible. For every minute that the box-on-wheels sits at the curb waiting for the delivery to be completed, it is using up time to get to the next destination.
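The stop-time concern lends itself to a simple dwell-time check that could alert a remote operator; a sketch, with the planned-unload estimate and grace period invented for illustration:

```python
def stop_overdue(planned_min: float, elapsed_min: float,
                 grace_min: float = 2.0) -> bool:
    """Flag a stop for a remote operator once the vehicle has been
    parked longer than the planned unload time plus a grace period."""
    return elapsed_min > planned_min + grace_min

print(stop_overdue(4.0, 3.5))   # still within the planned window: False
print(stop_overdue(4.0, 20.0))  # well past the estimate: True, flag it
```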
Those further along in the delivery cycle are all waiting eagerly (or anxiously) for the box-on-wheels to get to them. Suppose a person comes out to the box-on-wheels, opens the compartment designated for their delivery, and for whatever reason rummages around in the grocery bag, maybe doing an inspection to make sure the bag contains what they ordered. They decide to then slowly remove the bag and slowly walk up to their home and slowly put the bag inside the home. Meanwhile, they have four other bags yet to go that are sitting in the compartment. They walk out slowly to get the next bag. And so on. If the system had calculated beforehand that it should take about four minutes to remove the bags by the recipient, it could be that this particular stop takes 20 minutes or even longer. How can you hurry along the recipient? If you had a human attendant, you’d presumably have a better chance of making the deliveries occur on a timelier basis. Without the human attendant, you could possibly use a remote human operator to urge someone to finish… Read more »
  • Thought Leadership: Tom Hebner of Nuance Communications
    Nuance, Leveraging AI, Offers New Project Pathfinder Tool for Crafting Intelligent Conversation Apps
    Tom Hebner, Head of Product Innovation, Voice Technology and AI at Nuance Communications, runs the team focused on creating new products and solutions for Voice Technology and Conversational AI. He recently took a few minutes to speak with AI Trends Editor John P. Desmond. Q. How is Nuance doing AI today? I know you have a long history in voice recognition. What makes what you do there now true AI? A. Good question, especially since we're in an interesting world right now where it's the perception of AI versus the reality of AI. What many people might not know is that speech recognition is AI technology. Natural language understanding is AI technology. Machine learning is applied AI technology. And just because it was around before the "AI revolution" doesn't mean that it's not AI. So, there's a lot of buzz and hype right now. Along with that comes a lot of startups, and even some of our friends in the big technology arena, that are coming out and talking about conversational AI and AI in general, saying that there are these new capabilities and new things happening. We have speech recognition, as well as natural language processing. In fact, we have technologies that have been around for 20 years – technologies that have been tuned and optimized over the past two decades. Essentially, we have a very mature process around how to bring this AI technology to market. There is a perception that AI should be a single brain that knows and learns. However, the technology is not there yet. No one has delivered a single brain that totally understands somebody's enterprise and converses with that organization's customers because, frankly, that's science fiction. It doesn't exist yet. This all said, it doesn't mean there isn't strong AI-based technology that can be used to deliver solutions. Q. We do have a lot of buzz around AI these days, as you mentioned.
What's your view on where the industry needs to go? And what is Nuance doing to make it happen? A. Along with the buzz came a lot of "easy-to-use tools," and claims like: "You can build a bot in a minute. You can build a bot in the afternoon. We'll make it easier for you to get your bot up and running and doing its thing." Well, those bots that you can make in the afternoon are very, very simple bots. They are question-answer bots. They are bots that aren't necessarily bringing real value to businesses or consumers. For example, let's say you contact your auto insurance company about a rock that came up from the road and cracked your windshield. A whole business process must be followed to address that. If you are going to build conversational AI around that conversation, you have to design it with the expertise required to craft a conversation around that business process. The buzz has made people think this is very easy, but it's not as simple as plug and play. And it's not something that just anyone can do. What we're seeing with some companies is that they are trying to do this on their own. They're saying, "Hey, this is so easy. We don't need to hire the experts. We can build this on our own." And what they are delivering are poor and unconnected experiences. And that makes the technology look bad, because the reality with conversational AI is that the technology itself doesn't provide the solution. You have to build the solution on top of the technology. The technology is just an enabler that requires an expert skill. We coined a term at Nuance called VUI (Voice User Interface) design, which is also sometimes called Conversation Experience design. Our people working on it have psychology backgrounds, literature backgrounds, linguistics backgrounds. We even have a handful of PhDs in psycholinguistics, which is the psychology of language. (Yes, that field does exist.)
So, the AI buzz can make people think this is super easy when the reality is that it's an expert skill. We see two directions this has to go, both of which we are taking right now at Nuance. One is we must make that pro skill, that VUI design skill, more data-driven and require less effort. Right now, it's a totally manual job where a VUI designer has to sit down with the subject-matter expert. Whatever we learn from the subject-matter expert gets written down into a conversation flow. That is reviewed, and then the system ultimately gets built. That is all a process involving humans. We've just announced Project Pathfinder. Already, we've done some proof-of-concept work with our customers, taking conversations happening today in the contact center between two humans, ingesting those and graphically exposing the entire call flow. We can auto-generate that dialog flow – the conversation between two humans – and use that as our basis for building out a bot. That's an advancement that no one else has right now. We have advanced our technology over 20 years, and now we are turning our attention to the design process. Pathfinder is aimed at making the design process a lot more data-driven. Our second direction right now is to actually deliver on this promise of AI as a single brain that knows all about customers. Enterprises have huge volumes of data about their customers. They can leverage that data when they are having a conversation with a customer to make predictions with less effort. We are working on that key area, which we think will bring even more value to enterprises and their customers. Q. What is making the biggest impact from a customer ROI perspective? Are there any industries where it's really making a positive impact to the bottom line? Are there any notable customer deployments that you'd like to mention? A. The reason why there's such an AI revolution now is that compute power and data are a lot cheaper.
These technologies are now available to the masses, where previously they had been available only to large enterprises and a few niche players with the money and the volume to really use them. We have several customers saving a lot of money every day with AI technology. In our phone channel, we have large customers getting up to 30 million incoming phone calls a month. It’s the high-cost channel for all our customers. The traditional solution had been to offshore customer support, which tended to have a negative impact on customer satisfaction. Customers want a human touch and to have their problem solved quickly and easily, and that’s where AI technology is making a huge impact on the bottom line: by understanding and solving problems quickly and efficiently. A study we did with one customer showed that its highest satisfaction ratings came from AI-based self-service automation. So people may complain about using a bot, but when the bot has been well designed, they are delighted to use it.

Q. Many companies today are offering conversational AI. Is there anything specifically Nuance is doing to stand out?

A. The way we stand out is twofold. One, the technology itself doesn’t deliver the ROI. That’s the black-and-white reality. New entrants to the market today are coming with technology only and saying, “Hey, we have a better way of doing natural language understanding. We have a better way of doing speech recognition. We have a better way of doing these things.” The reality is, many of them are doing things exactly the same way we’ve been doing them; some are doing things slightly differently, but either way, that’s just technology. The way natural language understanding is used in the enterprise space is mainly to get the intent: “What is it you’re contacting us for?” And that functionally solves the problem. The accuracy of recognizing an intent today is in the low- to mid-90 percent range.
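As a rough illustration of what intent recognition means, here is a toy sketch. The intents and example phrases are hypothetical, and production systems use trained statistical models rather than the simple word-overlap scoring shown here:

```python
from collections import Counter

# Hypothetical intents with a few example utterances each -- purely
# illustrative, not any vendor's real training data.
INTENT_EXAMPLES = {
    "file_claim": ["a rock cracked my windshield", "i need to file a claim"],
    "billing": ["question about my bill", "why did my premium go up"],
}

def tokenize(text):
    return text.lower().split()

def classify_intent(utterance):
    """Score each intent by word overlap with its examples; highest wins."""
    words = set(tokenize(utterance))
    scores = {
        intent: sum(len(words & set(tokenize(ex))) for ex in examples)
        for intent, examples in INTENT_EXAMPLES.items()
    }
    return max(scores, key=scores.get)

print(classify_intent("my windshield got cracked by a rock"))
```

Even this crude scheme shows the shape of the task: map a free-form utterance onto one of a fixed set of business intents, which then drives the rest of the conversation flow.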
Being able to get to that level of accuracy is really, really close to human capability. While achieving this accuracy does involve technology, it also requires a knowledge base and understanding. So while many of these folks are coming to market with conversational AI technology, they’re bringing technology without the knowledge and expertise, and they’re not actually solving the problem or request.

At Nuance, we take pride in our technology and in our large solutions team, which builds custom conversational-AI applications directly with our customers. We understand their business processes. We sit with them and bring in our experts in healthcare, banking, airlines, or whatever other industry they are in. We bring in the VUI designers to help craft these intelligent conversation applications on top of the AI technology. One of our big differentiators is that we’re not just selling our world-class technology (although no one out there has technology that differentiates above ours); we’re also selling a solutions team that can actually deliver these solutions.

The second major differentiator is that, because we’ve been in this space for so long, we are good at understanding the challenges, which extend well beyond technology. Project Pathfinder, for example, takes in unstructured, unlabeled conversational logs and creates a graph from them. No one else is doing that; no one else even knows they have to do that, because they’re so new to the space. So technology and solutions are keeping us ahead of the competition.

Thank you, Tom! For more information, go to Nuance.
  • Guide to Your Personal Data and Who Is Using It – From Wired
ON THE INTERNET, the personal data users give away for free is transformed into a precious commodity. The puppy photos people upload train machines to be smarter. The questions they ask Google uncover humanity’s deepest prejudices. And their location histories tell investors which stores attract the most shoppers. Even seemingly benign activities, like staying in and watching a movie, generate mountains of information, treasure to be scooped up later by businesses of all kinds.

Personal data is often compared to oil: it powers today’s most profitable corporations, just like fossil fuels energized those of the past. But the consumers it’s extracted from often know little about how much of their information is collected, who gets to look at it, and what it’s worth. Every day, hundreds of companies you may not even know exist gather facts about you, some more intimate than others. That information may then flow to academic researchers, hackers, law enforcement, and foreign nations, as well as plenty of companies trying to sell you stuff.

What Constitutes “Personal Data”?

The internet might seem like one big privacy nightmare, but don’t throw your smartphone out the window just yet. “Personal data” is a pretty vague umbrella term, and it helps to unpack exactly what it means. Health records, social security numbers, and banking details make up the most sensitive information stored online. Social media posts, location data, and search-engine queries may also be revealing, but they are typically monetized in a way that, say, your credit card number is not. Other kinds of data collection fall into separate categories, ones that may surprise you. Did you know some companies are analyzing the unique way you tap and fumble with your smartphone? All this information is collected on a wide spectrum of consent: sometimes the data is forked over knowingly, while in other scenarios users might not understand they’re giving up anything at all.
Often, it’s clear something is being collected, but the specifics are hidden from view or buried in hard-to-parse terms-of-service agreements. Consider what happens when someone sends a vial of saliva to 23andMe. The person knows they’re sharing their DNA with a genomics company, but they may not realize it will be resold to pharmaceutical firms. Many apps use your location to serve up custom advertisements, but they don’t necessarily make it clear that a hedge fund may also buy that location data to analyze which retail stores you frequent. Anyone who has witnessed the same shoe advertisement follow them around the web knows they’re being tracked, but fewer people likely understand that companies may be recording not just their clicks but also the exact movements of their mouse.

In each of these scenarios, the user received something in return for allowing a corporation to monetize their data. They got to learn about their genetic ancestry, use a mobile app, or browse the latest footwear trends from the comfort of their computer. This is the same sort of bargain Facebook and Google offer. Their core products, including Instagram, Messenger, Gmail, and Google Maps, don’t cost money. You pay with your personal data, which is used to target you with ads.

Who Buys, Sells, and Barters My Personal Data?

The trade-off between the data you give and the services you get may or may not be worth it, but another breed of business amasses, analyzes, and sells your information without giving you anything at all: data brokers. These firms compile info from publicly available sources like property records, marriage licenses, and court cases. They may also gather your medical records, browsing history, social media connections, and online purchases. Depending on where you live, data brokers might even purchase your information from the Department of Motor Vehicles. Don’t have a driver’s license? Retail stores sell info to data brokers, too.
The information data brokers collect may be inaccurate or out of date. Still, it can be incredibly valuable to corporations, marketers, investors, and individuals. In fact, American companies alone are estimated to have spent over $19 billion in 2018 acquiring and analyzing consumer data, according to the Interactive Advertising Bureau.

Data brokers are also valuable resources for abusers and stalkers. Doxing, the practice of publicly releasing someone’s personal information without their consent, is often made possible because of data brokers. While you can delete your Facebook account relatively easily, getting these firms to remove your information is time-consuming, complicated, and sometimes impossible. In fact, the process is so burdensome that you can pay a service to do it on your behalf.

Amassing and selling your data like this is perfectly legal. While some states, including California and Vermont, have recently moved to put more restrictions on data brokers, they remain largely unregulated. The Fair Credit Reporting Act dictates how information collected for credit, employment, and insurance reasons may be used, but some data brokers have been caught skirting the law. In 2012, the “person lookup” site Spokeo settled with the FTC for $800,000 over charges that it violated the FCRA by advertising its products for purposes like job background checks. And data brokers that market themselves as being more akin to digital phone books don’t have to abide by the regulation in the first place.

There are also few laws governing how social media companies may collect data about their users. In the United States, no modern federal privacy regulation exists, and in many circumstances the government can legally request digital data held by companies without a warrant (though the Supreme Court recently extended Fourth Amendment protections to a narrow type of location data).
The good news is, the information you share online does contribute to the global store of useful knowledge: researchers from a number of academic disciplines study social media posts and other user-generated data to learn more about humanity. In his book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, Seth Stephens-Davidowitz argues there are many scenarios where humans are more honest with sites like Google than they are on traditional surveys. For example, he says, fewer than 20 percent of people admit they watch porn, but there are more Google searches for “porn” than “weather.”

Personal data is also used by artificial intelligence researchers to train their automated programs. Every day, users around the globe upload billions of photos, videos, text posts, and audio clips to sites like YouTube, Facebook, Instagram, and Twitter. That media is then fed to machine learning algorithms so they can learn to “see” what’s in a photograph or automatically determine whether a post violates Facebook’s hate-speech policy. Your selfies are literally making the robots smarter. Congratulations. Read the source article in Wired.
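To make that last idea concrete, here is a toy sketch of how labeled user posts can train a classifier that then scores new posts. The labels and posts are invented, and real moderation systems use far more sophisticated models; this only shows the basic train-then-predict loop the article describes:

```python
from collections import Counter

def train(posts):
    """Count word frequencies per label from labeled user-generated posts.

    `posts` is a list of (text, label) pairs -- the kind of labeled user
    data the article describes being fed to learning algorithms.
    """
    counts = {}
    for text, label in posts:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Label a new post by which class shares the most word mass with it."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

# Toy "policy" dataset: everyday posts vs. spammy ones.
model = train([
    ("cute puppy photo", "allowed"),
    ("sunset beach photo", "allowed"),
    ("spam buy now cheap", "violation"),
    ("cheap spam offer", "violation"),
])
print(predict(model, "buy cheap now"))
```

The point is the data dependency: the model is nothing but aggregated statistics over what users uploaded, which is why those uploads are so valuable to the companies collecting them.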