There are five important elements you need to succeed in business:
Our proprietary technology allows us to offer you unprecedented services:
- Unlimited traffic
- Fast reaction, weeks instead of months or years
- And best of all, we deliver or it is free
The advantage of using technology to generate traffic is that it frees you to work on the other four factors that determine your success.
So even if you fail more than once, and even alienate some prospects, our technology keeps bringing you more and more people, so you can keep correcting your mistakes until you get it right and succeed in your business. This is what we do for you: provide unlimited traffic, as much as you want. You do the rest.
Just contact us for a free consultation.
Your products or services are of paramount importance; your visitors must want what you have to offer. Obvious, right? But even if you fall short here, you can correct the issue, because we keep bringing you people. What good is the best product if no one comes?
All these visitors found you online, so if your price is not competitive, they can just as easily find your competitors.
You must be able to explain quickly and clearly what you offer.
How you deliver, get paid, and so on: this is probably the easiest factor, but it is just as important.
- Bringing AI to Business Users With Microsoft Power Platform, Acquisitions of Tableau by Salesforce, Looker by Google
By AI Trends Staff
Three big recent tech moves, one announcement and two acquisitions, exemplify the trend of making data more available to business users, especially for applications such as dashboards incorporating visualization tools that tap into AI intelligence. Abstracting away the complexity of building machine learning models is the aim of Microsoft with its Power Platform announcements made on June 11. The company tried to bring low-code simplicity to building applications last year when it announced PowerApps. Now it believes that by combining PowerApps with Microsoft Flow and its new AI Builder tool, it can allow people building apps with PowerApps to add a layer of intelligence very quickly, as reported in TechCrunch. Microsoft's Data Connector tool gives users access to more than 250 data connectors, including Salesforce, Oracle, Adobe and Microsoft services such as Office 365 and Dynamics 365. Richard Riley, senior director for Power Platform marketing, says this is the foundation for pulling data into AI Builder. "AI Builder is all about making it just as easy in a low-code, no-code way to go bring artificial intelligence and machine learning into your Power Apps, into Microsoft Flow, into the Common Data Service, into your data connectors, and so on," Riley told TechCrunch. While Microsoft admits AI Builder won't be something everyone uses, it does see a kind of power user who might previously have been unable to approach this level of sophistication on their own, building apps and adding layers of intelligence without a lot of coding. If it works as advertised, it will bring a level of simplicity to tasks that were previously well out of reach of business users, without requiring a data scientist. Regardless, all of this activity shows data has become central to business, and vendors are going to build or buy to put it to work.
Salesforce Buys Tableau to Boost Analytics
Salesforce announced on June 6 the acquisition of Tableau in an all-stock deal valued at $15.3 billion, as reported in Fortune, in a bid to build out its analytics offering. This will be the largest acquisition to date for Salesforce, according to data compiled by Bloomberg. Tableau will remain headquartered in Seattle and will continue to be led by CEO Adam Selipsky, a former Amazon.com Inc. executive who has been transitioning Tableau's software tools to cloud-based subscriptions. Known for its chart applications and analytics dashboards, Tableau has been broadening its product line to include data cleanup and machine learning tools, enabling it to compete in the wider data-warehousing market.
Google Buys Looker for $2.6 Billion in Cash
Also on June 6, Google's parent Alphabet announced that it plans to pay $2.6 billion in cash for Looker, a data analytics tool maker. Among the reasons Google made the move, according to an account in Forbes, is to better participate in the large, fast-growing data analytics market. In August 2018, IDC said that "Worldwide revenue for big data and business analytics (BDA) solutions was $166 billion, up 11.7% over 2017 and [would] reach $260 billion in 2022, with a compound annual growth rate (CAGR) of 11.9%." Also, Looker has been growing faster than the market. As CEO Frank Bien told Forbes last December, "Looker is doing really well. We raised $103 million in a Series E bringing our total capital raised to $280 million. We have 1,600 customers, our revenue run rate is over $100 million and we are enjoying 70% year over year growth." Looker was growing as IT departments sought a better solution to their data analytics problems. "People are tackling the problem of data. Technology is a mess. There are tools for visualization, cataloging, and data preparation. We reconstitute these functions in a single platform.
We bring business information such as revenue, bookings, and the lifetime value of the customer into the hands of users and companies," said Bien. Also, the move could help drive Google's cloud business. Former Oracle president of product development Thomas Kurian became Google Cloud CEO in November 2018. He has been quoted as saying he wants to expand the business and accelerate its growth. Google is far behind the cloud leaders. According to Canalys, Amazon had 32% of the cloud market, Microsoft had 13.7% and Google had 7.6% at the end of 2018. CNBC reported that Google "does not break out revenue for its cloud business but has said it brings in more than $1 billion per quarter between its public cloud and cloud apps." Google, whose venture capital arm invested in Looker, said in its press release that it already shared over 350 customers, including BuzzFeed, Yahoo and Hearst, through an existing partnership. It appears that Kurian does not intend to force Looker to stop working with its non-Google-cloud clients. As CNBC reported, he told journalists on a conference call that "Looker will continue to work with other products, like Amazon Web Services' Redshift and Microsoft's Azure SQL Server. Looker competitors include Domo and Tableau, along with Microsoft's Power BI and AWS' QuickSight." If Google can keep Looker's talent and sustain its growth, this deal may help Google gain market share in the cloud. Read the source articles in TechCrunch, Fortune and Forbes.
- Procrastination As A Human Behavior Imbued Into AI Systems: The Case For Autonomous Cars
By Lance Eliot, the AI Trends Insider
Procrastination. Procrastinator. As the old joke goes, when someone asks you what the word procrastination means, you are supposed to say "I'll get back to you about that." I'm sure we've all felt like a procrastinator at one time or another. Often considered a negative aspect of human behavior, procrastination is likened by some to being lazy, careless, and otherwise less desirable than being prompt and proactive. We might, though, be making a false generalization about procrastination. Does being a procrastinator always have to be bad? It is said that the prolific and ingenious inventor and artist Leonardo da Vinci was known for dragging out the works he owed his patrons, often taking nearly forever to get done what he had been obligated to produce. Charles Darwin acknowledged that he often put off things he was supposed to do or wanted to do. Few realize that he took years to write his acclaimed "On the Origin of Species" and was at times using his time to instead study barnacles. If these greats had bouts with procrastination, can it really be that bad a thing? If you find yourself falling into the procrastination trap and don't want to be a procrastinator, you can always call upon Saint Expedite. For those of you who have been to New Orleans, or have certain religious interests, you likely know that Saint Expedite is considered the patron for those who want to avert being a procrastinator, and you can ask for his assistance in finding rapid solutions to nagging problems. I suppose, too, there are some who say you just need to kick yourself in the you-know-where, but anyway it can be hard to stop the urge to procrastinate however you try to prevent it. Psychologically, it is theorized that people often procrastinate because they fear the act of doing something that might fail. As such, in order to avoid failure, they postpone it.
Presumably, by not trying, you convince yourself that you are better off. You might even use trivial items to help yourself be a procrastinator. Should I go to the dentist, or maybe instead I need to wash my socks and clean the latrine? One might think that dealing with the health of your teeth would be a higher priority than those other tasks. It's amazing how we can allow seemingly low-priority items to help us avoid dealing with meatier issues or topics. Some people are only occasional procrastinators. Perhaps most of the time they get things done on a timely basis, but a particular situation might push them into procrastination mode. A personal example is that the other day my car dashboard lit up with an indicator that my low-beam left headlight bulb was burnt out. I probably should have taken my car right away to have the lamp replaced. Instead, I sheepishly admit that for several weeks I drove at night with the high-beams on, rather than the low beams, so that I'd have both headlamps working. I truly meant to go over to have the headlight replaced, but seemed to let other matters take higher priority. Sometimes procrastination can have potentially dangerous consequences. You could say that my example of driving around with my high-beams on, rather than using my low beams, made my driving circumstances a bit less safe. Not much less, I'd argue, but certainly a little bit. On the other hand, a few days after I had the left headlight replaced, my left rear brake light suddenly went out. It was a Murphy's Law kind of curse, because had it gone out just a few days earlier, I could have had it replaced at the same time as the headlight. But, no, it had to wait and then force me to make a second trip to the car repair shop. In theory, I could have driven for many days or weeks with the left rear brake light out, but this for sure would have reduced the safety factor of my driving.
Theories About Our Procrastination
There are the perennial or serial procrastinators. These are the types that just seem to shove everything off into the future. No reason to get something done today if you can hand it off to the future, they believe. They might overtly hold this belief and relish it. Others find themselves gradually getting immersed in this approach, almost without consciously realizing they are doing so. It can be a deathly kind of spiral. You beat yourself up for having procrastinated. It happens again. You beat yourself up again. You then become convinced that you are a "failure" and destined to procrastinate. Nothing is going to get you out of that spiral, other than some kind of direct intervention. There's a well-known theory that somewhat covers this, called Temporal Motivation Theory (TMT). You might find of interest a core formula often used to express TMT:
Motivation = (Expectancy x Value) / (1 + Impulsiveness x Delay)
Your "Motivation" is the amount of desire that you have to achieve a particular outcome. If your motivation score is low, you are more likely to procrastinate. If your motivation score is high, such as if you realize that your brake light being out is putting you in grave danger, you are more likely to take action about it. We can calculate your motivation for a given circumstance. The "Expectancy" is the probability of achieving success on the matter at hand. The "Value" is the reward that you personally will gain by achieving the desired outcome. By multiplying the Expectancy by the Value, the formula is saying that if one of those variables is low it will bring down their combination, while if both are high it will make their combination higher. I think that my expectancy of fixing my brake light is quite high (I just need to get the car over to the repair shop), and the value of increasing my safety while driving my car is high. The "Impulsiveness" is the person's sensitivity to delay.
Some people are very impulsive and need to do things right away. Other people are more prone to taking their time, or at least consider themselves willing to take time and don't need to handle the matter immediately. The "Delay" is the time until the needed achievement is realized. Anyway, it's an interesting formula because it tries to mathematically express something that we all seem to know is happening, but don't have at hand a tangible way to calculate why we do what we do. Using the formula, you can become more reflective when faced with a situation that you are possibly going to procrastinate on. You can ask yourself why your motivation is so low, and whether it is due to the expectancy, the value, the impulsiveness, or the delay, or possibly some combination of those factors. People often make excuses for why they procrastinate. When I refer to these aspects as excuses, I should clarify that maybe they are valid. We often react to the word "excuse" and think it is a made-up aspect or an attempt to deflect blame. Sometimes an excuse is quite valid. It doesn't have to be a cover story or a deflection. Here are some of the traditional excuses or coping responses:
- Avoidance of the matter
- Denial about the matter
- Trivialization of the matter
- Distraction about the matter
- Mocking of the matter
- Blaming about the matter
Procrastination Can Be A Manifest Strategy
This quick introduction to the topic of procrastination brings us to a major final point before I move on to using this foundation for other purposes herein. I claim that procrastination can occur perchance, but it can also be a manifest strategy. In the case of my left headlight, I was well aware that I was "procrastinating" about taking in the car to have the headlight replaced. I won't try to convince you that I was so busy that I couldn't take it in (that's a potential "excuse" or coping avoidance).
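To make the TMT formula concrete, here is a minimal sketch in Python using the headlight and brake-light examples from above. All of the numeric scores are illustrative assumptions on a 0-to-1-ish scale, not values prescribed by the theory itself:

```python
def tmt_motivation(expectancy, value, impulsiveness, delay):
    """Temporal Motivation Theory:
    Motivation = (Expectancy x Value) / (1 + Impulsiveness x Delay).
    Higher scores mean more drive to act now; low scores predict procrastination.
    """
    return (expectancy * value) / (1 + impulsiveness * delay)

# Brake light: success is near-certain, the safety payoff feels high,
# and the fix is imminent, so the motivation score stays high.
brake_light = tmt_motivation(expectancy=0.95, value=0.9, impulsiveness=0.3, delay=1)

# Headlight: same expectancy, but the high-beam workaround lowered the perceived
# value, and the errand kept slipping weeks into the future (a larger delay).
headlight = tmt_motivation(expectancy=0.95, value=0.3, impulsiveness=0.3, delay=21)

# The formula predicts acting on the brake light first.
assert brake_light > headlight
```

The point of the sketch is the shape of the formula: Expectancy and Value multiply, so either one being near zero kills motivation, while Impulsiveness and Delay sit in the denominator and erode it as the deadline recedes.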
If I had really thought the headlight being out for the low-beam was important, I would have gotten to the car repair shop promptly. Instead, I made an explicit "procrastination" decision that I would delay taking the car in, and that the use of the high-beam was sufficiently acceptable in the interim. Someone outside of my situation, looking at what I had done, might label my actions as those of a procrastinator. Okay, if you want to label me that way for this situation, I'll take it. Is the word "procrastination" in this instance a sign that I am a bad person who is careless and lazy? I don't think so. It merely shows that I had calculated that the delay in doing something was, in my view, a proper thing to do. You might wonder if I have now been bitten by the procrastination virus, such that everything I do is infected with procrastination. No, that hasn't happened. Indeed, as mentioned, I opted to take care of the brake light right away, even though it was a hassle, as I had just been to the repair shop and so had to go there a second time (wasting my time, in a sense, other than to effect the repair, which presumably could have been done in one visit had I known the brake light was about to go out too).
Applying Procrastination As Part Of An AI System
What does all of this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are using the core aspects of "procrastination" for two purposes: one is serving as a direct strategy of the AI driving the car, and the other is to deal with what we believe will be a human foible regarding AI self-driving cars. Let's tackle the human foible topic first. As I've said many times, an AI self-driving car is still a car. By this I mean that some people are getting it into their heads that an AI self-driving car will magically work 24×7 and will never have any mechanical problems or breakdowns.
This utopian view of the world assumes that there is some kind of magical fairy dust that we are going to sprinkle onto AI self-driving cars that keeps them from wearing out and from having parts that break. Let's get real. A car is a car. The brake lights are going to go out, just like on a regular car. The oil will need to be changed. The transmission will need to get overhauled. This is going to happen more frequently and with deeper impact, since we are expecting AI self-driving cars to be running all the time, whereas today the average car sits unused for most of the day – see my article about this: https://aitrends.com/ai-insider/non-stop-ai-self-driving-cars-truths-and-consequences/ Also, take a look at my overall framework about AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/ Not only will the conventional parts of the self-driving car break down, but you can bet that the specialized add-on parts are going to break down too. The specialized processors that run the AI systems will eventually start to falter and need to be replaced. The sonar devices will eventually need to be replaced. The radar devices will need to be replaced. And so on. See my article about kits and AI self-driving cars: https://aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/ I hope you now agree that an AI self-driving car is a car. It will have all sorts of mechanical problems over time. This will happen to the conventional parts of the car. This will happen to the specialized parts of the car. We don't hear anything about it today because the few AI self-driving cars on the roadways are living pampered lives. They are like thoroughbred horses. The auto maker or tech firm caters to their every need. These self-driving cars are continually getting primped and revived. The odds of having a part break down during one of their journeys are very remote.
Now imagine an AI self-driving car that Joe Smith has purchased and he's using it for himself, for his family, for his friends, and renting it out too for ride sharing. That self-driving car is busy. When something snaps or breaks on the AI self-driving car, we need to consider these ramifications:
- Does the AI self-driving car realize that something is broken or amiss?
- Can the AI self-driving car continue safely operating?
- Is there anything the AI self-driving car can do to get itself repaired?
We cannot assume that the AI will even know that something on the self-driving car is broken. We're developing our AI self-driving car software to purposely try to detect that something has gone afoul. Many of the auto makers or tech firms consider this to be an "edge" problem. An edge problem is something that you don't consider core to what you are doing. Most of the auto makers and tech firms just want an AI self-driving car to deal with driving the car. Right now, since these self-driving cars are pampered, there's no need to deal with detecting anomalies automatically and having to deal with them immediately and directly.
Example In The Use Case Of Autonomous Cars
Here's how procrastination comes into play. Suppose the human owner becomes aware that some aspect of their AI self-driving car has gone afoul. Maybe an occupant riding in it called the owner to complain. Maybe the AI detected that something was amiss and texted the owner to indicate that the self-driving car is having a problem. What will the human owner do? You might assume that the human owner will promptly make sure that the AI self-driving car gets repaired. This would happen in the utopian world. In the real world, we're betting that the human owner is possibly going to procrastinate.
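The ramifications just discussed amount to a small decision flow: detect the fault, decide whether driving can continue, and notify someone. Here is a minimal, hypothetical sketch of such an anomaly handler; the class and function names are my own invention for illustration and do not come from any real autonomous-driving system:

```python
from dataclasses import dataclass

@dataclass
class SensorStatus:
    name: str
    working: bool
    required: bool  # True if the car cannot operate safely without this sensor

def handle_anomaly(sensors, notify_owner):
    """Hypothetical decision flow for a self-driving car that checks its sensors.

    Returns "continue" (all good), "degraded" (keep driving with reduced
    capability), or "pull_over" (a safety-critical sensor has failed).
    """
    faults = [s for s in sensors if not s.working]
    if not faults:
        return "continue"                   # nothing amiss detected
    notify_owner([s.name for s in faults])  # always tell the owner what failed
    if any(s.required for s in faults):
        return "pull_over"                  # cannot continue safely
    return "degraded"                       # continue, with known limitations

# One of ten cameras is out: the car keeps going, but the owner is alerted.
alerts = []
sensors = [SensorStatus("camera_3", working=False, required=False),
           SensorStatus("lidar", working=True, required=True)]
assert handle_anomaly(sensors, alerts.append) == "degraded"
assert alerts == [["camera_3"]]
```

Even in this toy form, the key design point from the article is visible: the "degraded" branch exists precisely so that a policy decision (keep driving with nine cameras, or go to the repair shop) can be made explicitly rather than by accident.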
If there are, say, ten cameras on the AI self-driving car, and it's been designed to be able to operate with just nine, though not as well as it could with ten cameras, the human owner might decide to keep the AI self-driving car going and not take it in for repair. Is this safe? Maybe yes, maybe not. Will the next occupant that gets into that AI self-driving car even know that it is only operating with nine cameras? Possibly not. Should we ensure that the AI of the self-driving car warns any passengers about any anomalies? The owner of the self-driving car might not like that idea, and be worried that, as a ride sharing rental, they'll lose money if the AI starts to blab about what is wrong with the self-driving car. There aren't any regulations that force the AI to reveal what's going on. The owner likewise is not under any direct law to do so, though you could construe various aspects about safety and the public that might turn this into a crime. For the moment, the use of AI for self-driving cars is so new that we have yet to figure out what twists and turns will happen, nor do we yet know what kinds of regulations and new laws might be required. See my article about lawsuits and AI self-driving cars: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/ See my article about regulations and AI self-driving cars: https://aitrends.com/ai-insider/assessing-federal-regulations-self-driving-cars-house-bill-passed/ The overall point is that we need to anticipate the dangers of potential "procrastination" based on human foibles, and how it might adversely impact AI self-driving cars and their safety on our roadways. Our focus for the moment is aimed at the technical side of things. The AI needs to be able to detect that something is amiss.
When this occurs, the AI has to be sophisticated enough to know how to try to overcome the aspect that went afoul, if it can, and be aware of any new limitations that arise (maybe the self-driving car can only go 5 miles per hour and no faster, or maybe it can make right turns but not any left turns, etc.). And it has to be aware of whom to contact about the matter. There are some that say this is "easily" solved because once the AI self-driving car detects that something is amiss, it can just route itself to the nearest repair shop. Problem solved. Well, not quite. Maybe the AI self-driving car is unfit to get to the nearest repair shop. Maybe it is fit to do so, but perhaps it's okay for it to instead continue on its journey and go to the repair shop later. Maybe it cannot itself detect that it is in trouble. As such, the AI should allow for the human occupants to possibly tell it that something is amiss ("we see smoke coming out of the engine compartment"), and possibly even communicate via V2V (vehicle-to-vehicle communications) and be told by another AI self-driving car that something is wrong (self-driving car X12345 transmits a warning to self-driving car Y87654 that there is smoke coming from the engine compartment).
Leveraging Procrastination In Driving
The second part of the procrastination aspect related to AI self-driving cars involves using procrastination as a driving strategy. I know that you might be somewhat surprised or shocked at this idea of using procrastination purposefully. Remember, though, that I earlier stated that procrastination can occur by happenstance, or it can be used as a directed strategy. Suppose I'm driving on the freeway. My exit is up ahead. I do the right thing and, long before the exit, make my way over to the rightmost lane. I sit there in the slow lane, maybe a mile ahead of my exit. I'm going bumper-to-bumper, but it's okay because at least I know I am securely in my needed exit lane.
Do humans really drive this way? Some do, many do not. What's actually more likely is that I'll "procrastinate" and put off getting into the slow lane until the last moment, just in time to make that exit. This is more efficient driving, from the perspective of most drivers (I realize that you traffic researchers out there would argue that this is lousy driving that worsens traffic, and that it is unsafe driving, but that's a different debate for another day). We are building "procrastination" into the AI of self-driving cars. Rather than being the goody-two-shoes kind of driver, our view is that if self-driving cars are ultimately supposed to be able to drive as well as a human, they should be adopting human driving techniques. The utopians out there are going to go ape, since they believe that all AI self-driving cars will be perfectly civil and obey all laws and be sweet and kind to all other cars on the roads. Maybe. I don't think so. See my article about defensive driving practices and AI self-driving cars: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
Conclusion
Don't misjudge what I am saying. We are not intending to make AI self-driving cars that are daredevils. Our focus is AI self-driving cars that drive as reasonable humans do. There are many driving situations wherein "procrastination" (the good kind) is actually handy to consider as part of the driving repertoire. It shouldn't be used all the time. It shouldn't be used indiscriminately. It should be used in the right situation, at the right time, in the right way. Some want us to achieve AI self-driving cars that can pass a variation of the Turing Test. This means that AI self-driving cars would be able to drive a car in the same manner that a human does.
If we were standing outside watching two cars drive along, and we couldn't see into the cars, and we had to try to say which one was being driven by the human and which by the AI – if we couldn't discern which was which, in a small way the AI has passed a type of Turing Test. See my article about Turing Tests and AI self-driving cars: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/ Today's AI self-driving cars are programmed to drive a car like a teenage novice driver, and indeed less capably, since the AI of today does not have common-sense reasoning and lacks a myriad of other human-thinking qualities. This state of affairs is not going to get us to the vaunted Level 5, which is a true AI autonomous car. Using the technique of purposeful "procrastination" is one of the many ways in which an AI self-driving car can drive more like a human. Oh, you'll need to excuse me; I've avoided doing more development work on the Machine Learning algorithm for our AI system because I opted to instead write this column. I guess that was my allotted "procrastination" act for the day. Copyright 2019 Dr. Lance Eliot. This content is originally posted on AI Trends.
- AI Still Far Away from Mission-Critical Role, DoD's Porter Says
Dr. Lisa Porter, Deputy Under Secretary of Defense for Research and Engineering, had a lot of good things to say about the promise of artificial intelligence (AI) technologies at the GEOINT Symposium on June 4, with one important caveat: AI isn't ready for prime time in Department of Defense (DoD) critical applications, and likely won't be for some time. Porter spoke about AI and DoD applications, and made it clear that the best way to take advantage of AI is to put significant effort into finding a problem that the technology can really help solve. What is most useful, she said, "is a well-structured problem that is suitable to AI … otherwise AI is just a shiny tool." "Not every problem is ideal for AI," she said, and advised attendees to "understand the problem better" as a first step. "Take more time to understand what problem you are trying to solve," she said. "Then see if it's really possible to generate the right kind of AI data." Porter further urged technologists to spend a lot of time at the beginning of the process with end-users, evaluating whether a potential AI project features the right data, reasonable outcomes, and proper metrics to evaluate results. As for mission-critical DoD applications, Porter ticked off a list of problems with the current state of AI development that she said collectively constitute a "very big problem" for using the technology in vital situations. Those include:
- The "brittle" nature of many AI algorithms
- Problems with reproducing consistent results
- Vulnerability to spoofing
- A lack of built-in security features
She also said agencies still seeking to move past legacy IT systems to AI-ready systems are facing a "very hard, heavy lift" in that process.
To companies looking to pitch the Federal government on AI applications, she strongly urged them to “explain why your product is effective,” and to fully discuss data sources, algorithms, and how applications produce consistently repeatable results. Companies that can’t show enough evidence on that front might win a pilot project from DoD, but “people who try to take short cuts get caught in pilot purgatory,” and aren’t likely to win more lucrative contracts, she said. Reporting on AI efforts already underway within DoD, Porter said the agency’s Joint Artificial Intelligence Center (JAIC) “is just starting to get going” after being created a year ago. She said the effort “really has the right focus” on “the impact of AI at scale.” “They realize this is very hard,” Porter said, adding, “It’s all about how we do AI at enterprise level.” “There’s nothing very smart about today’s AI tools … That’s what we need to improve,” she said. The achievement of “common sense” in human-machine teaming would be “nirvana,” Porter added. “That team could be very powerful … All of these things require some degree of cognition.” On the DARPA front, Porter said about one-third of the organization’s current projects involve AI “to some degree.” While advanced technology development remains a daunting task, “we will always be ahead if we play to our strengths,” she said. “Those who cheat and steal from us will never win if we play to our strengths,” including adhering to the rule of law, Porter said. There are probably few Federal officials better positioned to judge the capabilities timeline for AI than Porter. In her current position, she oversees research, development and prototyping activities across the DoD enterprise, along with the activities of the Defense Advanced Research Projects Agency (DARPA), the Missile Defense Agency, the Strategic Capabilities Office, the Defense Innovation Unit, and the DoD Laboratory and Engineering Center enterprise. 
Before her current post, she was executive vice president at In-Q-Tel, and was the first director of the Intelligence Advanced Research Projects Activity (IARPA). Dr. Porter holds a bachelor's degree in nuclear engineering from the Massachusetts Institute of Technology and a doctorate in applied physics from Stanford University. She received the Office of the Secretary of Defense Medal for Exceptional Public Service in 2005, the NASA Outstanding Leadership Medal in 2008, the National Intelligence Distinguished Service Medal in 2012, and the Presidential Meritorious Rank Award in 2013. See the source article at MeriTalk.
- What No One Will Tell You About Data Science Job Applications
By Edouard Harris, CEO and Founder, SharpestMinds
I'm a physicist who works at a Y Combinator startup. Because of what our company does, I get lots of emails asking me for data science career advice. Many of those emails ask very similar questions. Over time, I've developed a few stock answers to the questions I usually get. A few days ago, I got an email that asked most of the questions I usually get in a single message. I wrote an answer and sent it back, but then I realized this was a great chance to expand on my answer and post it in public for anyone who's having trouble finding a data science job, but who doesn't understand why. For every person who has a question, and asks it, there are ten people who have the same question, but don't ask it. If you're one of those ten, then this post is for you. Hopefully you'll find it helpful. Here's the email I got, edited for length:
The Email
Subject: Trying to get a job in data science
I am a college dropout (I start with that because apparently if you don't come out of the womb with a PhD in theoretical physics and 15 years of data science experience, something must have gone wrong with the birth). […] As I drifted through marketing, I found that I liked the data portion the best. I originally skilled up on A/B testing, getting certified by Google in Google Analytics & by Optimizely in their testing platform. Then from there I went into Python, SQL, etc. I just graduated from [well-known data science bootcamp] and I am struggling to even get interviews. I have sent out over 100 applications (even in other cities) and I have had very few interviews. To keep "skilling up" I am doing a Udacity NanoDegree & Dataquest.io. Here is my LinkedIn if you want to check it out: [redacted] I think my lack of academic pedigree is really what is killing me. It is not really skills (though they really need a lot of work and I am doing that).
I am not even getting the interviews to show my skills, so that's why I say that. I had an in-person with [BigCo] and it was my first time ever doing in-person coding or a whiteboard, so that didn't go well. I had a take-home that was a survival analysis from [big startup], and I had never studied that, so that didn't go well. I had a take-home from [BigCo] that got me an in-person, and they passed because of my education. My Answer Hi Lonnie, thanks for reaching out. Here's the truth: depending on where you're submitting your job applications, a 2–3% interview rate could be normal. There are two reasons why, neither of which you'd have any way to know about. Getting Lost in the Crowd The first reason is that most hiring teams use something called an applicant tracking system to tell them where their best candidates are coming from. If you apply through a channel that's given them poor results in the past, they'll spend less time looking at you. Applicant tracking systems can also screen out resumes automatically based on keywords. But I've found most people already know to include the right keywords in their resume, so I won't expand on that here. For example, if you apply for a technical job through Indeed, you're very unlikely to get anywhere. Everybody knows Indeed, and it's easy to apply that way. That means the average person who applies to a job on Indeed is, most likely, an average person. So a hiring manager will spend less time looking at a resume that comes from Indeed, because she expects it to be average. You can get around this problem by applying on websites that most people don't know about yet. Key Values and Y Combinator's Work At A Startup job page are good places to begin. (I know that by posting links to these job boards here, I'm ensuring that more people will know about them. But it's unlikely that either of these will be as big as Indeed in the near future.)
By using websites that most people don't know about yet, you're marking yourself as someone who seeks out opportunities intentionally. The average person who applies to a job on those websites is, most likely, above average. That's why companies pay more attention to applicants who apply through lesser-known channels. Office Politics There's a second reason why applying to generic job boards doesn't work very well. It might be hard for you to believe this, but a lot of the companies that post to job boards (especially big ones) don't actually do it to find people they want to hire. Of course some do, and of course even those that don't sometimes find good candidates that way by accident. But if you're the one who's job hunting, treat those companies as a pleasant surprise. That sounds insane: why the heck would they post to job boards at all, then? What you need to understand is that in most big companies, there's a sharp division between the human resources team (HR) and the engineers. The HR team is usually the one posting the job on Indeed. Unfortunately, HR isn't staffed with engineers, so they can't really tell which candidates are truly talented and which ones aren't. HR only knows how to screen for credentials, which means checking if you went to a good school (did Stanford think you were good enough to get in?) or worked at a good company (did Google think you were good enough to work there?). So here's the dark truth of why your hit rate is 2–3%: HR can't tell the difference between good bootcamps and bad ones. So they have to default to saying "No", because they don't want to waste the engineering team's time looking at bootcamp grads who might not be any good. I've seen this hundreds of times. Luckily there's good news: most engineering teams understand that their HR can't screen for talent. So the best engineering teams hire through networks and back channels instead of job boards.
As a result, my best advice to you is: start embedding yourself in engineer-driven machine learning meetups. There's no magic here: just go to meetup.com, find relevant-looking meetups and start going to them. You'll catch on pretty quickly to which ones are valuable and which aren't. Networking has compounding benefits, so ask smart questions, try to have rewarding conversations, and listen for the inevitable "we're hiring" announcements when folks introduce themselves. The Interview Ritual A quick word about this: the job interview is a dark, mystical ritual. Every company's process is different, and every company thinks its job interview is The One True Job Interview. There are lots of ways to get better at interviewing, but the best way is to do it a lot. So my advice to go to meetups will help you here, too: the more you interview, the better you'll get at it. Even if you bomb your first ones, it's a skill like anything else and you'll pick it up as you go. Lastly, I'm sorry this system is so badly broken. I know it's especially hard for beginners. It isn't fair, but there's light at the end of the tunnel: after you have 1–2 years' experience, the companies start chasing you. The reward is worth the effort. Learn more at SharpestMinds. Read the source post at Towards Data Science. Read more »
- Beware of Plato's Data Cave, a Key Takeaway from AI in Commerce Business Breakfast
Global e-commerce is among the fastest-growing industries in the world, experiencing 18% growth in 2018. Worldwide, consumers purchased $2.86 trillion worth of e-goods in 2018, compared to $2.43 trillion in 2017. Because digital commerce is data-driven, the industry is ripe territory for AI. However, lack of knowledge and uncertainty remain the most prominent obstacles to this technology gaining a stronger foothold. To address these obstacles, deepsense.ai and Google Cloud co-organized a business breakfast to discuss the challenges and opportunities and share their remarks on artificial intelligence (AI) in e-commerce. Joining Google and deepsense.ai were experts from BeeCommerce.pl, Sotrender, and iProspect, all of which deliver sophisticated tools for digital business. Stuck in Plato's Data Cave "When it comes to building AI applications, it's all about the data," said Paweł Osterreicher, Director of Strategy & Business Development at deepsense.ai, during his presentation. He pointed out that the simplest analytics in smaller businesses can be done in an Excel spreadsheet or with pen and paper. Preparing a simple segmentation within a client group or spotting best-performing products is not a huge challenge. But those are only the tip of the iceberg. "The more sophisticated insights we gain, the more complicated the task becomes. And that's where specialized software comes in," he said. "The greatest challenge is a lack of flexibility. There is no jack-of-all-trades among the popular tools, and each has its limitations. The problem is when a tool doesn't fit a company's needs. And, to be honest, that's a common situation," Osterreicher continued. Companies thus often need to tweak the tools at their disposal to make them fit or get used to missing insights from their data. "Most companies process only a fraction of their data and operate with only half the picture.
They are like the prisoners in 'Plato's cave', watching only the shadows customers cast on the wall, with no access to or true grasp of their real form." [Ed. Note: See Allegory of the Cave.] The only way to analyze data in a convenient and cost-effective way is to leverage machine learning models. Machines are able to effectively spot patterns even in seemingly insignificant details. "Sometimes information about how long customers hover over a button or how they go about filling in an online form is a first step to obtaining meaningful information. The model is only as good as the data it was built on," concluded Osterreicher. Image Capture to the Cloud Helping to Reinvent Retail In another presentation, Jakub Skuratowicz focused on the technical aspects of how companies use AI. There are numerous ways for companies to benefit from AI, be it building engagement, personalizing the user experience or detecting fraud. Google's expert showed a new application of image search for omnichannel commerce. First applied by the Nordstrom clothing company, the app enabled users to take a photograph of an item and then search for it in the shop's database. Thus, the customer could quickly buy the product online or check its availability. "By using Google Cloud Platform-delivered machine learning tools, the company reached 95% accuracy in recognizing an item shown in a photograph." AI also thrives in recommendation engines. "It was common to recommend the user another version of the product – a different size of a dress, for example. That's pointless. Why would one need another of the same dress, only slightly bigger?" Skuratowicz asked. Instead, the AI-powered model recommended products that complemented the one that had been searched for, like sunglasses or a scarf to go with the dress. Skuratowicz also showed how AI spots fraudulent transactions in international e-commerce.
“Manual or semi-automatic checking can be effective, but machine learning makes it more scalable,” he said. By applying AI-based solutions, the international logistics provider Pitney Bowes boosted the accuracy of its fraudulent transaction detection by 80% while reducing false-positives by 50%. Read the source post at deepsense.ai. Read more »
- US Senate Produces Bipartisan National AI Strategy Proposal; More Time to Comment for NIST
By AI Trends Staff
More than $2 billion in federal spending and several policy initiatives are the cornerstones of a new bipartisan bill that would create a government strategy for developing artificial intelligence technology. The Artificial Intelligence Initiative Act is the latest legislation to emerge from the new Senate AI Caucus, which is one of several congressional and executive branch groups focusing on the topic, as reported in fedscoop. Two founders of the caucus, Martin Heinrich, D-N.M., and Rob Portman, R-Ohio, are joined by another member, Brian Schatz, D-Hawaii, in sponsoring the bill. Filed on May 21, it aims to "organize a coordinated national strategy for developing AI" to the tune of $2.2 billion in federal investment over the next five years. The provisions include:
- Establishing a National AI Coordination Office to coordinate federal AI efforts.
- Asking the National Institute of Standards and Technology to establish standards for testing AI algorithms and their effectiveness.
- Getting the National Science Foundation to set "educational goals" for things like data bias, privacy, accountability and more.
- Requiring the Department of Energy to build an AI research program for government and academia.
"In order to take full advantage of AI technology, we have to invest in it," Schatz said in a statement. "Our bill will give researchers and innovators the resources to study and further develop AI technology so that we can use it in smart and effective ways." Statements of support for the bill came from technology leaders including units of Amazon, IBM, Microsoft, the University of New Mexico, New Mexico State University and Carnegie Mellon University. The idea of a national AI strategy has been popular in recent months.
And in December 2018 the Center for Data Innovation released a report arguing that “absent an AI strategy tailored to the United States’ political economy, U.S. firms developing AI will lose their advantage in global markets and U.S. organizations will adopt AI at a less-than-optimal pace.” Meanwhile, Still Time to Weigh In on Government AI Standards The Trump administration is giving the public more time to weigh in on a national framework that would guide the growth of artificial intelligence in the coming years. The National Institute of Standards and Technology on Tuesday extended the deadline for submitting comments on technical standards the government should consider for advancing AI technology. Proposals were originally due May 31, but groups now have until June 10 to offer up their ideas, according to an account in Nextgov. NIST granted the extension to accommodate “multiple interested parties” who wanted extra time to draft their frameworks, according to a post in the Federal Register. The agency has received 35 comments so far and expects more to trickle in over the next few days, NIST spokeswoman Jennifer Huergo told Nextgov. Under its executive order on advancing artificial intelligence, the Trump administration charged NIST with exploring technical standards the government could put in place to support “reliable, robust and trustworthy” AI tools. Ultimately, NIST plans to use the comments it receives to help the government ensure the tech is developed responsibly and maintain the country’s edge in the global race for AI dominance. Over the last decade, the government has largely permitted artificial intelligence to develop unobstructed by federal regulations or standards. While the tech advanced significantly under this hands-off approach, critics are now questioning AI’s impact on privacy and civil liberties, as well as the transparency and security of its underlying algorithms. 
NIST is expected to release a draft plan for federal AI standards around mid-August. Read the source articles at fedscoop and at Nextgov. Read more »
- Use Cases Of Wrong Way Driving and Needed AI Coping Strategies For Autonomous Cars
By Lance Eliot, the AI Trends Insider
I sheepishly admit that I have been a wrong way driver. There have been occasions when I drove the wrong way, doing so luckily without leading to any undesirable outcome, and for which I certainly regret having mistakenly gone astray. It has happened on several occasions inside parking garages or parking lots. I'd dare say that many have made the same error and were likely as confounded by poor signage and convoluted paths as I had been. Fortunately, I didn't go up a down alley, nor did I go down an up alley. I just ended up going against the alignment of cars in parking spots and quickly realized that I must be going in the wrong direction. When you suddenly realize that you are heading in the wrong direction, it can be relatively disorienting. How did I get mixed up? Did I miss seeing a sign that warned about going in this direction? The next thought that you have is what to do about the situation. Should you continue forward, even though there is now a chance that you'll come head-to-head with a car that is going in the correct direction? Or should you back up, which at least gets your car going in the proper direction, though a lengthy effort of backing up can have its own dangers? You can also potentially stop the car wherever you happen to be. At least a stopped car would hopefully be less likely to spark a car accident than one that is in motion and going the wrong way. I'm not suggesting that being stopped is necessarily a safe idea and it could still put you and other cars in danger. Even if you do come to a stop, you obviously cannot just sit there until the cows come home and will ultimately need to decide what to do about the situation. For most people, I'd bet that they usually are quick to consider turning around.
In essence, as soon as practical, try to get your car turned around and headed in the proper direction. You might do so by coming to a stop first, and then progressively make a U-turn by going back and forth, assuming that the space in which to turn around is tight. If there is abundant room to turn around, the matter becomes quite simple and involves making a U-arch in as swift a movement as you can. It always seems that just as you start to turn the car around, another car will come toward you. They then wait for you to make your turnaround. You can usually feel the eyes of the other driver boring into you as you "waste" their time while turning around. The other driver probably thinks that you are quite a clod to have gotten yourself into such a predicament. One driver even honked their horn at me during one instance of turning around; I failed to understand the value of honking the horn, since I obviously already knew that I was going in the wrong direction and was trying to rectify the circumstance. Maybe the driver was honking their horn in appreciation for my valiant efforts of turning around (I realize that is the glass-is-half-full perspective of the universe). Fortunately, I've not personally gone the wrong way on a freeway, nor on a highway or a regular street. Deadly Serious Cases Of Wrong Way Driving I've certainly known of such wrong way instances that were committed by others. Just about a month or so ago, a wrong way driver at 2:00 a.m. got onto two of the major freeways here in Southern California, the I-5 and the I-110, and proceeded to drive at speeds of 60 to 70 miles per hour. The crazy driver sideswiped some other vehicles during the ordeal. The police were brave and actually chased after the driver.
It is one thing to be a police officer chasing a speeding car that is going in the proper direction, which already includes a lot of danger, but imagine the heightened risks of chasing after a driver that is going the wrong way and at high speeds. The late hour was fortunate since there wasn't much traffic on the freeways, and the driver ultimately was caught (they were DUI, plus driving a stolen car). I've personally confronted situations involving a wrong way driver coming at me. One of the scariest and most vivid such memories involved a vacation trip to Hawaii with my family. We had rented a car on Maui and were driving around to see the beauty of the island. Going along one of the major highways, Haleakala Highway, there was a grass median that separated the westward side from the eastward side of the road. The grass median was banked and the other side of the road was several feet higher, seemingly providing protective coverage from anyone veering into the other side. There wasn't any fence or structural barrier dividing the two directions. The kids were having a great time in the back of the car and relished our being in Hawaii. As I attentively watched the road up ahead, all of a sudden, I saw a car from the upper banked roadway erratically veer across the grassy median and enter my stretch of road, coming straight at me, barreling along at around 50 to 60 miles per hour. Since I was going the same rate of speed, we were quite rapidly approaching each other, completely head-to-head. This is one game of chicken you never want to be involved in. It was one of those moments in life where time seems to nearly stand still. It was happening so fast that I wasn't even mentally able to digest it fully. My instincts were to try and avoid the car by veering onto the grassy median myself, figuring maybe that was the safest place to be.
I could have veered to my right into the slow lane of the highway, but I thought I'd still be a target of the wayward driver. I guessed that maybe the nutty driver might opt to switch into the other lane, perhaps desperately trying to avoid the head-on collision of our cars, and so the grassy median might have been clear. I doubted that we would both meet in the grassy median and was guessing that the other driver would stay on the highway, even if going in the wrong direction. Just as I was about to make a "panic" swerve up onto the grassy median, in a split second of amazement, I observed the other driver doing the same. I therefore decided to stay in my lane and veer toward my slow lane, aiming to provide as much space as possible between me and the other driver. Sure enough, we zipped past each other with just a few feet to spare. He was on the grassy median and then proceeded further upward and returned to his proper lanes. The whole matter transpired in a few seconds and I almost doubted my own sanity that it even happened at all. There wasn't any other traffic nearby and so there weren't any other third-party witnesses. The other driver had utterly threatened my life and the lives of my family. Meanwhile, the kids in the backseat were oblivious to the ordeal and had kept laughing and singing throughout those highly tense, brow-sweating moments. I'll never know what was in the mind of the other driver. Why did they come down onto my stretch of the highway? What made them opt to go back onto the grassy median, rather than somehow trying to stay going on the highway in the wrong direction? Did this all happen by "accident" in that the driver somehow just messed up, or was this some kind of intentional act for "fun" or "sport" that the driver had in mind? It only took a few seconds for the entire sequence to reveal itself, and yet to this day I remember it as though it took hours to occur, and it forever will be one of the scariest driving moments of my life.
Wrong Way On A Runway There was one other notable wrong way "incident" that I was directly involved in, though it turns out that I was not in imminent peril; my luck held true and nothing untoward happened. This one is rather incredible and certainly beyond the norm. Years ago, I was doing research on the cognitive capabilities of air traffic controllers as part of a research grant focusing on the Human Computer Interface (HCI), also sometimes referred to as Human Machine Interaction (HMI). The questions being explored involved how the air traffic controllers made use of their radar systems for tracking air traffic. How much did the air traffic controller need to keep in their mind? To what degree did the radar scopes aid or hinder their ability to route air traffic? What kinds of improvements could be made in the radar systems and the interface so that they would enhance the abilities of the air traffic controllers? At first, I had air traffic controllers come to our research lab at the university and take various cognitive tests. It was impressive how much of a 3D mental model they could create in their minds, unaided by any system at all. I would tell an air traffic controller that a plane was entering their air space at such-and-such speed, going in such-and-such direction at such-and-such height. I would continue to add more such flights into the airspace, all imaginary, and wanted to see how many such flights they could mentally handle. The twist was that the air traffic controller had to imagine where the planes were as time ticked along. I would say that it was now five seconds since those planes each entered the air space, and ask them where each plane was and whether there was any danger of planes colliding. Eventually, I realized that it would be advantageous to go observe the air traffic controllers in action.
I got permission to go watch the air traffic controllers at LAX (Los Angeles International Airport), considered one of the busiest airports in the United States. These air traffic controllers were considered the top echelon of air traffic controllers, often having worked their way up from other much smaller airports that had much less air traffic and complexity. I wanted to contrast the top air traffic controllers with those that were still working their way up the controller ladder. So, I got permission to visit a relatively small airport and observe the air traffic controllers there. A fellow researcher and I drove out to the airport together. It was a very foggy night and when we arrived at the airport the fog cloaked most of the airport. We arrived at the airport gate and the security guard told us we could drive directly out to the airport tower. He cautioned us to make sure that we obey all traffic signs and drive at a slow speed. This seemed prudent to us and we agreed to do so. My fellow researcher was driving the car at the time. Well, before I say what happened next, allow me to offer my personal “excuse” about what was taking place so that you won’t judge me too harshly. It was so foggy that you could hardly see your hand in front of your face. We drove along at some snail-like speed of maybe 3-5 miles per hour and kept our eyes peeled. We had rolled down the windows of the car in hopes of being better able to see through the fog. The headlights were bouncing their light off the fog particles and we really could not see much of what was ahead of us. While crawling along, we began to see a colored light embedded in the roadway just a few feet up ahead of us. We could also see some painted lines on the roadway. Turns out, we were driving on a runway! That’s a rather stunning wrong way story, I believe. How many people do you know that have driven their cars onto a runway? 
When we realized that we were on a runway, you can imagine that the blood drained from our faces and we both looked at each other in shock. The fog was so thick that we hadn't realized we had meandered onto a runway and we also had no idea which direction would get us clearly off the runway. It turns out too that it was considered an "active" runway that planes could take off from or land upon. Fortunately, the thick fog had temporarily closed off any flights from landing or taking off. Of course, I'm alive today to tell the story, and we were able to eventually find the road that led to the airport tower. For a few moments though, we had an encounter of a frightful nature and agreed not to tell anyone about it at the time. Our personal code of a "statute of limitations" on speaking of the matter has run its course and so I am able to tell the story now. I chalk the whole experience up to the brazenness of youth. Our Collective Fascination With Wrong Way Driving One last quick aspect about driving the wrong way. As a society, we seem to have a fascination with wrong way driving. There are numerous movies and TV shows that depict driving the wrong way. It seems that nearly any blockbuster cop or spy movie has to have its own car chase that involves going the wrong way. One of my favorite such scenes occurred in the movie Ronin, encompassing an elaborately staged and lengthy sequence of going the wrong way on freeways and in tunnels. In terms of why people drive the wrong way, here are some reasons:
- Drunk driving
- Confused driver
- Inattentive driver
- Shortcut driver
- Thrill-seeker driver
- Etc.
There has been extensive research about how to design off-ramps and on-ramps to try and prevent confused or inattentive drivers from going the wrong way. It can be relatively easy to get confused when driving in an area that you are unfamiliar with and inadvertently go up an off-ramp.
Going down an on-ramp is usually a less likely circumstance since the car driver would need to make some sizable contortions to get their car positioned to do so. Going the wrong way on a one-way street would be another common means of wrong way driving. I knew one fellow student in college that used to take a one-way street the wrong way in order to get to campus faster. He loudly complained that the right-way was more convoluted and added at least ten to fifteen minutes to his driving commute. According to him, the one-way was rarely used by other drivers and so he felt comfortable going the wrong way on it. In this case, he was convinced that there was nothing wrong with his shortcut and the “problem” was that the city improperly allocated the street as a one-way in the wrong direction. As far as I know, he lucked out and never got into a car accident on that one-way. He was proud of the fact that he drove that wrong way for several years and never once got a ticket (well, he never got caught). The point being that there are some cases whereby a driver goes the wrong way by intent. My fellow student did so as a shortcut, though I always suspected that maybe he was a bit of a thrill-seeker and got a kick out of going the wrong way. His efforts were completely illegal. He endangered not only himself, but anyone else that was in his car during his trickery and could have endangered any cars that were driving the right-way on that one-way street. When I was with my family in Hawaii, we had another “wrong way” circumstance arise, though it was thankfully much less eventful than my head-to-head situation. We were heading up to a remote waterfall and we had to take a winding road that made its way through a thick jungle. I had rented a jeep, just in case the road became difficult to drive on. There was one road that was a one-way up to the waterfall, and a second road that was a one-way down from the waterfall (each being one lane only). 
The rental agent handed me the keys to the jeep and then offered a word of advice. She told me that portions of the winding road were washed out by recent storms. As such, there would be areas where I would have to drive on the other road, the one that went in the opposite direction. I was a bit dismayed at this bit of news. I clarified that she was telling me to drive illegally by going the wrong way. She shrugged it off and said that everyone knew about it and it was usable and practical advice. Statistics About Wrong Way Driving Related Deaths There are a mixture of circumstances involving drivers that go the wrong way by mistake and other situations involving a driver that intentionally goes the wrong way. Those that are intentionally going the wrong way might do so under-the-table and without any authority to do so, while in other instances it is conceivable that a driver might be purposely instructed to go the wrong way. According to statistics from the NHTSA (National Highway Traffic Safety Administration), there are about 350 or so deaths per year in the United States due to wrong way driving. Any such number of deaths is regrettable, though admittedly it is a relatively smaller number of deaths than from other kinds of driving mistakes (there are about 35,000 car-related deaths per year in the U.S.). There don't seem to be any reliable numbers about how many wrong way instances there are per year, and such instances are usually unreported unless there is a death involved. The total number of miles driven in the United States is estimated at around 3.2 trillion miles per year. One would guess that driving the wrong way happens daily and amounts to perhaps some notable percentage of that enormous number of driving miles.
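To put those rounded figures in perspective, here is a quick back-of-envelope calculation. It uses only the approximate numbers cited in this article (350 wrong way deaths, 35,000 total road deaths, 3.2 trillion miles), not official statistics:

```python
# Back-of-envelope check on the wrong way driving figures cited above.
# All inputs are this article's rounded estimates, not official statistics.

wrong_way_deaths_per_year = 350        # NHTSA estimate cited above
total_road_deaths_per_year = 35_000    # approximate annual U.S. car-related deaths
miles_driven_per_year = 3.2e12         # approximate annual U.S. vehicle-miles

# Wrong way fatalities as a share of all road fatalities: about 1%.
share_of_road_deaths = wrong_way_deaths_per_year / total_road_deaths_per_year

# Wrong way fatalities per billion miles driven: on the order of 0.1.
deaths_per_billion_miles = wrong_way_deaths_per_year / miles_driven_per_year * 1e9

print(f"Share of road deaths: {share_of_road_deaths:.1%}")
print(f"Deaths per billion miles: {deaths_per_billion_miles:.2f}")
```

On these rough numbers, wrong way driving accounts for roughly one in a hundred U.S. road deaths, or about one death per nine billion miles driven, which squares with the point that fatalities are comparatively rare even though wrong way incidents themselves are surely far more frequent and largely unreported.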
Fortunately, it would seem that the number of actual accidents due to wrong way driving is quite small, but this is likely due to the wrong way driver quickly getting themselves out of their predicament and also the reaction of right-way drivers to help avoid a collision. In essence, it might not be happenstance that wrong way driving doesn't produce more problems. It seems more likely that it is due to the human behavior of trying to avert problems when a wrong way instance occurs. AI Autonomous Cars And The Matter Of Wrong Way Driving What does this have to do with AI self-driving cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving driverless autonomous cars. There are two key aspects to be considered related to the wrong way driving matter, namely how to avoid having the AI self-driving car go the wrong way, and secondly what to do if the AI self-driving car encounters a wrong way driver. Plus, a bonus topic, namely what about having an autonomous car go the wrong way, on purpose, if needed (which, for some, seems outright wrong, since they subscribe to a belief that driverless cars should never "break the law"). Allow me to elaborate. I'd first like to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car.
The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results. For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/ For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/ For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/ For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/ Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion. Here are the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. There are some pundits of AI self-driving cars who continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
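The driving-task steps listed earlier in this section can be sketched as one cycle of a simple processing loop. This is a minimal illustration only; every class, function, and field name below is a hypothetical placeholder, not any auto maker's actual software:

```python
# Illustrative sketch of one cycle of the AI driving-task loop.
# All names are invented; real systems are vastly more complex.

def collect_sensor_data():
    # Step 1: sensor data collection and interpretation (camera, radar, LIDAR).
    return {"camera": "image_frame", "radar": "range_points", "lidar": "point_cloud"}

def fuse_sensors(raw):
    # Step 2: sensor fusion - reconcile overlapping readings into one coherent view.
    return {"obstacles": [], "signs": [], "lanes": ["left", "right"]}

def update_world_model(model, fused):
    # Step 3: virtual world model updating.
    model.update(fused)
    return model

def plan_action(model):
    # Step 4: AI action planning - decide the next maneuver from the world model.
    return "maintain_lane" if model.get("obstacles") == [] else "brake"

def issue_car_controls(action):
    # Step 5: car controls command issuance (throttle, brake, steering).
    return {"throttle": 0.3, "brake": 0.0, "steering": 0.0, "action": action}

# One cycle of the loop:
world_model = {}
fused = fuse_sensors(collect_sensor_data())
world_model = update_world_model(world_model, fused)
command = issue_car_controls(plan_action(world_model))
print(command["action"])  # maintain_lane
```

In a real system this loop runs continuously, many times per second, with each stage feeding the next.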
Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point, since it means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also with human-driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other. For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/ See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/ For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Use Case: Autonomous Car Goes The Wrong Way Unintentionally

Returning to the topic of driving the wrong way, let’s first consider the possibility of an AI self-driving car that happens to go the wrong way. Some pundits insist that there will never be a case of an AI self-driving car going the wrong way. These pundits seem to think that an AI self-driving car is some kind of perfection machine that will never make any mistakes.
I suppose in some kind of utopian world this might be the case, or perhaps for a TV or movie plot it might be the case, but in the real world there are going to be mistakes made by AI self-driving cars. You might be shocked to think that an AI self-driving car could somehow go the wrong way. How could this happen, you might be asking. It seems incredible perhaps to imagine that it could happen. The reality is that it could readily happen. Suppose an AI self-driving car, dutifully using its sensors, is scanning for street signs but fails to detect a street sign indicating that the path ahead is a wrong-way direction. There are lots of reasons this could occur. Maybe the street sign is not there at all because it has fallen down, or vandals took it down a while ago. Or it might be that the street sign is obscured by a tree branch, or it is so banged up and covered in graffiti that the AI system cannot recognize what the sign is. Maybe the sign can only be partially seen and does not present itself sufficiently to get a match from the Machine Learning (ML) model that was trained to spot such signs. Perhaps the weather conditions are such that it is heavily raining and the sign cannot be detected, or perhaps it is snowing and a layer of snow is obscuring the signs. And so on. I assure you, there are lots of plausible and probable reasons that the AI might not detect a street sign warning that the self-driving car is about to head the wrong way.
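One way such a miss manifests in software is a classifier whose confidence falls below an acceptance threshold for an obscured or damaged sign. The sketch below is purely illustrative (the detector, labels, threshold, and scores are all invented); it shows why a partially visible sign can yield no detection at all, leaving the AI to proceed as if no warning exists:

```python
# Illustrative only: a confidence-thresholded sign classifier, as might sit
# downstream of an ML vision model. Labels, threshold, and scores are invented.

CONFIDENCE_THRESHOLD = 0.80  # hypothetical cutoff for accepting a detection

def classify_sign(candidate_scores):
    """Return the best-matching sign label, or None if nothing clears the threshold."""
    if not candidate_scores:
        return None  # sign absent, fallen down, or fully obscured
    label, score = max(candidate_scores.items(), key=lambda kv: kv[1])
    return label if score >= CONFIDENCE_THRESHOLD else None

# A clearly visible sign clears the bar:
print(classify_sign({"WRONG_WAY": 0.97, "DO_NOT_ENTER": 0.41}))  # WRONG_WAY

# Snow, graffiti, or partial occlusion degrades the match; the detector
# reports nothing, so the AI sees no warning sign at all:
print(classify_sign({"WRONG_WAY": 0.52, "SPEED_LIMIT": 0.31}))  # None
```

The failure is silent: a degraded sign does not produce a wrong answer so much as no answer, which is exactly the scenario described above.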
For my article about AI and street signs detection, see: https://aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/ For my article about street scenes detection, see: https://aitrends.com/selfdrivingcars/street-scene-free-space-detection-self-driving-cars-road-ahead/ For my article about defensive driving for AI, see: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/ You might be thinking that it doesn’t matter whether the AI is able to detect a sign, since it would certainly have a GPS and a map of the area and would realize that the road ahead is one that would involve going the wrong way. Though it is certainly handy for the AI to have a map of an area and a GPS capability, you cannot assume that a map will always be available, nor that the GPS will necessarily warn of a wrong way up ahead. Currently, the focus for the auto makers and tech firms involves developing elaborate maps of localized areas and then having their trials of the AI self-driving cars take place in a geofenced area. Once we have widespread AI self-driving cars, I don’t think we should be basing their emergence on having mapped every square inch of the world in which they are driving. There are many that are trying to do so, but I am saying that a true Level 5 self-driving car should not be dependent upon having a prior map of wherever it is going. I assert that humans drive in places where the human driver has no map at all beforehand, and yet they are still able to sufficiently drive a car. That’s the target of a Level 5, in my opinion, namely being able to drive a car in the manner that a human can drive a car.
For my article about robotic navigation without maps, see: https://aitrends.com/selfdrivingcars/simultaneous-localization-mapping-slam-ai-self-driving-cars/ For more about the cartographic efforts taking place, see my article: https://aitrends.com/ai-insider/cartographic-trade-offs-self-driving-cars-map-no-map/ For the importance of LIDAR and maps, see my article: https://aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/ In short, I am claiming that there are going to be circumstances in which an AI self-driving car is going to end up going the wrong way. This would happen due to the AI not being able to discern the roadway situation and not having a prior map that would otherwise forewarn that a wrong way is up ahead. You might still fight me about this notion, but I’ll add another twist to see if I can convince you of the possibility of an AI self-driving car getting caught up going the wrong way. Remember earlier that I mentioned I have gone the wrong way in various parking structures and parking lots? I’d be willing to bet that the same kind of wrong-way heading could happen to an AI self-driving car. I doubt that parking structures and parking lots will be mapped to the degree that our freeways, streets, and highways are. As such, the AI self-driving car, when encountering a parking lot or parking structure, might well fail to spot signs about the proper direction and could get itself mired in going the wrong way. A techie might respond by saying that the parking structure or parking lot might opt to have some form of electronic communication that would provide directions to the AI self-driving car. I agree that we might well see such electronics being added into all kinds of structures or buildings into which an AI self-driving car might be able to drive.
But I wouldn’t bet on it always being available, and furthermore, even if it happens, the odds are that it will take place slowly over time, and meanwhile there will be structures that do not have such a communications setup. I’ll offer one other comment about this notion of going the wrong way. Are you willing to bet that there will never be a situation involving an AI self-driving car that finds itself going the wrong way? I ask because if the AI self-driving car is not ready to tackle such a predicament, because you are so sure that it will never happen, well, I’d not like to be in or near that AI self-driving car that has gotten itself into such a fix and is then unaware of it or does not know what to do about it. The auto makers and tech firms are so busy trying to get AI self-driving cars to simply drive the right way on roads that they generally have considered this aspect of dealing with driving the wrong way to be an edge problem. An edge problem is one that is not considered at the core of what you are trying to solve. We’re not quite so convinced that this should be considered an edge per… Read more »
- Autonomous Ships of the Future: Run by AI Instead of a Crew
By AI Trends Staff
Efforts are underway, especially by builders of cargo ships, to use AI to deliver on the promise of autonomous ships. A fully autonomous ship is a vessel that can operate on its own without a crew; remote ships are operated by a human from shore, and an automated ship runs software that manages its movements. As the technology matures, more types of ships will likely transition from being manned to having some autonomous capabilities, according to an account in Forbes. Autonomous ships might be used for some applications, but very likely some crew will still be onboard even if all the hurdles to a fully autonomous fleet are crossed.

Autonomy in Ships

As we saw with the collaboration between Rolls-Royce and Finferries, the state-owned ferry operator of Finland, the first autonomous ships will be deployed on simple inland or coastal liner applications where waters are calm, the route is simple, and there isn’t much traffic. An inland electric container ship, the Yara Birkeland, is under construction and is expected to be completed in 2020, with fully autonomous operation by 2022. Some companies are building fully autonomous ships from scratch, while other start-ups are developing semi-autonomous systems to be used on existing vessels. When Rolls-Royce sold its autonomous maritime division to Kongsberg for $660 million in 2018, it gave the Norwegian company a boost in its goal of being a leader in the autonomous shipping industry. Samsung is another company that uses machine learning, augmented reality, analytics, and more to create a smart shipping platform, through its Samsung Heavy Industries division. Existing cargo ships have the chance to be retrofitted with autonomous technologies thanks to the efforts of start-ups such as San Francisco-based Shone.
Shone’s technology helps crews with piloting assistance and with detecting and predicting the movement of other vessels in the waterway.

The Benefits of Autonomous Ships

Several major players in the industry are predicting when they will have autonomous ships. Here is a look, published by emerj, at the underlying economic and safety factors driving adoption of this new technology. Last year Mikael Makinen, president of Rolls-Royce Marine, declared that, “Autonomous shipping is the future of the maritime industry. As disruptive as the smart phone, the smart ship will revolutionize the landscape of ship design and operations.” Globalization and international commerce are built on seaborne trade because it is often the most cost-effective way to move large volumes of goods from one country to another. The United Nations Conference on Trade and Development estimated that in 2015, total seaborne trade volume surpassed 10 billion tons for the first time, roughly a four-fold increase since 1970. Cargo ships are normally a much slower option than cargo planes or even trucks, so their core advantage is usually being a much lower-cost option. This is why the shipping industry is always trying to find ways to bring down operating expenses. To keep international volume increasing, the industry needs to make shipping as cheap as possible. Autonomous boats can obviously offer the advantage of reducing or eliminating the expense of salaries and benefits for crew members. This is more important for smaller vessels, where crew costs make up a bigger share of total costs, but less important on larger ships. For large ships, the other potential cost savings go beyond mere reductions in personnel costs.

Efficiencies of Ships Without a Crew

Once the need for having humans on board is eliminated, the entire vessel can be radically redesigned to improve efficiency in new ways. For example, systems once needed to make the vessel livable for the crew can be removed entirely, simplifying the design.
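The earlier point that crew costs matter more on smaller vessels can be illustrated with a toy cost model. The dollar figures below (in millions per year) are invented purely for illustration and are not industry data:

```python
# Toy model: what fraction of annual operating cost is crew, for a small
# versus a large vessel? All figures are hypothetical, in $M per year.

def crew_cost_share(crew_cost, fuel_cost, maintenance_cost, other_cost):
    total = crew_cost + fuel_cost + maintenance_cost + other_cost
    return crew_cost / total

small_vessel = crew_cost_share(crew_cost=1.0, fuel_cost=0.8,
                               maintenance_cost=0.4, other_cost=0.3)
large_vessel = crew_cost_share(crew_cost=2.0, fuel_cost=12.0,
                               maintenance_cost=4.0, other_cost=3.0)

print(f"small vessel crew share: {small_vessel:.0%}")  # small vessel crew share: 40%
print(f"large vessel crew share: {large_vessel:.0%}")  # large vessel crew share: 10%
```

Even though the large vessel pays more for crew in absolute terms, crew is a far smaller slice of its total costs, which is why eliminating the crew yields proportionally bigger savings on small vessels, while the payoff on large ships comes from redesign efficiencies instead.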
The deckhouse that currently sits above the deck of ships, holding the crew and allowing them to steer the vessel, would no longer be required. This could open up more space for cargo, possibly making loading easier, or allow for a more aerodynamic profile. When automation becomes viable, the industry isn’t planning to just make the same cargo ships they currently do, minus crew. They are planning on making a whole new class of vessels re-envisioned from the ground up. It seems likely that crew reduction will occur before total crew replacement. Until robots become dexterous enough to fix engines or complete other routine onboard tasks, humans may need to be in the loop, even if just in the case of emergencies.

Reducing Human Error and Risk

Autonomy also holds the promise of reducing human error and therefore bringing down costs related to accidents and insurance. According to Allianz Global Corporate & Specialty, between 75% and 96% of all accidents in the shipping sector can be attributed to human error. These incidents rank as the top cause of liability loss. The Costa Concordia disaster is perhaps the most famous example of how much damage human error can cause when dealing with massive ocean-going vessels. The vessel ran aground and overturned after striking an underwater rock off Isola del Giglio, Tuscany, resulting in 32 deaths. This isn’t to say that machines would never make mistakes, but we might imagine that in time machines will make docking and navigation safer overall (just as automation plays a critical role for aircraft).

Predicted Timelines for Autonomous Boats to Hit the Water

Here are selected current projects and predicted timelines for autonomous shipping adoption:
1) Rolls-Royce Marine – Short Runs by 2020, Ocean Going by 2025
2) Kongsberg and Yara – 2020
3) Japanese Consortium – 2025
Read the source articles in Forbes and in emerj. Read more »
- Twitter Buys Artificial-Intelligence Startup to Help Fight Spam, Fake News and Other Abuse
By AI Trends Staff
Twitter, as part of ongoing efforts to improve the “health” of discussions on the platform, announced that it has acquired U.K.-based artificial-intelligence startup Fabula AI. Terms of the deal weren’t disclosed, according to an account in Variety. The initial focus for Fabula as part of Twitter “will be to improve the health of the conversation, with expanding applications to stop spam and abuse and other strategic priorities in the future,” according to Twitter chief technology officer Parag Agrawal, who announced the acquisition in a blog post on June 3. Fabula has developed the ability to analyze “very large and complex data sets” to detect network manipulation, and can identify patterns that other machine-learning techniques can’t, according to Agrawal. The startup has created a “truth-risk scoring platform” to identify misinformation, using data from sources including Snopes and PolitiFact. Twitter, in addition to improving detection of spam and other violations of its policies, plans to use Fabula’s technology to enhance products, including the timeline, recommendations and the explore tab, as well as the process for how users sign up for the service. Spam and bogus accounts continue to be a big problem for the social platform. According to Twitter’s estimates, in the first quarter of 2019, fake and/or spam accounts represented fewer than 5% of its active user base. Fabula’s team will join the Twitter Cortex machine-learning team. Twitter said it has created a research group led by Sandeep Pandey, head of machine learning/AI engineering, to focus on such areas as natural-language processing, reinforcement learning, machine-learning ethics, recommendation systems, and graph deep learning.
Founded in April 2018, Fabula is led by chief scientist Michael Bronstein and chief technologist Federico Monti, who began collaborating while at Switzerland’s University of Lugano. Bronstein is currently the chair in machine learning and pattern recognition at Imperial College London and will remain in that position while leading graph deep learning research at Twitter. Twitter in the past has acquired other AI startups, including image-search specialist Madbits in 2014, machine-learning configuration developer Whetlab in 2015, and visual-processing startup Magic Pony in 2016. The acquisition will underpin a research group at Twitter led by Pandey that will work toward finding new ways to leverage machine learning across natural language processing (NLP), recommendation systems, reinforcement learning, and graph deep learning. The group will also address ML ethics, according to an account in VentureBeat.

Fake News Spreads Faster than Real News

“Fake news” has become an umbrella buzzword to describe the deliberate spread of misinformation, but Fabula AI is really about helping identify the authenticity of any information that circulates on social media, regardless of intent. Studies have shown that false news spreads faster than real news online, a pattern that can be used to help spot misinformation. This is what Fabula focuses on: detecting differences in how content is spreading on social media and allocating an authenticity score. “As this technology detects the spread pattern, it is language and locale independent; in fact, it can be used even when the content is encrypted,” the company says on its homepage. “We also believe that such an approach, given it is based on the propagation pattern through huge social networks, is far more resilient to adversarial attacks.” As with most of the major social media platforms, Twitter has faced its share of criticism for the way it is used to spread misinformation.
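To make the propagation-pattern idea concrete: studies have found that false stories tend to spread in deep retweet chains, while real news tends to spread in shallow broadcast bursts from a source. The toy sketch below scores a cascade from its shape alone, ignoring content entirely; the features, scoring formula, and data are invented for illustration and are not Fabula's actual model:

```python
# Illustrative only: score a story's "authenticity" from how it spreads,
# never looking at its content. Features and formula are invented.

def propagation_features(retweet_edges):
    """retweet_edges: list of (parent_user, child_user) pairs forming a cascade tree."""
    children = {}
    for parent, child in retweet_edges:
        children.setdefault(parent, []).append(child)

    def depth(node):
        kids = children.get(node, [])
        return 1 + max((depth(k) for k in kids), default=0)

    root = retweet_edges[0][0]
    return {"depth": depth(root), "breadth": len(children.get(root, []))}

def authenticity_score(features):
    # Deep, narrow cascades look more like misinformation (lower score);
    # shallow, broad ones look more like broadcast news (higher score).
    return features["breadth"] / (features["breadth"] + features["depth"])

# Shallow broadcast: one root account, many direct retweets.
real_like = [("root", f"u{i}") for i in range(8)]
# Deep chain: each user retweets from the previous one.
fake_like = [(f"u{i}", f"u{i + 1}") for i in range(8)]

print(authenticity_score(propagation_features(real_like)))  # 0.8
print(authenticity_score(propagation_features(fake_like)))  # 0.1
```

Because the score depends only on the cascade's shape, the same approach works across languages and even on encrypted content, which is the property the company highlights.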
This latest move is designed to “improve the health of the conversation” on Twitter, according to CTO Parag Agrawal, and will expand over time to help stop other forms of spam and platform abuse. “By studying and understanding the Twitter graph, comprised of the millions of tweets, retweets, and likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products, including the timeline, recommendations, the Explore tab, and the onboarding experience,” Agrawal said. Last year, Facebook snapped up Bloomsbury AI, a startup building NLP smarts that could also be used to help combat fake news. With the 2020 U.S. presidential election on the horizon, social media firms will be under intense scrutiny for their handling of fake news — which is partly why Twitter is looking to invest in automation to weed out the bad eggs. “Machine learning plays a key role in powering Twitter and our purpose of serving the public conversation,” Agrawal said. Read the source articles in Variety and in VentureBeat. Read more »
- More Summer Reading: 100 Best Artificial Intelligence Books of All Time
By AI Trends Staff
As featured on CNN, Forbes and Inc, BookAuthority identifies and rates the best books in the world, based on public mentions, recommendations, ratings and sentiment. Here is a selection from BookAuthority’s list of the 100 Best AI Books of All Time:

Robot is the Boss: How to do Business with Artificial Intelligence
By Artur Kiulian, 2017
Robot Is The Boss is not about how Artificial Intelligence (AI) will destroy humanity or how machines will rebel against us. Instead, it explains the best way to get benefits from using machine learning in your business today. It’s not technical; it’s simple. You will learn about: why Artificial Intelligence is becoming so important; and the benefits of using Artificial Intelligence and the long-term effects of neglecting automation trends.

Transformative Artificial Intelligence Driverless Self-Driving Cars: Practical Advances in AI and Machine Learning
By Dr. Lance Eliot, 2018
Transformative advances in Artificial Intelligence are spurring the advent of true driverless self-driving cars. This book provides original innovative approaches to explain how these autonomous vehicles are becoming increasingly viable and practical. Dr. Eliot is a renowned AI Insider known for his practical application of AI. His popular podcast is heard globally. A serial entrepreneur and high-tech executive, he serves as the Executive Director of the Cybernetic Self-Driving Car Institute. Visit his web site www.ai-selfdriving-cars.guru for further information.

Machine Learning Algorithms: For Supervised and Unsupervised Learning (The Future Is Here!), Second Edition
By William Sullivan, 2018
I listened carefully to feedback from customers for my original book, and revamped this new edition. I’m excited to present you the second edition with various high quality diagrams, explanations, extensive information and so much more value packed within.
You will learn about: supervised learning and unsupervised learning.

Machine Learning for Beginners: Absolute Beginners Guide, Learn Machine Learning and Artificial Intelligence from Scratch
By Chris Sebastian, 2018
Bonus: Buy the Paperback version of this book, and get the Kindle eBook version included for FREE. Machine Learning is changing the world. You use Machine Learning every day and probably don’t know it. In this book, you will learn how ML grew from a desire to make computers able to learn. Trace the development of Machine Learning from the early days of a computer learning how to play checkers, to machines able to beat world masters in chess and Go.

Deep Learning: A Technical Approach To Artificial Intelligence For Beginners
By Leonard Eddison, 2018
Are you interested in Deep Learning, and do you want to know what you can achieve with it? Have you ever stopped to wonder how it is that Google, Twitter, or one of the countless other sites or apps has the capacity to comprehend the complex algorithms of day-to-day life, slang terminology, or even high math and science functionality? If so, you might already be a bit familiar with the contents of this book, and are sure to find in its pages a comprehensive and thorough insight into the science behind the scenes.

See the full list at BookAuthority. Read more »