AI

  • Get ready to have your face scanned at the airport
    Awesome, not awesome.
    #Awesome
    “In most hospitals and clinics around the world, trained physicians make this diagnosis, examining a patient’s eyes and identifying the tiny lesions, hemorrhages and discoloration that anticipate diabetic blindness. But Aravind is trying to automate the process. Working with a team of Google artificial intelligence researchers based in California, the hospital is testing a system that can recognize the condition on its own…Researchers hope this A.I. system will help doctors screen more patients in a country where diabetic retinopathy is increasingly prevalent. Nearly 70 million Indians are diabetic, according to the World Health Organization, and all are at risk of blindness. But the country does not train enough doctors to properly screen them all.” — Cade Metz, Technology correspondent. Learn More from The New York Times >
    #Not Awesome
    “People’s faces are being used without their permission, in order to power technology that could eventually be used to surveil them, legal experts say… “People gave their consent to sharing their photos in a different internet ecosystem,” said Meredith Whittaker, co-director of the AI Now Institute, which studies the social implications of artificial intelligence. “Now they are being unwillingly or unknowingly cast in the training of systems that could potentially be used in oppressive ways against their communities.”” — Olivia Solon, Tech investigations editor. Learn More from NBC News >
    What we’re reading.
    1/ The US government will start scanning faces at the busiest airports in the US, laying the foundation for government controls of the population that could be similar to the ones used in China. Learn More from BuzzFeed News >
    2/ Facebook announces that it built an AI tool to detect “revenge porn,” or explicit images that are shared without one party’s consent. Learn More from TechCrunch >
    3/ Some of the world’s sharpest minds are working to build AGI and “solve intelligence,” but if they do, will Google control it? Learn More from 1843 Magazine >
    4/ The AAA runs a study showing that many of the people who are fearful of self-driving cars become trusting after experiencing assistive-driving features. Learn More from Axios >
    5/ TikTok, one of the most popular apps in the world, is rewriting how breakout social networks are built — and much of its success can be attributed to its algorithms. Learn More from The New York Times >
    6/ Despite the lack of conversation about it, one of AI technology’s most impressive feats so far has been its ability to convince humans to spend more time staring at their screens. Learn More from TechCrunch >
    7/ Ahead of its IPO, Uber is considering selling a stake in its self-driving car unit to investors at SoftBank that would value that part of the business somewhere between $5 billion and $10 billion. Learn More from The New York Times >
    Links from the community.
    “A.I. Is Your New Design Material” submitted by Avi Eisenberger (@aeisenberger). Learn More from Big Medium >
    “Stop the Bots: Practical Lessons in Machine Learning” submitted by Samiur Rahman (@samiur1204). Learn More from Cloudflare >
    “Checklist for debugging neural networks” submitted by Cecelia Shao (@ceceliashao). Learn More from Towards Data Science >
    “How I outperformed CTG Experts with 15 years of experience in… Read more »
  • Uber won’t be charged for fatal self-driving car crash
    Awesome, not awesome.
    #Awesome
    “…the right kind of AI can improve the way humans relate to one another… For instance, the political scientist Kevin Munger directed specific kinds of bots to intervene after people sent racist invective to other people online. He showed that, under certain circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person’s use of racist speech to decline for more than a month.” — Nicholas A. Christakis, physician and sociologist. Learn More from The Atlantic >
    #Not Awesome
    “The single most important way to instill trust in A.I.s is to align their business models with the ecology of stakeholders, and by ensuring that they operate through internalizing possible risks and harms into their decision-making. Techno-utopians frequently make the mistake of imagining benevolent A.I.s that mysteriously pop out of labs to solve our problems, while ignoring how the business models of the companies that created them are constrained by bad incentives. If Facebook’s negative impacts over the last year have taught us anything, it’s that we should never underestimate the damage created by misaligned business models.” — Tristan Harris, co-founder of the Center for Humane Technology. Learn More from The New York Times >
    What we’re reading.
    1/ Prosecutors in Arizona don’t plan to file charges against Uber for the fatal accident caused by one of its cars driving in autonomous mode — a sign that there’s still no general consensus as to criminal liability in the case of self-driving collisions. Learn More from The New York Times >
    2/ Congress plans artificial intelligence regulation that would enable audits on algorithms when the FTC receives complaints of demonstrated systematic bias. Learn More from The Intercept >
    3/ Artificial intelligence will pave the way for automated game design that can be “constantly redesigning itself and refreshing itself” — making each experience of playing a game feel like the first time. Learn More from The Verge >
    4/ A recent study done by an investment firm shows that 40% of European startups claiming to use artificial intelligence are not. Learn More from MIT Technology Review >
    5/ Tech workers speaking out against unethical AI practices seem to be more effective at curbing companies’ actions than regulation has been to this point. Learn More from The New York Times >
    6/ AI “artists” are turning the world of art on its head, much like the renowned artist Banksy has. Learn More from The Atlantic >
    7/ Attendees at the New York Times’ New Work Summit lay out 10 ground rules in an attempt to shape the future of ethical AI design. Learn More from The New York Times >
    What we’re building.
    Enjoying Machine Learnings? We think you’ll love our new newsletter called Noteworthy too. Wake up every Sunday morning to the week’s most noteworthy Tech stories, opinions, and news waiting in your inbox. Get the newsletter >
    Links from the community.
    “Business Leaders Love AI. That Doesn’t Mean They Use It.” submitted by Avi Eisenberger (@aeisenberger). Learn More from Bloomberg >
    “The best resources in Machine Learning & AI” submitted by Samiur Rahman (@samiur1204). Learn More from Best of Machine Learning >
    “Leveraging ML techniques for Used… Read more »
  • Think twice before feeding your data to AI algorithms
    Awesome, not awesome.
    #Awesome
    “In search of a solution to this problem [of the variable nature of wind as a renewable energy source], last year DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms — part of Google’s global fleet of renewable energy projects — collectively generate as much electricity as is needed by a medium-sized city… Although we continue to refine our algorithm, our use of machine learning across our wind farms has produced positive results. To date, machine learning has boosted the value of our wind energy by roughly 20 percent, compared to the baseline scenario of no time-based commitments to the grid.” — Carl Elkin, Sims Witherspoon and Will Fadrhonc of DeepMind and Google. Learn More from DeepMind >
    #Not Awesome
    “Researchers at the Georgia Institute of Technology found that state-of-the-art object recognition systems are less accurate at detecting pedestrians with darker skin tones…The researchers tested eight image-recognition systems (each trained on a standard data set) against a large pool of pedestrian images. They divided the pedestrians into two groups for lighter and darker skin tones according to the Fitzpatrick skin type scale, a way of classifying human skin color…The detection accuracy of the systems was found to be lower by an average of five percentage points for the group with darker skin. This held true even when controlling for time of day and obstructed view.” — Karen Hao, Reporter. Learn More from MIT Technology Review >
    What we’re reading.
    1/ The biggest (easiest) thing individuals can do to prevent society from racing towards a dystopian AI future is to think twice before handing their most precious data over to a Tech product. Learn More from MIT Technology Review >
    2/ A leading expert in artificial intelligence believes that machine learning techniques are not on a trajectory to displace jobs that require compassion or creativity. Learn More from Andreessen Horowitz >
    3/ AI researchers are studying the brains of young children in hopes of finding new ways to make machines smarter. Learn More from Vox >
    4/ The US’ largest nonprofit dedicated to advocating for vaccinations had to stop posting their educational videos on YouTube — because each time they did, YouTube’s recommendation algorithm would suggest anti-vaccination conspiracy theories alongside them. Learn More from NBC News >
    5/ For “AI-first” startups to grow, they should act like a normal startup — and not get too wrapped up in doing deeply technical research. Learn More from Matt Turck’s Blog >
    6/ Agree with their positions or not, it’s great to see government leaders drafting policies in order to help people whose jobs are displaced by AI technologies. Learn More from Brookings >
    7/ To preserve “our constitutional and legal freedoms,” we need to think through the ethical complexities that AI will introduce into our society. Learn More from Princeton University >
    What we’re building.
    Enjoying Machine Learnings? We think you’ll love our new newsletter called Noteworthy too. Wake up every Sunday morning to the week’s most noteworthy Tech stories, opinions, and news waiting in your inbox. Get the newsletter >
    Links from the community.
    “Machine Learning For Stronger Military… Read more »
  • YouTube’s algorithms and the spread of pedophilia
    Awesome, not awesome.
    #Awesome
    “Applying just a bit of strain to a piece of semiconductor or other crystalline material can deform the orderly arrangement of atoms in its structure enough to cause dramatic changes in its properties, such as the way it conducts electricity, transmits light, or conducts heat. Now, a team of researchers at MIT and in Russia and Singapore have found ways to use artificial intelligence to help predict and control these changes, potentially opening up new avenues of research on advanced materials for future high-tech devices.” — David L. Chandler, Writer. Learn More from MIT Technology Review >
    #Not Awesome
    “Rice University statistician Genevera Allen issued a grave warning at a prominent scientific conference this week: that scientists are leaning on machine learning algorithms to find patterns in data even when the algorithms are just fixating on noise that won’t be reproduced by another experiment… The problem, according to Allen, can arise when scientists collect a large amount of genome data and then use poorly-understood machine learning algorithms to find clusters of similar genomic profiles.” — Jon Christian, Journalist. Learn More from Futurism >
    What we’re reading.
    1/ YouTube’s algorithms are “enabling the production and distribution of paedophilic content,” and major brands are running ads right alongside it. Learn More from WIRED >
    2/ To make sure future generations have an amicable relationship with AI technologies, industry giants should invest heavily in re-training displaced workers and in partnerships with educational institutions. Learn More from TechCrunch >
    3/ In an attempt to avoid the tech backlash that’s bringing Facebook and Google under fire, AI researchers are trying to establish procedures to make sure their inventions aren’t abused by bad actors. Learn More from Axios >
    4/ One researcher argues that in order to keep AI research from being abused, it must be published openly so that all are aware of it. Learn More from The Gradient >
    5/ There are virtually no limits to the ways AI technologies will revolutionize the medical field — the question is, “how quickly should we adopt them?” Learn More from The New York Times >
    6/ If a machine doesn’t understand why it’s doing something, can it actually be considered creative? Learn More from MIT Technology Review >
    7/ Aid agencies that use AI to predict when and where food shortages are likely to emerge may be able to stop wars before they happen. Learn More from BBC >
    Links from the community.
    “Let’s make some molecules with machine learning!” by Avi Eisenberger (@flawnsontong1). Learn More from Noteworthy >
    “Tutorial: Machine Learning Data Set Preparation, Part 1” submitted by Samiur Rahman (@samiur1204). Learn More from Sean McWillie’s blog >
    🤖 First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
    YouTube’s algorithms and the spread of pedophilia was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story. Read more »
  • The Surprisingly Slow Spread of AI
    Awesome, not awesome.
    #Awesome
    “…A group of researchers in the United States and China has tested a potential remedy for all-too-human frailties: artificial intelligence. In a paper published on Monday in Nature Medicine, the scientists reported that they had built a system that automatically diagnoses common childhood conditions — from influenza to meningitis — after processing the patient’s symptoms, history, lab results and other clinical data. The system was highly accurate, the researchers said, and one day may assist doctors in diagnosing complex or rare conditions.” — Cade Metz, Technology Correspondent. Learn More from The New York Times >
    #Not Awesome
    “Predictive policing algorithms are becoming common practice in cities across the US. Though lack of transparency makes exact statistics hard to pin down, PredPol, a leading vendor, boasts that it helps “protect” 1 in 33 Americans. The software is often touted as a way to help thinly stretched police departments make more efficient, data-driven decisions. But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a think tank that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.” — Karen Hao, Reporter. Learn More from MIT Technology Review >
    What we’re reading.
    1/ Machine learning has become a fundamental technology at some of the largest Tech companies in the world, but it’s taking longer to spread to other industries than many predicted. Learn More from MIT Technology Review >
    2/ President Trump signs an executive order this week to spur the development of AI technologies — but it does not set aside any new funds for research or development. Learn More from The New York Times >
    3/ The Defense Department publishes an AI strategy document of its own, making a point to curry favor with tech employees who’d be hesitant to work with the Pentagon. Learn More from Axios >
    4/ Separate pricing algorithms can collude without leaving a trace of “concerted action,” raising fears that consumers will be harmed by unfair pricing. Learn More from MIT Technology Review >
    5/ Amazon Prime Video and Hulu’s algorithms recommend conspiracy theory videos to people that experts warn can “spread misinformation, and even encourage people to commit violence.” Learn More from Business Insider >
    6/ An AI research lab builds an algorithm that creates convincing fake news, and they aren’t sharing their dataset with the public out of fear of it being abused. Learn More from Bloomberg >
    7/ The California DMV requires that any company testing autonomous vehicles in the state report the number of autonomous miles driven by the vehicles and the number of disengagements (when a human has to take the wheel) — Waymo is besting the competition on both measures. Learn More from The Atlantic >
    Links from the community.
    “Better Together: Humanity + Machine Learning” submitted by Avi Eisenberger (@aeisenberger). Learn More from YouTube >
    “Audio AI: isolating vocals from stereo music using Convolutional Neural Networks” submitted by Samiur Rahman (@samiur1204). Learn More… Read more »
  • AI, unlivable wages, and a split workforce
    Awesome, not awesome.
    #Awesome
    “DeepMind specializes in “deep learning,” a type of artificial intelligence that is rapidly changing drug discovery science. A growing number of companies are applying similar methods to other parts of the long, enormously complex process that produces new medicines. These A.I. techniques can speed up many aspects of drug discovery and, in some cases, perform tasks typically handled by scientists.” — Cade Metz, Technology Correspondent. Learn More from The New York Times >
    #Not Awesome
    “By merely relying on historical data and current definitions of fairness, we will lock in the accumulated unfairnesses of the past, and our algorithms and the products they support will always trail the norms, reflecting past norms rather than future ideals and slowing social progress rather than supporting it.” — Joi Ito, Director of the MIT Media Lab. Learn More from WIRED >
    What we’re reading.
    1/ Certain “categories of labor” may not be able to earn livable wages because of task automation facilitated by machine learning. Learn More from The New York Times >
    2/ Many people believe that financial firms will start to act more like tech firms over time, but if companies like Google and Facebook have the data and technological prowess to make investment decisions, maybe they’ll start looking more like financial firms. Learn More from Logic >
    3/ Thanks to machine learning, your camera will begin to understand the pictures you take, help you to discover more images like them, and remember the ones you forget. Learn More from Benedict Evans >
    4/ Major news organizations are experimenting with using machine-generated text throughout their journalistic process — from writing articles themselves to transcribing interviews to personalizing newsletters. Learn More from The New York Times >
    5/ When it comes to all the video footage captured by self-driving cars, autonomous vehicle companies say they take privacy ‘very seriously,’ but privacy experts worry that they aren’t taking bystanders’ (people outside of the car) rights seriously. Learn More from Axios >
    6/ A San Francisco lawmaker wants to ban local government agencies from using facial recognition software — but won’t ask private companies to do the same. Learn More from The Atlantic >
    7/ Inertia within the Defense Department may keep the US military from making truly breakthrough leaps in AI technology. Learn More from Axios >
    Links from the community.
    “Why Captchas have gotten so difficult” submitted by Cecelia Shao (@ceceliashao). Learn More from The Verge >
    “Can we build AI without losing control over it? | Sam Harris” submitted by Will Jessop (@willjessop). Learn More from YouTube >
    “Machine Learning for Everyone” submitted by Avi Eisenberger (@aeisenberger). Learn More from vas3k Blog >
    “Our Extended Minds” by KS Abhinav (@ksabhinav38). Learn More from Noteworthy >
    “Stock Market Prediction by Recurrent Neural Network on LSTM Model” by Aniruddha Choudhury (@aniruddha.choudhury94). Learn More from Noteworthy >
    “Swift Text-To-Speech tool AVSpeechSynthesizer” by Myrick Chow (@myrickchow32). Learn More from Noteworthy >
    “Quantifying Accuracy and SoftMax Prediction Confidence For Making Safe and Reliable Deep Neural Network Based AI System” by AiOTA Labs (@aiotalabs). Learn More from Noteworthy >
    “AI Technology & Effect.ai | Easy” by Melicio Sergio de Bel (@meliciosergiobel). Learn More from Noteworthy >
    First time reading Machine Learnings? Subscribe to get the… Read more »
  • Using artificial intelligence to study religious violence
    Awesome, not awesome.
    #Awesome
    “The Library Innovation Lab at the Harvard Law School Library has completed its Caselaw Access Project, an endeavour to digitize every reported state and federal US legal case from the 1600s to last summer. The process involved scanning more than 40 million pages…One of the biggest hurdles to developing artificial intelligence for legal applications is the lack of access to data. To train their software, legal AI companies have often had to build their own databases by scraping whatever websites have made information public and making deals with companies for access to their private legal files…Now that millions of cases are online for free, a good training source will be easily available.” — Erin Winick, Editor. Learn More from MIT Technology Review >
    #Not Awesome
    “I don’t think necessarily that there are people at Amazon saying, ‘let’s not deliver to black people in Roxbury,’” Gilliard said. “What typically happens is there’s an algorithm that determines that for some reason not delivering there made the most sense algorithmically, to maximize profit or time… And there are often very few people at companies that have the ability or the willingness or the knowledge to look at these things and say, ‘hey, wait a minute.’” While these decisions are often made by AI algorithms, that doesn’t mean humans aren’t responsible for the results. Gilliard said that when he sees the sort of AI algorithms Amazon and others use, “…my antenna sort of go up, because much of that is based on training data that… probably [reflects] the biases that are already built into society.” Learn More from CBC Radio-Canada >
    What we’re reading.
    1/ Researchers use AI models to run simulations of real-life social problems to better understand how religious violence can break out. Learn More from Motherboard >
    2/ If we don’t have a global conversation about how to build algorithms that make “split-second decisions that will result in life or death,” we will introduce software that changes the physical world in ways that violate the value systems of different regions. Learn More from Quartz >
    3/ When companies optimize algorithms to extract as much money as possible from consumers who are most willing to pay, anyone can become a victim — from elderly people with dementia to the “rich and busy” who don’t monitor their receipts. Learn More from Tim Harford >
    4/ The days of self-driving cars are almost upon us — when they finally arrive, we’ll have tens of thousands of algorithm trainers in Kenya to thank. Learn More from BBC News >
    5/ Deep learning algorithms make it possible for researchers to pinpoint signals of natural selection within specific regions of people’s genomes in ways that have never been possible before. Learn More from Nature >
    6/ One of the biggest potential risks of unchecked AI algorithms is the stripping of people’s political agency. Learn More from openDemocracy >
    7/ For autonomous vehicles to make it onto the roads, they’ll need to prove to federal regulators that they’re much better than humans at driving in every possible situation. Learn More from WIRED >
    Links from the community.
    “If I Can You Can (and you should!)” submitted… Read more »
  • MIT goes all in on AI
    Using a $1 billion investment to build a new college for artificial intelligence
    Awesome, not awesome.
    #Awesome
    “A.I. presents the challenge of reckoning with our skewed histories, while working to counterbalance our biases, and genuinely recognizing ourselves in each other. This is an opportunity to expand — rather than further homogenize — what it means to be human through and alongside A.I. technologies. This implies changes in many systems: education, government, labor, and protest, to name a few. All are opportunities if we, the people, demand them and our leaders are brave enough to take them on.” — Stephanie Dinkins, Artist & associate professor of art. Learn More from The New York Times >
    #Not Awesome
    “An autonomous missile under development by the Pentagon uses software to choose between targets. An artificially intelligent drone from the British military identifies firing points on its own. Russia showcases tanks that don’t need soldiers inside for combat. A.I. technology has for years led military leaders to ponder a future of warfare that needs little human involvement. But as capabilities have advanced, the idea of autonomous weapons reaching the battlefield is becoming less hypothetical…defense contractors, identifying a new source of revenue, are eager to build the next-generation machinery.” — Adam Satariano, Tech Correspondent. Learn More from The New York Times >
    What we’re reading.
    1/ MIT takes a huge step toward the advancement of artificial intelligence, using a $1 billion investment to teach the bilinguals of the future — people who are highly skilled both in their field of expertise and in machine learning. Learn More from The New York Times >
    2/ Expect to hear the term “data minimalism” used more often — it describes a problem that doesn’t generate enough data for it to be solved with machine learning techniques. Learn More from Axios >
    3/ Algorithms aren’t developed in a vacuum free of bias; they’re developed in the real world by real people — expect the bias to be built into them. Learn More from Quartz >
    4/ Many of the world’s top artists use AI tools to reimagine landscapes and design new, more “interactive visual experiences.” Learn More from The New York Times >
    5/ Large tech companies must be expected to work closely with civil rights groups and researchers to ensure that their algorithms don’t violate human rights. Learn More from MIT Technology Review >
    6/ Journal raised a $1.5 million seed round; it uses state-of-the-art machine learning to help people do more with their information and combat information overload. Learn More from TechCrunch >
    7/ Chatbots are starting to replace *part* of the role doctors play — like providing initial diagnoses and prescribing medicine — but they won’t help people cope with the treatments that are handed out. Learn More from MIT Technology Review >
    Links from the community.
    “At Google, we’ve been getting a better understanding of issues of bias & fairness in machine learning models” submitted by Samiur Rahman (@samiur1204). Learn More from Twitter >
    “Researchers call for more humanity in artificial intelligence” submitted by Avi Eisenberger (@aeisenberger). Learn More from WIRED >
    Join 40,000 people who read Machine Learnings to understand how AI is shaping our world. Get our newsletter >
    MIT goes all in on AI was originally… Read more »
  • A new investment thesis for AI-first SaaS startups
    We’re now at a turning point when it comes to the use of AI in B2B applications…
    Awesome, not awesome.
    #Awesome
    “…Today Iron Ox is opening its first production facility in San Carlos, near San Francisco. The 8,000-square-foot indoor hydroponic facility — which is attached to the startup’s offices — will be producing leafy greens at a rate of roughly 26,000 heads a year. That’s the production level of a typical outdoor farm that might be five times bigger. The opening is the next big step toward fulfilling the company’s grand vision: a fully autonomous farm where software and robotics fill the place of human agricultural workers, who are currently in short supply…bringing automation to both indoor and outdoor farming is necessary to help a wider swath of the agricultural industry solve the long-standing labor shortage.” — Erin Winick, Editor. Learn More from MIT Technology Review >
    #Not Awesome
    “Kai-Fu Lee, a prominent investor and entrepreneur based in Beijing, has been talking up China’s artificial intelligence potential for a while. Now he’s got a message for the United States. The real threat to American preeminence in AI isn’t China’s rise, he says — it’s the US government’s complacency…Rather than competition from China, Lee says, the real risk for the US is in failing to invest in and prioritize fundamental AI research — a problem that’s being exacerbated as big US companies suck up much of the top talent in the field. In general, tech firms focus less on fundamental breakthroughs than does academia, which struggles to compete with the private sector in retaining researchers.” — Will Knight, Editor. Learn More from MIT Technology Review >
    Where we’re going.
    The AI-first SaaS Funding Napkin by Louis Coppey
    What does it take to raise seed funding as an AI-first SaaS startup?
    At Point Nine, we have been focused on investing in SaaS companies and have been fortunate to work with several generations of successful businesses in this segment over the years. Since joining the firm 24 months ago, I’ve spent a significant share of my time looking at SaaS companies using machine learning to change industries.
    From my vantage point, it appears that we’re now at a turning point when it comes to the use of AI in B2B applications. I published this post to that effect 18 months ago, and I thought it was time to dive deeper into the topic with the benefit of hindsight and data. To that end, I’ve attempted to consolidate our thoughts on investing in AI-first SaaS businesses by creating a customary Point Nine napkin! Why? Because this is the format we typically use at Point Nine to consolidate our thoughts.
    This post consists of 3 parts:
    1. First, I will share a quick historical perspective on the broader SaaS industry and use it to explain some of the key success factors of each of these generations at a (very) high level,
    2. Second, I define what I call an “AI-first SaaS business” and outline a work-in-progress investment thesis for seed stage startups in this category,
    3. Third, I try to explain why AI-first SaaS is an exciting category based on the analysis of some of their intrinsic characteristics.
    Read… Read more »
  • The AI-first SaaS Funding Napkin
    What does it take to raise seed funding as an AI-first SaaS startup? (Seed Edition — the 0.9 version)
    At Point Nine, we have been focused on investing in SaaS companies and have been fortunate to work with several generations of successful businesses in this segment over the years. Since joining the firm 24 months ago, I’ve spent a significant share of my time looking at SaaS companies using machine learning to change industries.
    From my vantage point, it appears that we’re now at a turning point when it comes to the use of AI in B2B applications. I published this post to that effect 18 months ago, and I thought it was time to dive deeper into the topic with the benefit of hindsight and data. To that end, I’ve attempted to consolidate our thoughts on investing in AI-first SaaS businesses by creating a customary Point Nine napkin! Why? Because this is the format we typically use at Point Nine to consolidate our thoughts (see our SaaS and Marketplace funding napkin here and there) ;)
    This post consists of 3 parts:
    First, I will share a quick historical perspective on the broader SaaS industry and use it to explain some of the key success factors of each of these generations at a (very) high level,
    Second, I define what I call an “AI-first SaaS business” and outline a work-in-progress investment thesis for seed stage startups in this category,
    Third, I try to explain why AI-first SaaS is an exciting category based on the analysis of some of their intrinsic characteristics.
    I believe that positioning AI-first SaaS companies within the broader SaaS landscape provides an interesting perspective, but if you’re just interested in understanding our investment criteria for AI-first SaaS startups, just go directly to part 2!
    A quick historical perspective on the SaaS industry — categories and investment strategies
    At a high level, we can observe 6 different innovation trends that have shaped the SaaS industry over the past 15 years, as described in the graph below. I believe that AI-first companies are forming a seventh category that started shaping up some months ago, and I will try to explain why in this post. This categorisation is more exemplary than exhaustive as it tries to cover a vast majority of successful SaaS companies, from Salesforce to Typeform via Stripe… These categories are also not mutually exclusive, and some companies fit in several ones.
    For the sake of simplicity, I will not go into the details of each category, but at a high level, here’s how we could look at it:
    Initially, SaaS companies built products that consisted of a “workflow app + a database” to disrupt the on-premise software industry. Why? Because i) they were accessible from anywhere, ii) could be constantly updated, iii) required low implementation fees, iv) were cheaper to manage… and had lots of other now well-understood advantages of SaaS over on-premise. Some of the most successful companies of this trend include Salesforce, Veeva or Workday.
    A second category covers the “workflow-only apps.” They’re less defensible companies because they’re not actual Systems of Record and mainly improve user experiences. Even so,… Read more »