• Using your face to train AI
    Awesome, not awesome.

    #Awesome
    “A group of researchers have now used [unsupervised machine learning] to munch through 3.3 million scientific abstracts published between 1922 and 2018 in journals that would likely contain materials science research. The resulting word relationships captured fundamental knowledge within the field, including the structure of the periodic table and the way chemicals’ structures relate to their properties. Because of the technique’s ability to compute analogies, it also found a number of chemical compounds that demonstrate properties similar to those of thermoelectric materials but have not been studied as such before. The researchers believe this could be a new way to mine existing scientific literature for previously unconsidered correlations and accelerate the advancement of research in a field.” — Karen Hao, AI Reporter
    Learn More from MIT Technology Review >

    #Not Awesome
    “In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes — that is, the more data it has — the more efficient it will become at recommending specific user-targeted content. Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.” — Guillaume Chaslot, Researcher
    Learn More from WIRED >

    What we’re reading.
    1/ Companies might be using an image of your face to train their facial recognition software. Learn More from The New York Times >
    2/ The US is renewing its push to create global standards for artificial intelligence regulation in its favor, not China’s. Learn More from Axios >
    3/ An automated poker-playing algorithm can now beat the world’s best players in a multiplayer game of no-limit Texas Hold ’Em poker — a major breakthrough for AI technology, since the game is based on hidden information. Learn More from The New York Times >
    4/ One of the world’s foremost leaders in AI research believes an “underrated deep-learning subcategory known as unsupervised learning” will lead to the next revolution in artificial intelligence. Learn More from MIT Technology Review >
    5/ Based on a user’s search history, Google can predict if a person is a suicide risk and show them the National Suicide Prevention Lifeline. Learn More from The Atlantic >
    6/ If John Steinbeck were still alive today, he might say that artificial intelligence, a “system built on a pattern [will] try to destroy the free mind, for this is the one thing which can by inspection destroy such a system.” Learn More from recode >
    7/ Amazon is making a massive investment in re-training its employees who are likely to be automated out of their jobs by machine learning. Learn More from The Wall Street Journal >

    What we’re building.
    Come build the next-generation productivity platform with us at Journal! … Read more »
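The analogy arithmetic that let the materials-science model surface thermoelectric candidates can be sketched in a few lines. This is a minimal illustration with invented 3-d vectors and a toy vocabulary, not the researchers' actual embeddings:

```python
from math import sqrt

# Hypothetical 3-d "embeddings" (values invented for illustration; the real
# model learned vectors from 3.3 million abstracts).
vectors = {
    "king":  [0.8, 0.9, 0.1],
    "queen": [0.8, 0.1, 0.9],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
    "metal": [0.5, 0.5, 0.5],  # distractor word
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def analogy(a, b, c):
    """Return the word closest to vec(a) - vec(b) + vec(c)."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(analogy("king", "man", "woman"))  # → queen
```

The same vector arithmetic, applied to compound names instead of everyday words, is what let the model propose materials "analogous to" known thermoelectrics.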
  • AI tech abused to make people look naked
    Awesome, not awesome.

    #Awesome
    “Computer scientists have taught an artificial intelligence agent how to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment, a skill necessary for the development of effective search-and-rescue robots that one day can improve the effectiveness of dangerous missions.” — Kristen Grauman, UT Austin Professor
    Learn More from Human Bioscience >

    #Not Awesome
    “On Wednesday, Motherboard reported that an anonymous programmer who goes by the alias “Alberto” created DeepNude, an app that takes an image of a clothed woman, and with one click and a few seconds, turns that image into a nude by algorithmically superimposing realistic-looking breasts and vulva onto her body.” — Samantha Cole, Editor
    Learn More from Vice >

    What we’re reading.
    1/ AI programs used in call centers are filling the role once assumed by management, coaching employees to perform better on each of their calls. Learn More from The New York Times >
    2/ The AI bubble is unlikely to have a negative impact on the banking system like the bubbles of the past, since artificial intelligence companies are being financed by equity investors rather than bank loans. Learn More from MIT Sloan Management Review >
    3/ AI experts wonder if algorithms will have the same capacity for creativity as humans if they never feel bored. Learn More from Quanta Magazine >
    4/ The terms of the debate surrounding the ethics of facial recognition software have largely been set by big tech companies — the result may amount to the “last days of privately owning our own faces.” Learn More from The Atlantic >
    5/ Firefighters in Southern California have begun relying on algorithms to predict where wildfires may appear next, hoping they’ll be able to reduce the spread before the fires cause mass damage. Learn More from The New York Times >
    6/ Casinos are turning to AI to predict uncertain outcomes in games, further stacking the odds in their favor and against the players’. Learn More from Axios >
    7/ A whopping 20% of all AI startups are focused on building products within the health and wellbeing sectors — exciting news for humanity! Learn More from AVC >

    Links from the community.
    “Machine learning systems are stuck in a rut” submitted by Samiur Rahman (@samiur1204). Learn More from the morning paper >
    “Write With Transformer” submitted by Clément Delangue (@clementdelangue). Learn More from Hugging Face >
    “The State of AI 2019: Divergence” submitted by Avi Eisenberger (@aeisenberger). Learn More from speakerdeck >
    “A guide to deploying Machine/Deep Learning model(s) in Production” by Mahesh Kumar. Learn More from Noteworthy >

    🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
    AI tech abused to make people look naked was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story. Read more »
  • Driverless car tech and the danger of a wandering mind
    Awesome, not awesome.

    #Awesome
    “Scientists need to develop materials that store, harvest, and use energy more efficiently, but the process of discovering new materials is typically slow and imprecise. Machine learning can accelerate things by finding, designing, and evaluating new chemical structures with the desired properties. This could, for example, help create solar fuels, which can store energy from sunlight, or identify more efficient carbon dioxide absorbents or structural materials that take a lot less carbon to create. The latter materials could one day replace steel and cement — the production of which accounts for nearly 10% of all global greenhouse-gas emissions.” — Karen Hao, AI Reporter
    Learn More from MIT Technology Review >

    #Not Awesome
    “Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a “human in the loop” — that is, with no person involved at any point between identifying a target and killing them. And as facial recognition and decision-making algorithms become more powerful, it will only get easier.” — Kelsey Piper, Writer
    Learn More from Vox >

    What we’re reading.
    1/ New driverless car technology may make driving more dangerous initially — it can allow a driver to be absent-minded for long periods, then suddenly require extreme focus. Learn More from The New York Times >
    2/ The “Amazon’s Choice” badge you’ve seen next to many products is an algorithmic recommendation based on customer reviews, price, and whether the product is in stock — and it is regularly manipulated by people who want their product to stand out. Learn More from BuzzFeed News >
    3/ To design more effective AI systems, creators must look beyond the intended user of the product and answer this question: “how can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?” Learn More from TechCrunch >
    4/ Walmart has rolled out computer vision technology in 1,000 stores to detect theft and notify workers when they should intervene. Learn More from The Verge >
    5/ No matter how accurate a machine learning model’s findings may be, the CIA is unwilling to take actions based on a model with an opaque decision-making process. Learn More from Defense One >
    6/ As AI finds its way into more parts of the government and every business, don’t be surprised if “privacy compromises could become normalized swiftly.” Learn More from WIRED >
    7/ A team of Cornell researchers uses machine learning to develop an understanding of how electrons interact, possibly ushering in a new era of discovery in the field of experimental quantum physics. Learn More from Phys.Org >

    Links from the community.
    “Machine learning revolution is still some way off” submitted by Samiur Rahman (@samiur1204). Learn More from The Financial Times >
    “Machine learning revolution is still some way off” submitted by Clément Delangue (@clementdelangue). Learn More from Hugging Face >
    “Stephen Schwarzman gives $188 million to Oxford to research AI ethics” submitted by Avi Eisenberger (@aeisenberger). Learn More from CNN >

    🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
    Driverless… Read more »
  • Algorithms and extremism
    Awesome, not awesome.

    #Awesome
    “Free will, from a neuroscience perspective, can look quite quaint… using the fMRI to monitor brain activity and machine learning to analyze the neuroimages, the researchers were able to predict which pattern participants would choose up to 11 seconds before they consciously made the decision. And they were able to predict how vividly the participants would be able to envisage it.” — Sophia Chen, Science Writer
    Learn More from WIRED >

    #Not Awesome
    “Users do not need to look for videos of children to end up watching them. [YouTube’s algorithms] can lead them there through a progression of recommendations. So a user who watches erotic videos might be recommended videos of women who become conspicuously younger, and then women who pose provocatively in children’s clothes. Eventually, some users might be presented with videos of girls as young as 5 or 6 wearing bathing suits, or getting dressed or doing a split. On its own, each video might be perfectly innocent, a home movie, say, made by a child. Any revealing frames are fleeting and appear accidental. But, grouped together, their shared features become unmistakable.” — Max Fisher and Amanda Taub, Writers
    Learn More from The New York Times >

    What we’re reading.
    1/ YouTube’s recommendation system was built to keep people on the site longer so they could be shown more ads, and it wound up steering visitors toward extreme content. Learn More from The New York Times >
    2/ A new artificially intelligent prosthetic leg is designed to respond to the thoughts of the person wearing it. Learn More from Quartz >
    3/ Artificial intelligence researchers learn that the process of training an AI model can be incredibly hard on the environment, emitting more than 626,000 pounds of carbon dioxide. Learn More from MIT Technology Review >
    4/ Artificial intelligence systems must be designed with the goal of preserving human agency, helping people to feel empowered by technology rather than undercut by it. Learn More from Wharton >
    5/ Researchers at Stanford create an algorithm that lets people edit videos as if they were typing a sentence — rewriting one spoken word at a time. Learn More from Stanford >
    6/ MIT researchers create an algorithm that builds an image of someone’s face based on a short audio clip of their voice. Learn More from Fast Company >
    7/ AI is used to prevent heartache, spotting and removing fake accounts on dating apps before a real user is met with disappointment. Learn More from BBC >

    Links from the community.
    “The Importance of Predictive Maintenance: Using AI to Increase Operational Efficiency” by Jason Meil. Learn More from Noteworthy >
    “Handwriting Recognition SDK - Part 1” by Ganesh Krishnan. Learn More from Noteworthy >
    “Introducing Google Research Football: A Novel Reinforcement Learning Environment” submitted by Samiur Rahman (@samiur1204). Learn More from Google AI >
    “Modeling the unseen” submitted by Avi Eisenberger (@aeisenberger). Learn More from Instacart >

    🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
    Algorithms and extremism was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story. Read more »
  • AI Makes it Harder to Understand What’s Real
    Awesome, not awesome.

    #Awesome
    “Computers were as good or better than doctors at detecting tiny lung cancers on CT scans, in a study by researchers from Google and several medical centers…[b]y feeding huge amounts of data from medical imaging into systems called artificial neural networks, researchers can train computers to recognize patterns linked to a specific condition, like pneumonia, cancer or a wrist fracture that would be hard for a person to see.” — Denise Grady, Science Reporter
    Learn More from The New York Times >

    #Not Awesome
    “Beyond its use by repressive regimes, AI can directly interfere with human rights in democratic and open societies. The infinite collection of personal data by AI systems for micro-ad targeting limits the rights of privacy. AI-enabled online content monitoring impedes freedom of expression and opinion, as access to and the sharing of information by users is controlled in opaque and inscrutable ways. Vast AI-powered disinformation campaigns — from troll bots to deepfakes (altered video clips) — threaten societies’ access to accurate information, can disrupt elections and erode social cohesion.” — Kyle Matthews & Alexandrine Royer
    Learn More from CBC >

    What we’re reading.
    1/ Facebook’s algorithms help distribute a manipulated video of Speaker Nancy Pelosi that makes it seem as if she’s slurring her words. Learn More from The New York Times >
    2/ A coalition of countries comes together to develop five democratic principles that will guide the development of artificial intelligence — the first among them is that “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.” Learn More from MIT Technology Review >
    3/ As long as we struggle to define intelligence, we won’t know if a superhuman AI has been created. Learn More from Cerebra Lab >
    4/ Leaders in the autonomous vehicles industry fear that not enough thought is being given to the impact the technology will have on public transportation systems and disabled people. Learn More from Axios >
    5/ It’s too late to stop the proliferation of facial recognition software, but we can all demand that our governments and companies deploy it within ethical guidelines. Learn More from WIRED >
    6/ As machines start to do more tasks that were once reserved for humans, like booking restaurant reservations, it will become harder to separate the fake from the real. Learn More from The New York Times >
    7/ Fear mongering about AI could actually serve a useful purpose — getting companies to think about downsides before they play out in reality. Learn More from TechCrunch >

    Links from the community.
    “Using Machine Learning To Identify High-Risk Surgical Patients” by Kristin Corey (@kristoncorey). Learn More from Science Trends >
    “Video Object Detection with RetinaNet” by Alexander Li. Learn More from Noteworthy >
    “Could a Robot Ever Be Conscious?” by Phillip Shirvington. Learn More from Noteworthy >
    “You Can Blend Apache Spark And Tensorflow To Build Potential Deep Learning Solutions” by Satyajit Maitra. Learn More from Noteworthy >
    “Using Machine Learning to Solve Real-World Problems” by Jeff Daniel. Learn More from Noteworthy >
    “Artificial Intelligence, far from GAFA” by Mamadou Diagne. Learn More from Noteworthy >

    🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the… Read more »
  • Artificial Intelligence and Graphic Violence
    Awesome, not awesome.

    #Awesome
    “…[R]esearchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak…“The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.” — Will Knight, Editor
    Learn More from MIT Technology Review >

    #Not Awesome
    “[China] has been using a wide-ranging, secret facial recognition system to track and control the Uighurs, a largely Muslim minority, The Times reported. The advanced system is integrated into China’s growing network of surveillance cameras and is constantly tracking where Uighurs come and go. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity…”” — Niraj Chokshi, Reporter
    Learn More from The New York Times >

    What we’re reading.
    1/ Facebook’s Chief Technology Officer comes to terms with the limits of using artificial intelligence to keep graphic violence off the platform. Learn More from The New York Times >
    2/ People willingly give away their data to tech companies, not realizing that it can be used for anything from building military weapons to surveillance systems. Learn More from The Atlantic >
    3/ Big tech companies that use artificial intelligence depend on the labor of grossly underpaid “labelers” to make their products tick. Learn More from Axios >
    4/ In an effort to beat the market, quantitative stock traders are designing mathematical processes that help machine learning algorithms generate investing ideas. Learn More from Bloomberg >
    5/ Algorithms that can be used to discriminate in hiring against people with certain diseases are being built, and there aren’t proper legal safeguards in place to protect the public. Learn More from Axios >
    6/ A prominent tech journalist fears that the US may accidentally stumble into becoming a surveillance state as smart camera technology proliferates and privacy laws are nowhere to be found. Learn More from The New York Times >
    7/ A new AI algorithm can show the future impact that rising sea levels caused by climate change can have on our homes — hopefully it will be used to move people to action. Learn More from MIT Technology Review >

    Links from the community.
    “Helping robots remember: Hyperdimensional computing theory could change the way AI works” submitted by Avi Eisenberger (@aeisenberger). Learn More from University of Maryland >
    “Building a fully reproducible machine learning pipeline with and Quilt” by Cecelia Shao (@ceceliashao). Learn More from Medium >
    “The Perceptron — A Building Block of Neural Networks” by Ani Karenovna K. Learn More from Noteworthy >
    “Machine Learning Product Management: Lessons Learned” submitted by Samiur Rahman (@samiur1204). Learn More from Domino >

    🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
    Artificial Intelligence and Graphic Violence was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story. Read more »
  • How to Confuse an Algorithm
    Awesome, not awesome.

    #Awesome
    “A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep learning model that can predict from a mammogram if a patient is likely to develop breast cancer in the future. They trained their model on mammograms and known outcomes from over 60,000 patients treated at MGH, and their model learned the subtle patterns in breast tissue that are precursors to malignancy. MIT professor Regina Barzilay, herself a breast cancer survivor, says that the hope is for systems like these to enable doctors to customize screening and prevention programs at the individual level, making late diagnosis a relic of the past.” — Adam Conner-Simons and Rachel Gordon, Communications
    Learn More from CSAIL >

    #Not Awesome
    “The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will [use machine learning algorithms to] see to it that nobody sees it. He fears the unintended consequences of such a law — that in cracking down on content that’s deemed off-limits in the West, it could have ripple effects that make life even harder for those residing in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, “would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.”” — Bernhard Warner, Journalist
    Learn More from The Atlantic >

    What we’re reading.
    1/ People who share random details about themselves to confuse Facebook’s algorithms are exposed to “unfiltered, randomized extreme[s]…delight, danger, and drudgery…” Learn More from The Atlantic >
    2/ Mark Cuban invests in a company that plans to help police departments scan people’s faces and search through the database for individuals that meet certain gender, ethnicity, and emotional descriptions. Learn More from Vice >
    3/ The countries leading in AI research for military applications are building lethal autonomous tools that currently require human intervention, but could one day make killing decisions without it. Learn More from Axios >
    4/ AI researchers try to build algorithms that can resist “adversarial examples” that are used to intentionally confuse AI technology that performs critical tasks — like scanning luggage at airports and detecting hate speech on online platforms. Learn More from WIRED >
    5/ Utility companies are increasingly using AI technologies to predict equipment failures before they cause massive environmental disasters — like California’s deadly wildfires. Learn More from Axios >
    6/ AI algorithms that infer a user’s gender based on purchase/browsing history can reinforce norms that are harmful to people of all genders. Learn More from A List Apart >
    7/ Religious leaders begin to take stances on how they believe AI should and should not be used — and areas in which they believe it interferes with the “exclusive responsibility of humans.” Learn More from The Wall Street Journal >

    Links from the community.
    “How to build a State-of-the-Art Conversational AI with Transfer Learning” submitted by Avi Eisenberger (@aeisenberger). Learn More from Medium >
    “Python toolset for statistical comparison of machine learning models and human readers” submitted by Samiur Rahman (@samiur1204). Learn More from Mateusz Buda >

    🤖First time reading Machine Learnings? Sign up to… Read more »
  • AI and Mass Surveillance
    Awesome, not awesome.

    #Awesome
    “… For nearly 70 years, the process of interviewing, allocating, and accepting refugees has gone largely unchanged… If it works, [a new algorithm called] Annie could change that dynamic… The system examines a series of variables — physical ailments, age, levels of education and languages spoken, for example — related to each refugee case. In other words, the software uses previous outcomes and current constraints to recommend where a refugee is most likely to succeed…This is a drastic departure from how refugees are typically resettled. Each week, HIAS and the eight other agencies that allocate refugees in the United States make their decisions based largely on local capacity, with limited emphasis on individual characteristics or needs.” — Krishnadev Calamur, Writer
    Learn More from The Atlantic >

    #Not Awesome
    “When the report by special counsel Robert S. Mueller III came out last week, offering the most authoritative account yet of Russian interference in the 2016 presidential election, YouTube recommended one video source hundreds of thousands of times to viewers seeking information, a watchdog says: RT, the global media operation funded by the Russian government… the numbers of recommendations suggest that Russians have grown adept at manipulating YouTube’s algorithm…” — Drew Harwell and Craig Timberg, Reporters
    Learn More from The Washington Post >

    What we’re reading.
    1/ Chinese police collect massive amounts of data on how, where, and with whom ethnic groups spend their time — eventually they can feed these data into algorithms that allow them to “predict the everyday life and resistance of its population, and, ultimately, to engineer and control reality.” Learn More from Human Rights Watch >
    2/ A group of researchers thinks we can use many of the same scientific methods that help us study organic “black boxes” (humans’ brains) to better understand inorganic black boxes (AI systems). Learn More from MIT Technology Review >
    3/ None of us may have the choice to be anonymous in the future once governments (and massive companies) pair AI-enabled facial recognition software with the data streams and location capabilities of 5G networks. Learn More from The New Yorker >
    4/ People cling to the idea that AI won’t replace age-old traditions (like car ownership in the US), but we’re already starting to see them fade away. Learn More from Vox >
    5/ As AI tech progresses, humans may be better off erring on the side of humility than with aspirations of control. Learn More from WIRED >
    6/ An algorithm and a team of humans work together to expunge thousands of criminal records related to cannabis possession. Learn More from BBC News >
    7/ An AI researcher training algorithms to generate puns may be laying the foundation for future artificial intelligence systems to “write stories about things humans wouldn’t think to write about.” Learn More from WIRED >

    Links from the community.
    “AI Evolved These Creepy Images to Please a Monkey’s Brain” submitted by Avi Eisenberger (@aeisenberger). Learn More from The Atlantic >
    “3 Machine Learning Books That Helped Me Level Up As A Data Scientist” submitted by Samiur Rahman (@samiur1204). Learn More from Data Stuff >

    🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get… Read more »
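The placement idea behind a system like Annie ("previous outcomes and current constraints") can be sketched as a capacity-constrained assignment: for each case, choose the site with the highest predicted success probability that still has room. Everything below (case IDs, cities, capacities, probabilities) is hypothetical, and real systems use more sophisticated matching than this greedy pass:

```python
# Hypothetical model outputs: predicted probability of a successful
# resettlement for each (case, site) pair.
predicted_success = {
    ("case_1", "Cleveland"): 0.72, ("case_1", "Boise"): 0.41,
    ("case_2", "Cleveland"): 0.55, ("case_2", "Boise"): 0.63,
    ("case_3", "Cleveland"): 0.68, ("case_3", "Boise"): 0.35,
}
capacity = {"Cleveland": 2, "Boise": 1}  # current local constraints

def place(cases, sites):
    """Greedily assign each case to its best-scoring site with open capacity."""
    assignment = {}
    for case in cases:
        open_sites = [s for s in sites if capacity[s] > 0]
        best = max(open_sites, key=lambda s: predicted_success[(case, s)])
        assignment[case] = best
        capacity[best] -= 1
    return assignment

assignment = place(["case_1", "case_2", "case_3"], ["Cleveland", "Boise"])
print(assignment)
# {'case_1': 'Cleveland', 'case_2': 'Boise', 'case_3': 'Cleveland'}
```

The contrast with the status quo described in the quote is that capacity alone no longer decides: the per-case predictions break ties among the sites that still have room.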
  • AI and White Supremacy on Twitter
    Awesome, not awesome.

    #Awesome
    “Scientists have harnessed artificial intelligence to translate brain signals into speech, in a step toward brain implants that one day could let people with impaired abilities speak their minds, according to a new study…When perfected, the system could give people who can’t speak, such as stroke patients, cancer victims, and those suffering from amyotrophic lateral sclerosis — or Lou Gehrig’s disease — the ability to conduct conversations at a natural pace, the researchers said.” — Robert Lee Hotz, Writer
    Learn More from The Wall Street Journal >

    #Not Awesome
    “…The calls for ethics in AI have been strong and understandable. AI is powerful technology that can and already has gone terribly wrong, as in the advertising and recommendation algorithms used by Facebook and YouTube. Now some of this stuff is obvious and there has been no lack of people pointing out the problems. If you train an algorithm on engagement for example, it will surface more content that confirms users’ existing beliefs and skews towards emotional content that appeals to our instincts (rather than requiring us to engage our rationality which requires effort).” — Albert Wenger, Investor
    Learn More from Continuations >

    What we’re reading.
    1/ Twitter won’t use machine learning algorithms to ban white supremacists’ accounts from the platform out of fear that they would affect some Republican politicians’ accounts too. Learn More from Motherboard >
    2/ If we as a society want our tech companies to build ethical AI tools, we’ll need to protect employees who speak out against questionable practices. Learn More from The New York Times >
    3/ Machine learning algorithms aren’t inherently biased, but problems arise when the “raw” data we feed them are actually cooked without care. Learn More from Benedict Evans >
    4/ Facebook claims its AI filters failed to flag the Christchurch mass-shooting video as a “harmful act” because it was filmed from a first-person perspective. Learn More from Bloomberg >
    5/ Elon Musk wants Tesla to operate a fleet of “robo taxis” by the end of next year — here’s a video of one of their self-driving vehicles in action. Learn More from YouTube >
    6/ Measuring AI advances against human performance is creating incentives for companies to create technologies that replace human efforts, not augment them. Learn More from Axios >
    7/ AI algorithms can capture and “regurgitat[e] some statistical variation” of music they hear, but does that make them creative? Learn More from MIT Technology Review >

    Links from the community.
    “Announcing our series B funding… and what it means for the future of work” submitted by Dan Turchin (@dturchin). Learn More from Astound >
    “I trained an AI on Mark Zuckerberg’s Facebook posts and it has thoughts about AI.” submitted by Max Woolf (@minimaxir). Learn More from Twitter >
    “From Principles to Action: How do we Implement Tech Ethics?” by Industry Ethicists. Learn More from Noteworthy >
    “Zuck on security: “The only hope is building AI systems that can either identify things…” submitted by Samiur Rahman (@samiur1204). Learn More from Twitter >
    “Convolutional Neural Network on Oil Spills in Niger Delta” by Kehinde Ogunyale. Learn More from Noteworthy >
    “So you are thinking of taking the AWS Certified Machine Learning Specialty exam” by… Read more »
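Wenger's point about engagement-trained algorithms reduces to a one-line ranking rule: sort by predicted engagement and nothing else, and belief-confirming, emotional content rises to the top. The items and scores below are invented for illustration:

```python
# Toy feed with hypothetical predicted-engagement scores. In a real system
# these would come from a model trained on clicks and watch time.
items = [
    {"title": "nuanced analysis",   "engagement": 0.21},
    {"title": "confirms your view", "engagement": 0.64},
    {"title": "outrage bait",       "engagement": 0.88},
]

def rank_by_engagement(feed):
    """The objective Wenger criticizes: maximize engagement, nothing else."""
    return sorted(feed, key=lambda item: item["engagement"], reverse=True)

for item in rank_by_engagement(items):
    print(item["title"])
# outrage bait, then confirms your view, then nuanced analysis
```

Nothing in the objective penalizes the skew; the ranking is working exactly as trained, which is why the problem is the choice of objective rather than a bug.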
  • AI and the Death of Democracy
    Awesome, not awesome.

    #Awesome
    “According to the American Cancer Society, more than 229,000 people will be diagnosed with lung cancer in the United States this year, with adenocarcinoma being the most common type. To help with diagnosis, researchers from Dartmouth’s Norris Cotton Cancer Center and the Hassanpour Lab at Dartmouth University developed a deep learning-based system for automated classification of histologic subtypes on lung adenocarcinoma surgical resection slides on par with pathologists.” — NVIDIA News Center
    Learn More from NVIDIA >

    #Not Awesome
    ““The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy, Graphics editor
    Learn More from The New York Times >

    What we’re reading.
    1/ China is using artificial intelligence systems to systemically oppress a Muslim minority group, and one AI researcher at MIT thinks it is an existential threat to democracy. Learn More from The New York Times >
    2/ Given that “more than 80% of AI professors are men,” we have a lot of work to do to ensure that new AI technologies used throughout society don’t perpetuate historical biases. Learn More from The Guardian >
    3/ Left unchecked, massive technology companies may pose a larger threat to society than misused artificial intelligence itself. Learn More from Blair Reeves >
    4/ YouTube’s CEO struggles to unwind the troubled algorithmic recommendation system that her company built to keep people watching videos (no matter how sensational or untrue). Learn More from The New York Times >
    5/ Universities are struggling mightily to keep up with companies’ demands to pump out as much AI talent as possible. Learn More from Axios >
    6/ An algorithm used by a British university to discriminate against applicants in the 1980s should serve as a cautionary tale about how “unbiased” algorithms can be abused. Learn More from IEEE Spectrum >
    7/ Microsoft is trying to build a reputation as an ethical AI company, but its decision to help authoritarian governments build facial recognition software is complicating things. Learn More from VentureBeat >

    Links from the community.
    “Artificial intelligence speeds efforts to develop clean, virtually limitless fusion energy” submitted by Avi Eisenberger (@aeisenberger). Learn More from EurekAlert! >
    “A Gentle Introduction to Text Summarization in Machine Learning” submitted by Samiur Rahman (@samiur1204). Learn More from FloydHub >
    “The Secrets of Successful AI Startups. Who’s Making Money in AI? Part II” by Simon Greenman (@sgreenman). Learn More from Towards Data Science >
    “Google Coral Edge TPU vs NVIDIA Jetson Nano: A quick deep dive into EdgeAI performance? Part II” by Sam Sterckval. Learn More… Read more »
Author: hits1k
