AI & Machine Learning

  • TensorFlow 2.0 and Cloud AI make it easy to train, deploy, and maintain scalable machine learning models
    Since it was open-sourced in 2015, TensorFlow has matured into an entire end-to-end ML ecosystem that includes a variety of tools, libraries, and deployment options to help users go from research to production easily. This month at the 2019 TensorFlow Dev Summit we announced TensorFlow 2.0, which aims to make machine learning models easier to use and deploy.

TensorFlow started out as a machine learning framework and has grown into a comprehensive platform that gives researchers and developers access to both intuitive higher-level APIs and low-level operations. In TensorFlow 2.0, eager execution is enabled by default, with tight Keras integration. You can easily ingest datasets via pipelines, and you can monitor your training in TensorBoard directly from Colab and Jupyter notebooks. The TensorFlow team will continue to improve the TensorFlow 2.0 alpha, with a general release candidate coming later in Q2 2019.

Making ML easier to use

The TensorFlow team’s decision to focus on developer productivity and ease of use doesn’t stop at iPython notebooks and Colab; it extends to making API components integrate far more intuitively with tf.keras (now the standard high-level API), and to TensorFlow Datasets, which lets users import common preprocessed datasets with only one line of code. Data ingestion pipelines can be orchestrated, pushed into production with TensorFlow Extended (TFX), and scaled to multiple nodes and hardware architectures with minimal code changes using distribution strategies.

The TensorFlow engineering team has created an upgrade tool and several migration guides to support users who wish to migrate their models from TensorFlow 1.x to 2.0. TensorFlow is also hosting a weekly community testing stand-up for users to ask questions about TensorFlow 2.0 and migration support.
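As a minimal sketch of the tf.keras-plus-input-pipeline workflow described above (using toy in-memory data of our own invention rather than a real TensorFlow Dataset):

```python
import tensorflow as tf

# Toy in-memory data standing in for a real dataset; with the
# tensorflow_datasets package you could instead load a preprocessed
# dataset with a single line, e.g. tfds.load("mnist").
xs = tf.random.normal((256, 8))
ys = tf.cast(tf.reduce_sum(xs, axis=1) > 0, tf.float32)
ds = tf.data.Dataset.from_tensor_slices((xs, ys)).shuffle(256).batch(32)

# tf.keras is the standard high-level API in TensorFlow 2.0.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
history = model.fit(ds, epochs=2, verbose=0)
```

The same model and pipeline code runs unchanged across hardware when wrapped in a distribution strategy scope.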
If you’re interested, you can find more information on the TensorFlow website.

Upgrading a model with the tf_upgrade_v2 tool.

Experiment and iterate

Both researchers and enterprise data science teams must continuously iterate on model architectures, with a focus on rapid prototyping and speed to a first solution. With eager execution a focus in TensorFlow 2.0, researchers can use intuitive Python control flows, optimize their eager code with tf.function, and save time with improved error messaging. Creating and experimenting with models using TensorFlow has never been easier.

Faster training is essential for model deployments, retraining, and experimentation. In the past year, the TensorFlow team has worked diligently to improve training performance on a variety of platforms, including the second-generation Cloud TPU (by a factor of 1.6x) and the NVIDIA V100 GPU (by a factor of more than 2x). For inference, we saw speedups of over 3x with Intel’s MKL library, which supports CPU-based Compute Engine instances.

Through add-on extensions, TensorFlow expands to help you build advanced models. For example, TensorFlow Federated lets you train models both in the cloud and on remote (IoT or embedded) devices in a collaborative fashion. Oftentimes, your remote devices have data to train on that your centralized training system may not. We also recently announced the TensorFlow Privacy extension, which helps you strip personally identifiable information (PII) from your training data. Finally, TensorFlow Probability extends TensorFlow’s abilities to more traditional statistical use cases, which you can use in conjunction with other functionality like estimators.

Deploy your ML model in a variety of environments and languages

A core strength of TensorFlow has always been the ability to deploy models into production. In TensorFlow 2.0, the TensorFlow team is making it even easier.
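To make the eager-plus-tf.function workflow described above concrete, here is a small sketch (the function and data are our own illustration): plain Python control flow runs eagerly by default, and decorating the function with tf.function lets AutoGraph trace it into an optimized graph.

```python
import tensorflow as tf

@tf.function
def count_positive(x):
    # Ordinary Python loop and branch; AutoGraph converts them into
    # graph ops (tf.while_loop / tf.cond) when the function is traced.
    n = tf.constant(0)
    for i in tf.range(tf.size(x)):
        if x[i] > 0:
            n += 1
    return n
```

Removing the decorator runs the identical code eagerly, which is handy for debugging before optimizing.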
TFX Pipelines give you the ability to coordinate how you serve your trained models for inference at runtime, whether on a single instance or across an entire cluster. Meanwhile, for more resource-constrained systems, like mobile or IoT devices and embedded hardware, you can easily quantize your models to run with TensorFlow Lite. Airbnb, Shazam, and the BBC are all using TensorFlow Lite to enhance their mobile experiences, and to validate as well as classify user-uploaded content.

Exploring and analyzing data with TensorFlow Data Validation.

JavaScript is one of the world’s most popular programming languages, and TensorFlow.js helps make ML available to millions of JavaScript developers. The TensorFlow team announced TensorFlow.js version 1.0. This release means you can not only train and run models in the browser, but also run TensorFlow as part of server-side hosted JavaScript apps, including on App Engine. TensorFlow.js now has better performance than ever, and its community has grown substantially: in the year since its initial launch, community members have downloaded TensorFlow.js over 300,000 times, and its repository now incorporates code from over 100 contributors.

How to get started

If you’re eager to get started with the TensorFlow 2.0 alpha on Google Cloud, start up a Deep Learning VM and try out some of the tutorials. TensorFlow 2.0 is available through Colab via pip install if you’re just looking to run a notebook anywhere, but perhaps more importantly, you can also run a Jupyter instance on Google Cloud using a Cloud Dataproc cluster, or launch notebooks directly from Cloud ML Engine, all from within your GCP project.

Using TensorFlow 2.0 with a Deep Learning VM and GCP Notebook Instances.

Along with announcing the alpha release of TensorFlow 2.0, we also announced new community and education partnerships.
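The TensorFlow Lite quantization step mentioned above can be sketched as follows (the tiny Keras model is our own stand-in; the converter calls shown are TensorFlow’s post-training quantization path):

```python
import tensorflow as tf

# A small stand-in model; in practice you would convert your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Post-training quantization: the converter shrinks the model so it can
# run on mobile/embedded devices with TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# tflite_model is a serialized FlatBuffer you can ship to a device.
```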
In collaboration with O’Reilly Media, we’re hosting TensorFlow World, a week-long conference dedicated to fostering and bringing together the open source community and all things TensorFlow. The call for proposals is open for attendees to submit papers and projects to be highlighted at the event. Finally, we announced two new courses to help beginners and learners new to ML and TensorFlow. The first course is Course 1: Introduction to TensorFlow for AI, ML and DL, part of the TensorFlow: from Basics to Mastery series. The second course is Udacity’s Intro to TensorFlow for Deep Learning.

If you’re using TensorFlow 2.0 on Google Cloud, we want to hear about it! Make sure to join our Testing special interest group, submit your project abstracts to TensorFlow World, and share your projects in our #PoweredByTF Challenge on DevPost. To quickly get up to speed on TensorFlow, be sure to check out our free courses on Udacity.
  • NVIDIA’s RAPIDS joins our set of Deep Learning VM images for faster data science
    If you’re a data scientist, researcher, engineer, or developer, you may be familiar with Google Cloud’s set of Deep Learning Virtual Machine (VM) images, which enable one-click setup of machine learning-focused development environments. But some data scientists still use a combination of pandas, Dask, scikit-learn, and Spark on traditional CPU-based instances. If you’d like to speed up your end-to-end pipeline through scale, Google Cloud’s Deep Learning VMs now include an experimental image with RAPIDS, NVIDIA’s open source, Python-based GPU-accelerated data processing and machine learning libraries, which are a key part of NVIDIA’s larger collection of CUDA-X AI accelerated software. CUDA-X AI is NVIDIA’s collection of GPU acceleration libraries for deep learning, machine learning, and data analysis.

The Deep Learning VM images comprise a set of Debian 9-based Compute Engine virtual machine disk images optimized for data science and machine learning tasks. All images include common machine learning (typically deep learning) frameworks and tools installed from first boot, and can be used out of the box on instances with GPUs to accelerate your data processing tasks. In this blog post you’ll learn how to use a Deep Learning VM that includes the GPU-accelerated RAPIDS libraries.

RAPIDS is an open-source suite of data processing and machine learning libraries, developed by NVIDIA, that enables GPU acceleration for data science workflows. RAPIDS relies on NVIDIA’s CUDA language, allowing users to leverage GPU processing and high-bandwidth GPU memory through user-friendly Python interfaces. It includes cuDF, a DataFrame API based on Apache Arrow data structures that will be familiar to users of pandas, and cuML, a growing library of GPU-accelerated ML algorithms that will be familiar to users of scikit-learn.
Together, these libraries provide an accelerated solution for ML practitioners that requires only minimal code changes and no new tools to learn. RAPIDS is available as a conda or pip package, in a Docker image, and as source code.

Using the RAPIDS Google Cloud Deep Learning VM image automatically initializes a Compute Engine instance with all the pre-installed packages required to run RAPIDS. No extra steps required!

Creating a new RAPIDS virtual machine instance

Compute Engine offers predefined machine types that you can use when you create an instance. Each predefined machine type includes a preset number of vCPUs and amount of memory, and bills you at a fixed rate, as described on the pricing page.

If predefined machine types do not meet your needs, you can create an instance with a custom virtualized hardware configuration. Specifically, you can create an instance with a custom number of vCPUs and amount of memory, effectively using a custom machine type. In this case, we’ll create a custom Deep Learning VM image with 48 vCPUs, extended memory of 384 GB, 4 NVIDIA Tesla T4 GPUs, and RAPIDS support.

Notes:

- You can create this instance in any available zone that supports T4 GPUs.
- The option install-nvidia-driver=True installs the NVIDIA GPU driver automatically.
- The option proxy-mode=project_editors makes the VM visible in the Notebook Instances section.
- To define extended memory, use 1024*X, where X is the number of GB of RAM required.

Using RAPIDS

To put RAPIDS through its paces on Google Cloud Platform (GCP), we focused on a common HPC workload: a parallel sum reduction test. This test can operate on very large problems (the default size is 2 TB) using distributed memory and parallel task processing. There are several applications that require the computation of parallel sum reductions in high performance computing (HPC).
Some examples include:

- Solving linear recurrences
- Evaluation of polynomials
- Random number generation
- Sequence alignment
- N-body simulation

It turns out that parallel sum reduction is useful for the data science community at large. To manage the deluge of big data, a parallel programming model called “MapReduce” is used for processing data with distributed clusters. The “Map” portion of this model supports sorting: for example, sorting products into queues. Once the model maps the data, it then summarizes the output with the “Reduce” algorithm—for example, counting the number of products in each queue. A summation operation is the most compute-heavy step, and given the scale of data that the model is processing, these sum operations must be carried out using parallel distributed clusters in order to complete in a reasonable amount of time.

But certain reduction sum operations contain dependencies that inhibit parallelization. To illustrate such a dependency, suppose we want to add a series of numbers, as shown in Figure 1. From Figure 1 (left), we must first add 7 + 6 to obtain 13, before we can add 13 + 14 to obtain 27, and so on in a sequential fashion. These dependencies inhibit parallelization. However, since addition is associative, the summation can be expressed as a tree (Figure 2, right). The benefit of this tree representation is that the dependency chain is shallow, and since the root node summarizes its leaves, the calculation can be split into independent tasks.

Speaking of tasks, this brings us to the Python package Dask, a popular distributed computing framework. With Dask, data scientists and researchers can use Python to express their problems as tasks. Dask then distributes these tasks across processing elements within a single system, or across a cluster of systems. The RAPIDS team recently integrated GPU support into a package called dask-cuda.
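The sequential-versus-tree contrast above can be sketched in a few lines of plain Python (a toy illustration of our own):

```python
def tree_sum(values):
    """Pairwise (tree-shaped) reduction.

    The dependency chain is only O(log n) deep, and the two halves are
    independent, so a scheduler like Dask can run them as separate
    tasks on different workers.
    """
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return tree_sum(values[:mid]) + tree_sum(values[mid:])

# Sequential addition forces 7 + 6 = 13, then 13 + 14 = 27, in order;
# the tree form computes the same total because addition is associative.
total = tree_sum([7, 6, 14])
```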
When you import both dask-cuda and another package called CuPy, which allows data to be allocated on GPUs using familiar NumPy constructs, you can really explore the full breadth of models you can build with your data set. To illustrate, Figures 3 and 4 show side-by-side comparisons of the same test run. On the left, 48 cores of a single system are used to process 2 terabytes (TB) of randomly initialized data using 48 Dask workers. On the right, 4 Dask workers process the same 2 TB of data, but dask-cuda is used to automatically associate those workers with 4 Tesla T4 GPUs installed in the same system.

Running RAPIDS

To test parallel sum reduction, perform the following steps:

1. SSH into the instance. See Connecting to Instances for more details.

2. Download the required code from this repository and upload it to your Deep Learning VM Compute Engine instance. Two files are of particular importance as you profile the workload: a helper bash shell script and the summation Python script. You can find the sample code to run these tests, based on the blog post GPU Dask Arrays, below.

3. Run the tests. First run the test on the instance’s CPU complex, in this case specifying 48 vCPUs (indicated by the -c flag). Then run the test using 4 NVIDIA Tesla T4 GPUs (indicated by the -g flag).

Figure 3: CPU-based solution. Figure 4: GPU-based solution.

Here are some initial conclusions we derived from these tests:

- Processing 2 TB of data on GPUs is much faster (an ~12x speed-up for this test).
- Using Dask’s dashboard, you can visualize the performance of the reduction sum as it is executing.
- CPU cores are fully occupied during processing on CPUs, but the GPUs are not fully utilized.
- You can also run this test in a distributed environment.

In this example, we allocate Python arrays using the double data type by default. Since the code allocates an array of 500K x 500K elements, this represents 2 TB (500K × 500K × 8 bytes per word).
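A scaled-down, CPU-only version of this benchmark can be reproduced with dask.array alone (the shrunken sizes here are our own; swapping the allocation to CuPy-backed chunks via dask-cuda is what moves the same computation onto GPUs):

```python
import dask.array as da

# The real test allocates a 500,000 x 500,000 float64 array:
# 500_000 * 500_000 * 8 bytes = 2 TB. We shrink it to ~32 MB here.
n = 2_000
x = da.random.normal(size=(n, n), chunks=(500, 500))

# .sum() builds a tree of per-chunk partial sums; .compute() runs the
# resulting task graph across the available workers.
total = x.sum().compute()
```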
Dask initializes these array elements randomly, via a normal Gaussian distribution, using the dask.array package.

Running RAPIDS on a distributed cluster

You can also run RAPIDS in a distributed environment using multiple Compute Engine instances. You can use the same code to run RAPIDS in a distributed way with minimal modification and still decrease the processing time. If you want to explore RAPIDS in a distributed environment, please follow the complete guide here.

Conclusion

As you can see from the above example, the RAPIDS VM image can dramatically speed up your ML workflows. Running RAPIDS with Dask lets you seamlessly integrate your data science environment with Python and its myriad libraries and wheels, HPC schedulers such as SLURM, PBS, SGE, and LSF, and open-source infrastructure orchestration projects such as Kubernetes and YARN. Dask also helps you develop your model once, and adaptably run it either on a single system or scaled out across a cluster. You can then dynamically adjust your resource usage based on computational demands. Lastly, Dask helps you ensure that you’re maximizing uptime, through the fault tolerance capabilities intrinsic to failover-capable cluster computing.

It’s also easy to deploy on Google’s Compute Engine distributed environment. If you’re eager to learn more, check out the RAPIDS project and open-source community website, or review the RAPIDS VM image documentation.

Acknowledgements: Ty McKercher, NVIDIA, Principal Solution Architect; Vartika Singh, NVIDIA, Solution Architect; Gonzalo Gasca Meza, Google, Developer Programs Engineer; Viacheslav Kovalevskyi, Google, Software Engineer
  • Cloud AI helps you train and serve TensorFlow TFX pipelines seamlessly and at scale
    Last week, at the TensorFlow Dev Summit, the TensorFlow team released new and updated components that integrate into the open source TFX platform (TensorFlow eXtended). TFX components are a subset of the tools used inside Google to power hundreds of teams’ wide-ranging machine learning applications. They address critical challenges to the successful deployment of machine learning (ML) applications in production, such as:

- The prevention of training-versus-serving skew
- Input data validation and quality checks
- Visualization of model performance on multiple slices of data

A TFX pipeline is a sequence of components that implements an ML pipeline specifically designed for scalable, high-performance machine learning tasks. TFX pipelines support modeling, training, serving/inference, and managing deployments to online, native mobile, and even JavaScript targets. In this post, we‘ll explain how Google Cloud customers can use the TFX platform for their own ML applications, and deploy them at scale.

Cloud Dataflow as a serverless autoscaling execution engine for (Apache Beam-based) TFX components

The TensorFlow team authored TFX components using Apache Beam for distributed processing. You can run Beam natively on Google Cloud with Cloud Dataflow, a seamless autoscaling runtime that gives you access to large amounts of compute capability on demand. Beam can also run in many other execution environments, including Apache Flink, both on-premises and in multi-cloud mode. When you run Beam pipelines on Cloud Dataflow—the execution environment they were designed for—you can access advanced optimization features such as Dataflow Shuffle, which groups and joins datasets larger than 200 terabytes.
The same team that designed and built MapReduce and Google Flume also created third-generation data runtime innovations like dynamic work rebalancing, batch and streaming unification, and runner-agnostic abstractions that exist today in Apache Beam.

Kubeflow Pipelines makes it easy to author, deploy, and manage TFX workflows

Kubeflow Pipelines, part of the popular Kubeflow open source project, helps you author, deploy, and manage TFX workflows on Google Cloud. You can easily deploy Kubeflow on Google Kubernetes Engine (GKE) via the 1-click deploy process. It automatically configures and runs essential backend services, such as the orchestration service for workflows, and optionally the metadata backend that tracks information relevant to workflow runs and the corresponding artifacts that are consumed and produced. GKE provides essential enterprise capabilities for access control and security, as well as tooling for monitoring and metering.

Thus, Google Cloud makes it easy for you to execute TFX workflows at considerable scale using:

- Distributed model training and scalable model serving on Cloud ML Engine
- TFX component execution at scale on Cloud Dataflow
- Workflow and metadata orchestration and management with Kubeflow Pipelines on GKE

Figure 1: TFX workflow running in Kubeflow Pipelines

The Kubeflow Pipelines UI shown in the above diagram makes it easy to visualize and track all executions. For deeper analysis of the metadata about component runs and artifacts, you can host a Jupyter notebook in the Kubeflow cluster and query the metadata backend directly. You can refer to this sample notebook for more details.

At Google Cloud, we work to empower our customers with the same set of tools and technologies that we use internally across many Google businesses to build sophisticated ML workflows.
To learn more about using TFX, please check out the TFX user guide, or learn how to integrate TFX pipelines into your existing Apache Beam workflows in this video.

Acknowledgments: Sam McVeety, Clemens Mewald, and Ajay Gopinathan also contributed to this post.
  • New study: The state of AI in the enterprise
    Editor’s note: Today we hear from one of our Premier partners, Deloitte. Deloitte’s recent report, The State of AI in the Enterprise, 2nd Edition, examines how businesses are thinking about—and deploying—AI services.

From consumer products to financial services, AI is transforming the global business landscape. In 2017, we began our relationship with Google Cloud to help our joint customers deploy and scale AI applications for their businesses. These customers frequently tell us they’re seeing steady returns on their investments in AI, and as a result, they’re interested in more ways to increase those investments.

We regularly conduct research on the broader market trends for AI, and in November of 2018, we released our second annual “State of AI in the Enterprise” study. It showed that industry trends at large reflect what we hear from our customers: the business community remains bullish on AI’s impact. In this blog post, we’ll examine some of the key takeaways from our survey of 1,100 IT and line-of-business executives, and discuss how these findings are relevant to our customers.

Enterprises are doubling down on AI—and seeing financial benefits

More than 95 percent of respondents believe that AI will transform both their businesses and their industries. A majority of survey respondents have already made large-scale investments in AI, with 37 percent saying they have committed $5 million or more to AI-specific initiatives. Nearly two-thirds of respondents (63 percent) feel AI has completely upended the marketplace, and that they need to make large-scale investments to catch up with rivals—or even to open a narrow lead.

A surprising 82 percent of our respondents told us they’ve already gained a financial return from their AI investments. But that return is not equal across industries. Technology, media, and telecom companies, along with professional services firms, have made the biggest investments and realized the highest returns.
In contrast, the public sector and financial services, with lower investments, lag behind. With 88 percent of surveyed companies planning to increase AI spending in the coming year, there’s a significant opportunity to increase both revenue and cost savings across all industries. However, as with past transformative technologies, selecting the right AI use cases will be key to realizing near- and long-term benefits.

Enterprises are using a broad range of AI technologies, increasingly in the cloud

Our findings show that enterprises are employing a wide variety of AI technologies. More than half of respondents say their businesses are using statistical machine learning (63 percent), robotic process automation (59 percent), or natural language processing and generation (53 percent). Just under half (49 percent) are still using expert or rule-based systems, and 34 percent are using deep learning.

When asked how they were accessing these AI capabilities, 59 percent said they relied on enterprise software with AI capabilities (much of which is available in the cloud) and 49 percent said “AI as a service” (again, presumably in the cloud). Forty-six percent, a surprisingly high number, said they were relying on automated machine learning—a set of capabilities that is only available in the cloud. It’s clear, then, that the cloud is already having a major effect on AI use in these large enterprises.

These trends suggest that public cloud providers could become the primary way businesses access AI services. As a result, we believe this could lower the cost of cloud services and enhance their capabilities at the same time. In fact, our research shows that AI technology companies are investing more R&D dollars into enhancing cloud-native versions of AI systems.
If this trend continues, it seems likely that enterprises seeking best-of-breed AI solutions will increasingly need to access them from cloud providers.

There are still challenges to overcome

Given the enthusiasm surrounding AI technologies, it is not surprising that organizations also need to supplement their investments in talent. Although 31 percent of respondents listed “lack of AI skills” as a top-three concern—below such issues as implementation, integration, and data—HR teams need to look beyond technology skills to understand their organization’s pain points and end goals. Companies should try to build teams that bring a mix of business and technology experience to help fully realize their AI projects’ potential.

Our respondents also had concerns about AI-related risks. A little more than half are worried about cybersecurity issues around AI (51 percent), and many are concerned about “making the wrong strategic decisions based on AI recommendations” (43 percent). Companies have also begun to recognize ethical risks from AI, the most common being “using AI to manipulate information and create falsehoods” (43 percent).

In conclusion

Despite some challenges, our study suggests that enterprises are enthusiastic about AI, have already seen value from their investments, and are committed to expanding those investments. Looking forward, we expect to see substantial growth in AI and its cloud-based implementations, and that businesses will increasingly turn to public cloud providers as their primary method of accessing them.

Deloitte was proud to be named Google Cloud’s Global Services Partner of the Year for 2017, in part due to our joint investments in AI. To learn more about how we can help you accelerate your organization’s AI journey, contact us.

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see for a detailed description of our legal structure.
Certain services may not be available to attest clients under the rules and regulations of public accounting.
  • Everyday AI: beyond spell check, how Google Docs is smart enough to correct grammar
    Written communication is at the heart of what drives businesses. Proposals, presentations, emails to colleagues—this all keeps work moving forward. This is why we’ve built features into G Suite to help you communicate effectively, like Smart Compose and Smart Reply, which use machine learning smarts to help you draft and respond to messages quickly. More recently, we’ve introduced machine translation techniques into Google Docs to flag grammatical errors within your documents as you draft them.

If you’ve ever questioned whether to use “a” versus “an” in a sentence, or whether you’re using the correct verb tense or preposition, you’re not alone. Grammar is nuanced and tricky, which makes it a great problem to solve with the help of artificial intelligence. Here’s a look at how we built grammar suggestions in Docs.

The gray areas of grammar

Although we generally think of grammar as a set of rules, these rules are often complex and subjective. In spelling, you can reference a resource that tells you whether a word exists and how it’s spelled: dictionaries (remember those?).

Grammar is different. It’s a harder problem to tackle because its rules aren’t fixed. It varies based on language and context, and may change over time, too. To make things more complicated, there are many different style books—whether it be MLA, AP, or some other style—which makes consistency a challenge.

Given these nuances, even the experts don’t always agree on what’s correct. For our grammar suggestions, we worked with professional linguists to proofread sample sentences to get a sense of the true subjectivity of grammar. During that process, we found that linguists disagreed on grammar about 25 percent of the time.
This raised the obvious question: how do we automate something that doesn’t run on definitive rules?

Where machine translation makes a mark

Much like having someone red-line your document with suggestions on how to replace “incorrect” grammar with “correct” grammar, we can use machine translation technology to help automate that process. At a basic level, machine translation performs substitution and reorders words from a source language to a target language, for example, substituting a “source” word in English (“hello!”) for a “target” word in Spanish (“¡hola!”). Machine translation techniques have been developed and refined over the last two decades throughout the industry, in academia, and at Google, and have even helped power Google Translate.

Along similar lines, we use machine translation techniques to flag “incorrect” grammar within Docs using blue underlines, but instead of translating from one language to another as with Google Translate, we treat text with incorrect grammar as the “source” language and correct grammar as the “target.”

Working with the experts

Before we could train models, we needed to define “correct” and “incorrect” grammar. What better way to do so than to consult the experts? Our engineers worked with a collection of computational and analytical linguists, with specialties ranging from sociology to machine learning. This group supports a host of linguistic projects at Google and helps bridge the gap between how humans and machines process language (and not just in English—they support over 40 languages and counting).

For several months, these linguists reviewed thousands of grammar samples to help us refine machine translation models, from classic cases like “there” versus “their” versus “they’re” to more complex rules involving prepositions and verb tenses. Each sample received close attention—three linguists reviewed each case to identify common patterns and make corrections.
The third linguist served as the “tie breaker” in case of disagreement (which happened a quarter of the time).

Once we identified the samples, we then fed them into statistical learning algorithms—along with “correct” text gathered from high-quality web sources (billions of words!)—to help us predict outcomes using statistics like the frequency at which we’ve seen a specific correction occur. This process helped us build a basic spelling and grammar correction model.

We iterated on these models by rolling them out to a small portion of people who use Docs, and then refined them based on user feedback and interactions. For example, in earlier models of grammar suggestions, we received feedback that suggestions for verb tenses, and for the correct singular or plural form of a noun or verb, were inaccurate. We’ve since adjusted the model to solve for these specific issues, resulting in more precise suggestions.

Better grammar. No ifs, ands or buts.

So if you’ve ever asked yourself “how does it know what to suggest when I write in Google Docs,” these grammar suggestion models are the answer. They work in the background to analyze your sentence structure, and the semantics of your sentence, to help you find mistakes or inconsistencies. With the help of machine translation, Docs can help you catch a range of mistakes.

Evolving grammar suggestions, just like language

When it comes to grammar, we’re constantly improving the quality of each suggestion to make corrections as useful and relevant as possible. With our AI-first approach, G Suite is in the best position to help you communicate smarter and faster, without sweating the small stuff. Learn more.
  • Let Deep Learning VMs and Jupyter notebooks burn the midnight oil for you: robust and automated training with Papermill
    In the past several years, Jupyter notebooks have become a convenient way of experimenting with machine learning datasets and models, as well as sharing training processes with colleagues and collaborators. Oftentimes, your notebook will take a long time to complete its execution, and an extended training session may cause you to incur charges even though you are no longer using Compute Engine resources.

This post will explain how to execute a Jupyter notebook in a simple and cost-efficient way. We’ll explain how to deploy a Deep Learning VM image using TensorFlow to launch a Jupyter notebook, which will be executed using the Nteract Papermill open source project. Once the notebook has finished executing, the Compute Engine instance that hosts your Deep Learning VM image will automatically terminate.

The components of our system:

First, Jupyter notebooks

The Jupyter Notebook is an open-source, web-based, interactive environment for creating and sharing IPython notebook (.ipynb) documents that contain live code, equations, visualizations, and narrative text. The platform supports data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.

Next, Deep Learning Virtual Machine (VM) images

The Deep Learning VM images are a set of Debian 9-based Compute Engine virtual machine disk images that are optimized for data science and machine learning tasks. All images include common ML frameworks and tools installed from first boot, and can be used out of the box on instances with GPUs to accelerate your data processing tasks. You can launch Compute Engine instances pre-installed with popular ML frameworks like TensorFlow, PyTorch, or scikit-learn, and even add Cloud TPU and GPU support with a single click.

And now, Papermill

Papermill is a library for parametrizing, executing, and analyzing Jupyter notebooks.
It lets you spawn multiple notebooks with different parameter sets and execute them concurrently. Papermill can also help collect and summarize metrics from a collection of notebooks.

Papermill also permits you to read or write data from many different locations. Thus, you can store your output notebook on a different storage system that provides higher durability and easy access, in order to establish a reliable pipeline. Papermill recently added support for Google Cloud Storage buckets, and in this post we will show you how to put this new functionality to use.

Installation

Submit a Jupyter notebook for execution

The following command starts execution of a Jupyter notebook stored in a Cloud Storage bucket:

The above commands do the following:

- Create a Compute Engine instance using the TensorFlow Deep Learning VM and 2 NVIDIA Tesla T4 GPUs
- Install the latest NVIDIA GPU drivers
- Execute the notebook using Papermill
- Upload the notebook result (with all the cells pre-computed) to a Cloud Storage bucket, in this case "gs://my-bucket/"
- Terminate the Compute Engine instance

And there you have it! You’ll no longer pay for resources you don’t use, since after execution completes, your notebook, with populated cells, is uploaded to the specified Cloud Storage bucket. You can read more about it in the Cloud Storage documentation.

Note: In case you are not using a Deep Learning VM and you want to install the Papermill library with Cloud Storage support, you only need to run:

Note: Papermill version 0.18.2 supports Cloud Storage.

And here is an even simpler set of bash commands:

Execute a notebook using GPU resources

Execute a notebook using CPU resources

The Deep Learning VM instance requires several permissions: read and write ability to Cloud Storage, and the ability to delete instances on Compute Engine.
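Those permissions are granted through the scopes and flags on the instance-creation command. As a rough sketch of what assembling such a command could look like, here is a Python helper that builds the argument list for subprocess; the instance name, zone, image family, metadata keys, and the broad cloud-platform scope are illustrative assumptions, not the article's original values:

```python
# Sketch: assemble a `gcloud compute instances create` call for a Deep
# Learning VM that runs a notebook via a startup script and then
# self-terminates. All names, paths, and metadata keys are illustrative.

def build_create_command(instance, zone, input_nb, output_nb, gpu_count=2):
    """Return the argument list for creating the executor instance."""
    return [
        "gcloud", "compute", "instances", "create", instance,
        "--zone", zone,
        # Deep Learning VM image family with TensorFlow pre-installed
        "--image-family", "tf-latest-gpu",
        "--image-project", "deeplearning-platform-release",
        "--accelerator", f"type=nvidia-tesla-t4,count={gpu_count}",
        "--maintenance-policy", "TERMINATE",
        # Scope granting Cloud Storage read/write and instance deletion
        "--scopes", "https://www.googleapis.com/auth/cloud-platform",
        # Values a startup script could read to drive Papermill
        "--metadata",
        f"input_notebook={input_nb},output_notebook={output_nb},"
        "install-nvidia-driver=True",
    ]

cmd = build_create_command(
    "notebook-executor", "us-central1-a",
    "gs://my-bucket/input.ipynb", "gs://my-bucket/output.ipynb")
print(" ".join(cmd))
```

You would hand this list to subprocess.run (or type the equivalent one-liner in a shell); the metadata keys must match whatever names the startup script actually reads.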
That is why our original command has the scope “” defined.

Your submission process will look like this:

Note: Verify that you have enough CPU or GPU resources available by checking your quota in the zone where your instance will be deployed.

Executing a Jupyter notebook

Let’s look into the following code:

This command is the standard way to create a Deep Learning VM. But keep in mind that you’ll need to pick the VM image that includes the core dependencies you need to execute your notebook. Do not try to use a TensorFlow image if your notebook needs PyTorch, or vice versa.

Note: if you do not see a dependency that is required for your notebook and you think it should be in the image, please let us know on the forum (or with a comment to this article).

The secret sauce here contains the following two things:

- The Papermill library
- A startup shell script

Papermill is a tool for parameterizing, executing, and analyzing Jupyter notebooks. Papermill lets you:

- Parameterize notebooks via command-line arguments or a parameter file in YAML format
- Execute and collect metrics across notebooks
- Summarize collections of notebooks

In our case, we are just using its ability to execute notebooks and pass parameters if needed.

Behind the scenes

Let’s start with the startup shell script parameters:

- INPUT_NOTEBOOK_PATH: The input notebook, located in a Cloud Storage bucket. Example: gs://my-bucket/input.ipynb
- OUTPUT_NOTEBOOK_PATH: The output notebook, located in a Cloud Storage bucket. Example: gs://my-bucket/output.ipynb
- PARAMETERS_FILE: Users can provide a YAML file from which notebook parameter values should be read. Example: gs://my-bucket/params.yaml
- PARAMETERS: Parameters passed via -p key value for notebook execution. Example: -p batch_size 128 -p epochs 40

The two ways to execute the notebook with parameters are: (1) through the Python API and (2) through the command line interface.
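Either way, a PARAMETERS_FILE is simply a YAML mapping from parameter names to values. Matching the -p batch_size 128 -p epochs 40 example above, it might look like this (assuming your notebook's parameters cell declares these names):

```yaml
# gs://my-bucket/params.yaml: values Papermill injects into the
# notebook's "parameters" cell
batch_size: 128
epochs: 40
```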
This sample script supports two different ways to pass parameters to the Jupyter notebook, although Papermill supports other formats, so please consult Papermill’s documentation.

The above script performs the following steps:

- Creates a Compute Engine instance using the TensorFlow Deep Learning VM and 2 NVIDIA Tesla T4 GPUs
- Installs NVIDIA GPU drivers
- Executes the notebook using the Papermill tool
- Uploads the notebook result (with all the cells pre-computed) to a Cloud Storage bucket, in this case gs://my-bucket/ (Papermill emits a save after each cell executes; this can generate “429 Too Many Requests” errors, which are handled by the library itself)
- Terminates the Compute Engine instance

Conclusion

By using Deep Learning VM images, you can automate your notebook training, such that you no longer need to pay extra or manually manage your cloud infrastructure. Take advantage of all the pre-installed ML software and Nteract’s Papermill project to help you solve your ML problems more quickly! Papermill will help you automate the execution of your Jupyter notebooks, and in combination with Cloud Storage and Deep Learning VM images, you can now set up this process in a very simple and cost-efficient way. Read more »
  • Why so tense? Let grammar suggestions in Google Docs help you write even better
    If you're working against deadlines to create documents daily (how’s that for alliteration?), having correct grammar probably isn’t the first thing on your mind. And when it is, it seems there’s almost always a contested debate about what is correct (or “which?”). Even professional linguists have a hard time agreeing on grammatical suggestions; our own research found that one in four times, linguists disagree on whether a suggestion is correct.

We first introduced spell check in Google Docs to help folks catch errors seven years ago, and have since improved these features so that you can present your best work. Today we’re taking that a step further by using machine translation techniques to help you catch tricky grammatical errors, too, with grammar suggestions in Docs (which we first introduced at Google Cloud Next last year).

G Suite Basic, Business, and Enterprise customers will start to see inline, contextual grammar suggestions in their documents as they type, just like spellcheck. If you’ve made a grammar mistake, a squiggly blue line will appear under the phrase as you write it. You can choose to accept the suggestion by right-clicking it.

“Affect” versus “effect,” “there” versus “their,” or even more complicated rules like how to use prepositions correctly or pick the right verb tense, are examples of errors that grammar suggestions can help you catch. Because this technology is built right into Docs, you don’t have to rely on third-party apps to do the work.

How it works

When it comes to spelling, you can typically look up whether a word exists in the dictionary. Grammar is different. It’s a more complex set of rules that can vary based on language, region, style, and more. Because it’s subjective, it can be a harder problem to tackle using a fixed set of rules.
To solve the problem, we use machine translation to build a model that can incorporate the complexity and nuances of grammar correction. Using machine translation, we are able to recognize errors and suggest corrections as work is getting done. We worked closely with linguists to decipher the rules for the machine translation model and used this as the foundation of automatic suggestions in your Docs, all powered by AI. Learn more about that process in detail in this blog post.

In doing so, machine translation techniques can catch a range of different corrections, from simple grammatical rules such as how to use “a” versus “an” in a sentence, to more complex grammatical concepts such as how to use subordinate clauses correctly.

Using artificial intelligence to make work easier

Google’s machine intelligence helps individuals collaborate more efficiently every day in G Suite. If you’ve ever assigned action items or used the Explore feature to search for relevant content to add to your Docs, you’ve experienced the power of AI firsthand.

Happy writing! Read more »
  • Train fast on TPU, serve flexibly on GPU: switch your ML infrastructure to suit your needs
    When developing machine learning models, fast iteration and short training times are of utmost importance. In order for you or your data science team to reach higher levels of accuracy, you may need to run tens or hundreds of training iterations in order to explore different options.

A growing number of organizations use Tensor Processing Units (Cloud TPUs) to train complex models due to their ability to reduce the training time from days to hours (roughly a 10X reduction) and the training costs from thousands of dollars to tens of dollars (roughly a 100X reduction). You can then deploy your trained models to CPUs, GPUs, or TPUs to make predictions at serving time. In some applications for which response latency is critical, e.g., robotics or self-driving cars, you might need to make additional optimizations. For example, many data scientists frequently use NVIDIA’s TensorRT to improve inference speed on GPUs. In this post, we walk through training and serving an object detection model and demonstrate how TensorFlow’s comprehensive and flexible feature set can be used to perform each step, regardless of which hardware platform you choose.

A TensorFlow model consists of many operations (ops) that are responsible for training and making predictions, for example, telling us whether a person is crossing the street. Most TensorFlow ops are platform-agnostic and can run on CPU, GPU, or TPU. In fact, if you implement your model using TPUEstimator, you can run it on a Cloud TPU by just setting the use_tpu flag to True, and run it on a CPU or GPU by setting the flag to False.

NVIDIA has developed TensorRT (an inference optimization library) for high-performance inference on GPUs. TensorFlow (TF) now includes a TensorRT integration (TF-TRT) module that can convert TensorFlow ops in your model to TensorRT ops. With this integration, you can train your model on TPUs and then use TF-TRT to convert the trained model to a GPU-optimized one for serving.
In the following example, we will train a state-of-the-art object detection model, RetinaNet, on a Cloud TPU, convert it to a TensorRT-optimized version, and run predictions on a GPU.

Train and save a model

You can use the following instructions for any TPU model, but in this guide, we choose as our example the TensorFlow TPU RetinaNet model. Accordingly, you can start by following this tutorial to train a RetinaNet model on Cloud TPU. Feel free to skip the section titled "Evaluate the model while you train (optional)".[1]

For the RetinaNet model that you just trained, if you look inside the model directory (${MODEL_DIR} in the tutorial) in Cloud Storage, you’ll see multiple model checkpoints. Note that checkpoints may be dependent on the architecture used to train a model and are not suitable for porting the model to a different architecture.

TensorFlow offers another model format, SavedModel, that you can use to save and restore your model independent of the code that generated it. A SavedModel is language-neutral and contains everything you need (graph, variables, and metadata) to port your model from TPU to GPU or CPU.

Inside the model directory, you should find a timestamped subdirectory (in Unix epoch time format, for example, 1546300800 for 2019-01-01 00:00:00 GMT) that contains the exported SavedModel. Specifically, your subdirectory contains the following files:

- saved_model.pb
- variables/

The training script stores your model graph as saved_model.pb in a protocol buffer (protobuf) format, and stores the variables in the aptly named variables subdirectory.

Generating a SavedModel involves two steps: first, define a serving_input_receiver_fn, and then export a SavedModel.

At serving time, the serving input receiver function ingests inference requests and prepares them for the model, just as at training time the input function input_fn ingests the training data and prepares it for the model.
In the case of RetinaNet, the following code defines the serving input receiver function:

The serving_input_receiver_fn returns a tf.estimator.export.ServingInputReceiver object that takes the inference requests as arguments in the form of receiver_tensors and the features used by the model as features. When the script returns a ServingInputReceiver, it’s telling TensorFlow everything it needs to know in order to construct a server. The features argument describes the features that will be fed to our model. In this case, features is simply the set of images to run our detector on. receiver_tensors specifies the inputs to our server. Since we want our server to take JPEG-encoded images, there will be a tf.placeholder for an array of strings. We decode each string into an image, crop it to the correct size, and return the resulting image tensor.

To export a SavedModel, call the export_saved_model method on your estimator, as shown in the following code snippet:

Running export_saved_model generates a SavedModel directory in your FLAGS.model_dir directory. The SavedModel exported from TPUEstimator contains information on how to serve your model on CPU, GPU, and TPU architectures.

Inference

You can take the SavedModel that you trained on a TPU and load it on CPU(s), GPU(s), or TPU(s) to run predictions. The following lines of code restore the model and run inference:

model_dir is your model directory where the SavedModel is stored. loader.load returns a MetaGraphDef protocol buffer loaded in the provided session. model_outputs is the list of model outputs you’d like to predict, model_input is the name of the placeholder that receives the input data, and input_image_batch is the input data directory.[2]

With TensorFlow, you can very easily train and save a model on one platform (like TPU) and load and serve it on another platform (like GPU or CPU).
You can choose from different Google Cloud Platform services, such as Cloud Machine Learning Engine, Kubernetes Engine, or Compute Engine, to serve your models. In the remainder of this post you’ll learn how to optimize the SavedModel using TF-TRT, which is a common process if you plan to serve your model on one or more GPUs.

TensorRT optimization

While you can use the SavedModel exported earlier to serve predictions on GPUs directly, NVIDIA’s TensorRT allows you to get improved performance from your model by using some advanced GPU features. To use TensorRT, you’ll need a virtual machine (VM) with a GPU and NVIDIA drivers. Google Cloud’s Deep Learning VMs are ideal for this case, because they have everything you need pre-installed.

Follow these instructions to create a Deep Learning VM instance with one or more GPUs on Compute Engine. Select the checkbox "Install NVIDIA GPU driver automatically on first startup?" and choose a "Framework" (for example, "Intel optimized TensorFlow 1.12" at the time of writing this post) that comes with the most recent versions of CUDA and TensorRT that satisfy the dependencies for TensorFlow with GPU support and the TF-TRT module. After your VM is initialized and booted, you can remotely log into it by clicking the SSH button next to its name on the Compute Engine page in Cloud Console, or by using the gcloud compute ssh command. Install the dependencies (recent versions of TensorFlow include TF-TRT by default) and clone the TensorFlow TPU GitHub repository.[3]

Now run the conversion script under tpu/models/official/retinanet/ and provide the location of the SavedModel as an argument:

In the preceding code snippet, SAVED_MODEL_DIR is the path where the SavedModel is stored (on Cloud Storage or local disk).
This step converts the original SavedModel to a new, GPU-optimized SavedModel and prints out the prediction latency for the two models. If you look inside the model directory, you can see that the script has converted the original SavedModel to a TensorRT-optimized SavedModel and stored it in a new folder ending in _trt.

In the new SavedModel, the TensorFlow ops have been replaced by their GPU-optimized TensorRT implementations. During conversion, the script converts all variables to constants and writes them out to saved_model.pb, so the variables folder is empty. The TF-TRT module has implementations for the majority of TensorFlow ops. For some ops, such as the control flow ops Enter, Exit, Merge, and Switch, there is no TRT implementation, so they stay unchanged in the new SavedModel, but their effect on prediction latency is negligible.

Another method to convert the SavedModel to its TensorRT inference graph is to use the saved_model_cli tool with the following command:

In the preceding command, MY_DIR is the shared filesystem directory and SAVED_MODEL_DIR is the directory inside the shared filesystem directory where the SavedModel is stored. The script also loads and runs the two models, before and after conversion, and prints the prediction latency. As we expect, the converted model has lower latency.

Note that for inference, the first prediction often takes longer than subsequent predictions. This is due to startup overhead and, for TPUs, the time taken to compile the TPU program via XLA. In our example, we skip the time taken by the first inference step and average the remaining steps from the second iteration onwards.

You can apply these steps to other models to easily port them to a different architecture and optimize their performance. The TensorFlow and TPU GitHub repositories contain a diverse collection of models that you can try out for your application, including another state-of-the-art object detection model, Mask R-CNN.
If you’re interested in trying out TPUs, to see what they can offer you in terms of training and serving times, try this Colab and quickstart.

[1] You can skip the training step altogether by using the pre-trained checkpoints, which are stored in Cloud Storage under gs://cloud-tpu-checkpoints/retinanet-model.
[2] Use loader.load(sess, [tag_constants.SERVING], saved_model_dir).signature_def to load the model and return the signature_def, which contains the model input(s) and output(s). sess is the Session object here.
[3] Alternatively, you can use a Docker image with a recent version of TensorFlow and the dependencies. Read more »
  • Enabling connected transformation with Apache Kafka and TensorFlow on Google Cloud Platform
    Editor’s note: Many organizations depend on real-time data streams from a fleet of remote devices, and would benefit tremendously from machine learning-derived, automated insights based on that real-time data. Founded by the team that built Apache Kafka, Confluent offers a streaming platform to help companies easily access data as real-time streams. Today, Confluent’s Kai Waehner walks through an example involving a fleet of connected vehicles, represented by Internet of Things (IoT) devices, to explain how you can leverage the open source ecosystems of Apache Kafka and TensorFlow on Google Cloud Platform, in concert with different Google machine learning (ML) services.

Imagine a global automotive company with a strategic initiative for digital transformation to improve customer experience, increase revenue, and reduce risk. Here is the initial project plan:

The main goal of this transformation plan is to improve existing business processes, rather than to create new services. Therefore, cutting-edge ML use cases like sentiment analysis using Recurrent Neural Networks (RNN) or object detection (e.g. for self-driving cars) using Convolutional Neural Networks (CNN) are out of scope and covered by other teams with longer-term mandates.

Instead, the goal of this initiative is to analyze and act on critical business events by improving existing business processes in the short term, meaning months, not years, to achieve some quick wins with machine learning:

All these business processes are already in place, and the company depends on them. Our goal is to leverage ML to improve these processes in the near term. For example, payment fraud is a consistent problem in online platforms, and our automotive company can use a variety of data sources to successfully analyze and help identify fraud in this context.
In this post, we’ll explain how the company can leverage an analytic model for continuous stream processing in real time, and use IoT infrastructure to detect payment fraud and alert them in the case of risk.

Building a scalable, mission-critical, and flexible ML infrastructure

But before we can do that, let’s talk about the infrastructure needed for this project. If you’ve spent some time with TensorFlow tutorials or its most popular wrapper framework, Keras, which is typically even easier to use, you might not think that building and deploying models is all that challenging. Today, a data scientist can build an analytic model with only a few lines of Python code that run predictions on new data with very good accuracy.

However, data preparation and feature engineering can consume most of a data scientist’s time. This idea may seem to contradict what you experience when you follow tutorials, because these efforts are already completed by the tutorial’s designer. Unfortunately, there is a hidden technical debt inherent in typical machine learning systems. You can read an in-depth analysis of the hidden technical debt in ML systems here.

Thus, we need to ask the fundamental question that addresses how you’ll add real business value to your big data initiatives: how can you build a scalable infrastructure for your analytic models? How will you preprocess and monitor incoming data feeds? How will you deploy the models in production, on real-time data streams, at scale, and with zero downtime?

Many larger technology companies faced these challenges some years before the rest of the industry, and have already implemented their own solutions to many of them. For example, consider:

- Netflix’s Meson: a scalable recommendation engine
- Uber’s Michelangelo: a platform- and technology-independent ML framework
- PayPal’s real-time ML pipeline for fraud detection

All of these projects use Apache Kafka as their streaming platform.
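What these platforms share is Kafka's core abstraction: a durable, append-only log that any number of consumers can read independently, each tracking its own offset. A toy sketch of that idea in plain Python (purely illustrative, no Kafka client involved):

```python
# Toy model of an append-only log with independent consumer reads,
# the abstraction at the heart of Kafka. Purely illustrative.

class EventLog:
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def read_from(self, offset=0):
        """A consumer can always (re-)read the log from any offset."""
        return list(self._events[offset:])

log = EventLog()
for event in ["payment-1", "payment-2", "payment-3"]:
    log.append(event)

# Two independent consumers (say, two training jobs) see the same data:
fraud_detector_view = log.read_from(0)
reporting_view = log.read_from(0)
print(fraud_detector_view == reporting_view)
```

Because reads never consume the data, the same event stream can feed fraud detection, reporting, and model training without coordination between them.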
This blog post explains how to solve the above described challenges for your own use cases by leveraging the open source ecosystem of Apache Kafka and a number of services on Google Cloud Platform (GCP).

Apache Kafka: the rise of a streaming platform

You may already be familiar with Apache Kafka, a hugely successful open source project created at LinkedIn for big data log analytics. But today, this is just one of its many use cases. Kafka evolved from a data ingestion layer to a feature-rich event streaming platform for all the use cases discussed above. These days, many enterprise data-focused projects build mission-critical applications around Kafka. As such, it has to be available and responsive, round the clock. If Kafka is down, their business processes stop working.

The practicality of keeping messaging, storage, and processing in one distributed, scalable, fault-tolerant, high-volume, technology-independent streaming platform is the primary reason for the global success of Apache Kafka in many large enterprises, regardless of industry. For example, LinkedIn processes over 4.5 trillion messages per day[1] and Netflix handles over 6 petabytes of data on peak days[2].

Apache Kafka also enjoys a robust open source ecosystem. Let’s look at its components:

- Kafka Connect is an integration framework for connecting external sources / destinations into Kafka.
- Kafka Streams is a simple library that enables streaming application development within the Kafka framework. There are also additional clients available for non-Java programming languages, including C, C++, Python, .NET, Go, and several others.
- The REST Proxy provides universal access to Kafka from any network-connected device via HTTP.
- The Schema Registry is a central registry for the format of Kafka data; it guarantees that all data is in the proper format and can survive a schema evolution.
As such, the Registry guarantees that the data is always consumable. KSQL is a streaming SQL engine that enables stream processing against Apache Kafka without writing source code.

All these open source components build on Apache Kafka’s core messaging and storage layers, leveraging its high scalability, high volume and throughput, and failover capabilities. Then, if you need coverage for your Kafka deployment, we here at Confluent offer round-the-clock support and enterprise tooling for end-to-end monitoring, management of Kafka clusters, multi-data center replication, and more, with Confluent Cloud on GCP. This Kafka ecosystem as a fully managed service includes a 99.95% service level agreement (SLA), guaranteed throughput and latency, and commercial support, while out-of-the-box integration with GCP services like Cloud Storage enables you to build out your scalable, mission-critical ML infrastructure.

Apache Kafka’s open source ecosystem as infrastructure for machine learning

The following picture shows an architecture for your ML infrastructure leveraging Confluent Cloud for data ingestion, model training, deployment, and monitoring:

Now, with that background, we’re ready to build scalable, mission-critical ML infrastructure. Where do we start?

Replicating IoT data from on-premises data centers to Google Cloud

The first step is to ingest the data from the remote end devices. In the case of our automotive company, the data is already stored and processed in local data centers in different regions. This happens by streaming all sensor data from the cars via MQTT to local Kafka clusters that leverage Confluent’s MQTT Proxy. This integration from devices to a local Kafka cluster typically is its own standalone project, because you need to handle IoT-specific challenges like constrained devices and unreliable networks.
The integration can be implemented with different technologies, including low-level clients in C for microcontrollers, a REST Proxy for HTTP(S) communication, or an integration framework like Kafka Connect or MQTT Proxy. All of these components integrate natively with the local Kafka cluster, so that you can leverage Kafka’s features like high scalability, fault tolerance, and high throughput.

The data from the different local clusters then needs to be replicated to a central Kafka cluster in GCP for further processing and to train analytic models:

Confluent Replicator is a tool based on Kafka Connect that replicates the data in a scalable and reliable way from any source Kafka cluster, regardless of whether it lives on premises or in the cloud, to the Confluent Cloud on GCP.

GCP also offers scalable IoT infrastructure. If you want to ingest MQTT data directly into Cloud Pub/Sub from devices, you can also use GCP’s MQTT Bridge. Google provides open-source Kafka Connect connectors to get data from Cloud Pub/Sub into Kafka and Confluent Cloud, so that you can make the most of KSQL with both first- and third-party logging integration.

Data preprocessing with KSQL

The next step is to preprocess your data at scale.
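In KSQL, this kind of preprocessing is expressed as a continuous query that turns one stream into another. A sketch of what such a query could look like (the stream and column names are invented for illustration, not taken from the article):

```sql
-- Derive a cleaned feature stream from raw car sensor events,
-- filtering out incomplete records and projecting away PII columns.
CREATE STREAM sensor_features AS
  SELECT car_id,
         speed,
         engine_temp,
         speed * 3.6 AS speed_kmh   -- simple derived feature: m/s to km/h
  FROM car_sensor_events
  WHERE speed IS NOT NULL
    AND engine_temp IS NOT NULL;
```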
You likely want to do this in a reusable way, so that you can ingest the data into other pipelines, and so that you can preprocess the real-time feeds for predictions in the same way once you’ve deployed the trained model.

Our automotive company leverages KSQL, the open source streaming SQL engine for Apache Kafka, to do filtering, transformation, removal of personally identifiable information (PII), and feature extraction. This results in several tangible benefits:

- High throughput and scalability, failover, reliability, and infrastructure-independence, thanks to the core Kafka infrastructure
- Preprocessing data at scale with no code
- Use of SQL statements for interactive analysis and at-scale deployment to production
- Leveraging Python via KSQL’s REST interface
- Reuse of preprocessed data for later deployment, even at the edge (outside of the cloud, possibly on embedded systems)

Here’s what a continuous query looks like:

You can then deploy this stream to one or more KSQL server instances to process all incoming sensor data in a continuous manner.

Data ingestion with Kafka Connect

After preprocessing the data, you need to ingest it into a data store to train your models. Ideally, you should format and store the data in a flexible way, so that you can use it with multiple ML solutions and processes. But for today, the automotive company focuses on using TensorFlow to build neural networks that perform anomaly detection with autoencoders as a first use case. They use Cloud Storage as a scalable, long-term data store for the historical data needed to train the models.

In the future, the automotive company also plans to build other kinds of models using open source technologies for algorithms beyond neural networks. Deep learning with TensorFlow is helpful, but it doesn’t fit every use case.
In other scenarios, a random forest, clustering, or naïve Bayesian learning is much more appropriate due to simplicity, interpretability, or computing time.

In other cases, you might be able to reduce efforts and costs a lot by using prebuilt and managed analytic models in Google’s API services, like Cloud Vision for image recognition, Cloud Translation for translation between languages, or Cloud Text-to-Speech for speech synthesis. Or, if you need to build custom models, Cloud AutoML might be the ideal solution to easily build out your deployment without the need for a data scientist.

You can then use Kafka Connect as your ingestion layer because it provides several benefits:

- Kafka’s core infrastructure advantages: high throughput and scalability, failover, reliability, and infrastructure-independence
- Out-of-the-box connectivity to various sources and sinks for different analytics and non-analytics use cases (for example, Cloud Storage, BigQuery, Elasticsearch, HDFS, MQTT)
- A set of out-of-the-box integration features, called Single Message Transforms (SMT), for data (message) enrichment, format conversion, filtering, routing, and error handling

Model training with Cloud ML Engine and TensorFlow

After you’ve ingested your historical data into Cloud Storage, you’re now able to train your models at extreme scale using TensorFlow and TPUs on Cloud ML Engine. One major benefit of running your workload on a public cloud is that you can use powerful hardware in a flexible way: spin it up for training and stop it when finished. The pay-as-you-go principle allows you to use cutting-edge hardware while still controlling your costs.

In the case of our automotive company, it needs to train and deploy custom neural networks that include domain-specific knowledge and experience. Thus, they cannot use managed, pre-fabricated ML APIs or Cloud AutoML here.
Cloud ML Engine provides a powerful API and an easy-to-use web UI to train and evaluate different models:

Although Cloud ML Engine supports other frameworks, TensorFlow is a great choice because it is open source and highly scalable, features out-of-the-box integration with GCP, offers a variety of tools (like TensorBoard for Keras), and has grown a sizable community.

Replayability with Apache Kafka: a log never forgets

With Apache Kafka as the streaming platform in your machine learning infrastructure, you can easily:

- Train different models on the same data
- Try out different ML frameworks
- Leverage Cloud AutoML if and where appropriate
- Do A/B testing to evaluate different models

The architecture lets you leverage other frameworks besides TensorFlow later, if appropriate. Apache Kafka allows you to replay the data again and again over time to train different analytic models with the same dataset:

In the above example, using TensorFlow, you can train multiple alternative models on historical data stored in Cloud Storage. In the future, you might want or need to use other machine learning techniques. For example, if you want to offer AutoML services to less experienced data scientists, you might train Google AutoML on Cloud Storage, or experiment with alternative, third-party AutoML solutions like DataRobot or H2O Driverless, which leverage HDFS as storage on Cloud Dataproc, a managed service for Apache Hadoop and Spark.

Alternative methods for model deployment and serving (inference)

The automotive company is now ready to deploy its first models to do real-time predictions at scale.
Two alternatives exist for model deployment:

Option 1: RPC communication for model inference on a model server

Cloud ML Engine allows you to deploy your trained models directly to a model server (based on TensorFlow Serving). Pros of using a model server:

- Simple integration with existing technologies and organizational processes
- Easier to understand if you come from the non-streaming (batch) world
- Ability to migrate to true streaming down the road
- Built-in model management for different models, versioning, and A/B testing

Option 2: Integrate model inference natively into your streaming application

Deploying the model natively in your streaming application avoids some challenges of the RPC approach:

- Worse latency: classification requires a remote call instead of local inference
- No offline inference: on a remote or edge device, you might have limited or no connectivity
- Coupling the availability, scalability, and latency/throughput of your Kafka Streams application to the SLAs of the RPC interface
- Outliers or externalities (e.g., in case of failure) not covered by Kafka processing

For each use case, you have to assess the trade-offs and decide whether you want to deploy your model in a model server or natively in the application.

Deployment and scalability of your client applications

Confluent Cloud running in conjunction with GCP services ensures high availability and scalability for the machine learning infrastructure described above. You won't need to worry about operations; just use the components to build your analytic models. But what about the deployment and dynamic scalability of the Kafka clients, which use the analytic models to do predictions on new incoming events in real time?

You can write these clients in any programming language (Java, Scala, .NET, Go, Python, JavaScript), or use Confluent REST Proxy, Kafka Streams, or KSQL applications. Unlike Kafka brokers, clients need to scale dynamically to accommodate the load.
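For option 1, the RPC call typically goes to TensorFlow Serving's REST endpoint. The `/v1/models/<name>:predict` path and the `{"instances": ...}` body shape follow TensorFlow Serving's REST API; the model name, host, and feature values below are hypothetical.

```python
import json

# Sketch of an RPC call to a TensorFlow Serving model server (option 1).
MODEL_NAME = "fraud_detector"  # hypothetical deployed model name
url = f"http://model-server:8501/v1/models/{MODEL_NAME}:predict"

# One feature vector per instance (values are made up for illustration).
request_body = json.dumps({"instances": [[120.5, 3.2, 0.7]]})

# A real client would POST this, e.g. with urllib.request:
#   req = urllib.request.Request(url, data=request_body.encode(),
#                                headers={"Content-Type": "application/json"})
#   response = urllib.request.urlopen(req)
print(url)
print(request_body)
```

Each such call is a network round trip, which is exactly the latency and coupling trade-off discussed above; option 2 replaces the call with an in-process function invocation.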
Whichever option you choose for writing your Kafka clients, Kubernetes is an increasingly widely adopted solution that handles deployment, dynamic scaling, and failover. Although introducing Kubernetes is out of scope for this post, the Google Kubernetes Engine Quickstart Guide can help you set up your own Kubernetes cluster on GCP in minutes. If you want to learn more about the container orchestration engine itself, Kubernetes' official website is a good starting point.

The need for local data processing and model inference

If you've deployed analytics models on Google Cloud, you'll have noticed that the service (and by extension, GCP) takes over most of the burden of deployment and operations. Unfortunately, migrating to the cloud is not always possible due to legal, compliance, security, or more technical reasons.

Our automotive company is ready to use the models it built for predictions, but all the personally identifiable information (PII) needs to be processed in its local data center. This demand creates a challenge, because the architecture (and some future planned integrations) would be simpler if everything ran within one public cloud.

Self-managed on-premises deployment for model serving and monitoring with Kubernetes

On premises, you do not get all the advantages of GCP and Confluent Cloud; you need to operate the Apache Kafka cluster and its clients yourself. What about scaling brokers, external clients, persistent volumes, failover, and rolling upgrades? Confluent Operator takes over the challenge of operating Kafka and its ecosystem on Kubernetes, with automated provisioning, scaling, fail-over, partition rebalancing, rolling updates, and monitoring. For your clients, you face the same challenges as if you deploy in the cloud: what about dynamic load balancing, scaling, and failover?
In addition, if you use a model server on premises, you also need to manage its operations and scaling yourself. Kubernetes is an appropriate solution to these problems in an on-premises deployment, and using it both on-premises and on Google Cloud lets you reuse lessons learned and ongoing best practices.

Confluent Schema Registry for message validation and data governance

How can we ensure that every team in every data center gets the data they're looking for, and that it's consistent across the entire system? Confluent Schema Registry adds:

- Schema definition and updates
- Forward- and backward-compatibility
- Multi-region deployment

A mission-critical application: a payment fraud detection system

Let's begin by reviewing the implementation of our first use case in more detail, including some code examples. We plan to analyze historical data about payments for digital car services (perhaps for a car's mobile entertainment system, or paying for fuel at a gas station) to spot anomalies indicating possible fraudulent behavior. The model training happens in GCP, including preprocessing that anonymizes private user data. After building a good analytic model in the cloud, you can deploy it at the edge in a real-time streaming application to analyze new transactions locally and in real time.

Model training with TensorFlow on TPUs

Our automotive company trained its model in Cloud ML Engine. It used Python and Keras to build an autoencoder (anomaly detection) for real-time sensor analytics, and then trained this model in TensorFlow on Cloud ML Engine, leveraging Cloud TPUs (Tensor Processing Units). Google Cloud's documentation has lots of information on how to train a model with Cloud ML Engine, including for different frameworks and use cases. If you are new to this topic, the Cloud ML Engine Getting Started guide is a good place to build your first model using TensorFlow.
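The Keras autoencoder itself is not reproduced in this digest, but the underlying idea is easy to illustrate: learn to reconstruct "normal" data, then flag inputs with high reconstruction error as anomalies. A linear autoencoder with a small bottleneck is equivalent to PCA, so the sketch below computes it in closed form with NumPy. The data is synthetic; this is not the post's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" payment/sensor vectors: 3-D points near a 1-D subspace.
t = rng.normal(size=(200, 1))
normal = t @ np.array([[1.0, 1.0, 1.0]]) + 0.05 * rng.normal(size=(200, 3))

# A linear autoencoder with a 1-unit bottleneck projects onto the top
# principal component, which we can get directly from the SVD.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
component = vt[:1]  # shared encoder/decoder weights (1 x 3)

def reconstruction_error(x):
    code = (x - mean) @ component.T      # encode
    x_hat = code @ component + mean      # decode
    return np.sum((x - x_hat) ** 2, axis=-1)

normal_err = reconstruction_error(normal)
# A transaction far from the learned "normal" subspace:
anomaly_err = reconstruction_error(np.array([[5.0, -5.0, 0.0]]))
print(float(anomaly_err[0]), float(normal_err.max()))
```

A nonlinear Keras autoencoder generalizes this by stacking dense layers around the bottleneck, but the decision rule, thresholding the reconstruction error, stays the same.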
As a next step, you can walk through more sophisticated examples using Cloud ML Engine, TensorFlow, and Keras for image recognition, object detection, text analysis, or a recommendation engine. The resulting trained model is stored in Cloud Storage, and thus can either be deployed on a "serving" instance for live inference or downloaded for edge deployment, as in our example above.

Model deployment in KSQL streaming microservices

There are different ways to build a real-time streaming application in the Kafka ecosystem. You can build your own application or microservice using any Kafka client API (Java, Scala, .NET, Go, Python, Node.js, or REST). Or you might want to leverage Kafka Streams (writing Java code) or KSQL (writing SQL statements), two lightweight but powerful stream-processing frameworks that natively integrate with Apache Kafka to process high volumes of messages at scale, handle failover without data loss, and dynamically adjust scale without downtime.

Here is an example of model inference in real time: a continuous query that leverages the KSQL user-defined function (UDF) 'applyFraudModel', which embeds an autoencoder. You can deploy this KSQL statement as a microservice to one or more KSQL servers. The called model performs well and scales as needed, because it uses the same integration and processing pipeline for both training and deploying the model: Kafka Connect for real-time streaming ingestion of the sensor data, and KSQL (with the Kafka Streams engine under the hood) for preprocessing and model deployment. You can easily build your own stateless or stateful UDFs for KSQL.
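The KSQL statement referenced above is not reproduced in this digest. A sketch of what such a continuous query could look like follows; `applyFraudModel` is the UDF named in the text, while the stream and column names are hypothetical:

```sql
-- Continuous query: score every incoming payment event with the
-- embedded autoencoder UDF and write results to a new stream.
CREATE STREAM fraud_scores AS
  SELECT car_id,
         applyFraudModel(sensor_values) AS anomaly_score
  FROM car_payment_stream;
```

Once created, the query runs continuously on the KSQL servers, scoring each new event as it arrives.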
You can find the above KSQL ML UDF, along with a step-by-step guide to using it with MQTT sensor data, on GitHub. If you'd prefer to leverage Kafka Streams and write your own Java application instead of writing KSQL, take a look at the code examples for deploying a TensorFlow model in Kafka Streams.

Monitoring your ML infrastructure

On GCP, you can leverage tools like Stackdriver, which provides monitoring and management for services, containers, applications, and infrastructure. Conventionally, organizations use Prometheus and JMX for notifications and updates on their Kubernetes and Kafka integrations. Still, there is no silver bullet for monitoring your entire ML infrastructure, and adding an on-premises deployment creates additional challenges, since you have to set up your own monitoring tools instead of using GCP's.

Feel free to use the monitoring tools and frameworks you and your team already know and like. Ideally, you want to monitor your data ingestion, processing, training, deployment, accuracy, and A/B testing workloads in a scalable, reliable way. The Kafka ecosystem and GCP can provide the right foundation for your monitoring needs, and additional frameworks, services, or UIs can help your team monitor its infrastructure effectively.

Scalable, mission-critical machine learning infrastructure with GCP and Confluent Cloud

In sum, we've shown you how you might build scalable, mission-critical, and even vendor-agnostic machine learning infrastructure, and how you might leverage the open source Apache Kafka ecosystem for data ingestion, processing, model inference, and monitoring. GCP offers the compute power and extreme scale to run the Kafka infrastructure as a service, plus out-of-the-box integration with platform services such as Cloud Storage, Cloud ML Engine, GKE, and others. No matter where you want to deploy and run your models (in the cloud vs. on-premises; natively in-app vs. on a model server), GCP and Confluent Cloud are a great combination for setting up your machine learning infrastructure.

If you need coverage for your Kafka deployment, Confluent offers round-the-clock support and enterprise tooling for end-to-end monitoring, management of Kafka clusters, multi-data center replication, and more. If you'd like to learn more about Confluent Cloud, check out:

- The Confluent Cloud site, which provides more information about Confluent's Enterprise and Professional offerings
- The Confluent Cloud Professional getting started video
- Our instructions to spin up your Kafka cloud instance on GCP
- Our community Slack group, where you can post questions in the #confluent-cloud channel

Read more »
  • Making AI-powered speech more accessible—now with more options, lower prices, and new languages and voices
The ability to recognize and synthesize speech is critical for making human-machine interaction natural, easy, and commonplace, but it's still too rare. Today we're making our Cloud Speech-to-Text and Text-to-Speech products more accessible to companies around the world, with more features, more voices (roughly doubled), more languages in more countries (up 50+%), and lower prices (by up to 50% in some cases).

Making Cloud Speech-to-Text more accessible for enterprises

When creating intelligent voice applications, speech recognition accuracy is critical. Even at 90% accuracy, it's hard to have a useful conversation. Unfortunately, many companies build speech applications that must run over noisy phone lines, and that audio has historically been hard for AI-based speech technologies to interpret.

For these situations with less-than-pristine data, we announced premium models for video and enhanced phone in beta last year, developed with customers who opted in to share usage data with us via data logging to help us refine model accuracy. We are excited to share today that the resulting enhanced phone model now has 62% fewer transcription errors (improved from 54% last year), while the video model, which is based on technology similar to what YouTube uses for automatic captioning, has 64% fewer errors. The video model also works great in settings with multiple speakers, such as meetings or podcasts.

The enhanced phone model was initially available only to customers participating in the opt-in data logging program announced last year. However, many large enterprises have been asking us for the option to use the enhanced model without opting into data logging.
Starting today, anyone can access the enhanced phone model, and customers who choose the data logging option pay a lower rate, bringing the benefits of improved accuracy to more users.

In addition to the general availability of both premium models, we're also announcing the general availability of multi-channel recognition, which helps the Cloud Speech-to-Text API distinguish between multiple audio channels (e.g., different people in a conversation). This is very useful for call or meeting analytics and other use cases involving multiple participants. With general availability, all these features now qualify for an SLA and other enterprise-level guarantees.

Cloud Speech-to-Text at LogMeIn

LogMeIn is an example of a customer that requires both accuracy and enterprise scale: every day, millions of employees use its GoToMeeting product to attend online meetings. Cloud Speech-to-Text lets LogMeIn automatically create transcripts for its enterprise GoToMeeting customers, enabling users to collaborate more effectively.

"LogMeIn continues to be excited about our work with Google Cloud and its market-leading video and real-time speech-to-text technology. After an extensive market study for the best speech-to-text video partner, we found Google to be the highest quality, and it offered a useful array of related technologies. We continue to hear from our customers that the feature has been a way to add significant value by capturing in-meeting content and making it available and shareable post-meeting. Our work with Google Cloud affirms our commitment to making intelligent collaboration a fundamental part of our product offering to ultimately add more value for our global UCC customers." - Mark Strassman, SVP and General Manager, Unified Communications and Collaboration (UCC) at LogMeIn

Making Cloud Speech-to-Text more accessible through lower pricing (up to 50% cheaper)

Lowering prices is another way we are making Cloud Speech-to-Text more accessible.
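To show what multi-channel recognition looks like in practice, here is a hedged sketch of a recognize request body. The field names follow the Cloud Speech-to-Text REST API's `RecognitionConfig`; the bucket path is hypothetical, and you should check the current API reference before relying on it.

```python
import json

# Sketch of a recognize request enabling multi-channel recognition,
# e.g., agent and caller recorded on separate channels of one file.
request = {
    "config": {
        "languageCode": "en-US",
        "audioChannelCount": 2,
        "enableSeparateRecognitionPerChannel": True,
    },
    "audio": {"uri": "gs://my-bucket/call-recording.wav"},  # hypothetical
}
body = json.dumps(request)
print(body)
```

With `enableSeparateRecognitionPerChannel` set, each result is tagged with the channel it came from, which is what makes per-speaker call analytics possible.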
Starting now:

- For standard models and the premium video model, customers who opt in to our data logging program will pay 33% less for all usage that goes through the program.
- We've cut pricing for the premium video model by 25%, for a total savings of 50% for current video model customers who opt in to data logging.

Making Cloud Text-to-Speech accessible across more countries

We're also excited to help enterprises benefit from our research and experience in speech synthesis. Thanks to unique access to WaveNet technology powered by Google Cloud TPUs, we can build new voices and languages faster and more easily than is typical in the industry. Since our update last August, we've made dramatic progress on Cloud Text-to-Speech, roughly doubling the number of overall voices, WaveNet voices, and WaveNet languages, and increasing the number of supported languages overall by ~50%, including:

- Support for seven new languages or variants: Danish, Portuguese (Portugal), Russian, Polish, Slovak, Ukrainian, and Norwegian Bokmål (all in beta). This update expands the list of supported languages to 21 and enables applications for millions of new end users.
- 31 new WaveNet voices (and 24 new standard voices) across those new languages. This gives more enterprises around the world access to our speech synthesis technology, which, based on mean opinion score, has already closed the quality gap with human speech by 70%. You can find the complete list of languages and voices here.
- 20 languages and variants with WaveNet voices, up from nine last August and from just one a year ago when Cloud Text-to-Speech was introduced, marking a broad international expansion for WaveNet.

In addition, the Cloud Text-to-Speech Device Profiles feature, which optimizes audio playback on different types of hardware, is now generally available.
For example, some customers with call center applications optimize for interactive voice response (IVR), whereas others that focus on content and media (e.g., podcasts) optimize for headphones. In every case, the audio effects are customized for the hardware.

Get started today

It's easy to give the Cloud Speech products a try: check out the simple demos on the Cloud Speech-to-Text and Cloud Text-to-Speech landing pages. If you like what you see, you can use the $300 GCP credit to start testing. And as always, the first 60 minutes of audio you process every month with Cloud Speech-to-Text is free. Read more »
  • AI in depth: monitoring home appliances from power readings with ML
As the popularity of home automation and the cost of electricity grow around the world, energy conservation has become a higher priority for many consumers. With a number of smart meter devices available for your home, you can now measure and record overall household power draw, and then, with the output of a machine learning model, accurately predict individual appliance behavior simply by analyzing meter data. For example, your electric utility provider might send you a message if it can reasonably assess that you left your refrigerator door open, or that the irrigation system suddenly came on at an odd time of day.

In this post, you'll learn how to accurately identify home appliances' operating status (e.g., electric kettles and washing machines, in this dataset) using smart power readings, together with modern machine learning techniques such as long short-term memory (LSTM) models. Once the algorithm identifies an appliance's operating status, we can build out a few more applications. For example:

- Anomaly detection: usually the TV is turned off when no one is home. An application can send a message to the user if the TV turns on at an unexpected or unusual time.
- Habit-improving recommendations: we can present users with aggregated usage patterns of home appliances in the neighborhood, so that they can compare against those patterns and optimize the usage of their own appliances.

We developed our end-to-end demo system entirely on Google Cloud Platform, including data collection through Cloud IoT Core, a machine learning model built using TensorFlow and trained on Cloud Machine Learning Engine, and real-time serving and prediction made possible by Cloud Pub/Sub, App Engine, and Cloud ML Engine.
As you progress through this post, you can access the full set of source files in the GitHub repository here.

Introduction

The growing popularity of IoT devices and the evolution of machine learning technologies have brought new opportunities for businesses. In this post, you'll learn how home appliances' (for example, an electric kettle and a washing machine) operating status (on/off) can be inferred from gross power readings collected by a smart meter, together with state-of-the-art machine learning techniques. The end-to-end demo system, developed entirely on Google Cloud Platform (as shown in Fig. 1), includes:

- Data collection and ingestion through Cloud IoT Core and Cloud Pub/Sub
- A machine learning model, trained using Cloud ML Engine
- That same machine learning model, served using Cloud ML Engine together with App Engine as a front end
- Data visualization and exploration using BigQuery and Colab

Figure 1. System architecture

The animation below shows real-time monitoring, as real-world energy usage data is ingested through Cloud IoT Core into Colab.

Figure 2. Illustration of real-time monitoring

IoT extends the reach of machine learning

Data ingestion

In order to train any machine learning model, you need data that is both suitable and sufficient in quantity. In the field of IoT, we need to address a number of challenges to reliably and safely send the data collected by smart IoT devices to remote centralized servers: data security, transmission reliability, and use-case-dependent timeliness, among other factors.

Cloud IoT Core is a fully managed service that allows you to easily and securely connect, manage, and ingest data from millions of globally dispersed devices. Its two main features are its device manager and its protocol bridge. The former allows you to configure and manage individual devices in a coarse-grained way by establishing and maintaining devices' identities, along with authentication after each connection.
The device manager also stores each device's logical configuration and can remotely control devices, for example, changing a fleet of smart power meters' data sampling rates. The protocol bridge provides connection endpoints with automatic load balancing for all device connections, and natively supports secure connections over industry-standard protocols such as MQTT and HTTP. The protocol bridge publishes all device telemetry to Cloud Pub/Sub, which can then be consumed by downstream analytic systems. We adopted the MQTT bridge in our demo system, and the following code snippet includes the MQTT-specific logic.

Data consumption

After the system publishes data to Cloud Pub/Sub, it delivers a message request to the "push endpoint," typically the gateway service that consumes the data. In our demo system, Cloud Pub/Sub pushes data to a gateway service hosted in App Engine, which forwards the data to the machine learning model hosted in Cloud ML Engine for inference, and at the same time stores the raw data, together with the received prediction results, in BigQuery for later (batch) analysis.

While there are numerous business-dependent use cases you can deploy based on our sample code, we illustrate raw data and prediction-result visualization in our demo system. In the code repository, we have provided two notebooks:

- EnergyDisaggregationDemo_Client.ipynb: simulates multiple smart meters by reading power consumption data from a real-world dataset and sending the readings to the server. All Cloud IoT Core-related code resides in this notebook.
- EnergyDisaggregationDemo_View.ipynb: lets you view raw power consumption data from a specified smart meter, and our model's prediction results, in near real time.

If you follow the deployment instructions in the README file and in the accompanying notebooks, you should be able to reproduce the results shown in Figure 2.
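The original post's MQTT snippet (an embedded gist) is not reproduced in this digest. As a minimal, hedged sketch of the MQTT-specific pieces: Cloud IoT Core expects telemetry on a topic of the form `/devices/<device-id>/events`, and a real client would publish with an MQTT library such as paho-mqtt after JWT-based authentication. The device ID and payload fields below are hypothetical.

```python
import datetime
import json

DEVICE_ID = "smart-meter-042"  # hypothetical device registered in IoT Core
telemetry_topic = f"/devices/{DEVICE_ID}/events"

def make_reading(watts):
    """Serialize one smart-meter sample as a JSON telemetry payload."""
    return json.dumps({
        "device_id": DEVICE_ID,
        "timestamp": datetime.datetime(2013, 7, 4, 12, 0, 0).isoformat(),
        "gross_power_w": watts,
    })

payload = make_reading(1543.2)
# With a real, authenticated MQTT client you would then publish:
# client.publish(telemetry_topic, payload, qos=1)
```

The protocol bridge forwards each published payload to Cloud Pub/Sub unchanged, so whatever schema you choose here is the schema your downstream consumers see.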
Meanwhile, if you'd prefer to build out your disaggregation pipeline in a different manner, you can also use Cloud Dataflow and Pub/Sub I/O to build an app with similar functionality.

Data processing and machine learning

Dataset introduction and exploration

We trained our model to predict each appliance's on/off status from gross power readings, using the UK Domestic Appliance-Level Electricity (UK-DALE, publicly available here [1]) dataset, so that this end-to-end demo system is reproducible. UK-DALE records both whole-house power consumption and usage from each individual appliance every 6 seconds, from 5 households. We demonstrate our solution using the data from house #2, for which the dataset includes a total of 18 appliances' power consumption. Given the granularity of the dataset (a sample rate of ⅙ Hz), it is difficult to estimate appliances with relatively tiny power usage, so appliances such as laptops and computer monitors are removed from this demo. Based on the data exploration study shown below, we selected eight of the original 18 appliances as our targets: a treadmill, washing machine, dishwasher, microwave, toaster, electric kettle, rice cooker, and "cooker" (electric stovetop).

The figure below shows the power consumption histograms of the selected appliances. Since all the appliances are off most of the time, most of the readings are near zero. Fig. 4 compares the aggregate power consumption of the selected appliances (`app_sum`) with the whole-house power consumption (`gross`). It is worth noting that the input to our demo system is the gross consumption (the blue curve), because this is the most readily available power usage data; it is even measurable from outside the home.

Figure 3. Target appliances and demand histograms

Figure 4. Data sample from House #2 (on 2013-07-04 UTC)

The data for House #2 spans from late February to early October 2013.
We used data from June to the end of September in our demo system, due to missing data at both ends of the period. A descriptive summary of the selected appliances appears in Table 1. As expected, the data is extremely imbalanced, both in "on" vs. "off" time for each appliance and in the power consumption scale of each appliance, and this imbalance is the main difficulty of our prediction task.

Table 1. Descriptive summary of power consumption

Preprocessing the data

Since UK-DALE did not record individual appliance on/off status, one key preprocessing step is to label the on/off status of each appliance at each timestamp. We consider an appliance to be "on" if its power consumption is more than one standard deviation above the sample mean of its power readings, given that appliances are off most of the time and hence most of the readings are near zero. The code for data preprocessing can be found in the notebook provided, and you can also download the processed data from here.

With the preprocessed data in CSV format, TensorFlow's Dataset class serves as a convenient tool for data loading and transformation, such as building the input pipeline for model training: a few lines load data from the specified CSV file, and a few more transform it into our desired time-series sequences.

To address the data imbalance, you can either down-sample the majority class or up-sample the minority class. In our case, we propose a probabilistic negative down-sampling method: we preserve the subsequences in which at least one appliance is on, but filter the subsequences with all appliances off, based on a certain probability and threshold.
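The one-standard-deviation labeling rule described above can be sketched in a few lines of NumPy. The readings below are synthetic, not from UK-DALE; the rule itself (reading exceeds sample mean plus one standard deviation) is the one stated in the text.

```python
import numpy as np

# Synthetic power readings for one appliance: mostly near zero ("off"),
# with two large spikes while it runs ("on").
readings = np.array([0.0, 2.0, 1.0, 0.0, 1500.0, 1480.0, 0.0, 3.0])

# "On" when the reading is more than one standard deviation above the mean.
threshold = readings.mean() + readings.std()
on_off = (readings > threshold).astype(int)
print(on_off.tolist())  # → [0, 0, 0, 0, 1, 1, 0, 0]
```

Because the appliance is off most of the time, the mean and standard deviation are dominated by the near-zero readings, so the threshold lands well below the operating spikes and the labels come out as intended.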
The filtering logic integrates easily with the tf.data API. Finally, you'll want to follow best practices from the Input Pipeline Performance Guide to ensure that your GPU or TPU resources (if they are used to speed up training) are not wasted waiting for data to load from the input pipeline. To maximize usage, we employ parallel mapping to parallelize data transformation, and we prefetch data to overlap the preprocessing and model execution of a training step.

The machine learning model

We adopt a long short-term memory (LSTM) based network as our classification model. See Understanding LSTM Networks for an introduction to recurrent neural networks and LSTMs. Fig. 5 depicts our model design, in which an input sequence of length n is fed into a multilayered LSTM network and a prediction is made for all m appliances. A dropout layer is added to the input of each LSTM cell, and the output of the whole sequence is fed into a fully connected layer. We implemented this model as a TensorFlow estimator.

Figure 5. LSTM based model architecture

There are two ways to implement the above architecture: TensorFlow's native API (tf.layers and tf.nn) and the Keras API (tf.keras). Compared to TensorFlow's native API, Keras is a higher-level API that lets you train and serve deep learning models with three key advantages: ease of use, modularity, and extensibility. tf.keras is TensorFlow's implementation of the Keras API specification. In the accompanying code sample, we implemented the same LSTM-based classification model using both methods so that you can compare the two.

Training and hyperparameter tuning

Cloud Machine Learning Engine supports both training and hyperparameter tuning.
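The probabilistic negative down-sampling described above can be sketched framework-agnostically; in the demo system the same predicate would be expressed as a tf.data filter. The subsequences and keep-probability below are illustrative.

```python
import random

def keep_subsequence(labels, p, rng):
    """Keep every subsequence with at least one appliance on; keep
    all-off ("negative") subsequences only with probability p.

    labels: per-timestep lists of per-appliance on/off flags.
    """
    has_on = any(any(step) for step in labels)
    return has_on or rng.random() < p

rng = random.Random(42)
subsequences = [
    [[0, 0], [0, 0]],  # all off  -> kept only with probability p
    [[0, 1], [0, 0]],  # one "on" -> always kept
    [[0, 0], [0, 0]],
    [[1, 0], [1, 0]],
]
kept = [s for s in subsequences if keep_subsequence(s, p=0.3, rng=rng)]
print(len(kept))
```

Tuning `p` controls how aggressively the dominant all-off class is thinned out, which directly shapes the class balance the LSTM sees during training.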
Figure 6 shows the average (over all appliances) precision, recall, and f_score for multiple trials with different combinations of hyperparameters. We observed that hyperparameter tuning significantly improves model performance.

Figure 6. Learning curves from hyperparameter tuning

We selected the two experiments with optimal scores from hyperparameter tuning and report their performance in Table 2.

Table 2. Hyperparameter tuning of selected experiments

Table 3 lists the precision and recall of each individual appliance. As mentioned in the "Dataset introduction and exploration" section above, the cooker and the treadmill ("running machine") are difficult to predict, because their peak power consumption is significantly lower than that of the other appliances.

Table 3. Precision and recall of predictions for individual appliances

Conclusion

We have provided an end-to-end demonstration of how you can use machine learning to accurately determine the operating status of home appliances based only on smart power readings. Several products, including Cloud IoT Core, Cloud Pub/Sub, Cloud ML Engine, App Engine, and BigQuery, are orchestrated to support the whole system, with each product solving a specific problem required to implement this demo: data collection/ingestion, machine learning model training, real-time serving/prediction, and so on. Both our code and data are available for those of you who would like to try out the system for yourself.

We are optimistic that both we and our customers will develop ever more interesting applications at the intersection of more capable IoT devices and fast-evolving machine learning algorithms. Google Cloud provides both the IoT infrastructure and the machine learning training and serving capabilities that make newly capable smart IoT deployments a reality.

[1] Jack Kelly and William Knottenbelt. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes.
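Hyperparameter tuning on Cloud ML Engine is driven by a tuning section in the job's configuration file. The structure below follows Cloud ML Engine's hyperparameter tuning config; the metric tag, parameter names, and ranges are hypothetical values chosen for this LSTM, not the ones used in the post:

```yaml
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: f_score   # metric the trainer reports
    maxTrials: 30
    maxParallelTrials: 3
    params:
      - parameterName: learning_rate
        type: DOUBLE
        minValue: 0.0001
        maxValue: 0.01
        scaleType: UNIT_LOG_SCALE
      - parameterName: lstm_units
        type: INTEGER
        minValue: 32
        maxValue: 256
        scaleType: UNIT_LINEAR_SCALE
```

The service then runs up to `maxTrials` training jobs, searching these ranges for the combination that maximizes the reported metric.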
Scientific Data 2, Article number: 150007, 2015, DOI: 10.1038/sdata.2015.7. Read more »
  • How Box Skills can optimize your workflow with the help of Cloud AI
Have you ever had to manually upload and tag a lot of files? It's no fun. Increasingly, though, machine learning algorithms can help you or your team classify and tag large volumes of content automatically. And if your company uses Box, a popular cloud content management platform, you can now apply Google ML services to your enterprise content with just a few lines of code using the Box Skills framework. Box Skills makes it easy to connect an ML service.

With technologies like image recognition, speech-to-text transcription, and natural language understanding, Google Cloud makes it easy to enrich your Box files with useful metadata. For example, if you have lots of images in your repository, you can use the Cloud Vision API to understand more about each image, such as the objects or landmarks it contains; for documents, you can parse their contents and identify elements that determine the document's category. If your needs extend beyond the functionality provided by Cloud Vision, you can point your Skill at a custom endpoint that serves your own custom-trained model. The metadata applied to files in Box via a Box Skill can power other Box functionality such as search, workflow, or retention.

An example integration in action

Now, let's look at an example. Many businesses use Box to store images of their products. With the Box Skills Kit and the product search functionality in the Cloud Vision API, you can automatically catalog these products. When a user uploads a new product image into Box, the product search feature within the Vision API helps identify similar products in the catalog, as well as the maximum price for such a product.

Configuring and deploying a product search Box Skill

Let's look at how you can use the Box Skills Kit to implement the use case outlined above.

1. Create an endpoint for your Skill
   a. Follow this QuickStart guide.
   b. You can use this API endpoint to call a pre-trained machine learning model to classify new data.
   c. Create a Cloud Function to point your Box Skill at the API endpoint created above.
   d. Clone the following repository.
   e. Next, follow the instructions to deploy the function to your project.
   f. Make a note of the endpoint's URI.
2. Configure a Box Custom Skills App in Box, then point it at the Cloud Function created above.
   a. Follow the instructions.
   b. Then these instructions.

And there you have it: you now have a new custom Box Skill, enabled by Cloud AI, that's ready to use. Try uploading a new image to your Box drive, and notice that the maximum retail price and information on similar products are both displayed under the "skills" console.

Using your new Skill

Now that you're all set up, you can begin by uploading an image file of household goods, apparel, or toys into your Box drive. The upload triggers a Box Skill event workflow, which calls the Cloud Function you deployed in Google Cloud and whose endpoint you specified in the Box Admin Console. The Cloud Function uses the Box Skills Kit's FileReader API to read the base64-encoded image string that Box sends automatically when the upload trigger occurs. The function then calls the product search feature of Cloud Vision and creates a Topics Card with the data returned. Next, it creates a Faces Card in which to populate a thumbnail scaled from the original image. Finally, the function persists the skills cards within Box using the SkillsWriter API. Now you can open the image in Box Drive, click the "skills" menu (which expands when you click the "magic wand" icon on the right), and see product catalog information, with similar products and maximum price populated.

What's next?

Over the past several years, Google Cloud and Box have built a variety of tools to make end users more productive.
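The card-building step in that flow can be sketched as a small pure function. This is a hedged illustration, not the Box Skills Kit API itself: the card schema and field names here are hypothetical, and in a real Skill the card would be produced via the Skills Kit's card helpers and persisted with its SkillsWriter.

```python
def build_topics_card(matches):
    """Turn product-search matches into illustrative card metadata.

    matches: list of (product_name, price) tuples, e.g. as parsed from a
    Cloud Vision product search response (shape assumed, not verified).
    """
    max_price = max(price for _, price in matches)
    return {
        "type": "skill_card",            # hypothetical schema
        "skill_card_type": "keyword",
        "entries": [{"text": name} for name, _ in matches]
        + [{"text": f"max price: {max_price:.2f}"}],
    }

card = build_topics_card([("red kettle", 24.99), ("steel kettle", 39.50)])
print(card["entries"])
```

Keeping this transformation separate from the Box and Vision API calls makes the Cloud Function easy to unit-test with canned product-search results.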
Today, the Box Skills integration opens the door to a whole new world of advanced AI tools and services: in addition to accessing pre-trained models via the Vision API, Video Intelligence API, or Speech-to-Text API, data scientists can train and host custom models written in TensorFlow, scikit-learn, Keras, or PyTorch on Cloud ML Engine. Lastly, Cloud AutoML lets you train a model on your dataset without having to write any code. Whatever your level of comfort with code or data science, we’re committed to making it easy for you to run machine learning-enhanced annotations on your data.

You can find all the code discussed in this post, along with its associated documentation, in its GitHub repository. Goodbye, tedious repetition! Hello, productivity. Read more »
  • Making the machine: the machine learning lifecycle
    As a Googler, one of my roles is to educate the software development community on machine learning (ML). The first introduction for many individuals is what is referred to as the ‘model’. While building models, tuning them, and evaluating their predictive abilities has generated a great deal of interest and excitement, many organizations still find themselves asking more basic questions, like: how does machine learning fit into their software development lifecycle?

In this post, I explain how machine learning maps to and fits in with the traditional software development lifecycle. I refer to this mapping as the machine learning lifecycle. It will help you as you think about how to incorporate machine learning, including models, into your software development processes. The machine learning lifecycle consists of three major phases: planning, data engineering, and modeling.

Planning
In contrast to a static algorithm coded by a software developer, an ML model is an algorithm that is learned and dynamically updated. You can think of a software application as an amalgamation of algorithms, defined by design patterns and coded by software engineers, that perform planned tasks. Once an application is released to production, it may not perform as planned, prompting developers to rethink, redesign, and rewrite it (continuous integration/continuous delivery).

We are entering an era of replacing some of these static algorithms with ML models, which are essentially dynamic algorithms. This dynamism presents a host of new challenges for planners, who work in conjunction with product owners and quality assurance (QA) teams.

For example, how should the QA team test and report metrics? ML models are often expressed as confidence scores. Let’s suppose that a model shows that it is 97% accurate on an evaluation data set. Does it pass the quality test? 
If we built a calculator using static algorithms and it got the answer right 97% of the time, we would want to know about the 3% of the time it does not.

Similarly, how does a daily standup work with machine learning models? It’s not like the training process is going to give a quick update each morning on what it learned yesterday and what it anticipates learning today. It’s more likely your team will be giving updates on data gathering/cleaning and hyperparameter tuning.

When the application is released and supported, one usually develops policies to address user issues. But with continuous learning and reinforcement learning, the model is learning the policy. What policy do we want it to learn? For example, you may want it to observe and detect user friction in navigating the user interface and learn to adapt the interface (auto A/B) to reduce the friction.

Within an effective ML lifecycle, planning needs to be embedded in all stages to start answering these questions specific to your organization.

Data engineering
Data engineering is where the majority of the development budget is spent: as much as 70% to 80% of engineering funds in some organizations. Learning is dependent on data: lots of data, and the right data. It’s like the old software engineering adage: garbage in, garbage out. The same is true for modeling: if bad data goes in, what the model learns is noise.

In addition to software engineers and data scientists, you really need a data engineering organization. These skilled engineers will handle data collection (e.g., billions of records), data extraction (e.g., SQL, Hadoop), data transformation, data storage, and data serving. It’s the data that consumes the vast majority of your physical resources (persistent storage and compute). 
Because of the magnitude of the scale involved, these tasks are now typically handled with cloud services rather than traditional on-premises methods. Effective deployment and management of data cloud operations are handled by those skilled in data operations (DataOps). Data collection and serving are handled by those skilled in data warehousing (DBAs); data extraction and transformation by those skilled in data engineering (data engineers); and data analysis by those skilled in statistical analysis and visualization (data analysts).

Modeling
Modeling is integrated throughout the software development lifecycle. You don’t just train a model once and be done with it. The concept of one-shot training, while appealing in budget terms and simplification, is only effective in academic and single-task use cases.

Until fairly recently, modeling was the domain of data scientists. The initial ML frameworks (like Theano and Caffe) were designed for data scientists. ML frameworks are evolving, and today's (like Keras and PyTorch) are more in the realm of software engineers. Data scientists play an important role in researching the classes of machine learning algorithms and their amalgamation, advising on business policy and direction, and moving into roles of leading data-driven teams.

But as ML frameworks and AI as a Service (AIaaS) evolve, the majority of modeling will be performed by software engineers. The same goes for feature engineering, a task performed by today’s data engineers: with its similarities to conventional tasks related to data ontologies, namespaces, self-defining schemas, and contracts between interfaces, it too will move into the realm of software engineering. In addition, many organizations will move model building and training to cloud-based services used by software engineers and managed by data operations. 
Then, as AIaaS evolves further, modeling will transition to a combination of turnkey solutions accessible via cloud APIs, such as Cloud Vision and Cloud Speech-to-Text, and pre-trained algorithms customized using transfer learning tools such as AutoML.

Frameworks like Keras and PyTorch have already transitioned away from symbolic programming to imperative programming (the dominant form in software development), and incorporate object-oriented programming (OOP) principles such as inheritance, encapsulation, and polymorphism. One should anticipate that other ML frameworks will evolve to apply object-relational mapping (ORM) concepts, which we already use for databases, to data sources and inference (prediction). Common best practices will evolve, and industry-wide design patterns will become defined and published, much like how Design Patterns by the Gang of Four influenced the evolution of OOP.

Like continuous integration and delivery, continuous learning will also move into build processes, and be managed by build and reliability engineers. Then, once your application is released, its usage and adaptation in the wild will provide new insights in the form of data, which will be fed back into the modeling process so the model can continue learning.

As you can see, adopting machine learning isn’t simply a question of learning to train a model, and you’re done. You need to think deeply about how those ML models will fit into your existing systems and processes, and grow your staff accordingly. I, and all the staff here at Google, wish you the best in your machine learning journey, as you upgrade your software development lifecycle to accommodate machine learning. To learn more about machine learning on Google Cloud, visit our Cloud AI products page. Read more »
  • Spam does not bring us joy—ridding Gmail of 100 million more spam messages with TensorFlow
    1.5 billion people use Gmail every month, and 5 million paying businesses use Gmail in the workplace as part of G Suite. For consumers and businesses alike, a big part of Gmail’s draw is its built-in security protections.

Good security means constantly staying ahead of threats, and our existing ML models are highly effective at doing this. In conjunction with our other protections, they help block more than 99.9 percent of spam, phishing, and malware from reaching Gmail inboxes. Just as we evolve our security protections, we also look to advance our machine learning capabilities to protect you even better.

That’s why we recently implemented new protections powered by TensorFlow, an open-source machine learning (ML) framework developed at Google. These new protections complement existing ML and rules-based protections, and they’ve successfully improved our detection capabilities. With TensorFlow, we are now blocking around 100 million additional spam messages every day.

Where did we find these 100 million extra spam messages? We’re now blocking spam categories that used to be very hard to detect. Using TensorFlow has helped us block image-based messages, emails with hidden embedded content, and messages from newly created domains that try to hide a low volume of spammy messages within legitimate traffic.

Given that we’re already blocking the majority of spammy emails in Gmail, blocking millions more with precision is a feat. TensorFlow helps us catch the spammers who slip through that less than 0.1 percent, without accidentally blocking messages that are important to users.

One person’s spam is another person’s treasure
ML makes catching spam possible by helping us identify patterns in large data sets that humans who create the rules might not catch; it makes it easy for us to adapt quickly to ever-changing spam attempts. ML-based protections help us make granular decisions based on many different factors. Consider that every email has thousands of potential signals. 
Just because some of an email’s characteristics match up to those commonly considered “spammy” doesn’t necessarily mean it’s spam. ML allows us to look at all of these signals together to make a determination. Finally, it also helps us personalize our spam protections to each user: what one person considers spam, another person might consider an important message (think newsletter subscriptions or regular email notifications from an application).

Using TensorFlow to power ML
By complementing our existing ML models with TensorFlow, we’re able to refine these models even further, while allowing the team to focus less on the underlying ML framework and more on solving the problem: ridding your inbox of spam!

Applying ML at scale can be complex and time consuming. TensorFlow includes many tools that make the ML process easier and more efficient, accelerating the speed at which we can iterate. As an example, TensorBoard allows us to both comprehensively monitor our model training pipelines and quickly evaluate new models to determine how useful we expect them to be. TensorFlow also gives us the flexibility to easily train and experiment with different models in parallel to develop the most effective approach, instead of running one experiment at a time.

As an open-source framework, TensorFlow is used by teams and researchers all over the world (there have been 71,000 forks of the public code, among other open-source contributions!). This strong community support means new research and ideas can be applied quickly. And it means we can collaborate with other teams within Google more quickly and easily to best protect our users. All in all, these benefits allow us to scale our ML efforts, requiring fewer engineers to run more experiments and protect users more effectively.

This is just one example of how we’re using machine learning to keep users and businesses safe, and just one application of TensorFlow. 
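To make the earlier point about combining signals concrete, consider a logistic combination of weak signals. This is a drastically simplified illustration, not Gmail's model: real classifiers are trained, far larger, and not hand-weighted; the signal names and weights below are invented.

```python
import math

# Toy spam score: a logistic combination of weak signals.
# Signals and weights are invented for illustration only.
WEIGHTS = {
    "has_suspicious_link": 2.0,
    "new_sender_domain": 1.5,
    "image_only_body": 1.2,
    "user_often_reads_sender": -3.0,  # personalization pulls the other way
}

def spam_probability(signals):
    """Squash the summed evidence into a probability with a sigmoid."""
    z = sum(WEIGHTS[name] for name, present in signals.items() if present)
    return 1.0 / (1.0 + math.exp(-z))

# One "spammy" signal alone need not cross the threshold once
# personalization is taken into account:
p = spam_probability({
    "has_suspicious_link": True,
    "user_often_reads_sender": True,
})
```

The point is that no single signal decides the outcome; evidence accumulates across many signals, and a user-specific signal can outweigh a generically spammy one.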
Even within Gmail, we’re currently experimenting with TensorFlow in other security-related areas, such as phishing and malware detection, as part of our continuous efforts to keep users safe. And you can use it, too: Google open-sourced TensorFlow in 2015 to make ML accessible for everyone, so that many different organizations can take advantage of the technology that powers critical capabilities like spam prevention in Gmail. To learn more about TensorFlow and the companies that are using it, visit the TensorFlow website. To learn more about the security benefits of G Suite, download this eBook. Read more »
  • Exoplanets, astrobiological research, and Google Cloud: What we learned from NASA FDL’s Reddit AMA
    Are we alone in the universe? Does intelligent life exist on other planets? If you’ve ever wondered about these things, you’re not the only one. Last summer, we partnered with NASA's Frontier Development Lab (FDL) to help find answers to these questions; you can read about some of this work in this blog post. And as part of this work we partnered with FDL researchers to host an AMA (“ask me anything”) to answer all those burning questions from Redditlings far and wide. Here are some of the highlights:

Question: What can AI do to detect intelligent life on other planets?
Massimo Mascaro, Google Cloud Director of Applied AI: AI can help extract the maximum information from the very faint and noisy signals we can get from our best instruments. AI is really good at detecting anomalies and at digging through large amounts of data, and that's pretty much what we do when we search for life in space.

Question: About how much data is expected to be generated during this mission? Are we looking at terabytes, 10s of terabytes, or 100s of terabytes of data?
Megan Ansdell, Planetary Scientist with a specialty in exoplanets: The TESS mission will download ~6 TB of data every month as it observes a new sector of sky containing 16,000 target stars at 2-minute cadence. The mission lifetime is at least 2 years, which means TESS will produce on the order of 150 TB of data. You can learn more about the open source deep learning models that have been developed to sort through the data here.

Question: What does it mean to simulate atmospheres?
Giada Arney, Astronomy and astrobiology (mentor): Simulating atmospheres for me involves running computer models where I provide inputs to the computer on gases in the atmosphere, “boundary conditions”, temperature, and more. 
These atmospheres can then be used to simulate telescopic observations of similar exoplanets so that we can predict what atmospheric features might be observable with future observatories for different types of atmospheres.

Question: How useful is a simulated exoplanet database?
Massimo Mascaro: It's important to have a way to simulate the variability of the data you could observe, before observing it, to understand your ability to distinguish patterns, to plan how to build and operate instruments, and even to plan how to analyze the data eventually.
Giada Arney: Having a database of different types of simulated worlds will allow us to predict what types of properties we’ll be able to observe on a diverse suite of planets. Knowing these properties will then help us to think about the technological requirements of future exoplanet-observing telescopes, allowing us to anticipate the unexpected!

Question: Which off-the-shelf Google Cloud AI/ML APIs are you using?
Massimo Mascaro: We've leveraged a lot of Google Cloud’s infrastructure, in particular Compute Engine and GKE, both to experiment with data and to run computation at large scale (using up to 2,500 machines simultaneously), as well as TensorFlow and PyTorch running on Google Cloud to train deep learning models for the exoplanets and astrobiology experiments.

Question: What advancements in science can become useful in the future other than AI?
Massimo Mascaro: AI is just one of the techniques science can benefit from in our times. I would definitely put in that league the wide access to computation. 
This is not only helping science in data analysis and AI, but in simulation, instrument design, communication, etc.

Question: What do you think are the key things that will inspire the next generation of astrophysicists, astrobiologists, and data scientists?
Sara Jennings, Deputy Director, NASA FDL: For future data scientists, I think it will be the cool problems like the ones we tackle at NASA FDL, which they will be able to solve using new and ever-increasing data and techniques. With new instruments and data analysis techniques getting so much better, we're now at a moment where asking questions such as whether there's life outside our planet is no longer preposterous, but real scientific work.
Daniel Angerhausen, Astrophysicist with expertise spanning astrobiology to exoplanets (mentor): I think one really important point is that we see more and more women in science. This will be such a great inspiration for girls to pursue careers in STEM. For most of the history of science we were using just 50 percent of our potential, and this will hopefully be changed by our generation.

You can read the full AMA transcript here. Read more »
  • AI in Depth: Cloud Dataproc meets TensorFlow on YARN: Let TonY help you train right in your cluster
    Apache Hadoop has become an established and long-running framework for distributed storage and data processing. Google’s Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simple, cost-efficient way. With Cloud Dataproc, you can set up a distributed storage platform without worrying about the underlying infrastructure. But what if you want to train TensorFlow workloads directly on your distributed data store?

This post will explain how to install a Hadoop cluster with TonY (TensorFlow on YARN), LinkedIn's open-source project. You will deploy a Hadoop cluster using Cloud Dataproc and use TonY to launch a distributed machine learning job. We’ll explore how you can use two of the most popular machine learning frameworks: TensorFlow and PyTorch.

TensorFlow supports distributed training, allowing portions of the model’s graph to be computed on different nodes. This distributed property can be used to split up computation to run on multiple servers in parallel. Orchestrating distributed TensorFlow is not a trivial task and not something that all data scientists and machine learning engineers have the expertise, or desire, to do, particularly since it must be done manually. TonY provides a flexible and sustainable way to bridge the gap between the analytics powers of distributed TensorFlow and the scaling powers of Hadoop. With TonY, you no longer need to configure your cluster specification manually, a task that can be tedious, especially for large clusters.

The components of our system:

First, Apache Hadoop
Apache Hadoop is an open source software platform for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. 
Hadoop services provide data storage, data processing, data access, data governance, security, and operations.

Next, Cloud Dataproc
Cloud Dataproc is a managed Spark and Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Cloud Dataproc’s automation capability helps you create clusters quickly, manage them easily, and save money by turning clusters off when you don't need them. With less time and money spent on administration, you can focus on your jobs and your data.

And now TonY
TonY is a framework that enables you to natively run deep learning jobs on Apache Hadoop. It currently supports TensorFlow and PyTorch. TonY enables running either single-node or distributed training as a Hadoop application. This native connector, together with other TonY features, runs machine learning jobs reliably and flexibly.

Installation

Set up a Google Cloud Platform project
Get started on Google Cloud Platform (GCP) by creating a new project, using the instructions found here.

Create a Cloud Storage bucket
Then create a Cloud Storage bucket, as described here.

Create a Hadoop cluster via Cloud Dataproc using initialization actions
You can create your Hadoop cluster directly from Cloud Console or via an appropriate `gcloud` command. The following command initializes a cluster that consists of 1 master and 2 workers:

When creating a Cloud Dataproc cluster, you can specify the TonY initialization action script, which Cloud Dataproc runs on all nodes in your cluster immediately after the cluster is set up.

Note: Use Cloud Dataproc version 1.3-deb9, which is supported for this deployment. Cloud Dataproc version 1.3-deb9 provides Hadoop version 2.9.0. Check this version list for details.

Wait until your cluster is created. 
You can verify, under Cloud Console > Big Data > Cloud Dataproc > Clusters, that cluster installation is complete and your cluster’s status is Running. Go to Cloud Console > Big Data > Cloud Dataproc > Clusters and select your new cluster; you will see the Master and Worker nodes.

Connect to your Cloud Dataproc master server via SSH
Click on SSH and connect remotely to the Master server.

Verify that your YARN nodes are active

Installing TonY
TonY’s Cloud Dataproc initialization action will do the following:

  • Install and build TonY from the GitHub repository.
  • Create a sample folder containing TonY examples for the following frameworks: TensorFlow and PyTorch.

The following folders are created:

  • The TonY install folder (TONY_INSTALL_FOLDER) is located by default in:
  • The TonY samples folder (TONY_SAMPLES_FOLDER) is located by default in:

The TonY samples folder provides two examples of distributed machine learning jobs:

  • A TensorFlow MNIST example
  • A PyTorch MNIST example

Running a TensorFlow distributed job

Launch a TensorFlow training job
You will be launching the Dataproc job using a `gcloud` command. The following folder structure was created during installation in `TONY_SAMPLES_FOLDER`, where you will find a sample Python script to run the distributed TensorFlow job. This is a basic MNIST model, but it serves as a good example of using TonY with distributed TensorFlow. This MNIST example uses “data parallelism,” by which you use the same model in every device, using different training samples to train the model in each device. There are many ways to specify this structure in TensorFlow, but in this case we use “between-graph replication” using tf.train.replica_device_setter.

Dependencies
  • TensorFlow version 1.9

Note: If you require a more recent TensorFlow and TensorBoard version, take a look at the progress of this issue to be able to upgrade to the latest TensorFlow version.

Connect to Cloud Shell
Open Cloud Shell via the console UI. Use the following gcloud command to create a new job. 
Once launched, you can monitor the job. (See the section below on where to find the job monitoring dashboard in Cloud Console.)

Running a PyTorch distributed job

Launch your PyTorch training job
For PyTorch as well, you can launch your Cloud Dataproc job using a `gcloud` command. The following folder structure was created during installation in the TONY_SAMPLES_FOLDER, where you will find a sample script to run the distributed PyTorch job.

Dependencies
  • PyTorch version 0.4
  • Torch Vision 0.2.1

Verify your job is running successfully
You can track job status from the Dataproc Jobs tab: navigate to Cloud Console > Big Data > Dataproc > Jobs.

Access your Hadoop UI
Log in via web to Cloud Dataproc’s master node at http://<Node_IP>:8088 to track job status. Please take a look at this section to see how to access the Cloud Dataproc UI.

Clean up resources
Delete your Cloud Dataproc cluster.

Conclusion
Deploying TensorFlow on YARN enables you to train models straight from your data infrastructure living in HDFS and Cloud Storage. If you’d like to learn more about some of the related topics mentioned in this post, feel free to check out the following documentation links:

  • Machine Learning with TensorFlow on GCP
  • Hyperparameter tuning on GCP
  • How to train ML models using GCP

Acknowledgements: Anthony Hsu, LinkedIn Software Engineer; and Zhe Zhang, LinkedIn Core Big Data Infra team manager. Read more »
  • How we built a derivatives exchange with BigQuery ML for Google Next ‘18
    Financial institutions have a natural desire to predict the volume, volatility, value, or other parameters of financial instruments or their derivatives, to manage positions and mitigate risk more effectively. They also have a rich set of business problems (and correspondingly large datasets) to which it’s practical to apply machine learning techniques. Typically, though, in order to start using ML, financial institutions must first hire data scientist talent with ML expertise, a skill set for which recruiting competition is high. In many cases, an organization has to undertake the challenge and expense of bootstrapping an entire data science practice.

This summer, we announced BigQuery ML, a set of machine learning extensions on top of our scalable data warehouse and analytics platform. BigQuery ML effectively democratizes ML by exposing it via the familiar interface of SQL, thereby letting financial institutions accelerate their productivity and maximize existing talent pools. As we got ready for Google Cloud Next London last summer, we decided to build a demo to showcase BigQuery ML’s potential for the financial services community. In this blog post, we’ll walk through how we designed the system, selected our time-series data, built an architecture to analyze six months of historical data, and quickly trained a model to outperform a 'random guess' benchmark, all while making predictions in close to real time.

Meet the Derivatives Exchange
A team of Google Cloud solution architects and customer engineers built the Derivatives Exchange in the form of an interactive game, in which you can opt to either rely on luck or use predictions from a model running in BigQuery ML to decide which options contracts will expire in-the-money. Instead of using the value of financial instruments as the “underlying” for the options contracts, we used the volume of Twitter posts (tweets) for a particular hashtag within a specific timeframe. 
Our goal was to show the ease with which you can deploy machine learning models on Google Cloud to predict an instrument’s volume, volatility, or value.

The Exchange demo, as seen at Google Next ‘18 London

Our primary goal was to translate an existing and complex trading prediction process into a simple illustration to which users from a variety of industries can relate. Thus, we decided to:

  • Use the very same Google Cloud products that our customers use daily.
  • Present a time-series that is familiar to everyone; in this case, the number of hashtag tweets observed in a 10-minute window serves as the “underlying” for our derivative contracts.
  • Build a fun, educational, and inclusive experience.

When designing the contract terms, we used this Twitter time-series data in a manner similar to the strike levels specified in weather derivatives.

Architectural decisions

Solution architecture diagram: the social media options market

We imagined the exchange as a retail trading pit where, using mobile handsets, participants purchase European binary range call option contracts across various social media single names (what most people would think of as hashtags). Contracts are issued every ten minutes and expire after ten minutes. At expiry, the count of accumulated #hashtag mentions for the preceding window is used to determine which participants were holding in-the-money contracts, and their account balances are updated accordingly. Premiums are collected upon opening interest in a contract, and are refunded if the contract strikes in-the-money. All contracts pay out 1:1.

We chose the following Google Cloud products to implement the demo:

Compute Engine served as our job server:
The implementation executes periodic tasks for issuing, expiring, and settling contracts. The design also requires a singleton process to run as a daemon to continually ingest tweets into BigQuery. We decided to consolidate these compute tasks into an ephemeral virtual machine on Compute Engine. 
The job server tasks were authored with Node.js and shell scripts, using cron jobs for scheduling, and configured by an instance template with embedded VM startup scripts, for flexibility of deployment. The job server does not interact with any traders on the system, but populates the “market operational database” with both participant and contract status.

Cloud Firestore served as our market operational database:
Cloud Firestore is a document-oriented database that we use to store information on market sessions. It serves as a natural destination for the tweet count and open interest data displayed by the UI, and enables seamless integration with the front end.

Firebase and App Engine provided our mobile and web applications:
Using the Firebase SDK for both our mobile and web applications’ interfaces enabled us to maintain a streamlined codebase for the front end. Some UI components (such as the leaderboard and market status) need continual updates to reflect changes in the source data (like when a participant’s interest in a contract expires in-the-money). The Firebase SDK provides concise abstractions for developers and enables front-end components to be bound to Cloud Firestore documents, and therefore to update automatically whenever the source data changes. Choosing App Engine to host the front-end application allowed us to focus on UI development without the distractions of server management or configuration deployment. This helped the team rapidly produce an engaging front end.

Cloud Functions ran our backend API services:
The UI needs to save trades to Cloud Firestore, and Cloud Functions facilitate this serverlessly. 
This serverless backend means we can focus on development logic, rather than server configuration or schema definitions, thereby significantly reducing the length of our development iterations.

BigQuery and BigQuery ML stored and analyzed tweets:
BigQuery solves so many diverse problems that it can be easy to forget how many aspects of this project it enables. First, it reliably ingests and stores volumes of streaming Twitter data at scale and economically, with minimal integration effort. The daemon process code for ingesting tweets consists of 83 lines of JavaScript, with only 19 of those lines pertaining to BigQuery. Next, it lets us extract features and labels from the ingested data, using standard SQL syntax. Most importantly, it brings ML capabilities to the data itself with BigQuery ML, allowing us to train a model on features extracted from the data, and ultimately to expose predictions at runtime by querying the model with standard SQL.

BigQuery ML can help solve two significant problems that the financial services community faces daily. First, it brings predictive modeling capabilities to the data, sparing the cost, time, and regulatory risk associated with migrating sensitive data to external predictive models. Second, it allows these models to be developed using common SQL syntax, empowering data analysts to make predictions and develop statistical insights. At Next ‘18 London, one attendee in the pit observed that the tool fills an important gap between data analysts, who might have deep familiarity with their particular domain’s data but less familiarity with statistics, and data scientists, who possess expertise around machine learning but may be unfamiliar with the particular problem domain. 
We believe BigQuery ML helps address a significant talent shortage in financial services by blending these two distinct roles into one.

Structuring and modeling the data
Our model training approach is as follows:

First, persist raw data in the simplest form possible: filter the Twitter Enterprise API feed for tweets containing specific hashtags (pulled from a pre-defined subset), and persist a two-column time-series consisting of the specific hashtag as well as the timestamp of that tweet as it was observed in the Twitter feed.

Second, define a view in SQL that sits atop the main time-series table and extracts features from the raw Twitter data. We selected features that allow the model to predict the number of tweet occurrences for a given hashtag within the next 10-minute period. Specifically:

  • Hashtag: #fintech may have behaviors distinct from #blockchain and distinct from #brexit, so the model should be aware of this as a feature.
  • Day of week: Sunday’s tweet behaviors will be different from Thursday’s tweet behaviors.
  • Specific intraday window: We sliced a 24-hour day into 144 10-minute segments, so the model can inform us on trend differences between various parts of the 24-hour cycle.
  • Average tweet count from the past hour: These values are calculated by the view based upon the primary time-series data.
  • Average tweet velocity from the past hour: To predict future tweet counts accurately, the model should know how active the hashtag has been in the prior hour, and whether that activity was smooth (say, 100 tweets consistently for each of the last six 10-minute windows) or bursty (say, five 10-minute windows with 0 tweets followed by one window with 600 tweets).
  • Tweet count range: This is our label, the final output value that the model will predict. The contract issuance process running on the job server contains logic for issuing options contracts with strike ranges for each hashtag and 10-minute window (Range 1: 0-100, Range 2: 101-250, etc.). 
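To make the range-labeling and 1:1 settlement rules concrete, here is a small Python sketch. The first two range boundaries follow the example in the text; the wider ranges, the helper names, and the contract representation are invented for illustration (the real exchange derived per-hashtag boundaries from volume history and implemented this on the job server).

```python
# Map a window's tweet count to the in-the-money strike range.
# The first two boundaries follow the text's example (0-100, 101-250);
# the remaining ranges are illustrative assumptions.
RANGES = [(0, 100), (101, 250), (251, 500), (501, float("inf"))]

def in_the_money_range(tweet_count):
    """Return the 1-based index of the range that strikes in-the-money."""
    for i, (low, high) in enumerate(RANGES, start=1):
        if low <= tweet_count <= high:
            return i
    raise ValueError("tweet_count must be non-negative")

def settle(contracts, tweet_count, payout_ratio=1):
    """Refund the premium plus a 1:1 payout for in-the-money contracts.

    contracts maps trader -> (chosen range index, premium paid).
    """
    winner = in_the_money_range(tweet_count)
    return {
        trader: premium * (1 + payout_ratio) if rng == winner else 0
        for trader, (rng, premium) in contracts.items()
    }

# 180 tweets in the window: Range 2 (101-250) strikes in-the-money,
# so alice's premium is refunded and paid 1:1, while bob's is kept.
payouts = settle({"alice": (2, 100), "bob": (1, 100)}, tweet_count=180)
```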
We took the large historical Twitter dataset and, using the same logic, stamped each example with a label indicating the range that would have been in-the-money. Just as equity option chains issued on a stock are informed by the specific stock’s price history, our exchange’s option chains are informed by the underlying hashtag's volume history.

Third, train the model on this SQL view. BigQuery ML makes model training an incredibly accessible exercise. While remaining inside the data warehouse, we use a SQL statement to declare that we want to create a model trained on a particular view containing the source data, using a particular column as a label.

Finally, deploy the trained model in production. Again using SQL, simply query the model based on certain input parameters, just as you would query any table.

Trading options contracts

To make the experience engaging, we wanted to recreate a bit of the open-outcry pit experience by having multiple large “market data” screens for attendees (the trading crowd) to track contract and participant performance. Demo participants used Pixel 2 handsets in the pit to place orders using a simple UI, from which they could allocate their credits to any or all of the three hashtags. When placing their order, they chose between relying on their own forecast, or using the predictions of a BigQuery ML model for their specific options portfolio, among the list of contracts currently trading in the market. Once the trades were made for their particular contracts, they monitored how their trades performed compared to other “traders” in real time, then saw how accurate the respective predictions were when the trading window closed at expiration time (every 10 minutes).

ML training process

To easily generate useful predictions about tweet volumes, we use a three-part process. First, we store tweet time-series data in a BigQuery table. 
Second, we layer views on top of this table to extract the features and labels required for model training. Finally, we use BigQuery ML to train, and get predictions from, the model.

The canonical list of hashtags to be counted is stored within a BigQuery table named “hashtags”. This is joined with the “tweets” table to determine aggregates for each time window.

Example 1: Schema definition for the “hashtags” table

1. Store tweet time series data

The tweet listener writes tags, timestamps, and other metadata to a BigQuery table named “tweets” that possesses the schema listed in example 2:

Example 2: Schema definition for the “tweets” table

2. Extract features via layered views

The lowest-level view calculates the count of each hashtag’s occurrence, per intraday window. The mid-level view extracts the features mentioned in the above section (“Structuring and modeling the data”). The top-level view then extracts the label (i.e., the “would-have-been in-the-money” strike range) from that time-series data.

a. Lowest-level view

The lowest-level view is defined by the SQL in example 3. The view definition contains logic to aggregate tweet history into 10-minute buckets (with 144 of these buckets per 24-hour day) by hashtag.

Example 3: low-level view definition

b. Intermediate view

The selection of some features (for example: hashtag, day-of-week or specific intraday window) is straightforward, while others (such as average tweet count and velocity for the past hour) are more complex. The SQL in example 4 illustrates these more complex feature selections.

Example 4: intermediate view definition for adding features

c. Highest-level view

Having selected all necessary features in the prior view, it’s time to select the label. The label should be the strike range that would have been in-the-money for a given historical hashtag and ten-minute window. 
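Expressed outside of SQL, the windowing and rolling features these views compute are simple arithmetic. The sketch below is an illustrative Javascript rendering, not the actual view definitions: the function names are hypothetical, and "velocity" is modeled here as the mean absolute change between consecutive windows.

```javascript
// A day is sliced into 144 ten-minute windows (6 per hour * 24 hours);
// this maps a timestamp to its window index, 0..143.
function intradayWindow(date) {
  return Math.floor((date.getUTCHours() * 60 + date.getUTCMinutes()) / 10);
}

// Given the tweet counts for the six 10-minute windows of the prior hour,
// derive two rolling features: the average count, and a simple "velocity"
// (mean absolute change between consecutive windows) that separates
// smooth activity from bursty activity.
function priorHourFeatures(counts) {
  const avgCount = counts.reduce((a, b) => a + b, 0) / counts.length;
  let delta = 0;
  for (let i = 1; i < counts.length; i++) {
    delta += Math.abs(counts[i] - counts[i - 1]);
  }
  return { avgCount, velocity: delta / (counts.length - 1) };
}

console.log(intradayWindow(new Date(Date.UTC(2019, 2, 31, 0, 25)))); // → 2
console.log(priorHourFeatures([100, 100, 100, 100, 100, 100])); // smooth: velocity 0
console.log(priorHourFeatures([0, 0, 0, 0, 0, 600]));           // bursty: velocity 120
```

Note that the smooth and bursty examples share the same hourly average (100 tweets per window); only the velocity feature tells them apart, which is why the model is given both.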
The application’s “Contract Issuance” batch job generates strike ranges for every 10-minute window, and its “Expiration and Settlement” job determines which contract (range) struck in-the-money. When labeling historical examples for model training, it’s critical to apply this exact same application logic.

Example 5: highest-level view

3. Train and get predictions from model

Having created a view containing our features and label, we refer to the view in our BigQuery ML model creation statement:

Example 6: model creation

Then, at the time of contract issuance, we execute a query against the model to retrieve a prediction as to which contract will be in-the-money.

Example 7: SELECTing predictions FROM the model

Improvements

The exchange was built with a relatively short lead time, hence several architectural and tactical simplifications were made in order to realistically ship on schedule. Future iterations of the exchange will look to implement several enhancements, such as:

Introduce Cloud Pub/Sub into the architecture

Cloud Pub/Sub is an enabler for refined data pipeline architectures, and it stands to improve several areas within the exchange’s solution architecture. For example, it would reduce the latency of reported tweet counts by allowing the requisite components to be event-driven rather than batch-oriented.

Replace VM `cron` jobs with Cloud Scheduler

The current architecture relies on Linux `cron`, running on a Compute Engine instance, for issuing and expiring options contracts, which contributes to the net administrative footprint of the solution. Launched in November of last year (after the version 1 architecture had been deployed), Cloud Scheduler will enable the team to provide comparable functionality with less infrastructural overhead.

Reduce the size of the code base by leveraging Dataflow templates

Often, solutions contain non-trivial amounts of code responsible for simply moving data from one place to another, like persisting Pub/Sub messages to BigQuery. 
Cloud Dataflow templates allow development teams to shed these non-differentiating lines of code from their applications and simply configure and manage specific pipelines for many common use cases.

Expand the stored attributes of ingested tweets

Storing the geographical tweet origins and the actual texts of ingested tweets could provide a richer basis from which future contracts may be defined. For example, sentiment analysis could be performed on the tweet contents for particular hashtags, thus allowing binary contracts to be issued pertaining to the overall sentiment on a topic.

Consider BigQuery user-defined functions (UDFs) to eliminate duplicate code among batch jobs and model execution

Certain functionality, such as the ability to nimbly deal with time in 10-minute slices, is required by multiple pillars of the architecture, and this resulted in the team deploying duplicate algorithms in both SQL and Javascript. With BigQuery UDFs, the team can author the algorithm once, in Javascript, and leverage the same code assets in both the Javascript batch processes and the BigQuery ML models.

A screenshot of the exchange dashboard during a trading session

If you’re interested in learning more about BigQuery ML, check out our documentation, or more broadly, have a look at our solutions for the financial services industry, or check out this interactive BigQuery ML walkthrough video. Or, if you’re able to attend Google Next ‘19 in San Francisco, you can even try out the exchange for yourself. Read more »
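For readers without the example listings at hand, BigQuery ML model creation and prediction both follow standard SQL patterns (`CREATE MODEL ... OPTIONS(...) AS SELECT ...`, and `ML.PREDICT`). The Node.js sketch below only assembles such statements; the dataset, view, model, and column names are hypothetical stand-ins, not the ones used by the exchange:

```javascript
// Assemble a BigQuery ML training statement: a classifier over the
// strike-range label, trained directly on the top-level feature view.
function createModelSQL(model, trainingView) {
  return `CREATE OR REPLACE MODEL \`${model}\`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['tweet_count_range']) AS
SELECT * FROM \`${trainingView}\``;
}

// At contract issuance time, ask the model which range it expects to
// strike in-the-money for the upcoming 10-minute window.
function predictSQL(model, featuresQuery) {
  return `SELECT * FROM ML.PREDICT(MODEL \`${model}\`, (${featuresQuery}))`;
}

const create = createModelSQL('exchange.range_model', 'exchange.training_view');
const predict = predictSQL(
  'exchange.range_model',
  "SELECT '#fintech' AS hashtag, 3 AS day_of_week, 87 AS intraday_window"
);
console.log(create);
console.log(predict);
```

Either string would then be submitted through the BigQuery client library or console; the point is that training and serving never leave SQL.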
  • Build an AI-powered, customer service virtual agent with Chatbase
    These days, most people don’t tolerate more than one or two bad customer service experiences. For contact centers drowning in customer calls and live chats, an AI-powered customer service virtual agent can reduce that risk by complementing humans to provide personalized service 24/7, without queuing or waiting. But the status-quo approach to designing those solutions (i.e., intuition and brainstorming) is slow, based on guesswork, and just scratches the surface on functionality -- usually causing more harm than good because the customer experience is poor.

    Built within Google’s internal incubator called Area 120, Chatbase is a conversational AI platform that replaces the risky status-quo approach with a data-driven one based on Google’s world-class machine learning and search capabilities. The results include faster development (by up to 10x) of a more helpful and versatile virtual agent, and happier customers!

    Lessons learned along the journey

    Initially, Chatbase provided a free-to-use analytics service for measuring and optimizing any AI-powered chatbot. (That product is now called Chatbase Virtual Agent Analytics.) After analyzing hundreds of thousands of bots and billions of messages in our first 18 months of existence, we had two revelations about how to help bot builders in a more impactful way: one, that customer service virtual agents would become the primary use case for the technology; and two, that using ML to glean insights from live-chat transcripts at scale would drastically shorten development time for those agents while creating a better consumer experience. 
    With those lessons learned, Chatbase Virtual Agent Modeling (currently available via an EAP) was born.

    Virtual Agent Modeling explained

    Virtual Agent Modeling (a component in the Cloud Contact Center AI solution) uses Google’s core strengths in ML and search to analyze thousands of transcripts, categorizing customer issues into “drivers” and then digging deeper to find specific intents (aka customer requests) per driver. For complex intents, Chatbase models simple yet rich flows developers can use to build a voice or chat virtual agent that handles up to 99% of interactions, responds helpfully to follow-up questions, and knows exactly when to do a hand-off to a live agent.

    In addition, the semantic search tool finds potentially thousands of training phrases per intent. When this analysis is complete, developers can export results to their virtual agent (via Dialogflow) -- cutting weeks, months, or even years from development time.

    Don’t settle for the slow status quo

    According to one Fortune 100 company upgrading its customer service virtual agent with Virtual Agent Modeling, “This new approach to virtual agent development moves at 200 mph, compared to 10-20 mph with current solutions.” Furthermore, it expects to nearly double the number of interactions its virtual agent can handle, from 53% of interactions to 92%.

    If you have at least 100,000 English-language live-chat transcripts available and plan to deploy or enhance either a voice or chat customer service virtual agent in 2019, Virtual Agent Modeling can help you get into that fast lane. Request your personal demo today! Read more »
  • Introducing Feast: an open source feature store for machine learning
    To operate machine learning systems at scale, teams need access to a wealth of feature data both to train their models and to serve them in production. GO-JEK and Google Cloud are pleased to announce the release of Feast, an open source feature store that allows teams to manage, store, and discover features for use in machine learning projects.

    Developed jointly by GO-JEK and Google Cloud, Feast aims to solve a set of common challenges facing machine learning engineering teams by becoming an open, extensible, unified platform for feature storage. It gives teams the ability to define and publish features to this unified store, which in turn facilitates discovery and feature reuse across machine learning projects.

    “Feast is an essential component in building end-to-end machine learning systems at GO-JEK,” says Peter Richens, Senior Data Scientist at GO-JEK, “and we are very excited to release it to the open source community. We worked closely with Google Cloud in the design and development of the product, and this has yielded a robust system for the management of machine learning features, all the way from idea to production.”

    For production deployments, machine learning teams need a diverse set of systems working together. Kubeflow is a project dedicated to making these systems simple, portable and scalable, and aims to deploy best-of-breed open-source systems for ML to diverse infrastructures. We are currently in the process of integrating Feast with Kubeflow to address the feature storage needs inherent in the machine learning lifecycle.

    The motivation

    Feature data are signals about a domain entity; e.g., for GO-JEK, we can have a driver entity and a feature for the daily count of trips completed. Other interesting features might be the distance between the driver and a destination, or the time of day. 
    A combination of multiple features is used as the input for a machine learning model.

    In large teams and environments, how features are maintained and served can diverge significantly across projects, and this introduces infrastructure complexity and can result in duplicated work.

    Typical challenges:

      • Features not being reused: Features representing the same business concepts are redeveloped many times, when existing work from other teams could have been reused.
      • Feature definitions vary: Teams define features differently and there is no easy access to the documentation of a feature.
      • Hard to serve up-to-date features: Combining streaming and batch derived features, and making them available for serving, requires expertise that not all teams have. Ingesting and serving features derived from streaming data often requires specialised infrastructure. As such, teams are deterred from making use of real-time data.
      • Inconsistency between training and serving: Training requires access to historical data, whereas models that serve predictions need the latest values. Inconsistencies arise when data is siloed into many independent systems requiring separate tooling.

    Our solution

    Feast solves these challenges by providing a centralized platform on which to standardize the definition, storage and access of features for training and serving. It acts as a bridge between data engineering and machine learning.

    Feast handles the ingestion of feature data from both batch and streaming sources. It also manages both warehouse and serving databases for historical and the latest data. Using a Python SDK, users are able to generate training datasets from the feature warehouse. Once their model is deployed, they can use a client library to access feature data from the Feast Serving API.

    Feast provides the following:

      • Discoverability and reuse of features: A centralized feature store allows organizations to build up a foundation of features that can be reused across projects. 
    Teams are then able to utilize features developed by other teams, and as more features are added to the store it becomes easier and cheaper to build models.
      • Access to features for training: Feast allows users to easily access historical feature data. This allows users to produce datasets of features for use in training models. ML practitioners can then focus more on modelling and less on feature engineering.
      • Access to features in serving: Feature data is also available to models in production through a feature serving API. The serving API has been designed to provide low-latency access to the latest feature values.
      • Consistency between training and serving: Feast provides consistency by managing and unifying the ingestion of data from batch and streaming sources, using Apache Beam, into both the feature warehouse and feature serving stores. Users can query features in the warehouse and the serving API using the same set of feature identifiers.
      • Standardization of features: Teams are able to capture documentation, metadata and metrics about features. This allows teams to communicate clearly about features, test feature data, and determine if a feature is useful for a particular model.

    Kubeflow

    There is a growing ecosystem of tools that attempt to productionize machine learning. A key open source ML platform in this space is Kubeflow, which has focused on improving packaging, training, serving, orchestration, and evaluation of models. Companies that have built successful internal ML platforms have identified that standardizing feature definitions, storage, and access was critical to that success.

    For this reason, Feast aims to be both deployable on Kubeflow and to integrate seamlessly with other Kubeflow components.  
    This includes a Python SDK for use with Kubeflow's Jupyter notebooks, as well as Kubeflow Pipelines. There is a Kubeflow GitHub issue here that allows for discussion of future Feast integration.

    How you can contribute

    Feast provides a consistent way to access features that can be passed into serving models, and to access features in batch for training. We hope that Feast can act as a bridge between your data engineering and machine learning teams, and we would love to hear your feedback via our GitHub project. For additional ways to contribute:

      • Find the Feast project on GitHub here
      • Join the Kubeflow community and find us on Slack

    Let the Feast begin! Read more »
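To make the consistency guarantee concrete: the essence of a feature store is that training and serving read the same feature identifiers from one system. The deliberately simplified sketch below illustrates that concept only; it is written in Javascript for consistency with the other examples on this page and is not Feast's actual API (Feast exposes a Python SDK and a serving API), and all names are illustrative.

```javascript
// Minimal illustration of the training/serving consistency idea: one store,
// one set of feature identifiers ("entity:feature"), two access patterns.
class ToyFeatureStore {
  constructor() {
    this.history = new Map(); // "entity:feature" -> [{ ts, value }, ...]
  }
  ingest(entity, feature, ts, value) {
    const key = `${entity}:${feature}`;
    if (!this.history.has(key)) this.history.set(key, []);
    this.history.get(key).push({ ts, value });
  }
  // Serving path: latest value only, for low-latency prediction requests.
  getOnline(entity, feature) {
    const rows = this.history.get(`${entity}:${feature}`) || [];
    return rows.length ? rows[rows.length - 1].value : undefined;
  }
  // Training path: history up to a cutoff timestamp, so training datasets
  // stay point-in-time correct.
  getHistorical(entity, feature, upToTs) {
    return (this.history.get(`${entity}:${feature}`) || [])
      .filter((r) => r.ts <= upToTs)
      .map((r) => r.value);
  }
}

const store = new ToyFeatureStore();
store.ingest('driver_42', 'daily_trip_count', 1, 7);
store.ingest('driver_42', 'daily_trip_count', 2, 9);
console.log(store.getOnline('driver_42', 'daily_trip_count'));        // → 9
console.log(store.getHistorical('driver_42', 'daily_trip_count', 1)); // → [7]
```

Because both paths key off the same `entity:feature` identifiers, a model trained on the historical view and served from the online view sees the same feature definitions, which is the inconsistency problem Feast is designed to remove.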
  • A simple blueprint for building AI-powered customer service on GCP
    As a Google Cloud customer engineer based in Amsterdam, I work with a lot of banks and insurance companies in the Netherlands. All of them have this common requirement: to help customer service agents (many of whom are poorly trained interns, due to the expense of hiring) handle large numbers of customer calls, especially at the end of the year when many consumers want to change or update their insurance plan.

    Most of these requests are predictable and easily resolved with the exchange of a small amount of information, which is a perfect use case for an AI-powered customer service agent. Virtual agents can provide non-queued service around the clock, and can easily be programmed to handle simple requests as well as hand off to well-trained live agents for more complicated issues. Furthermore, a well-designed solution can help ensure that consumer requests, regardless of the channel in which they are received (phone, chat, IoT), are routed to the correct resource. As a result, in addition to the obvious customer satisfaction benefits, research says that virtual agents could help businesses in banking and healthcare alone trim costs collectively by $11 billion a year.

    In this post, I’ll provide an overview of a simple solution blueprint I designed that may inspire you to meet these objectives using GCP. Similar solutions that integrate with existing call center systems can be obtained through Cloud Contact Center AI partners, as well.

    Requirements and solution

    All businesses have the goal of making customer service effortless. With an AI-powered approach, a system can be designed that accommodates consumers however they choose to reach out, whether by telephone, web chat, social media, mobile apps, or smart speaker.

    The particular approach described here covers three channels: web chat, the Google Assistant (on a Google Home), and telephone (through a telephony gateway). It also meets a few other requirements:

      • Ability to optimize over time. 
    If you know what questions consumers ask and how their sentiment changes during a conversation, the virtual agent (and thus customer satisfaction) can be improved over time.
      • Protection of consumer privacy. Per GDPR, sensitive personal information can’t be revealed or stored.
      • An easy deployment and management experience. It goes without saying that any company adopting cloud wants to avoid maintaining VMs, networks, and operating systems, as well as monolithic architecture. Thus the solution should take advantage of the ability to easily/automatically build, deploy, and publish updates.

    With Google Cloud, meeting these requirements is as easy as stitching a few components together. Let’s have a closer look.

    Technology stack

    The diagram below provides a high-level overview; I’ll explain each piece in turn.

    Dialogflow

    Dialogflow Enterprise Edition, an emerging standard for building AI-powered conversational experiences across multiple channels, is the “brains” of this solution. My customers love it because it doesn’t require special natural language understanding skills; a team of content experts and UX designers is all you need to build a robust virtual agent for a simple use case. It also integrates well with other Google Cloud components, offers error reporting and debug information out of the box, and is available along with Google Cloud Support and an SLA.

    As you can see in the architectural diagram, Dialogflow is integrated with the website channel through the Dialogflow SDK. Integration with the Google Assistant or the Phone Gateway simply requires flipping a switch during configuration.

    Channels

    Website: The website front-end and back-end are split into two separate Kubernetes containers. The website front-end is built with Angular, and the back-end container is based on Node.js with WebSocket integration. 
    Dialogflow has a Node.js client library, so text messages from the Angular app are passed to the Node.js server app via WebSocket, and from there to the Dialogflow SDK.

    The Google Assistant: Actions on Google is a framework for creating software applications (a.k.a. “actions”) for the Google Assistant. Actions on Google is nicely integrated in Dialogflow: just log in with your Google account and you can easily deploy your agent to the Google Assistant, enabling interactions on Android apps, via the Google Assistant app on iOS, or on Google Home.

    Phone: As mentioned in the introduction, if your plan is to integrate your virtual agent with an existing contact center call system, Google Cloud partners like Genesys, Twilio, and Avaya can help integrate Cloud Contact Center AI with their platforms. (For an overview, see this video from Genesys.) For startups and SMBs, the Dialogflow Phone Gateway feature (currently in beta) integrates a virtual agent with a Google Voice telephone number with just a few clicks, creating an “instant” customer service voice bot.

    Analytics

    Whether you’re building a full customer service AI system, a simple action for the Google Assistant, or anything in between, it’s important to know which questions/journeys are common, which responses are most satisfying, and if and when the virtual agent isn’t programmed to respond beyond a default “fallback” message. The diagram below shows the solution analytics architecture for addressing this need.

    Cloud Pub/Sub: Cloud Pub/Sub, a fully-managed, real-time publish/subscribe messaging service that sends and receives messages between independent applications, is the “glue” that holds the analytic components together. 
    All transcripts (from voice calls or chats) are sent to Cloud Pub/Sub as a first step before analysis.

    Cloud Functions: Google Cloud Functions is a lightweight compute platform for creating single-purpose, standalone functions that respond to events without the need to manage a server or runtime environment. In this case, the event will be triggered by Cloud Pub/Sub: every time a message arrives there through the subscriber endpoint, a cloud function will run the message through two Google Cloud services (see below) before storing it in Google BigQuery.

    Cloud Natural Language: This service reveals the structure of a text message; you can use it to extract information about people and places or, in this case, to detect the sentiment of a customer conversation. The API returns a sentiment score between -1 and 1.

    Cloud Data Loss Prevention: This service discovers and redacts any sensitive information, such as addresses and telephone numbers, remaining in transcripts before storage.

    BigQuery: BigQuery is Google Cloud’s serverless enterprise data warehouse, supporting super-fast SQL queries enabled by the massive processing power of Google's infrastructure. Using BigQuery, you could combine your website data with your chat logs. Imagine you can see that your customer browsed through one of your product webpages and then interacted with a chatbot: now you can answer them proactively with targeted deals. 
    Naturally, this analysis can be done through a third-party business intelligence tool like Tableau, with Google Data Studio, or through a homegrown web dashboard like the one shown below.

    Another use case would be to write a query that returns all chat messages that have a negative sentiment score:

    SELECT * from `chatanalytics.chatmessages` where SCORE < 0 ORDER BY SCORE ASC

    This query also returns the session ID, so you could then write a query to get the full chat transcript and explore why this customer became unhappy:

    SELECT * from `chatanalytics.chatmessages` where SESSION = '6OVkcIQg7QFvdc5EAAAs' ORDER BY POSTED

    Deployment: Finally, you can use Cloud Build to easily build and deploy these containers to Google Kubernetes Engine with a single command in minutes. A simple YAML file in the project specifies how this all works. As a result, each component/container can be independently modified as needed.

    Chatbase (optional): It’s not included in this blueprint, but for a more robust approach, Chatbase Virtual Agent Analytics (which powers Dialogflow Analytics and is free to use) is also an option. In addition to tracking health KPIs, it provides deep insights into user messages and journeys through various reports combined with transcripts. Chatbase also lets you report across different channels/endpoints.

    Conclusion

    Recently, it took me just a couple of evenings to build a full demo of this solution. And going forward, I don’t need to worry about installing operating systems, patches, or software, nor about scaling for demand: whether I have 10 or hundreds of thousands of users talking to the bot, it will just work. If you’re exploring improving customer satisfaction with an AI-powered customer service virtual agent, hopefully this blueprint is a thought-provoking place to start! Read more »
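The analytics path described above (Pub/Sub message → sentiment → redaction → BigQuery row) is easy to sketch as a single cloud function handler. In the version below the three Google Cloud services are stubbed as injected functions; their names and the MESSAGE column are hypothetical, while SESSION, SCORE, and POSTED mirror the columns queried in the SQL examples.

```javascript
// Sketch of the analytics cloud function's core logic. The injected
// detectSentiment, redactPII, and insertRow stand in for the Cloud Natural
// Language, Cloud DLP, and BigQuery client libraries respectively; their
// shapes here are assumptions for illustration.
async function processTranscriptMessage(message, { detectSentiment, redactPII, insertRow }) {
  // Pub/Sub delivers the payload base64-encoded.
  const text = Buffer.from(message.data, 'base64').toString('utf8');
  const score = await detectSentiment(text);   // sentiment between -1 and 1
  const redacted = await redactPII(text);      // strip addresses, phone numbers, ...
  const row = {
    SESSION: message.attributes.sessionId,
    MESSAGE: redacted,
    SCORE: score,
    POSTED: message.attributes.postedAt,
  };
  await insertRow('chatanalytics.chatmessages', row);
  return row;
}
```

Injecting the service clients as parameters keeps the core logic testable without network access; the real function would wrap the Natural Language, DLP, and BigQuery Node.js client libraries and be registered as the Pub/Sub-triggered entry point.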

AI – Latest News


ScienceDaily – Artificial Intelligence News

  • Researchers get humans to think like computers
    Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences and even school buses. People aren't supposed to be able to see how those images trip up computers, but in a new study, researchers show most people actually can. Read more »
  • Brain-inspired AI inspires insights about the brain (and vice versa)
    Researchers have described the results of experiments that used artificial neural networks to predict with greater accuracy than ever before how different areas in the brain respond to specific words. The work employed a type of recurrent neural network called long short-term memory (LSTM) that includes in its calculations the relationships of each word to what came before to better preserve context. Read more »
  • Robotic 'gray goo'
    Researchers have demonstrated for the first time a way to make a robot composed of many loosely coupled components, or 'particles.' Unlike swarm or modular robots, each component is simple, and has no individual address or identity. In their system, which the researchers call a 'particle robot,' each particle can perform only uniform volumetric oscillations (slightly expanding and contracting), but cannot move independently. Read more »
  • Google research shows how AI can make ophthalmologists more effective
    As artificial intelligence continues to evolve, diagnosing disease faster and potentially with greater accuracy than physicians, some have suggested that technology may soon replace tasks that physicians currently perform. But a new study shows that physicians and algorithms working together are more effective than either alone. Read more »
  • The robots that dementia caregivers want: Robots for joy, robots for sorrow
    A team of scientists spent six months co-designing robots with informal caregivers for people with dementia, such as family members. They found that caregivers wanted the robots to fulfill two major roles: support positive moments shared by caregivers and their loved ones; and lessen caregivers' emotional stress by taking on difficult tasks, such as answering repeated questions and restricting unhealthy food. Read more »
  • Water-resistant electronic skin with self-healing abilities created
    Inspired by jellyfish, researchers have created an electronic skin that is transparent, stretchable, touch-sensitive, and repairs itself in both wet and dry conditions. The novel material has wide-ranging uses, from water-resistant touch screens to soft robots aimed at mimicking biological tissues. Read more »
  • Seeing through a robot's eyes helps those with profound motor impairments
    An interface system that uses augmented reality technology could help individuals with profound motor impairments operate a humanoid robot to feed themselves and perform routine personal care tasks such as scratching an itch and applying skin lotion. The web-based interface displays a 'robot's eye view' of surroundings to help users interact with the world through the machine. Read more »
  • Can artificial intelligence solve the mysteries of quantum physics?
    A new study has demonstrated mathematically that algorithms based on deep neural networks can be applied to better understand the world of quantum physics, as well. Read more »
  • How intelligent is artificial intelligence?
    Scientists are putting AI systems to a test. Researchers have developed a method to provide a glimpse into the diverse 'intelligence' spectrum observed in current AI systems, analyzing these systems with a novel technology that allows automated analysis and quantification. Read more »
  • Faster robots demoralize co-workers
    New research finds that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort -- and they tend to dislike the robots. Read more »
  • A robotic leg, born without prior knowledge, learns to walk
    Researchers believe they have become the first to create an AI-controlled robotic limb driven by animal-like tendons that can even be tripped up and then recover within the time of the next footfall, a task for which the robot was never explicitly programmed to do. Read more »
  • How to train your robot (to feed you dinner)
    Researchers have developed a robotic system that can feed people who need someone to help them eat. Read more »
  • Ultra-low power chips help make small robots more capable
    An ultra-low power hybrid chip inspired by the brain could help give palm-sized robots the ability to collaborate and learn from their experiences. Combined with new generations of low-power motors and sensors, the new application-specific integrated circuit (ASIC) -- which operates on milliwatts of power -- could help intelligent swarm robots operate for hours instead of minutes. Read more »
  • Robots can detect breast cancer as well as radiologists
    A new article suggests that artificial intelligence systems may be able to perform as accurately as radiologists in the evaluation of digital mammography in breast cancer screening. Read more »
  • Neurodegenerative diseases identified using artificial intelligence
    Researchers have developed an artificial intelligence platform to detect a range of neurodegenerative disease in human brain tissue samples, including Alzheimer's disease and chronic traumatic encephalopathy. Read more »
  • Mini cheetah is the first four-legged robot to do a backflip
    New mini cheetah robot is springy and light on its feet, with a range of motion that rivals a champion gymnast. The four-legged powerpack can bend and swing its legs wide, enabling it to walk either right-side up or upside down. The robot can also trot over uneven terrain about twice as fast as an average person's walking speed. Read more »
  • Spiking tool improves artificially intelligent devices
    The aptly named software package Whetstone enables neural computer networks to process information up to 100 times more efficiently than current standards, making possible an increased use of artificial intelligence in mobile phones, self-driving cars, and image interpretation. Read more »
  • Robots track moving objects with unprecedented precision
    A novel system uses RFID tags to help robots home in on moving objects with unprecedented speed and accuracy. The system could enable greater collaboration and precision by robots working on packaging and assembly, and by swarms of drones carrying out search-and-rescue missions. Read more »
  • Artificial intelligence to boost Earth system science
    A new study shows that artificial intelligence can substantially improve our understanding of the climate and the Earth system. Read more »
  • The first walking robot that moves without GPS
    Desert ants are extraordinary solitary navigators. Researchers were inspired by these ants as they designed AntBot, the first walking robot that can explore its environment randomly and go home automatically, without GPS or mapping. This work opens up new strategies for navigation in autonomous vehicles and robotics. Read more »
  • Getting a grip on human-robot cooperation
    There is a time when a successful cooperation between humans and robots has decisive importance: it is in the precise moment that one "actor" is required to hand an object to another "actor" and, therefore, to coordinate their actions accordingly. But how can we make this interaction more natural for robots? Read more »
  • Teaching self-driving cars to predict pedestrian movement
    By zeroing in on humans' gait, body symmetry and foot placement, researchers are teaching self-driving cars to recognize and predict pedestrian movements with greater precision than current technologies. Read more »
  • Toward automated animal identification in wildlife research
    A new program automatically detects regions of interest within images, alleviating a serious bottleneck in processing photos for wildlife research. Read more »
  • Psychology: Robot saved, people take the hit
    To what extent are people prepared to show consideration for robots? A new study suggests that, under certain circumstances, some people are willing to endanger human lives -- out of concern for robots. Read more »
  • Citizen science projects have a surprising new partner, the computer
    Data scientists and citizen science experts partnered with ecologists who often study wildlife populations by deploying camera traps. These camera traps are remote, independent devices, triggered by motion and infrared sensors that provide researchers with images of passing animals. The researchers built skill sets to help computers identify other animals, such as a deer or squirrel, with even fewer images. Read more »
  • Walking with Pokémon
    In a recent study, researchers reveal how the Pokémon GO augmented reality game positively impacts physical activity in players over 40. The authors hope the findings will help urban planners and game designers inspire people to be more active. Read more »
  • Robot combines vision and touch to learn the game of Jenga
    Machine-learning approach could help robots assemble cellphones and other small parts in a manufacturing line. Read more »
  • Atari master: New AI smashes Google DeepMind in video game challenge
    A new breed of algorithms has mastered Atari video games 10 times faster than state-of-the-art AI, with a breakthrough approach to problem solving. Read more »
  • Most people overlook artificial intelligence despite flawless advice
    A team of researchers recently discovered that most people overlook artificial intelligence despite flawless advice. AI-like systems will be an integral part of the Army's strategy over the next five years, so system designers will need to start getting a bit more creative in order to appeal to users. Read more »
  • Engineers translate brain signals directly into speech
    In a scientific first, neuroengineers have created a system that translates thought into intelligible, recognizable speech. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain. Read more »
  • Defending against cyberattacks by giving attackers 'false hope'
    'The quarantine is a decoy that behaves very similar to the real compromised target to keep the attacker assuming that the attack is still succeeding. In a typical cyberattack the more deeply attackers go in the system, the more they have the ability to go many directions. It becomes like a Whack-A-Mole game for those defending the system. Our strategy simply changes the game, but makes the attackers think they are being successful.' Read more »
  • Self-driving cars, robots: Identifying AI 'blind spots'
    A novel model identifies instances in which autonomous systems have 'learned' from training examples that don't match what's actually happening in the real world. Engineers could use this model to improve the safety of artificial intelligence systems, such as driverless vehicles and autonomous robots. Read more »
  • The first tendril-like soft robot able to climb
    Researchers have made the first soft robot mimicking plant tendrils: it is able to curl and climb, using the same physical principles determining water transport in plants. In the future this tendril-like soft robot could inspire the development of wearable devices, such as soft braces, able to actively morph their shape. Read more »
  • Increasing skepticism against robots
    In Europe, people are more reserved regarding robots than they were five years ago. Read more »
  • Artificial intelligence can dramatically cut time needed to process abnormal chest X-rays
    New research has found that a novel Artificial Intelligence (AI) system can dramatically reduce the time needed to ensure that abnormal chest X-rays with critical findings will receive an expert radiologist opinion sooner, cutting the average delay from 11 days to less than three days. Chest X-rays are routinely performed to diagnose and monitor a wide range of conditions affecting the lungs, heart, bones, and soft tissues. Read more »
  • Smart microrobots that can adapt to their surroundings
    Scientists have developed tiny elastic robots that can change shape depending on their surroundings. Modeled after bacteria and fully biocompatible, these robots optimize their movements so as to get to hard-to-reach areas of the human body. They stand to revolutionize targeted drug delivery. Read more »
  • Measuring ability of artificial intelligence to learn is difficult
    Organizations looking to benefit from the artificial intelligence (AI) revolution should be cautious about putting all their eggs in one basket, a study has found. Read more »
  • 'Ambidextrous' robots could dramatically speed e-commerce
    Engineers present a novel, 'ambidextrous' approach to grasping a diverse range of object shapes without training. Read more »
  • Smart home tests first elder care robot
    Researchers believe the robot, nicknamed RAS, could eventually help those with dementia and other limitations continue to live independently in their own homes. Read more »
  • Artificial bug eyes
    Single lens eyes, like those in humans and many other animals, can create sharp images, but the compound eyes of insects and crustaceans have an edge when it comes to peripheral vision, light sensitivity and motion detection. That's why scientists are developing artificial compound eyes to give sight to autonomous vehicles and robots, among other applications. Now, a new report describes the preparation of bioinspired artificial compound eyes using a simple low-cost approach. Read more »
  • Can artificial intelligence tell a teapot from a golf ball?
    How smart is the form of artificial intelligence known as deep learning computer networks, and how closely do these machines mimic the human brain? They have improved greatly in recent years, but still have a long way to go, according to a team of cognitive psychologists. Read more »
  • How game theory can bring humans and robots closer together
    Researchers have for the first time used game theory to enable robots to assist humans in a safe and versatile manner. Read more »
  • Bees can count with small number of nerve cells in their brains, research suggests
    Bees can solve seemingly clever counting tasks with very small numbers of nerve cells in their brains, according to researchers. Read more »
  • New AI computer vision system mimics how humans visualize and identify objects
    Researchers have demonstrated a computer system that can discover and identify the real-world objects it 'sees' based on the same method of visual learning that humans use. Read more »
  • Robots with sticky feet can climb up, down, and all around
    Researchers have created a micro-robot whose electroadhesive foot pads, inspired by the pads on a gecko's feet, allow it to climb on vertical and upside-down conductive surfaces, like the inside walls of a commercial jet engine. Groups of them could one day be used to inspect complicated machinery and detect safety issues sooner, while reducing maintenance costs. Read more »
  • Computer hardware designed for 3D games could hold the key to replicating human brain
    Researchers have created the fastest and most energy efficient simulation of part of a rat brain using off-the-shelf computer hardware. Read more »
  • Computer chip vulnerabilities discovered
    A research team has uncovered significant and previously unknown vulnerabilities in high-performance computer chips that could lead to failures in modern electronics. Read more »
  • New models sense human trust in smart machines
    New 'classification models' sense how well humans trust intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork. Read more »
  • Mountain splendor? Scientists know where your eyes will look
    Using precise brain measurements, researchers predicted how people's eyes move when viewing natural scenes, an advance in understanding the human visual system that can improve a host of artificial intelligence efforts, such as the development of driverless cars. Read more »
  • Computers successfully trained to identify animals in photos
    Researchers trained a deep neural network to classify wildlife species using 3.37 million camera-trap images of 27 species of animals obtained from five states across the United States. The model then was tested on nearly 375,000 animal images at a rate of about 2,000 images per minute on a laptop computer, achieving 97.6 percent accuracy -- likely the highest accuracy to date in using machine learning for wildlife image classification. Read more »
  • Smarter AI: Machine learning without negative data
    A research team has successfully developed a new method for machine learning that allows an AI to make classifications without what is known as 'negative data,' a finding which could lead to wider application to a variety of classification tasks. Read more »
  • Aquatic animals that jump out of water inspire leaping robots
    Ever watch aquatic animals jump out of the water and wonder how they manage to do it in such a streamlined and graceful way? Researchers who specialize in water entry and exit in nature had the same question. Read more »
  • Model of quantum artificial life on quantum computer
    Researchers have developed a quantum biomimetic protocol that reproduces the characteristic process of Darwinian evolution adapted to the language of quantum algorithms and quantum computing. The researchers anticipate a future in which machine learning, artificial intelligence and artificial life itself will be combined on a quantum scale. Read more »
  • Android child's face strikingly expressive
    Android faces must express greater emotion if robots are to interact with humans more effectively. Researchers tackled this challenge as they upgraded their android child head, named Affetto. They precisely examined Affetto's facial surface points and the precise balancing of different forces necessary to achieve more human-like motion. Through mechanical measurements and mathematical modeling, they were able to use their findings to greatly enhance Affetto's range of emotional expression. Read more »
  • AI capable of outlining in a single chart information from thousands of scientific papers
    Scientists have developed a Computer-Aided Material Design (CAMaD) system capable of extracting information related to fabrication processes and material structures and properties -- factors vital to material design -- and organizing and visualizing the relationship between them. The use of this system enables information from thousands of scientific and technical articles to be summarized in a single chart, rationalizing and expediting material design. Read more »
  • Artificial intelligence may fall short when analyzing data across multiple health systems
    A new study shows deep learning models must be carefully tested across multiple environments before being put into clinical practice. Read more »
  • Codebreaker Turing's theory explains how shark scales are patterned
    A system proposed by World War Two codebreaker Alan Turing more than 60 years ago can explain the patterning of tooth-like scales possessed by sharks, according to new research. Read more »
  • Could machines using artificial intelligence make doctors obsolete?
    The technology of these tools is evolving rapidly. Standalone machines can now perform limited tasks, raising the question of whether machines will ever completely replace doctors. Read more »
  • New method peeks inside the 'black box' of artificial intelligence
    Computer scientists have developed a promising new approach for interpreting machine learning algorithms. Unlike previous efforts, which typically sought to 'break' the algorithms by removing key words from inputs to yield the wrong answer, the researchers instead reduced the inputs to the bare minimum required to yield the correct answer. On average, the researchers got the correct answer with an input of less than three words. Read more »
  • Shape-shifting robots perceive surroundings, make decisions for first time
    Researchers have developed modular robots that can perceive their surroundings, make decisions and autonomously assume different shapes in order to perform various tasks -- an accomplishment that brings the vision of adaptive, multipurpose robots a step closer to reality. Read more »

AI Trends – AI News and Events

  • Boeing 737 MAX 8 and Lessons for AI: The Case of AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider The Boeing 737 MAX 8 aircraft has been in the news recently, sadly as the result of a fatal crash on March 10, 2019 involving Ethiopian Airlines flight #302. News reports suggest that an earlier fatal crash of a Boeing 737 MAX 8, Lion Air flight #610 on October 29, 2018, may have unfolded in a similar way. It is worth noting that the Lion Air crash is still under investigation, with a final report possibly being released later this year, and the Ethiopian Airlines crash investigation is only now starting (at the time of this writing). I’d like to consider, at this stage of understanding about the crashes, whether we can tentatively identify aspects of the matter that could be instructive for the design, development, testing, and fielding of Artificial Intelligence (AI) systems. Though the Boeing 737 MAX 8 does not include elements that would be considered in the AI bailiwick per se, it seems relatively apparent that the systems underlying the aircraft are comparable to other forms of advanced automation. Perhaps the Boeing 737 MAX 8 incidents can reveal vital and relevant characteristics that offer valuable insights for AI systems, especially AI systems of a real-time nature. A modern-day aircraft is outfitted with a variety of complex automated systems that need to operate on a real-time basis. During the course of a flight, starting even when the aircraft is on the ground and getting ready for flight, there are a myriad of systems that must each play a part in the motion and safety of the plane. Furthermore, these systems are at times either under the control of the human pilots or are in a sense co-sharing the flying operations with the human pilots. The Human Machine Interface (HMI) is key to this co-sharing arrangement. 
I’m going to concentrate on the relevance to a particular type of real-time AI system, namely AI self-driving cars. Please do not assume, though, that the insights or lessons mentioned herein are only applicable to AI self-driving cars. I would assert that the points made are equally important for other real-time AI systems, such as robots working in a factory or warehouse, and of course other AI autonomous vehicles such as drones and submersibles. You can even take the real-time aspects out of the equation and consider that these points still readily apply to AI systems that are considered less than real-time in their activities. One overarching aspect that I’d like to put clearly onto the table is that this discussion is not about the actual legal underpinnings of the Boeing 737 MAX 8 aircraft and the crashes. I am not trying to solve the question of what happened in those crashes. I am not trying to analyze the details of the Boeing 737 MAX 8. Those kinds of analyses are still underway, conducted by experts who are versed in the particulars of airplanes and who are closely examining the incidents. That’s not what this is about herein. I am instead going to try to surface, out of the various media reporting, the semblance of what some seem to believe might have taken place. Those media guesses might be right, they might be wrong. Time will tell. What I want to do is see whether we can turn the murkiness into something that might provide helpful tips and suggestions about what might someday be, or already is, happening in AI systems. I realize that some of you might argue that it is premature to be “unpacking” the incidents. Shouldn’t we wait until the final reports are released? Again, I am not trying to make assertions about what did or did not actually happen. 
Among the many and varied theories and postulations, I believe there is a richness of insights that can be applied right now to how we are approaching the design, development, testing, and fielding of AI systems. I’d also claim that time is of the essence, meaning that it would behoove those AI efforts already underway to be thinking about the points I’ll be bringing up. Allow me to fervently clarify that the points I’ll raise are not dependent on how the investigations about the Boeing 737 MAX 8 incidents bear out. Instead, my points are at a level of abstraction such that they are useful for AI systems efforts, regardless of what the final reporting says about the flight crashes. That being said, it could very well be that the flight crash investigations uncover additional useful points, all of which could further be applied to how we think about and approach AI systems. As you read the brief recap herein about the flight crashes and the aircraft, allow yourself the latitude that we don’t yet know what really happened. Therefore, the discussion is by-and-large of a tentative nature. New facts are likely to emerge. Viewpoints might change over time. In any case, I’ll try to repeatedly state that the aspects being described are tentative and you should refrain from judging those aspects, allowing your mind to focus on how the points can be used for enhancing AI systems. Even something that turns out not to have been true in the flight crashes can nonetheless still present a possibility of something that could have happened, and we can leverage that understanding to the advantage of AI systems adoption. So, do not dismiss this discussion because you find something amiss about a characterization of the aircraft and/or the incident. Look past any such transgression. Consider whether the points surfaced can be helpful to AI developers and to those organizations embarking upon crafting AI systems. That’s what this is about. 
For those of you who are particularly interested in the Boeing 737 MAX 8 coverage in the media, here are a few handy examples: Bloomberg news: Seattle Times news: LA Times news: Wall Street Journal news:

Background About the Boeing 737 MAX 8

The Boeing 737 was first flown in the late 1960s and spawned a multitude of variants over the years, including in the 1990s the Boeing 737 NG (Next Generation) series. The best-selling commercial aircraft line, the Boeing 737 surpassed 10,000 units sold last year. It is a twin-jet, relatively narrow-body aircraft intended for flight ranges of short to medium distances. The successor to the NG series is the Boeing 737 MAX series. As part of the family of Boeing 737s, the MAX series is based on the prior 737 designs and was purposely re-engined by Boeing, along with changes made to the aerodynamics and the airframe, to achieve key improvements including a lower fuel burn rate and other aspects that would make the plane more efficient and give it a longer range than its prior versions. Initial approval to proceed with the Boeing 737 MAX series was granted by the Boeing board of directors in August 2011. Per many news reports, there were discussions within Boeing about whether to start anew and craft a brand-new design for the Boeing 737 MAX series or whether to continue and retrofit the prior design. The decision was made to retrofit the prior design. Of the changes made to prior designs, perhaps the most notable was mounting the engines further forward and higher than had been done for prior models. This design change tended to have an upward pitching effect on the plane. It was more prone to this than prior versions because of the more powerful engines being used (having greater thrust capacity) and their higher, more pronounced forward position on the aircraft. 
To address the possibility of the Boeing 737 MAX entering a stall during flight due to this retrofitted approach, particularly in a situation where the flaps are retracted at low speed with a nose-up condition, the retrofit design added a new system called the MCAS (Maneuvering Characteristics Augmentation System). The MCAS is essentially software that receives sensor data and, based on the readings, will attempt to trim the nose down in an effort to keep the plane from getting into a dangerous nose-up stall during flight. It is considered a stall-prevention system. The primary sensor used by the MCAS is an AOA (Angle of Attack) sensor, a hardware device mounted on the plane that transmits data within the plane, including feeding the data to the MCAS system. In many respects, the AOA is a relatively simple kind of sensor, and variants of AOAs in terms of brands, models, and designs exist on most modern-day airplanes. This is to point out that there is nothing unusual per se about the use of AOA sensors; it is common practice to use them. Algorithms used in the MCAS were intended to try to ascertain whether the plane might be in a dangerous condition based on the AOA data being reported, in conjunction with the airspeed and altitude. If the MCAS software calculated what was considered a dangerous condition, the MCAS would then activate to fly the plane so that the nose would be brought downward to obviate the dangerous nose-up potential-stall condition. The MCAS was devised such that it would automatically activate to fly the plane based on the AOA readings and its own calculations about a potentially dangerous condition. This activation occurs without notifying the human pilot and is considered an automatic engagement. 
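As a loose illustration of the decision rule just described, here is a minimal Python sketch of a stall-prevention check of this general shape. Everything in it, including the function name, thresholds, and inputs, is invented for illustration; it is emphatically not the actual MCAS implementation:

```python
# Hypothetical sketch of a stall-prevention rule of the general shape described
# above. All names and thresholds are invented; this is NOT the real MCAS.

def should_command_nose_down(aoa_deg: float, airspeed_kts: float,
                             aoa_limit_deg: float = 15.0,
                             low_speed_kts: float = 220.0) -> bool:
    """Return True when the readings suggest a dangerous nose-up condition."""
    # Activate only when the reported angle of attack exceeds a limit while
    # the aircraft is slow: the low-speed, nose-up regime described above.
    return aoa_deg > aoa_limit_deg and airspeed_kts < low_speed_kts

# A faulty, inflated AOA reading triggers activation even if the true
# attitude is safe; a sane reading does not.
print(should_command_nose_down(aoa_deg=22.0, airspeed_kts=180.0))  # True
print(should_command_nose_down(aoa_deg=5.0, airspeed_kts=180.0))   # False
```

Note how a single faulty AOA reading is enough to flip the decision, which is the failure mode that the theories discussed here revolve around.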
Note that the human pilot does not overtly act to engage the MCAS per se; instead, the MCAS is essentially always on, detecting whether it should engage or not (unless the human pilot opts to turn it off entirely). During an MCAS engagement, if a human pilot tries to trim the plane using a switch on the yoke, the MCAS becomes temporarily disengaged. In a sense, the human pilot and the MCAS automated system are co-sharing the flight controls. This is an important point since the MCAS is still considered active and ready to re-engage on its own. A human pilot can entirely disengage the MCAS and turn it off, if the pilot believes that doing so is warranted. It is not difficult to turn off the MCAS, though it presumably would rarely if ever be turned off; it would be considered an extraordinary and rare action for a pilot to undertake. Since the MCAS is considered an essential element of the plane, turning it off would be a serious act, presumably not done without the pilot considering the tradeoffs. In the case of the Lion Air crash, one theory is that shortly after takeoff the MCAS might have attempted to push down the nose while the human pilots were simultaneously trying to pull up the nose, perhaps unaware that the MCAS was pushing the nose down. This would appear to account for the roller-coaster up-and-down motion the plane seemed to experience. Some have pointed out that a human pilot might believe they have a stabilizer trim issue, referred to as a runaway stabilizer or runaway trim, and misconstrue a situation in which the MCAS is engaged and acting on the stabilizer trim. 
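The co-sharing behavior described above, with automatic engagement, temporary disengagement on a pilot trim input, and full switch-off, can be sketched as a tiny state machine. This is a hypothetical illustration only; the class and its logic are invented and do not reflect the real system:

```python
# Hypothetical sketch of the co-sharing behavior described above: the system
# re-engages on its own after a pilot trim input, unless switched off entirely.
# Names and logic are illustrative only, not the real flight software.

class AutomatedTrimSystem:
    def __init__(self):
        self.enabled = True       # pilot can turn the system off entirely
        self.suppressed = False   # a pilot trim input temporarily disengages it

    def pilot_trim_input(self):
        self.suppressed = True    # manual trim pauses automatic commands

    def pilot_switch_off(self):
        self.enabled = False      # extraordinary action: fully disengage

    def step(self, dangerous_condition: bool) -> str:
        if not self.enabled:
            return "off"
        if self.suppressed:
            self.suppressed = False   # re-arms itself after the pilot input
            return "standing by"
        return "nose down" if dangerous_condition else "standing by"

mcas_like = AutomatedTrimSystem()
print(mcas_like.step(True))   # "nose down" -- automatic engagement
mcas_like.pilot_trim_input()
print(mcas_like.step(True))   # "standing by" -- temporarily disengaged
print(mcas_like.step(True))   # "nose down" -- re-engaged on its own
mcas_like.pilot_switch_off()
print(mcas_like.step(True))   # "off"
```

The key property the sketch shows is that a trim input only pauses the automation, so a pilot who does not realize the system is active can end up fighting it.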
Speculation based on that theory is that the human pilot did not realize they were in a sense fighting with the MCAS to control the plane, and had the human pilot realized what was actually happening, it would have been relatively easy to turn off the MCAS and take over control of the plane, no longer being in a co-sharing mode. There have been documented cases of other pilots turning off the MCAS when they believed that it was fighting against their efforts to control the Boeing 737 MAX 8. One aspect that according to news reports is somewhat murky involves the AOA sensors in the case of the Lion Air incident. Some suggest that there was only one AOA sensor on the airplane and that it fed the MCAS faulty data, leading the MCAS to push the nose down, even though apparently or presumably a nose-down effort was not actually warranted. Other reports say that there were two AOA sensors, one on the Captain’s side of the plane and one on the other side, and that the AOA on the Captain’s side generated faulty readings while the one on the other side generated proper readings, and that the MCAS apparently ignored the properly functioning AOA and instead accepted the faulty readings coming from the Captain’s side. There are documented cases of AOA sensors at times becoming faulty. Another aspect is that environmental conditions can impact the AOA sensor. If there is build-up of water or ice on the AOA sensor, it can impact the sensor. Keep in mind that there are a variety of AOA sensors in terms of brands and models; thus, not all AOA sensors are necessarily going to have the same capabilities and limitations. The first commercial flights of the Boeing 737 MAX 8 took place in May 2017. There are other models of the Boeing 737 MAX series, both existing and envisioned, including the MAX 7, the MAX 8, the MAX 9, etc. The Lion Air incident, which occurred in October 2018, was the first fatal incident of the Boeing 737 MAX series. 
There are a slew of other aspects about the Boeing 737 MAX 8 and the incidents, and if interested you can readily find such information online. The recap that I’ve provided does not cover all facets; I have focused on the key elements that I’d like to discuss next with regard to AI systems.

Shifting Hats to the AI Self-Driving Cars Topic

Let’s shift hats for a moment and discuss some background about AI self-driving cars. Once I’ve done so, I’ll then dovetail together the insights that might be gleaned from the Boeing 737 MAX 8 aspects and how this can potentially be useful when designing, building, testing, and fielding AI self-driving cars. At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As such, we are quite interested in whatever lessons can be learned from other advanced automation development efforts and seek to apply those lessons to our efforts, and I’m sure that the auto makers and tech firms also developing AI self-driving car systems are keenly interested too. I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. 
In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results. For my overall framework about AI self-driving cars, see my article: For the levels of self-driving cars, see my article: For why AI Level 5 self-driving cars are like a moonshot, see my article: For the dangers of co-sharing the driving task, see my article: Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion. Here are the usual steps involved in the AI driving task:
  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. There are some pundits of AI self-driving cars who continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are over 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point since it means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also with human-driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. 
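The five driving-task steps listed above can be sketched as a bare-bones processing loop. Every function here is a placeholder of my own invention; in a real AI self-driving car, each stage is a substantial subsystem:

```python
# Minimal sketch of the five-step driving-task loop described above.
# All functions and values are illustrative stubs, not a real stack.

def sensor_collection():
    # 1. Sensor data collection and interpretation
    return {"camera": "frame", "radar": "returns", "lidar": "point cloud"}

def sensor_fusion(raw):
    # 2. Combine per-sensor detections into one coherent picture
    return {"obstacles": [], "lanes": []}

def update_world_model(model, fused):
    # 3. Virtual world model updating
    model.update(fused)
    return model

def plan_action(model):
    # 4. AI action planning
    return {"steer_deg": 0.0, "throttle": 0.1, "brake": 0.0}

def issue_controls(action):
    # 5. Car controls command issuance
    return f"steer={action['steer_deg']} throttle={action['throttle']}"

world = {}
command = issue_controls(
    plan_action(update_world_model(world, sensor_fusion(sensor_collection()))))
print(command)  # steer=0.0 throttle=0.1
```

In practice this loop runs many times per second, and a fault at any stage propagates to every stage downstream of it, which is why the sensor-related points below matter so much.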
That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other. Period. For my article about the grand convergence that has led us to this moment in time, see: See my article about the ethical dilemmas facing AI self-driving cars: For potential regulations about AI self-driving cars, see my article: For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: Returning to the matter of the Boeing 737 MAX 8, let’s consider some potential insights that can be gleaned from what the news has been reporting. Here’s the list of points I’m going to cover:
  1. Retrofit versus start anew
  2. Single sensor versus multiple sensors reliance
  3. Sensor fusion calculations
  4. Human Machine Interface (HMI) designs
  5. Education/training of human operators
  6. Cognitive dissonance and Theory of Mind
  7. Testing of complex systems
  8. Firms and their development teams
  9. Safety considerations for advanced systems
I’ll cover each of the points, doing so by first reminding you of my recap about the Boeing 737 MAX 8 as it relates to the point being made, and then shifting into a focus on AI systems, especially AI self-driving cars, for that point. I’ve opted to number the points to make them easier to refer to in sequence, but the sequence number does not denote any kind of priority of one point being more or less important than another. They are all worthy points. Take a look at Figure 1.

Key Point #1: Retrofit versus start anew

Recall that the Boeing 737 MAX 8 is a retrofit of prior designs of the Boeing 737. 
Some have suggested that the “problem” being solved by the MCAS is a problem that should never have existed at all; rather than creating an issue by adding the more powerful engines and mounting them further forward and higher, perhaps the plane ought to have been redesigned entirely anew. Those who make this suggestion are then assuming that the stall-prevention capability of the MCAS would not have been needed, which then would not have been built into the planes, which then would never have led to a human pilot essentially co-sharing and battling with it to fly the plane. Don’t know. Might there have been a need for an MCAS anyway? In any case, let’s not get mired in that aspect of the Boeing 737 MAX 8 herein. Instead, think about AI systems and the question of whether to retrofit an existing AI system or start anew. You might be tempted to believe that AI self-driving cars are so new that they are entirely a new design anyway. This is not quite correct. There are some AI self-driving car efforts that have built upon prior designs and are continually “retrofitting” a prior design, doing so by extending, enhancing, and otherwise leveraging the prior foundation. This makes sense in that starting from scratch is going to be quite an endeavor. If you have something that already seems to work, and if you can adjust it to make it better, you would likely be able to do so at a lower cost and at a faster pace of development. One consideration is whether the prior design might have issues that you are not aware of and that you are perhaps carrying into the retrofitted version. That’s not good. Another consideration is whether the effort to retrofit requires changes that introduce new problems that were not in the prior design. This emphasizes that retrofit changes are not necessarily always of an upbeat nature. 
You can make alterations that lead to new issues, which then require you to presumably craft new solutions, and those new solutions are “new” and therefore not already well-tested via prior designs. I routinely forewarn AI self-driving car auto makers and tech firms to be cautious as they continue to build upon prior designs. It is not necessarily pain-free. For my article about the reverse engineering of AI self-driving cars, see: For why groupthink among AI developers can be bad, see my article: For how egocentric AI developers can make untoward decisions, see: For the unlikely advent of kits for AI self-driving cars, see my article:
Key Point #2: Single sensor versus multiple sensors reliance
For the Boeing 737 MAX 8, I’ve mentioned that there are the AOA (Angle of Attack) sensors and that they play a crucial role in the MCAS system. It’s not entirely clear whether just one or two of the AOA sensors were involved in the matter, but in any case it seems that the AOA is the only type of sensor involved for that particular purpose, though presumably other sensors, such as those registering the altitude and speed of the plane, are encompassed by the data feed going into the MCAS. Let’s assume for the moment, though, that the AOA is the only sensor for what it does on the plane, namely ascertaining the angle of attack. Go with me on this assumption, though I don’t know for sure that it is true. The reason I bring up this aspect is that if you have an advanced system that is dependent upon only one kind of sensor to provide a crucial indication of the physical state of the system, you might be painting yourself into an uncomfortable corner. In the case of AI self-driving cars, suppose that we used only cameras for detecting the surroundings of the self-driving car. 
It means that the rest of the AI self-driving car system is solely dependent upon whether the cameras are working properly and whether the vision processing system is working correctly. If we add to the AI self-driving car another capability, such as radar sensors, we now have a means to double-check the cameras. We could add another capability such as LIDAR, and we’d have a triple check involved. We could add ultrasonic sensors too. And so on. Now, we must realize that the more sensors you add, the more the cost goes up, along with the complexity of the system. For each added sensor type, you need to craft an entire capability around it, including where to position the sensors, how to connect them into the rest of the system, and having the software that can collect the sensor data and interpret it. There is added weight to the self-driving car, there is added power consumption, there is more heat generated by the sensors, etc. Also, the amount of computer processing required goes up, including the number of processors, the memory needed, and the like. You cannot just start including more sensors because you think it will be handy to have them on the self-driving car. Each added sensor involves a lot of added effort and costs. There is an ROI (Return on Investment) involved in making such decisions. I’ve questioned many times in my writings and presentations whether Elon Musk and Tesla’s decision not to use LIDAR is going to ultimately backfire on them, and even Elon Musk himself has said it might. I’d like to then use the AOA matter as a wake-up call about the kinds of sensors that the auto makers and tech firms are putting onto their AI self-driving cars. Do you have a type of sensor for which no other sensor can obtain something similar? If so, are you ready to handle the possibility that if the sensor goes bad, your AI system will be in the dark about what is happening, or perhaps worse still, will get faulty readings? 
This does bring up another handy point, specifically how to cope with a sensor that is faulty. The AI system cannot assume that a sensor is always going to be working properly. The “easiest” kind of problem is when the sensor fails entirely and the AI system gets no readings from it at all. I say this is easiest in that the AI can then pretty much make a reasonable assumption that the sensor is dead and no longer to be relied upon. This doesn’t mean that handling the self-driving car is “easy”; it only means that at least the AI knows the sensor is not working. The tricky part is when a sensor becomes faulty but has not entirely failed. This is a scary gray area. The AI might not realize that the sensor is faulty and therefore assume that everything the sensor is reporting must be correct and accurate. Suppose a camera is having problems and it is occasionally ghosting images, meaning that an image sent to the AI system shows perhaps cars that aren’t really there or pedestrians that aren’t really there. This could be disastrous. The rest of the AI might suddenly jam on the brakes to avoid a pedestrian who is not actually there in front of the self-driving car. Or, maybe the self-driving car is unable to detect a pedestrian in the street because the camera is faulting and sending images that have omissions. The sensor and the AI system must have a means to try to ascertain whether the sensor is faulting or not. It could be that the sensor itself is having a physical issue, maybe from wear-and-tear, or maybe it was hit or bumped, such as by the self-driving car nudging another car. Another strong possibility for most sensors is the chance of getting covered up by dirt, mud, snow, and other environmental aspects. The sensor itself is still functioning, but it cannot get solid readings due to the obstruction. 
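The distinction drawn above, between a sensor that has died outright and one that is faulting in a gray area, can be sketched as a simple health check. This is a hypothetical illustration only; the plausibility range and threshold are invented, and real fault detection would cross-check against other sensors as well:

```python
# Hypothetical sketch of classifying a sensor as healthy, dead, or suspect.
# The plausibility range and threshold are invented for illustration.

def assess_sensor(recent_readings, plausible_range=(0.0, 100.0), min_plausible=0.8):
    """recent_readings: latest values from one sensor; None means no reading."""
    if all(r is None for r in recent_readings):
        return "dead"  # the "easiest" failure: no readings at all
    values = [r for r in recent_readings if r is not None]
    lo, hi = plausible_range
    plausible = [v for v in values if lo <= v <= hi]
    # The scary gray area: the sensor still reports, but too many values
    # fall outside any physically plausible range.
    if len(plausible) / len(values) < min_plausible:
        return "suspect"
    return "healthy"

print(assess_sensor([None, None, None]))         # dead
print(assess_sensor([12.0, 14.5, 13.9]))         # healthy
print(assess_sensor([12.0, 450.0, -80.0, 9.1]))  # suspect
```

Note that a "suspect" verdict is exactly the gray area described: the AI gets data, but should downgrade its trust in it and alert the rest of the system.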
AI self-driving car makers need to consider thoughtfully and carefully how their sensors operate and what they can do to detect faulty conditions, along with either trying to correct for the faulty readings or at least informing and alerting the rest of the AI system that faultiness is happening. This is serious stuff. Unfortunately, sometimes it is given short shrift. For the dangers of myopic use of sensors on AI self-driving cars, see my article: For the use of LIDAR, see my article: For my article about the crossing of the Rubicon and sensor issues, see: For what happens when sensors go bad, see my article:
Key Point #3: Sensor fusion calculations
As mentioned earlier, one theory was that the Boeing 737 MAX 8 in the Lion Air incident had two AOA sensors and one of the sensors was faulting, while the other sensor was still good, and yet the MCAS supposedly opted to ignore the good sensor and instead rely upon the faulty one. In the case of AI self-driving cars, an important aspect involves undertaking a kind of sensor fusion to figure out a larger overall notion of what is happening with the self-driving car. The sensor fusion subsystem needs to collect together the sensory data, or perhaps the sensory interpretations, from the myriad sensors and try to reconcile them. Doing so is handy because each type of sensor might be seeing the world from a particular viewpoint, and by “triangulating” the various sensors, the AI system can derive a more holistic understanding of the traffic around the self-driving car. Would it be possible for an AI self-driving car to opt to rely upon a faulting sensor and simultaneously ignore or downplay a fully functioning sensor? Yes, absolutely, it could happen. It all depends upon how the sensor fusion was designed and developed to work. 
If the AI developers thought that the forward camera is more reliable overall than the forward radar, they might have developed the software such that it tends to weight the camera more heavily than the radar. This can mean that when the sensor fusion is trying to decide which sensor to choose as providing the right indication at the time, it might default to the camera rather than the radar, even if the camera is in a faulting mode. Perhaps the sensor fusion is unaware that the camera is faulting, and so it… Read more »
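The weighting concern described in this article can be sketched numerically. This is a hypothetical illustration with invented weights, not any real fusion algorithm; it shows only how a fixed design choice to favor the camera lets a faulting camera outvote a healthy radar:

```python
# Hypothetical sketch of confidence-weighted fusion of two distance estimates.
# The fixed weights are invented; they illustrate how a design choice to favor
# the camera lets a faulting camera outvote a healthy radar.

def fuse_distance(camera_m, radar_m, w_camera=0.7, w_radar=0.3):
    # Weighted average: with w_camera > w_radar, the fused estimate leans
    # toward the camera even when the camera is wrong.
    return w_camera * camera_m + w_radar * radar_m

# Radar correctly reports an obstacle 10 m ahead; a faulting camera says 50 m.
print(fuse_distance(camera_m=50.0, radar_m=10.0))  # 38.0, dangerously optimistic
```

A fusion design that adjusts the weights based on each sensor's assessed health, rather than fixing them at design time, avoids exactly this trap.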
  • Government is Reskilling Workforce to Handle Disruptions from Spread of AI
The future of work will be dramatically transformed by artificial intelligence, so Americans should be prepared to engage in lifelong learning, tech experts said at a recent AI event in Washington. “New technologies are changing so quickly that any of us who are even experts in the field get out of date very quickly,” Lynne Parker, White House Office of Science and Technology Policy’s assistant director for artificial intelligence, told attendees of The Economist’s “The AI Agenda” event in Washington. “So nationally, we need to foster an environment where we are used to the idea that we’ll have lifelong learning. It’s no longer that you go through K-12, or you go through college, and you’re done.” Parker highlighted some of the government’s attempts at reskilling the workforce to keep up with the changing landscape and encouraged all industries to emulate such efforts. She also said jobs are being affected across the board and “to a wide range of extents,” but it is important to embrace new technologies, such as AI, to alleviate some of the impending challenges workers will face. “Regardless of what kind of job you have—whether it’s in manufacturing, transportation, healthcare, or law—there are ways that AI can help you. It can provide tools to get rid of some of the mundane kinds of tasks,” Parker said. “However, if we are not comfortable with those tools, […] then we will feel like we will have our hands tied behind our back.” National Science Foundation Director France Córdova agreed that AI is slowly becoming embedded into everything Americans do, including driving, shopping, and flying, and especially in regard to how people do work. She said the technology is also creating innovative new opportunities, particularly within the federal government. “Part of what the government is doing now is renewing itself to all of its agencies,” Córdova said. 
“And AI is playing an important role in that.” Within the NSF, for example, Córdova said AI is being used to approach inoperable databases, “of which [they] have a lot.” She said the foundation is also looking at AI methods that can help them better identify more diverse reviewer pools for funding research. Daniel Weitzner, founding director of MIT’s Internet Policy Research Initiative and principal research scientist at MIT CSAIL, said AI’s evolution and impacts on society will be “incremental” and “lumpy.” He said, as best as anyone can predict, the changes that AI brings to the workforce will be much more complicated than simply eliminating jobs. Instead, it is more likely that it will change the way people work. Read the source article in Nextgov. Read more »
  • Keeping Up In An AI-Driven Workforce
    By AI Trends Staff AI will change the future of work, experts say. To keep up, we’ll need a culture of lifelong learning. “New technologies are changing so quickly that any of us who are even experts in the field get out of date very quickly,” Lynne Parker, White House Office of Science and Technology Policy’s assistant director for artificial intelligence, told attendees of The Economist’s “The AI Agenda” event in Washington. “So nationally, we need to foster an environment where we are used to the idea that we’ll have lifelong learning. It’s no longer that you go through K-12, or you go through college, and you’re done.” Nextgov reported Parker’s comments and her findings that government’s attempts to reskill the workforce to keep up with the changing landscape are worth emulating. New technologies—including AI—are changing jobs across all industries, she said. We must help make our workforce comfortable with these new tools. National Science Foundation Director France Córdova highlighted the opportunities AI is creating within government, in particular. Within the NSF, for example, Córdova said AI is being used to approach inoperable databases, “of which [they] have a lot.” She said the foundation is also looking at AI methods that can help them better identify more diverse reviewer pools for funding research, Nextgov reports. But it was Daniel Weitzner, founding director of MIT’s Internet Policy Research Initiative and principal research scientist at MIT CSAIL, who noted that AI will bring more changes than simply jobs elimination. Instead, he said, it is more likely that it will change the way people work. Read the source article at Nextgov. Read more »
  • LillyWorks Adds Predictive Analytics to its Manufacturing Software
LillyWorks has announced the addition of predictive analytics to its Protected Flow Manufacturing software. Protected Flow Manufacturing provides the user with an execution plan – what to run and when to run it – based on the mix of products and the resources required to make them, timed to minimize delays. The Predictor, the predictive analytics tool of Protected Flow Manufacturing that incorporates AI techniques, takes work order and production requirement information and combines it with capacity and capability information to predict how the shop floor will look in the future. An early user is Graphicast Inc. of Jaffrey, NH. “We’ve spent 40 years innovating the metal casting process through the combination of Zinc-Aluminum (ZA-12) Alloy, our proprietary low-turbulence, auto-fill casting process, and permanent graphite molds ideal for production volumes ranging from 100 to 20,000 parts,” said Val Zanchuk, President of Graphicast. If, for example, the Predictor sees that a shipment to a customer depending on critical parts will be late three weeks out, an adjustment can be made to meet the schedule requirements. Or, for a customer request to expedite an order, the Predictor is used to see the impact of that expedite on the entire operation. A decision can then be made on whether the factory can absorb the impact or a change needs to be made in the completion date. “Being able to accurately predict shop floor activity three weeks — or three months — from now is a benefit of artificial intelligence, and it helps us respond to customers and be able to deliver on our promises,” Zanchuk said. Mark Lilly, the CEO of LillyWorks and the son of the founder, said in an interview with AI Trends, “We focus on prioritization on the shop floor primarily. We have built a framework that dynamically estimates time for the work setup and run times. 
With that, we are always able to see which work order is in danger of being late.” Most American manufacturing is custom or made to order, so the ability to adjust schedules is critical. The Predictor runs a simulation based on the priorities of the manufacturer. “It shows where the bottlenecks are going to be, and it shows for each work order what operations remain to be finished,” Lilly said. Next, the Predictor taps into Amazon’s machine learning service to determine the chances of a job being late. It takes into account past performance and other relevant variables. “It might see something the Predictor misses,” Lilly said. The company is also researching the use of “gamification” techniques to motivate users to engage more with the software to achieve business objectives. LillyWorks is working with gamification software supplier Funifier on this effort, Lilly said. “We want people to select the top priority, the first job in danger of being late, and use gamification techniques to help view the impact of alternative schedule adjustments, to model different responses.” For more information, go to LillyWorks. Read more »
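The core idea described here, flagging which work order is in danger of being late from its remaining setup and run times, can be sketched as follows. The actual Predictor's logic is not public, so every name and number in this sketch is invented for illustration:

```python
# Hypothetical sketch of flagging a work order in danger of being late,
# in the spirit of the Predictor described above; all names and numbers
# here are invented, not the product's actual algorithm.

def hours_remaining(operations):
    # Sum the estimated setup and run times of the operations still to finish.
    return sum(op["setup_h"] + op["run_h"] for op in operations)

def in_danger(work_order, hours_until_due, buffer_h=4.0):
    # A job is "in danger of being late" when the remaining work plus a
    # safety buffer exceeds the time left before the due date.
    return hours_remaining(work_order["remaining_ops"]) + buffer_h > hours_until_due

job = {"remaining_ops": [{"setup_h": 1.0, "run_h": 6.0},
                         {"setup_h": 0.5, "run_h": 2.5}]}
print(in_danger(job, hours_until_due=12.0))  # True: 10 h of work + 4 h buffer > 12 h
```

A shop-floor simulation would run this check across all open work orders against shared machine capacity, which is where bottlenecks emerge.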
  • Inspur Unveils Specialized AI Servers to Support Edge Computing in 5G Era
By AI Trends Staff Inspur has announced a specialized AI server for edge computing applications. The new Inspur NE5250M5 includes two NVIDIA V100 Tensor Core GPUs or six NVIDIA T4 GPUs for compute-intensive AI applications, including autonomous vehicles, smart cities and smart homes. It also can enable 5G edge applications such as the Internet of Things, multi-access edge computing (MEC) and network function virtualization (NFV). Its design is also optimized for harsh deployment environments at the edge. “A lot of people in the US probably don’t know anything about us,” Alan Chang, senior director of the server product line for Inspur, told AI Trends. Yet IDC ranked the China-based server supplier as the number one AI server provider in its First Half 2018 China AI Infrastructure Market Survey report, with a 51% share. Worldwide in the x86 server market, IDC has ranked Inspur at No. 3. Inspur also announced an AI server supporting eight NVIDIA V100 Tensor Core GPUs, with an ultra-high-bandwidth NVSwitch, for demanding AI and high-performance computing (HPC) applications. The Inspur NF5488M5 is designed to facilitate a variety of deep-learning and high-performance computing applications, including voice recognition, video analysis and intelligent customer service. The announcements were made at NVIDIA’s GPU Technology Conference held in March in San Jose. Looking For A Niche Some 85% of the company’s servers so far have been sold in China and 15% into Europe and the US, Chang told AI Trends. “We are really focused on cloud server providers,” he said. Inspur is interested in further penetrating western markets. “We need a niche,” Chang said. The niche is high-performance AI servers that work in conjunction with NVIDIA GPUs and other chips. Inspur is a member of the Open Compute Project in China that includes Alibaba and Tencent, which has helped the company to differentiate on software. “Our value proposition is to bring a mature, high-value motherboard. 
We can create a reliable, high-performance system that other companies cannot produce,” Chang said. Inspur positions itself to complement, not compete with, NVIDIA; its competitors are instead server shipment leaders Dell and HP in the US. The innovations in switch design are leading to performance increases that can range from 20% to 60% in early experience, Chang said. Another differentiator for Inspur is lower power requirements. Markets in China, Japan, India, Taiwan and much of Europe average 8kW per server rack. The US averages 20kW to 30kW. The new product supporting edge computing is well-timed. “5G is exploding,” Chang said, with drivers coming from AI in autonomous driving, retail, manufacturing and the IoT (Internet of Things). “The AI inferencing will be happening at the edge,” Chang said. “That’s why we designed this new box.” Read more »
  • The Impact Of Algorithmic-Enhanced Care
Algorithms are changing clinical care, says Sandy Aronson, Executive Director of IT, Partners HealthCare Personalized Medicine. True AI—neural networks—will play a role, but less sophisticated algorithms are already powering dramatic improvements. “We are seeing the benefits of introducing algorithms into the care delivery process to enable us to really harness that data and help the way we make decisions,” Aronson told AI Trends. “It’s logic, often complex Boolean logic, that enables us to get started, improve the care process, increase the amount of data that flows through the care process,” he says. “That should, in turn, set us up for more and more use of AI.” On behalf of AI Trends, Gemma Smith spoke with Aronson about the impact of algorithmic-enhanced care—real world examples he’s seen at Partners and the progress he expects to come. These aren’t just “last mile” technologies, he believes. Algorithm-enhanced care will have broad benefits to healthcare. Editor’s note: Gemma Smith, a conference producer at Cambridge Healthtech Institute, is helping plan a track dedicated to AI in Healthcare at the Bio-IT World Conference & Expo in Boston, April 16-18. Aronson is speaking on the program. Their conversation has been edited for length and clarity. AI Trends: Can you give me examples of where algorithmic-enhanced care has had a significant impact, and the benefits you’ve seen from that? Sandy Aronson: Sure. One example would be the way that we allocate platelets within the hospital system. At Brigham and Women’s Hospital, the way that we obtain platelets is you have these donors who come in and, again and again, sit for up to two hours while their blood is cycled outside of their body to obtain a bag of platelets. That altruistic action is critical to giving us the ability to perform bone marrow transplants. Once a patient gets a bone marrow transplant, we take their platelet count to zero. 
They reach the hospital floor after they’ve received their transplant, and we start transfusing them with platelets, and about 15% of the time, the patient immediately rejects the platelets that we gave to them because of a lack of a match between the patient’s HLA type (Human leukocyte antigen), and the donor’s HLA type. In that scenario, you not only have not gotten any value out of this altruistic act from the donor, but it’s also an expensive process. You’ve incurred costs, you haven’t given the patient the platelet bump that we were looking for, so they remain at risk for bleeding, and you potentially introduce new antibodies into the patient that could make them harder to match in the future. The reality is that there’s a relatively small number of altruists who consistently come in and donate most of these platelets, so they can be HLA typed. We have to HLA type the recipient in order to match the bone marrow donor. So the information is available to do a much better job at matching platelets to patients. Instead of the oldest bag of platelets being taken automatically and given to the patients when platelets are ordered, what we do now is we provide an application that uses an algorithm to sort the bags of platelets in the inventory so that the blood bank technician can see which bags of platelets are most likely to be accepted by the patient so we can prioritize those. We’re still gathering data on the impact of this, but the initial data we’ve gathered is promising relative to using platelets much more efficiently and getting much better platelet count bumps when we transfuse platelets into patients. So, we’re looking forward to collecting more data and then processing that data to assess the true impact of this program. That’s one example. Another example is the way that we care for patients with heart failure, hypertension, or high lipid values. 
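The inventory-sorting idea Aronson describes can be sketched as a ranking function. This is a loose, hypothetical illustration only: real HLA matching is far more involved than counting shared antigens, and the scoring, data layout, and tie-breaking rule here are all invented:

```python
# Hypothetical sketch of sorting platelet bags by likely acceptance, loosely
# following the idea described above. Real HLA matching is far more involved;
# the scoring and data layout here are invented.

def hla_match_score(patient_hla, donor_hla):
    # Count shared HLA antigens between patient and donor (simplified).
    return len(set(patient_hla) & set(donor_hla))

def rank_bags(patient_hla, inventory):
    # Best HLA match first; among equal matches, use the oldest bag first
    # so that well-matched inventory is not wasted to expiry.
    return sorted(inventory,
                  key=lambda bag: (-hla_match_score(patient_hla, bag["hla"]),
                                   -bag["age_days"]))

patient = ["A1", "A2", "B7", "B8"]
bags = [{"id": 1, "hla": ["A1", "B7"], "age_days": 2},
        {"id": 2, "hla": ["A3", "B44"], "age_days": 4},
        {"id": 3, "hla": ["A1", "A2", "B8"], "age_days": 1}]
print([bag["id"] for bag in rank_bags(patient, bags)])  # [3, 1, 2]
```

The contrast with the old process is the point: instead of always dispensing the oldest bag, the technician sees the inventory ordered by expected acceptance.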
There are guidelines for treating these patients, but in typical clinical care it is extremely difficult to bring patients into compliance with those guidelines at scale. These are conditions that affect a lot of patients. The process of implementing the guidelines requires a level of iteration that is difficult to achieve within the traditional clinical workflow. The patient needs to see a clinician, the clinician needs to prescribe medication, we need to see what the effect of that medication is, and then you need to make adjustments either to the medications being given or to the doses of the medications being given to optimize the patient’s care. The problem is, we all know that doctors are incredibly busy, patients lead busy lives as well, and scheduling these visits where this optimization can happen often takes a lot of time, and therefore the time to get the patient to the optimal treatment is longer than we’d like. We’ve instituted a program where we built the guidelines into an algorithm that’s contained within an App. Navigators are assigned to work with patients and they contact the patients at the time that is optimal for the patient and their care, and they don’t have to consider the scheduling of the busy clinician. They can contact the patient, work with that patient to gather information that assists in determining what tests ideally should be ordered when, track their progress, and then they work with a pharmacist and overseeing clinicians to adjust medications. We find through this process that we’re really able to bring down lipid and blood pressure values in a way that’s really pretty gratifying to see. So those are two examples of where algorithms are entering care and making a difference. What is the single biggest challenge that you faced when implementing algorithmically-enhanced care like you describe here? 
We initially thought we could just surgically interject these algorithms into the existing care delivery process so that they could help someone make a better decision. What we really found though, and both of these are examples of this, is that as you’re introducing algorithms into care, it gives you the ability to rethink the care delivery process as a whole. And that’s hard; it takes a great deal of effort from both clinicians and IT folks—and sometimes business folks and others—to really figure out how to reformulate the care delivery process. But, that’s also where the power is. That’s where you can really think deeply about the optimal experience from a patient care perspective and how to deliver that. That often involves bigger changes than were first anticipated. What makes you most excited about the use of AI and algorithms in the healthcare industry? I truly believe that we are on the cusp of very, very significant changes and improvements to the healthcare system. Traditional care delivery pathways have evolved over a long time, and been incrementally improved over a long time, but what we have now is the ability to really look at how we fundamentally change these processes to make them better. That could be in the context of new technologies, new forms of data coming online, new ways we can interact with patients becoming available, and new algorithmic capabilities. And it’s not just that. When you move to algorithmically-based care, it forces you to collect clean data to drive the algorithm, and that’s something that the healthcare system hasn’t traditionally been very good at. By introducing a process that collects that type of clean data, that data then has the potential to become the fuel for machine learning to improve the process. When you implement algorithmically-based care, what you’ve really done is systematized part of the care delivery process. 
As a result of doing that, you can feed back improvements into that process far faster than you could in a traditional care delivery setting where decision making is so distributed. This starts to set us up for continuous learning processes that have the potential to make clinicians far, far more powerful in terms of being able to diagnose, monitor, and treat patients in ways that constantly improve. Folks have talked about the continuous learning healthcare system for a long time, but I do think we are seeing the beginning of the process that can truly make that real. I really think in the best case scenario—and we’ve all got to try to deliver the best case scenario—it could deliver improvements in human health on a scale that we’ve never seen before. Read more »
  • CMU Hosts Discussion of Ethics of AI Use by Department of Defense
As artificial intelligence looms closer and closer to inevitable integration into nearly every aspect of national security, the U.S. Department of Defense tasked the Defense Innovation Board with drafting a set of guiding principles for the ethical use of AI in such cases. The Defense Innovation Board (DIB) is an organization set up in 2016 to bring the technological innovation and best practice of Silicon Valley to the U.S. Military. The DIB’s subcommittee on science and technology recently hosted a public listening session at Carnegie Mellon University focused on “The Ethical and Responsible Use of Artificial Intelligence for the Department of Defense.” It’s one of three DIB listening sessions scheduled across the U.S. to collect public thoughts and concerns. Using the ideas collected, the DIB will put together its guidelines in the coming months and announce a full recommendation for the DoD later this year. Participants of the listening session included Michael McQuade, vice president for research at CMU; Milo Medin, vice president of wireless services at Google; Missy Cummings, director of Humans and Autonomy Lab at Duke University; and Richard Murray, professor of control and dynamical systems and bioengineering at Caltech. McQuade introduced the session saying that AI is not merely a technology but rather a social element that carries both obligations and responsibilities. In a press conference following the public session, Murray said the U.S. would be remiss if it did not recognize its responsibility to declare a moral and ethical stance on the use of AI. Medin added that the work done by the DIB will only be the first step. The rapidly changing nature of technology will require the policy to continue to iterate going forward. “In this day and age, and with a technology as broadly implicative as AI, it is absolutely necessary to encourage broad public dialogue on the topic,” said McQuade. 
At the listening session, the public brought forth the following concerns and comments: The human is not the ideal decision maker. AI should be implemented into military systems to minimize civilian casualties, to enhance national security and to protect people in the military. AI and sensor tech can be used to determine if a target is adult-sized or carrying a weapon before a remote explosive detonates or on a missile to determine whether civilian or military personnel are aboard an aircraft. “Machinewashing.” Similar to the “greenwashing” that took place when the fossil fuel industry came under fire and began to launch ad campaigns promoting its companies as earth-friendly, tech companies may be doing the same thing when it comes to AI. AI has been linked to issues like racial bias and election fraud, notably in the case of Facebook’s Cambridge Analytica scandal. Tech companies are spending millions to treat these issues like public relations challenges. As with climate change, if AI is controlled only by the tech giants, the public will see more crises like this in the coming decades. Laws may not express our social values A lot of emphasis is put on adhering to the legal framework around conducting military operations and war, but laws do not necessarily adequately express the values and morals of a society. As we deploy AI into our lives and into national security, it’s important to remember this distinction. A system of checks and balances. Any powerful AI systems should come with three other equally powerful systems to provide a system of checks and balances. If one begins to act strangely, the other two can override it. One can never override the other two. Humans should have a kill switch or back door code that allows them to delete all three. It won’t be perfect. No AI system will ever be perfect, but no military system is perfect. One of the most unpredictable is the individual soldier, who is subject to hunger, exhaustion and fear. 
We only need to determine whether AI can perform as well as or better than humans on their own. Read the source article at the Pittsburgh Business Times. Read more »
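The “system of checks and balances” comment above describes classic 2-of-3 majority voting with a human override: three equally powerful systems, any one of which can be outvoted by the other two, plus a kill switch. Below is a minimal sketch of that voting rule; the function name and action values are purely illustrative and not drawn from any real military system.

```python
def majority_decision(votes, kill_switch=False):
    """Resolve three redundant controllers' proposals by 2-of-3 majority.

    votes: list of exactly three proposed actions (hashable values).
    Returns the action at least two controllers agree on, or None when
    the human kill switch is engaged or no majority exists.
    """
    if kill_switch:
        # The human override trumps all three systems.
        return None
    if len(votes) != 3:
        raise ValueError("expected exactly three redundant controllers")
    for action in set(votes):
        if votes.count(action) >= 2:  # two controllers outvote the third
            return action
    return None  # total disagreement: fail safe, defer to a human

print(majority_decision(["hold_fire", "hold_fire", "engage"]))  # hold_fire
```

Note that no single controller can force an outcome: an action needs at least two votes, which is exactly the “one can never override the other two” property raised at the session.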
  • Chief Safety Officers Needed in AI: The Case of AI Self-Driving Cars
    By Lance Eliot, the AI Trends Insider
Many firms think of a Chief Safety Officer (CSO) in a somewhat narrow manner, as someone who deals with in-house occupational health and safety aspects occurring solely in the workplace. Though adherence to proper safety matters within a company is certainly paramount, there is an even larger role for CSOs that has been sparked by the advent of Artificial Intelligence (AI) systems. Emerging AI systems being embedded into a company’s products and services have stoked the realization that a new kind of Chief Safety Officer is needed, one with wider duties and requiring a dual internal/external persona and focus. In some cases, especially with life-or-death kinds of AI-based products such as AI self-driving cars, it is crucial that there be a Chief Safety Officer at the highest levels of a company. The CSO needs to be provided with the breadth and depth of capability required to carry out this fuller charge. By being at or within the top executive leadership, they can aid in shaping the design, development, and fielding of these crucial life-determining AI systems. Gradually, auto makers and tech firms in the AI self-driving car realm are bringing on board a Chief Safety Officer or equivalent. It’s not happening fast enough, I assert, yet at least it is a promising trend and one that needs to speed along. Without a prominent Chief Safety Officer position, it is doubtful that auto makers and tech firms will give the requisite attention and due care to the safety of AI self-driving cars. I worry too that those firms not putting in place an appropriate Chief Safety Officer are risking not only the lives of those who will use their AI self-driving cars, but also putting into jeopardy the advent of AI self-driving cars all told. 
In essence, firms that give lip service to the safety of AI self-driving car systems, or that inadvertently fail to give safety the utmost attention, are likely to bring forth adverse safety events on our roadways. The public and regulators will react not just toward the offending firm; such incidents will become an outcry and an overarching barrier to any furtherance of AI self-driving cars. Simply stated, for AI self-driving cars, the chances of a bad apple spoiling the barrel are quite high, something that all of us in this industry live on the edge of each day. In speaking with Mark Rosekind, Chief Safety Innovation Officer at Zoox, at a recent Autonomous Vehicle event in Silicon Valley, he emphasized how safety considerations are vital in the AI self-driving car arena. His years as administrator of the National Highway Traffic Safety Administration (NHTSA) and his service on the board of the National Transportation Safety Board (NTSB) provide an on-target skillset and base of experience for his role. For those of you interested in the overall approach to safety that Zoox is pursuing, you can take a look at their posted report: Those of you who follow my postings closely will remember that I previously mentioned the efforts of Chris Hart in the safety aspects of AI self-driving cars. As a former chairman of the NTSB, he brings key insights to what the auto makers and tech firms need to be doing about safety, along with offering important views that can help shape regulations and regulatory actions (see his web site: You might find of interest his recent blog post about the differences between aviation automation and AI self-driving cars, which dovetails into my remarks on the same topic. 
For Chris Hart’s recent blog post, see: For my prior posting about AI self-driving car safety and Chris Hart’s remarks on the matter, see: For my posting about how airplane automation is not the same as what is needed for AI self-driving cars, see: Waymo, Google/Alphabet’s entity well-known for its prominence in the AI self-driving car industry, has also brought on board a Chief Safety Officer, namely Debbie Hersman. Besides having served on the NTSB and having been its chairman, she was also the CEO and President of the National Safety Council. It was a welcome relief when she came on board at Waymo, since it sends a signal to the rest of the AI self-driving car makers that this is a crucial role and one they too need to embrace if they aren’t already doing so. Uber recently brought on board Nat Beuse to head their safety efforts. He had been with the U.S. Department of Transportation and oversaw vehicle safety efforts there for many years. For those of you interested in the safety report that Uber produced last year, following their internal review of the Uber self-driving car incident, you can find the report posted here: I’d also like to mention the efforts of Alex Epstein, Director of Transportation at the National Safety Council (NSC). We met at an inaugural conference on the safety of AI self-driving cars and his insights and remarks were spot-on about where the industry is and where it needs to go. At the NSC he is leading their Advanced Automotive Safety Technology initiative. His public outreach efforts are notable, and the MyCarDoesWhat campaign is an example of how we need to aid the public in understanding the facets of car automation:
Defining the Chief Safety Officer Role
I have found it useful to clarify what I mean by the role of a Chief Safety Officer in the context of a firm that has an AI-based product or service, particularly in the AI self-driving car industry. 
Take a look at my Figure 1. As shown, the Chief Safety Officer has a number of important role elements. These elements all intertwine with each other and should not be construed as independent of each other. They are an integrated mesh of the safety elements that need to be fostered and led by the Chief Safety Officer. Allowing one of the elements to languish or be undervalued is likely to undermine the integrity of any safety-related programs or approaches undertaken by a firm. The nine core elements for a Chief Safety Officer consist of:
  • Safety Strategy
  • Safety Company Culture
  • Safety Policies
  • Safety Education
  • Safety Awareness
  • Safety External
  • Safety SDLC
  • Safety Reporting
  • Safety Crisis Management
I’ll next describe each of the elements. I’m going to focus on the AI self-driving car industry, but you can hopefully see how these can be applied to other areas of AI that involve safety-related AI-based products or services. Perhaps you make AI-based robots that will be working in warehouses or factories; these elements would then pertain equally. I am also going to omit the other kinds of non-AI safety matters that the Chief Safety Officer would likely encompass, which are well documented already in numerous online Chief Safety Officer descriptions and specifications. Here’s a brief indication of each element.
Safety Strategy
The Chief Safety Officer establishes the overall strategy of how safety will be incorporated into the AI systems and works hand-in-hand with the other top executives in doing so. This must be done collaboratively, since the rest of the executive team must “buy into” the safety strategy and be willing and able to carry it out. Safety is not an island unto itself. Each of the functions of the firm must have a stake in the safety strategy and will be required to ensure it is being implemented.
Safety Company Culture
The Chief Safety Officer needs to help shape the culture of the company toward a safety-first mindset. Oftentimes, AI developers and other tech personnel are not versed in safety and might have come from a university setting wherein AI systems were done as prototypes and safety was not a particularly pressing topic. Some will even potentially believe that “safety is the enemy of innovation,” a false belief that is at times rampant. The company culture might require some heavy lifting; it has to be done in conjunction with the top leadership team and in a meaningful way rather than a light-hearted or surface-level manner.
Safety Policies
The Chief Safety Officer should put together a set of safety policies indicating how the AI systems need to be conceived of, designed, built, tested, and fielded to embody key principles of safety. These policies need to be readily comprehensible, and there needs to be a clear-cut means to abide by them. If the policies are overly abstract or obtuse, or if they are impractical, they will likely foster a sense of “it’s just CYA” and the rest of the firm will tend to disregard them.
Safety Education
The Chief Safety Officer should identify the kinds of educational means that can be made available throughout the firm to increase an understanding of what safety means in the context of developing and fielding AI systems. This can be a combination of internally prepared AI safety classes and externally provided ones. The top executives should also participate in the educational programs to showcase their belief in and support for the educational aspects, and they should work with the Chief Safety Officer in scheduling and ensuring that the teams and staff undertake the classes, along with follow-up to ascertain that the education is being put into active use.
Safety Awareness
The Chief Safety Officer should undertake to make safety awareness an ongoing activity, often fostered by posting AI safety related aspects on the corporate Intranet, along with providing other avenues in which AI safety is discussed and encouraged, such as brown bag lunch sessions, sharing of AI safety tips and suggestions from within the firm, and so on. This needs to be an ongoing effort, not a one-time push for safety that then decays or becomes forgotten.
Safety External
The Chief Safety Officer should be proactive in representing the company and its AI safety efforts to external stakeholders. This includes doing so with regulators, possibly participating in regulatory efforts or reviews when appropriate, along with speaking at industry events about the safety related work being undertaken and conferring with the media. As the external face of the company, the CSO will also likely get feedback from the external stakeholders, which should then be fed back into the company and especially discussed with the top leadership team.
Safety SDLC
The Chief Safety Officer should help ensure that the Systems Development Life Cycle (SDLC) includes safety throughout each of its stages, whether the SDLC is agile-oriented, waterfall, or whatever method is being undertaken. Checkpoints and reviews need to include the safety aspects and have teeth, meaning that if safety is either not being included or being shortchanged, this becomes an effort-stopping criterion that cannot be swept under the rug. It is easy during the pressures of development to shove aside the safety portions and coding, under the guise of “getting on with the real coding,” but that’s not going to cut it in AI systems with life-or-death consequences.
Safety Reporting
The Chief Safety Officer needs to put in place a means to keep track of the safety aspects being considered and included in the AI systems. 
This is typically an online tracking and reporting system. Out of the tracking system, reporting needs to be made available on an ongoing basis. This includes dashboards and flash reporting, which is vital, since if the reporting is overly delayed or difficult to obtain or interpret, it will be considered “too late to deal with” and the cost or effort of making safety related corrections or additions will be subordinated.
Safety Crisis Management
The Chief Safety Officer should establish a crisis management approach to deal with any AI safety related faults or issues that arise. Firms often seem to scramble when their AI self-driving car has injured someone, yet this is something that could have been anticipated as a possibility, and preparations could have been made beforehand. The response to an adverse AI safety event needs to be carefully coordinated; the company will likely be seen either as making sincere efforts regarding the incident or, if ill-prepared, as making matters worse, undermining its own efforts and those of other AI self-driving car makers. In Figure 1, I’ve also included my framework of AI self-driving cars. Each of the nine elements that I’ve just described can be applied to each of the aspects of the framework. For example, how is safety being included in the sensors design, development, testing, and fielding? How is safety being included in the sensor fusion design, development, testing, and fielding? How is safety being included in the virtual world model design, development, testing, and fielding? You are unlikely to have many safety related considerations in, say, the sensors if there isn’t an overarching belief at the firm that safety is important, which is showcased by having a Chief Safety Officer, by having a company culture that embraces safety, and by educating the teams that are doing the development about AI safety, etc. 
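The element-by-aspect questions just posed lend themselves to a simple coverage matrix: each of the nine CSO elements crossed with each stage of the driving-task framework, with the review status of every pair tracked explicitly. The sketch below is a hypothetical illustration of that bookkeeping; the element and aspect names come from this article, while the status values and function names are invented for the example.

```python
# Hypothetical safety coverage matrix: nine CSO elements crossed with the
# stages of the AI self-driving car framework described in the article.
ELEMENTS = [
    "Safety Strategy", "Safety Company Culture", "Safety Policies",
    "Safety Education", "Safety Awareness", "Safety External",
    "Safety SDLC", "Safety Reporting", "Safety Crisis Management",
]
FRAMEWORK_ASPECTS = [
    "Sensors", "Sensor Fusion", "Virtual World Model",
    "AI Action Planning", "Car Controls Command Issuance",
]

def build_coverage_matrix():
    """Return a dict mapping (element, aspect) -> review status."""
    return {(e, a): "not reviewed" for e in ELEMENTS for a in FRAMEWORK_ASPECTS}

def gaps(matrix):
    """List the (element, aspect) pairs still lacking a safety review."""
    return [pair for pair, status in matrix.items() if status == "not reviewed"]

matrix = build_coverage_matrix()
matrix[("Safety SDLC", "Sensors")] = "reviewed"
print(len(gaps(matrix)))  # 9*5 - 1 = 44 pairs still open
```

The point of such a matrix is that an empty row or column is immediately visible, which is precisely the “integrative whole” argument: neglecting one element leaves a whole row of the framework unexamined.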
This highlights my earlier point that each of the elements must work as an integrative whole. Suppose the firm actually does eight of the elements but doesn’t do anything about how to incorporate AI safety into the SDLC. What then? This means that the AI developers are left on their own to try to devise how to incorporate safety into their development efforts. They might fumble around doing so, or take bona fide stabs at it, though it is fragmented and disconnected from the rest of the development methodology. Worse still, the odds are that the SDLC has no particular place for safety, which means no metrics about safety, and therefore the pressure to not do anything related to safety is heightened, since the metrics measure the AI developers in other ways that don’t necessarily have much to do with safety. The point is that each of the nine elements needs to work collectively.
Resources on Baking AI Safety Into AI Self-Driving Car Efforts
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We consider AI safety aspects essential to our efforts and urge auto makers and tech firms to do likewise. I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car. 
The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results. For my overall framework about AI self-driving cars, see my article: For the levels of self-driving cars, see my article: For why AI Level 5 self-driving cars are like a moonshot, see my article: For the dangers of co-sharing the driving task, see my article: Though I often tend to focus more on the true Level 5 self-driving car, the safety aspects of the less-than-Level 5 cars are especially crucial right now. I’ve repeatedly cautioned that as Level 3 advanced automation becomes more prevalent, which we’re just now witnessing coming into the marketplace, we are upping the dangers associated with the interfacing between AI systems and humans. This includes issues associated with cognitive disconnects between AI and humans and the human mindset dissonance, all of which can be disastrous from a safety perspective. Co-sharing and hand-offs of the driving task, done in real-time at freeway speeds, nearly point a stick in the eye of safety. Auto makers and tech firms must get ahead of the AI safety curve, rather than wait until the horse is already out of the barn and it becomes belated to act. Here are the usual steps involved in the AI driving task:
  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. 
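The driving-task steps listed above can be sketched as one pass through a processing loop. The sketch below is deliberately trivial; in a real self-driving stack each stage is a large subsystem, and all of the function names and data shapes here are illustrative placeholders.

```python
def driving_cycle(raw_sensor_readings):
    """One pass through the canonical AI driving task pipeline.

    raw_sensor_readings: dict of sensor name -> boolean detection flag
    (a stand-in for real camera/radar/LIDAR data).
    """
    # 1. Sensor data collection and interpretation
    interpreted = [{"sensor": name, "value": v}
                   for name, v in raw_sensor_readings.items()]
    # 2. Sensor fusion: combine per-sensor views into one estimate
    fused = {"obstacle_ahead": any(r["value"] for r in interpreted)}
    # 3. Virtual world model updating
    world_model = {"obstacles": fused["obstacle_ahead"]}
    # 4. AI action planning
    plan = "brake" if world_model["obstacles"] else "maintain_speed"
    # 5. Car controls command issuance
    return {"command": plan}

print(driving_cycle({"camera": True, "radar": False}))  # {'command': 'brake'}
```

Even in this toy form, the ordering matters: fusion and the world model sit between raw sensing and planning, which is where many of the safety reviews discussed above would attach.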
There are some pundits of AI self-driving cars who continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are more than 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point, since it means that the AI of self-driving cars needs to be able to contend not just with other AI self-driving cars, but also with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period. For my article about the grand convergence that has led us to this moment in time, see: See my article about the ethical dilemmas facing AI self-driving cars: For potential regulations about AI self-driving cars, see my article: For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: Returning to the safety topic, let’s consider some additional facets. Take a look at Figure 2. I’ve listed some of the publicly available documents that are a useful cornerstone to getting up to speed about AI self-driving car safety. The U.S. Department of Transportation (DOT) NHTSA has provided two reports that I find especially helpful on the foundations of safety related to AI self-driving cars. Besides providing background context, these documents also indicate the regulatory considerations that any auto maker or tech firm will need to be incorporating into their efforts. 
Both of these reports have been promulgated under the auspices of DOT Secretary Elaine Chao. The version 2.0 report is here: The version 3.0 report is here: I had earlier mentioned the Uber safety report, which is here: I also had mentioned the Zoox safety report, which is here: You would also likely find of use the Waymo safety report, which is here: I’d also like to give a shout out to Dr. Philip Koopman, a professor at CMU who has done extensive AI safety related research, which you can find at his CMU web site or at his company web site: As a former university professor, I too used to do research while at my university and also did so via an outside company. It’s a great way to try to infuse the core foundational research that you typically do in a university setting with the more applied kind of efforts that you do while in industry. I found it a handy combination. Philip and I seem to end up at many of the same AI self-driving car conferences, as speakers, panelists, or interested participants.
Conclusion
For those Chief Safety Officers of AI self-driving car firms whom I’ve not mentioned herein, you are welcome to let me know that you’d like to be included in future updates that I do on this topic. Plus, if you have safety reports akin to the ones I’ve listed, I welcome taking a look at those reports and will be glad to mention them too. One concern being expressed about the AI self-driving car industry is whether the matter of safety is being undertaken in a secretive manner that tends to keep each of the auto makers and tech firms in the dark about what the other firms are doing. When you look at the car industry, it is apparent that the auto makers have traditionally competed on their safety records and used that to their advantage in trying to advertise and sell their wares. 
Critics have voiced that if the AI self-driving car industry perceives itself to also be competing on safety, there would naturally be a basis to purposely avoid sharing safety aspects with each other. You can’t seemingly have it both ways: if you are competing on safety, then it is presumed to be a zero-sum game, those that do better on safety will sell more than those that do not, so why help a competitor get ahead? This mindset needs to be overcome. As mentioned earlier, it won’t take much in terms of a few safety related bad outcomes to potentially stifle the entire AI self-driving car realm. If there is a public outcry, you can expect that this will push back at the auto makers and tech firms. The odds are that regulators would opt to come into the industry with a much heavier hand. Funding for AI self-driving car efforts might dry up. The engine driving the AI self-driving car pursuits could grind to a halt. I’ve described the factors that can aid or impede the field: Existing disengagement reporting is weak and quite insufficient: A few foul incidents will be perceived as a contagion, see my article: For my Top 10 predictions, see: There are efforts popping up to try to see if AI safety can become more widespread as an overt topic in the AI self-driving car industry. It’s tough, though, to overcome all of those NDAs (Non-Disclosure Agreements) and concerns that proprietary matters might be disclosed. Regrettably, it might take a calamity to generate enough heat to make things percolate, but I hope it doesn’t come down to that. The adoption of Chief Safety Officers by the myriad auto makers and tech firms pursuing AI self-driving cars is a healthy sign that safety is rising in importance. These positions have to be taken seriously, with a realization at the firms that they cannot simply put the role in place to checkmark that they did so. 
For Chief Safety Officers to do their job, they need to be at the top executive table and be considered part-and-parcel of the leadership team. I am also hoping that these Chief Safety Officers will band together into an across-the-industry “club” that can embrace a safety sharing mantra and use their positions and weight to get us further along in permeating safety throughout all aspects of AI self-driving cars. Let’s make that a reality. Copyright 2019 Dr. Lance Eliot This content is originally posted on AI Trends. Read more »
  • The Cognitive Intersect of Human and Artificial Intelligence – Symbiotic Nature of AI and Neuroscience
    Neuroscience and artificial intelligence (AI) are two very different scientific disciplines. Neuroscience traces back to ancient civilizations, and AI is a decidedly modern phenomenon. Neuroscience branches from biology, whereas AI branches from computer science. At a cursory glance, it would seem that a branch of science concerned with living systems would have little in common with one that springs from inanimate machines wholly created by humans. Yet discoveries in one field may result in breakthroughs in the other; the two fields share a significant problem, and future opportunities. The origins of modern neuroscience are rooted in ancient human civilizations. One of the first descriptions of the brain’s structure and of neurosurgery can be traced back to 3000 – 2500 B.C., known largely due to the efforts of the American Egyptologist Edwin Smith. In 1862 Smith purchased an ancient scroll in Luxor, Egypt. In 1930 James H. Breasted translated the Egyptian scroll, following a 1906 request from the New York Historical Society conveyed via Edwin Smith’s daughter. The Edwin Smith Surgical Papyrus is an Egyptian neuroscience handbook circa 1700 B.C. that summarizes a 3000 – 2500 B.C. ancient Egyptian treatise describing the brain’s external surfaces, cerebrospinal fluid, intracranial pulsations, the meninges, the cranial sutures, surgical stitching, brain injuries, and more. In contrast, the roots of artificial intelligence sit squarely in the middle of the twentieth century. American computer scientist John McCarthy is credited with coining the term “artificial intelligence” in a 1955 written proposal for a summer research project that he co-authored with Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. The field of artificial intelligence was subsequently launched at a 1956 conference held at Dartmouth College. The history of artificial intelligence is a modern one. 
In 1969 Marvin Minsky and Seymour Papert published “Perceptrons: An Introduction to Computational Geometry,” which examined the possibility of a powerful artificial learning technique using more than two artificial neural layers. During the 1970s and 1980s, AI machine learning was in relative dormancy. In 1986 Geoffrey Hinton, David E. Rumelhart, and Ronald J. Williams published “Learning representations by back-propagating errors,” which illustrated how deep neural networks consisting of more than two layers could be trained via backpropagation. From the 1980s to the early 2000s, the graphics processing unit (GPU) evolved from gaming purposes toward general-purpose computing, enabling parallel processing for faster computation. In the 1990s, the internet spawned entirely new industries such as cloud-computing-based Software-as-a-Service (SaaS). These trends enabled faster, cheaper, and more powerful computing. In the 2000s, big data sets emerged along with the rise and proliferation of internet-based social media sites. Training deep learning models requires large data sets, and the emergence of big data accelerated machine learning. In 2012, a major milestone in AI deep learning was achieved when Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever trained a deep convolutional neural network with 60 million parameters, 650,000 neurons, and five convolutional layers to classify 1.2 million high-resolution images into 1,000 different classes. The team made AI history through their demonstration of backpropagation in a GPU implementation at such an impressive scale of complexity. Since then, there has been a worldwide gold rush to deploy state-of-the-art deep learning techniques across nearly all industries and sectors. In the future, the opportunities that neuroscience and AI offer are significant. Global spending on cognitive and AI systems is expected to reach $57.6 billion by 2021, according to IDC estimates. 
The current AI renaissance, largely due to deep learning, is a global movement with worldwide investment from corporations, universities, and governments. The global neuroscience market is projected to reach $30.8 billion by 2020, according to figures from Grand View Research. Venture capitalists, angel investors, and pharmaceutical companies are making significant investments in neuroscience startups. Today’s wellspring of global commercial, financial and geopolitical investment in artificial intelligence owes, in some part, to the human brain. Deep learning, a subset of AI machine learning, pays homage to the biological brain’s structure. Deep neural networks (DNNs) consist of two or more “neural” processing layers with artificial neurons (nodes). A DNN will have an input layer, an output layer, and many layers in between; the more artificial neural layers, the deeper the network. The human brain and its associated functions are complex. Neuroscientists do not know many of the exact mechanisms of how the human brain works. For example, scientists do not know the neurological mechanisms of exactly how general anesthesia works on the brain, or why we sleep or dream. Similarly, computer scientists do not know exactly how deep learning arrives at its conclusions, due to complexity. An artificial neural network may have billions or more parameters based on the intricate connections between the nodes; the exact path is a black box. Read the source article in Psychology Today. Read more »
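The layered structure described above, an input layer feeding hidden layers feeding an output layer, can be illustrated with a minimal forward pass in plain Python. This is only a sketch of the structure: the random weights, layer sizes, and helper names are invented for the example, and a real deep learning system would add trained weights and backpropagation.

```python
import random

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, then ReLU."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    """Pass an input vector through a stack of dense layers."""
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

random.seed(0)

def init_layer(n_in, n_out):
    """Random weights for an n_in -> n_out layer, zero biases."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# 4-feature input layer -> two hidden layers of 8 nodes -> 2-node output layer
layers = [init_layer(4, 8), init_layer(8, 8), init_layer(8, 2)]
output = forward([0.5, -0.2, 0.1, 0.9], layers)
print(len(output))  # 2
```

The “depth” in deep learning is simply the length of the `layers` list; the black-box quality arises because every output value depends on every weight in every layer.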
  • Avoiding a Society on Autopilot with Artificial Intelligence
    By Katherine Maher, Executive Director, Wikimedia Foundation
The year 1989 is often remembered for events that challenged the Cold War world order, from the protests in Tiananmen Square to the fall of the Berlin Wall. It is less well remembered for what is considered the birth of the World Wide Web. In March of 1989, the British researcher Tim Berners-Lee shared the protocols, including HTML, URL and HTTP, that enabled the internet to become a place of communication and collaboration across the globe. As the World Wide Web marked its 30th birthday on March 12, public discourse is dominated by alarm about Big Tech, data privacy and viral disinformation. Tech executives have been called to testify before Congress, a popular campaign dissuaded Amazon from opening a second headquarters in New York, and the United Kingdom is going after social media companies that it calls “digital gangsters.” Implicit in this tech-lash is nostalgia for a more innocent online era. But longing for a return to the internet’s yesteryear isn’t constructive. In the early days, access to the web was expensive and exclusive, and it was not reflective or inclusive of society as a whole. What is worth revisiting is less how it felt or operated than what the early web stood for. Those first principles of creativity, connection and collaboration are worth reconsidering today as we reflect on the past and the future promise of our digitized society. The early days of the internet were febrile with dreams about how it might transform our world, connecting the planet and democratizing access to knowledge and power. It has certainly effected great change, if not always what its founders anticipated. If a new democratic global commons didn’t quite emerge, a new demos certainly did: an internet of people who created it, shared it and reciprocated in its use. 
People have always been the best part of the internet, and to that end, we have good news. New data from the Pew Research Center show that more than 5 billion people now have a mobile device, and more than half of those can connect to the internet. We have passed a tipping point where more people are now connected to the internet than not. In low- and middle-income countries, however, a new report shows women are 23 percent less likely than men to use the mobile internet. If we can close that gender gap, it would lead to a $700 billion economic opportunity. The web’s 30th anniversary gives us a much-needed chance to examine what is working well on the internet — and what isn’t. It is clear that people are the common denominator. Indeed, many of the internet’s current problems stem from misguided efforts to take the internet away from people, or vice versa. Sometimes this happens for geopolitical reasons. Nearly two years ago, Turkey fully blocked Wikipedia, making it only the second country after China to do so. Reports suggest a Russian proposal to unplug briefly from the internet to test its cyber defenses could actually be an effort to set up a mass censorship program. And now there is news that Prime Minister Narendra Modi of India is trying to implement government controls that some worry will lead to Chinese-style censorship. But people get taken out of the equation in more opaque ways as well. When you browse social media, the content you see is curated not by a human editor but by an algorithm that puts you in a box. Increasingly, algorithms can help decide what we read, whom we date, what we buy and, more worryingly, the services, credit or even liberties for which we’re eligible. Too often, artificial intelligence is presented as an all-powerful solution to our problems, a scalable replacement for people. Companies are automating nearly every aspect of their social interfaces, from creating to moderating to personalizing content. At its worst, A.I. 
can put society on autopilot in ways that may not reflect our dearest values. Without humans, A.I. can wreak havoc. A glaring example was Amazon’s A.I.-driven human resources software that was supposed to surface the best job candidates, but ended up being biased against women. Built using past resumes submitted to Amazon, most of which came from men, the program concluded men were preferable to women. Rather than replacing humans, A.I. is best used to support our capacity for creativity and discernment. Wikipedia is creating A.I. that will flag potentially problematic edits — like a prankster vandalizing a celebrity’s page — to a human who can then step in. The system can also help our volunteer editors evaluate a newly created page or suggest superb pages for featuring. In short, A.I. that is deployed by and for humans can improve the experience of both people consuming information and those producing it. Read the source article in The New York Times.

Artificial Intelligence Technology and the Law

  • Government Plans to Issue Technical Standards For Artificial Intelligence Technologies
    On February 11, 2019, the White House published a plan for developing and protecting artificial intelligence technologies in the United States, citing economic and national security concerns among other reasons for the action.  Coming two years after Beijing’s 2017 announcement that China intends to be the global leader in AI by 2030, President Trump’s Executive Order on Maintaining American Leadership in Artificial Intelligence lays out five principles for AI, including “development of appropriate technical standards and reduc[ing] barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.”  The Executive Order, which lays out a framework for an “American AI Initiative” (AAII), tasks the White House’s National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence, established in 2018, with identifying federal government agencies to develop and implement the technical standards (so-called “implementing agencies”). Unpacking the AAII’s technical standards principle suggests two things.  First, federal governance of AI under the Trump Administration will favor a policy and standards governance approach over a more onerous command-and-control-type regulatory agency rulemaking approach leading to regulations (which the Trump administration often refers to as “barriers”).  Second, no technical standards will be adopted that stand in the way of the development or use of AI technologies at the federal level if they impede economic and national security goals. So what sort of technical standards might the Select Committee on AI and the implementing agencies come up with?  And how might those standards impact government agencies, government contractors, and even private businesses from a legal perspective? 
The AAII is short on answers to those questions, and we won’t know more until at least August 2019 when the Secretary of Commerce, through the Director of the National Institute of Standards and Technology (NIST), is required by the AAII to issue a plan “for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.”  Even so, it is instructive to review some relevant technical standards and related legal issues in anticipation of what might lie ahead for the United States AI industry. A survey of technical standards used across a spectrum of different industries shows that they can take many different forms, but they can often be classified as either prescriptive or performance-based.  Pre-determined prescriptive metrics may specify requirements for things like accuracy, quality, output, materials, composition, and consumption.  In the AI space, a prescriptive standard could involve a benchmark for classification accuracy (loss or error) using a standardized data set (i.e., how well does the system work), or a numerical upper limit on power consumption, latency, weight, and size.  Prescriptive standards can be one-size-fits-all, or they can vary. Performance-based standards describe practices (minimum, best, commercially reasonable, etc.) focusing on results to be achieved.  In many situations, the performance-based approach provides more flexibility compared to using prescriptive standards.  In the context of AI, a performance-based standard could require a computer vision system to detect all objects in a specified field of view, and tag and track them for a period of time.  How the developer achieves that result is less important in performance-based standards. Technical standards may also specify requirements for the completion of risk assessments to numerically compare an AI system’s expected benefits and impacts to various alternatives.  
Compliance with technical standards may be judged by advisory committees who follow established procedures for independent and open review.  Procedures may be established for enforcement of technical standards when non-compliance is observed.  Depending on the circumstances, technical standards may be published for the public to see or they may be maintained in confidence (e.g., in the case of national security).  Technical standards are often reviewed on an on-going or periodic basis to assess the need for revisions to reflect changes in previous assumptions (important in cases when rapid technological improvements or shifts in priorities occur). Under the direction of the AAII, the White House’s Select Committee and various designated implementing agencies could develop new technical standards for AI technologies, but they could also adopt (and possibly modify) standards published by others.  The International Organization for Standardization (ISO), American National Standards Institute (ANSI), National Institute of Standards and Technology (NIST), and the Institute of Electrical and Electronics Engineers (IEEE) are among the few private and public organizations that have developed or are developing AI standards or guidance.  Individual state legislatures, academic institutions, and tech companies have also published guidance, principles, and areas of concern that could be applicable to the development of technical and non-technical standards for AI technologies.  By way of example, the ISO’s technical standard for “big data” architecture includes use cases for deep learning applications and large scale unstructured data collection.  The Partnership on AI, a private non-profit organization whose board consists of representatives from IBM, Google, Microsoft, Apple, Facebook, Amazon, and others, has developed what it considers “best practices” for AI technologies. 
Under the AAII, the role of technical standards, in addition to helping build an AI industry, will be to “minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies.”  It is hard to imagine a purely technical standard addressing trust and confidence, though a non-technical standards-setting process could address those issues by, for example, introducing measures related to fairness, accountability, and transparency.  Consider the example of delivering AI-based healthcare services at Veterans Administration facilities, where trust and confidence could be reflected in non-technical standards that provide for the publication of clear, understandable explanations about how an AI system works and how it made a decision that affected a patient’s care.  Addressing trust and confidence could also be reflected in requirements for open auditing of AI systems.  The IEEE’s “Ethically Aligned Design” reference considers these and related issues. Another challenge in developing technical standards is to avoid incorporating patented technologies “essential” to the standards adopted by the government, or if unavoidable, to develop rules for disclosure and licensing of essential patents.  As the court in Apple v. Motorola explained, “[s]ome technological standards incorporate patented technology. If a patent claims technology selected by a standards-setting organization, the patent is called an ‘essential patent.’ Many standards-setting organizations have adopted rules related to the disclosure and licensing of essential patents. The policies often require or encourage members of the organization to identify patents that are essential to a proposed standard and to agree to license their essential patents on fair, reasonable and nondiscriminatory terms to anyone who requests a license. (These terms are often referred to by the acronyms FRAND or RAND.)  
Such rules help to insure that standards do not allow the owners of essential patents to abuse their market power to extort competitors or prevent them from entering the marketplace.”  See Apple, Inc. v. Motorola Mobility, Inc., 886 F. Supp. 2d 1061 (W.D. Wis. 2012).  Given the proliferation of new AI-related US patents issued to tech companies in recent years, the likelihood that government technical standards will encroach on some of those patents seems high. For government contractors, AI technical standards could be imposed on them through the government contracting process.  A contracting agency could incorporate new AI technical standards by reference in government contracts, and those standards would flow through to individual task and work orders performed by contractors under those contracts.  Thus, government contractors would need to review and understand the technical standards in the course of executing a written scope of work to ensure they are in compliance.  Sponsoring agencies would likely be expected to review contractor deliverables to measure compliance with applicable AI technical standards.  In the case of non-compliance, contracting officials and their sponsoring agency would be expected to deploy their enforcement authority to ensure problems are corrected, which could include monetary penalties assessed against contractors. Although private businesses (i.e., not government contractors) may not be directly affected by agency-specific technical standards developed under the AAII, customers of those private businesses could, absent other relevant or applicable technical standards, use the government’s AI technical standards as a benchmark when evaluating a business’s products and services.  
Moreover, even if federal AI-based technical standards do not directly apply to private businesses, there is certainly the possibility that Congress could legislatively mandate the development of similar or different technical and non-technical standards and other requirements applicable to a business’s AI technologies sold and used in commerce. The president’s Executive Order on AI has turned an “if” into a “when” in the context of federal governance of AI technologies.  If you are a stakeholder, now is a good time to put resources into closely monitoring developments in this area to prepare for possible impacts.
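To make the prescriptive-standard idea above concrete, a classification-accuracy benchmark measured on a standardized evaluation set can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the 0.95 benchmark, the labels, and the function names are invented for this sketch, not drawn from any actual or proposed standard.

```python
# Sketch of a compliance check against a hypothetical prescriptive standard:
# classification accuracy on a fixed, standardized evaluation set must meet
# a numeric benchmark. The 0.95 threshold and the labels are illustrative.

REQUIRED_ACCURACY = 0.95  # hypothetical benchmark a standards body might set

def accuracy(predicted, actual):
    """Fraction of predictions that agree with the reference labels."""
    assert len(predicted) == len(actual)
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def complies(predicted, actual, benchmark=REQUIRED_ACCURACY):
    """True if the system meets the prescriptive accuracy standard."""
    return accuracy(predicted, actual) >= benchmark

reference = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 10  # standardized labels (100 items)
system_a = reference[:]                          # agrees on every example
system_b = reference[:94] + [1 - y for y in reference[94:]]  # 6 errors in 100

print(complies(system_a, reference))  # True  (accuracy 1.00 >= 0.95)
print(complies(system_b, reference))  # False (accuracy 0.94 <  0.95)
```

A performance-based standard, by contrast, would be harder to reduce to a single numeric check like this, which is one reason the choice between the two approaches matters for enforcement.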
  • Washington State Seeks to Root Out Bias in Artificial Intelligence Systems
    The harmful effects of biased algorithms have been widely reported.  Indeed, some of the world’s leading tech companies have been accused of producing applications, powered by artificial intelligence (AI) technologies, that were later discovered to exhibit certain racial, cultural, gender, and other biases.  Some of the anecdotes are quite alarming, to say the least.  And while not all AI applications have these problems, it only takes a few concrete examples before lawmakers begin to take notice. In New York City, lawmakers began addressing algorithmic bias in 2017 with the introduction of legislation aimed at eliminating bias from algorithmic-based automated decision systems used by city agencies.  That effort led to the establishment of a Task Force in 2018 under Mayor de Blasio’s office to examine the issue in detail.  A report from the Task Force is expected this year. At the federal level, an increased focus by lawmakers on algorithmic bias issues began in 2018, as reported previously on this website (link) and elsewhere.  Those efforts, by both House and Senate members, focused primarily on gathering information from federal agencies like the FTC, and issuing reports highlighting the bias problem.  Expect congressional hearings in the coming months. Now, Washington State lawmakers are addressing bias concerns.  In companion bills SB-5527 and HB-1655, introduced on January 23, 2019, lawmakers in Olympia drafted a rather comprehensive piece of legislation aimed at governing the use of automated decision systems by state agencies, including the use of automated decision-making in the triggering of automated weapon systems.  As many in the AI community have discussed, eliminating algorithmic-based bias requires consideration of fairness, accountability, and transparency, issues the Washington bills appear to address.  But the bills also have teeth, in the form of a private right of action allowing those harmed to sue. 
Although the aspirational language of legislation often only provides a cursory glimpse at how stakeholders might be affected under a future law, especially in those instances where, as here, an agency head is tasked with producing implementing regulations, an examination of automated decision system legislation like Washington’s is useful if only to understand how states and the federal government might choose to regulate aspects of AI technologies and their societal impacts.
Purpose and need for anti-bias algorithm legislation
According to the bills’ sponsors, in Washington, automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce.  These systems, the lawmakers say, are often deployed without public knowledge and are unregulated.  Their use raises concerns about due process, fairness, accountability, and transparency, as well as other civil rights and liberties.  Moreover, reliance on automated decision systems without adequate transparency, oversight, or safeguards can undermine market predictability, harm consumers, and deny historically disadvantaged or vulnerable groups the full measure of their civil rights and liberties.
Definitions, Prohibited Actions, and Risk Assessments
The new Washington law would define “automated decision systems” as any algorithm, including one incorporating machine learning or other AI techniques, that uses data-based analytics to make or support government decisions, judgments, or conclusions.  The law would distinguish “automated final decision systems,” which make “final” decisions, judgments, or conclusions without human intervention, from “automated support decision systems,” which provide information to inform the final decision, judgment, or conclusion of a human decision maker. 
Under the new law, in using an automated decision system, an agency would be prohibited from discriminating against an individual, or treating an individual less favorably than another, in whole or in part, on the basis of one or more factors enumerated in RCW 49.60.010.  An agency would be outright prohibited from developing, procuring, or using an automated final decision system to make a decision impacting the constitutional or legal rights, duties, or privileges of any Washington resident, or to deploy or trigger any weapon. Both versions of the bill include lengthy provisions detailing algorithmic accountability reports that agencies would be required to produce and publish for public comment.  Among other things, these reports must include clear information about the type or types of data inputs that a technology uses; how that data is generated, collected, and processed; and the type or types of data the systems are reasonably likely to generate, which could help reveal the degree of bias inherent in a system’s black box model.  The accountability reports also must identify and provide data showing benefits; describe where, when, and how the technology is to be deployed; and identify if results will be shared with other agencies. An agency with an approved report would then be required to follow the conditions set forth in it. Although an agency’s choice to classify its automated decision system as one that makes “final” or “support” decisions may be given deference by courts, the designations are likely to be challenged if the classification is not justified.  One reason a party might challenge designations is to obtain an injunction, which may be available in the case where an agency relies on a final decision made by an automated decision system, whereas an injunction may be more difficult to obtain in the case of algorithmic decisions that merely support a human decision-maker.  
The distinction between the two designations may also be important during discovery, under a growing evidentiary theory of “machine testimony” that includes cross-examining machine witnesses by gaining access to source code and, in the case of machine learning models, the developer’s data used to train a machine’s model.  Supportive decision systems involving a human making a final decision may warrant a different approach to discovery.
Conditions impacting software makers
Under the proposed law, public agencies that use automated decision systems would be required to publicize the system’s name, its vendor, and the software version, along with the decision it will be used to make or support.  Notably, a vendor must make its software and the data used in the software “freely available” before, during, and after deployment for agency or independent third-party testing, auditing, or research to understand its impacts, including potential bias, inaccuracy, or disparate impacts.  The law would require any procurement contract for an automated decision system entered into by a public agency to include provisions that require vendors to waive any legal claims that may impair the “freely available” requirement.  For example, contracts with vendors could not contain nondisclosure impairment provisions, such as those related to assertions of trade secrets. Accordingly, software companies who make automated decision systems will face the prospect of waiving proprietary and trade secret rights and opening up their algorithms and data to scrutiny by agencies, third parties, and researchers (presumably, under terms of confidentiality).  If litigation were to ensue, it could be difficult for vendors to resist third-party discovery requests on the basis of trade secrets, especially if information about auditing of the system by the state agency and third-party testers/researchers is available through administrative information disclosure laws.  
A vendor who chooses to reveal the inner workings of a black box software application without safeguards should consider at least financial, legal, and market risks associated with such disclosure.
Contesting automated decisions and private right of action
Under the proposed law, public agencies would be required to announce procedures for how an individual impacted by a decision made by an automated decision system can contest the decision.  In particular, any decision made or informed by an automated decision system will be subject to administrative appeal, an immediate suspension if a legal right, duty, or privilege is impacted by the decision, and a potential reversal by a human decision-maker through an open due process procedure.  The agency must also explain the basis for its decision to any impacted individual in terms “understandable” to laypersons including, without limitation, by requiring the software vendor to create such an explanation.  Thus, vendors may become material participants in administrative proceedings involving a contested decision made by their software. In addition to administrative relief, the law would provide a private right of action for injured parties to sue public agencies in state court.  In particular, any person who is injured by a material violation of the law, including denial of any government benefit on the basis of an automated decision system that does not meet the standards of the law, may seek injunctive relief, including restoration of the government benefit in question, declaratory relief, or a writ of mandate to enforce the law. For litigators representing injured parties in such cases, dealing with evidentiary issues involving information produced by machines would likely follow Washington judicial precedent in areas of administrative law, contracts, tort, civil rights, the substantive law involving the agency’s jurisdiction (e.g., housing, law enforcement, etc.), and even product liability.  
In the case of AI-based automated decision systems, however, special attention may need to be given to the nuances of machine learning algorithms to prepare experts and take depositions in cases brought under the law.  Although the aforementioned algorithmic accountability report could be useful evidence for both sides in an automated decision system lawsuit, merely understanding the result of an algorithmic decision may not be sufficient when assessing if a public agency was thorough in its approach to vetting a system.  Being able to describe how the automated decision system works will be important.  For agencies, understanding the nuances of the software products they procure will be important to establish that they met their duty to vet the software under the new law. For example, where AI machine learning models are involved, new data, or even previous data used in a different way (i.e., a different cross-validation scheme or a random splitting of data into new training and testing subsets), can generate models that produce slightly different outcomes.  While small, the difference could mean granting or denying agency services to constituents.  Moreover, with new data and model updates comes the possibility of introducing or amplifying bias that was not previously observed.  The Washington bills do not appear to include provisions imposing an on-going duty on vendors to inform agencies when bias or other problems later appear in software updates (though it’s possible the third party auditors or researchers noted above might discover it).  Thus, vendors might expect agencies to demand transparency as a condition set forth in acquisition agreements, including software support requirements and help with developing algorithmic accountability reports.  Vendors might also expect to play a role in defending against claims by those alleging injury, should the law pass.  
And they could be asked to shoulder some of the liability either through indemnification or other means of contractual risk-shifting to the extent the bills add damages as a remedy.
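The point above about data splits is easy to demonstrate with a toy sketch. In this hypothetical example (the data set, the one-parameter "model", and the seeds are all invented for illustration), fitting the same simple classifier on differently randomized training subsets of the same data yields slightly different decision thresholds, exactly the kind of variation that could flip an outcome for a constituent near the decision boundary.

```python
import random

# Toy demonstration (not from the bills): the same data, randomly split into
# different training and testing subsets, yields slightly different fitted
# models. Everything here is hypothetical and illustrative.

data = [(x / 100.0, 1 if x >= 55 else 0) for x in range(100)]  # (feature, label)

def fit(train):
    """One-parameter model: threshold halfway between the two class means."""
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def split(seed, train_frac=0.7):
    """Shuffle the data with the given seed and cut off a training subset."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Different seeds -> different training subsets -> slightly different
# thresholds; near the decision boundary this can flip individual outcomes.
thresholds = [fit(split(seed)[0]) for seed in range(3)]
print([round(t, 4) for t in thresholds])
```

The thresholds cluster around the same value but are not identical, which is why an accountability regime that only inspects a single trained model, rather than the training procedure, may miss split-to-split variation.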
  • What’s in a Name? A Chatbot Given a Human Name is Still Just an Algorithm
    Due in part to the learned nature of artificial intelligence technologies, the spectrum of things that exhibit “intelligence” has, in debates over such things, expanded to include certain advanced AI systems.  If a computer vision system can “learn” to recognize real objects and make decisions, the argument goes, its ability to do so can be compared to that of humans and thus should not be excluded from the intelligence debate.  By extension, AI systems that can exhibit intelligence traits should not be treated like mere goods and services, and thus laws applicable to such goods and services ought not to apply to them. In some ways, the marketing of AI products and services using names commonly associated with humans, such as “Alexa,” “Sophia,” and “Siri,” buttresses the argument that laws applicable to non-human things should not strictly apply to AI.  For now, however, lawmakers and the courts struggling with practical questions about regulating AI technologies can justifiably apply traditional goods and services laws to named AI systems just as they do to non-named systems.  After all, a robot or chatbot doesn’t become more humanlike and less like a man-made product merely because it’s been anthropomorphized.  Even so, when future technological breakthroughs suggest artificial general intelligence (AGI) is on the horizon, lawmakers and the courts will be faced with the challenge of amending laws to account for the differences between AGI systems and today’s narrow AI and other “unintelligent” goods and services.  For now, it’s instructive to consider why the rise in the use of names for AI systems is not a good basis for triggering greater attention by lawmakers.  Indeed, as suggested below, other characteristics of AI systems may be more useful in deciding when laws need to be amended.  To begin, the recent case of a chatbot named “Erica” is presented. 
The birth of a new bot In 2016, machine learning developers at Bank of America created a “virtual financial assistant” application called “Erica” (derived from the bank’s name America).  After conducting a search of existing uses of the name Erica in other commercial endeavors, and finding none in connection with a chatbot like theirs, BoA sought federal trademark protection for the ERICA mark in October 2016.  The US Patent and Trademark Office concurred with BoA’s assessment of prior uses and registered the mark on July 31, 2018.  Trademarks are issued in connection with actual uses of words, phrases, and logos in commerce, and in the case of BoA, the ERICA trademark was registered in connection with computer financial software, banking and financial services, and personal assistant software in banking and financial SaaS (software as a service).  The Erica app is currently described as possessing the utility to answer customer questions and make banking easier.  During its launch, BoA used the “she” pronoun when describing the app’s AI and predictive analytics capabilities, ostensibly because the name Erica is a stereotypical female gender name, but also because of the apparent female-sounding voice the app outputs as part of its human-bot interface. One of the existing uses of an Erica-like mark identified by BoA was an instance of “E.R.I.C.A,” which appeared in October 2010 when Erik Underwood, a Colorado resident, filed a Georgia trademark registration application for “E.R.I.C.A. (Electronic Repetitious Informational Clone Application).”  See Underwood v. Bank of Am., slip op., No. 18-cv-02329-PAB-MEH (D. Colo. Dec. 19, 2018).  On his application, Mr. Underwood described E.R.I.C.A. as “a multinational computer animated woman that has slanted blue eyes and full lips”; he also attached a graphic image of E.R.I.C.A. to his application.  Mr. 
Underwood later sought a federal trademark application (filed in September 2018) for an ERICA trademark (without the separating periods).  At the time of his lawsuit, his only use of E.R.I.C.A. was on a searchable movie database website. In May 2018, Mr. Underwood sent a cease-and-desist letter to BoA regarding BoA’s use of Erica, and then filed a lawsuit in September 2018 against the bank alleging several causes of action, including “false association” under § 43(a) of the Lanham Act, 15 U.S.C. § 1125(a)(1)(A).  Section 43(a) states, in relevant part, that any person who, on or in connection with any goods or services, uses in commerce a name or a false designation of origin which is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person, or as to the origin, sponsorship, or approval of his or her goods, services, or commercial activities by another person, shall be liable in a civil action by a person who believes that he or she is likely to be damaged by such act.  In testimony, Mr. Underwood stated that the E.R.I.C.A. service mark was being used in connection with “verbally tell[ing] the news and current events through cell phone[s] and computer applications” and he described plans to apply an artificial intelligence technology to E.R.I.C.A.  Mr. Underwood requested the court enter a preliminary injunction requiring BoA to cease using the Erica name. Upon considering the relevant preliminary injunction factors and applicable law, the District Court denied Mr. Underwood’s request for an injunction on several grounds, including the lack of relevant uses of E.R.I.C.A. in the same classes of goods and services that BoA’s Erica was being used in.
Giving AI a persona may boost its economic value and market acceptance
Not surprisingly, the District Court’s preliminary injunction analysis rested entirely on perception and treatment of the Erica and E.R.I.C.A. 
systems as nothing more than services, something neither party disputed or challenged.  Indeed, each party’s case-in-chief depended on their convincing the court that their applications fit squarely in the definition of goods and services despite the human-sounding names they chose to attach to them.  The court’s analysis, then, illuminated one of the public policies underlying laws like the Lanham Act, which is the protection of the economic benefits associated with goods and services created by people and companies.  The name Erica provides added economic value to each party’s creation and is an intangible asset associated with their commercial activities. The use of names has long been found to provide value to creators and owners, and not just in the realm of hardware and software.  Fictional characters like “Harry Potter,” which are protected under copyright and trademark laws, can be intellectual assets having tremendous economic value.  Likewise, namesake names carried over to goods and services, like IBM’s “Watson”–named after the company’s first CEO, Thomas J. Watson–provide real economic benefits that might not have been achieved without a name, or even with a different name.  In the case of humanoid robots, like Hanson Robotics’ “Sophia,” which is endowed with aspects of AI technologies and was reportedly granted “citizenship” status in Saudi Arabia, certain perceived and real economic value is created by distinguishing the system from all other robots by using a real name (as compared to, for example, a simple numerical designation). On the other end of the spectrum are names chosen for humans, the uses of which are generally unrestricted from a legal perspective.  Thus, naming one’s baby “Erica” or even “Harry Potter” shouldn’t land a new parent in hot water.  At the same time, those parents aren’t able to stop others from using the same names for other children.  
Although famous people may be able to prevent others from using their names (and likenesses) for commercial purposes, the law only recognizes those situations when the economic value of the name or likeness is established (though demonstrating economic value is not always necessary under some state right of publicity laws).  Some courts have gone so far as to liken the right to protect famous personas to a type of trademark in a person’s name because of the economic benefits attached to it, much the same way a company name, product name, or logo attached to a product or service can add value. Futurists might ask whether a robot or chatbot demonstrating a degree of intelligence and endowed with unique human-like traits, including a unique persona (e.g., name and face generated from a generative adversarial network) and the ability to recognize and respond to emotions (e.g., using facial coding algorithms in connection with a human-robot interface), thus making it sufficiently differentiable from all other robots and chatbots (at least superficially), should have special treatment.  So far, endowing AI technologies with a human form, gender, and/or a name has not motivated lawmakers and policymakers to pass new laws aimed at regulating AI technologies.  Indeed, lawmakers and regulators have so far proposed, and in some cases passed, laws and regulations placing restrictions on AI technologies based primarily on their specific applications (uses) and results (impacts on society).  For example, lawmakers are focusing on bot-generated spread and amplification of disinformation on social media, law enforcement use of facial recognition, the private business collection and use of face scans, users of drones and highly automated vehicles in the wild, production of “deepfake” videos, the harms caused by bias in algorithms, and others.  
This application- and results-focused approach to regulating AI technology, which explicitly or implicitly acknowledges certain normative standards or criteria for acceptable actions, is consistent with how lawmakers have treated other technologies in the past. Thus, marketers, developers, and producers of AI systems who personify their chatbots and robots may sleep well knowing that their efforts may add value to their creations and alter customer acceptance of and attitudes about their AI systems, but those efforts are unlikely to cause lawmakers to suddenly consider regulating them. At some point, however, advanced AI systems will need to be characterized in some normative way if they are to be governed as a new class of things.  The use of names, personal pronouns, personas, and metaphors associating bots with humans may frame bot technology in a way that ascribes particular values and norms to it (Jones 2017).  These might include characteristics such as utility, usefulness (including positive benefits to society), adaptability, enjoyment, sociability, companionship, and perceived or real “behavioral” control, which some argue are important in evaluating user acceptance of social robots.  Perhaps these and other factors, in addition to some measure of intelligence, need to be considered when deciding if an advanced AI bot or chatbot should be treated under the law as something other than a mere good or service.  The subjective nature of those factors, however, would obviously make it challenging to create legally sound definitions of AI for governance purposes.  Of course, laws don’t have to be precise (and sometimes they are intentionally written without precision to provide flexibility in their application and interpretation), but a vague law won’t help an AI developer or marketer know whether his or her actions and products are subject to an AI law.  
Identifying whether to treat bots as goods and services or as something else deserving of a different set of regulations, like those applicable to humans, is likely to involve a suite of factors that permit classifying advanced AI on the spectrum somewhere between goods/services and humans. Recommended reading  The Oxford Handbook of Law, Regulation, and Technology is one of my go-to references for timely insight about topics discussed on this website.  In the case of this post, I drew inspiration from Chapter 25: Hacking Metaphors in the Anticipatory Governance of Emerging Technology: The Case of Regulating Robots, by Meg Leta Jones and Jason Millar. Read more »
  • The Role of Explainable Artificial Intelligence in Patent Law
    Although the notion of “explainable artificial intelligence” (AI) has been suggested as a necessary component of governing AI technology, at least for the reason that transparency leads to trust and better management of AI systems in the wild, one area of US law already places a burden on AI developers and producers to explain how their AI technology works: patent law.  Patent law’s focus on how AI systems work was not born of a Congressional mandate. Rather, the Supreme Court gets all the credit–or blame, as some might contend–for this legal development, which began with the Court’s 2014 decision in Alice Corp. Pty Ltd. v. CLS Bank International. Alice established the legal framework for assessing whether an invention fits within one of patent law’s patent-eligible categories (i.e., any “new and useful process, machine, manufacture, or composition of matter” or improvements thereof) or is instead a patent-ineligible concept (i.e., a law of nature, natural phenomenon, or abstract idea).  Alice Corp. Pty Ltd. v. CLS Bank International, 134 S. Ct. 2347, 2354–55 (2014); 35 USC § 101. To understand how the idea of “explaining AI” came to be following Alice, one must look at the very nature of AI technology.  At their core, AI systems based on machine learning models generally transform input data into actionable output data, a process US courts and the Patent Office have historically found to be patent-ineligible.  Consider a decision by the US Court of Appeals for the Federal Circuit, whose judges are selected for their technical acumen as much as for their understanding of the nuances of patent and other areas of law, that issued around the same time as Alice: “a process that employs mathematical algorithms to manipulate existing information to generate additional information is not patent eligible.”  Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014).  
While Alice did not specifically address AI or mandate anything resembling explainable AI, it nevertheless spawned a progeny of Federal Circuit, district court, and Patent Office decisions that did just that.  Notably, those decisions arose not because of notions that individuals impacted by AI algorithmic decisions ought to have the right to understand how those decisions were made or why certain AI actions were taken, but because explaining how an AI system works helps satisfy the quid pro quo that is fundamental to patent law: an inventor who discloses to the world details of what she has invented is entitled to a limited legal monopoly on her creation (provided, of course, the invention is patentable). The Rise of Algorithmic Scrutiny Alice arrived not long after Congress passed patent reform legislation called the America Invents Act (AIA) of 2011, provisions of which came into effect in 2012 and 2013.  In part, the AIA targeted what many consider a decade of abusive patent litigation brought against some of the largest tech companies in the world and thousands of mom-and-pop and small business owners who were sued for doing anything computer-related.  This litigious period saw the term “patent troll” used more often to describe patent assertion companies that bought up dot-com-era patents covering the very basics of using the Internet and computerized business methods and then sued to collect royalties for alleged infringement. Not surprisingly, some of the same big tech companies that pushed for the patent reform provisions now in the AIA to curb patent litigation in the field of computer technology also filed amicus curiae briefs in the Alice case to further weaken software patents.  The Supreme Court’s unanimous decision in Alice helped curtail troll-led litigation by formalizing a procedure, one that lower court judges could easily adopt, for excluding certain software-related inventions from the list of inventions that are patentable. 
Under Alice, a patent claim–the language used by an inventor to describe what he or she claims to be the invention–falls outside § 101 when it is “directed to” one of the patent-ineligible concepts noted above.  If so, Alice requires consideration of whether the particular elements of the claim, evaluated “both individually and ‘as an ordered combination,'” add enough to “‘transform the nature of the claim'” into one of the patent-eligible categories.  Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (quoting Alice, 134 S. Ct. at 2355).  While simple in theory, it took years of court and Patent Office decisions to explain how that 2-part test is to be employed, and only more recently how it applies to AI technologies.  Today, the Patent Office and courts across the US routinely find that algorithms are abstract (even though algorithms, including certain mental processes embodied in algorithmic form performed by a computer, are by most measures useful processes).  According to the Federal Circuit, algorithmic-based data collection, manipulation, and communication–functions most AI algorithms perform–are abstract. Artificial Intelligence, Meet Alice In a bit of ironic foreshadowing, the Supreme Court issued Alice in the same year that major advances in AI technologies were being announced, such as Google’s deep neural network architecture that prevailed in the 2014 ImageNet challenge (ILSVRC) and Ian Goodfellow’s generative adversarial network (GAN) model, both of which were major contributions to the field of computer vision. Even as more breakthroughs were being announced, US courts and the Patent Office began issuing Alice decisions regarding AI technologies and explaining why it’s crucial for inventors to explain how their AI inventions work to satisfy the second half of Alice’s 2-part test. In PurePredictive, Inc. v. 
H2O.AI, Inc., for example, the US District Court for the Northern District of California considered the claims of US Patent 8,880,446, which, according to the patent’s owner, involves “AI driving machine learning ensembling.”  The district court characterized the patent as being directed to a software method that performs “predictive analytics” in three steps.  PurePredictive, Inc. v. H2O.AI, Inc., slip op., No. 17-cv-03049-WHO (N.D. Cal. Aug. 29, 2017).  In the method’s first step, it receives data and generates “learned functions” (for example, regressions) from that data. Second, it evaluates the effectiveness of those learned functions at making accurate predictions based on test data. Finally, it selects the most effective learned functions and creates a rule set for additional data input. The court found the claims invalid on the grounds that they “are directed to the abstract concept of the manipulation of mathematical functions and make use of computers only as tools, rather than provide a specific improvement on a computer-related technology.” The claimed method, the district court said, is merely “directed to a mental process” performed by a computer, and “the abstract concept of using mathematical algorithms to perform predictive analytics” by collecting and analyzing information.  The court explained that the claims “are mathematical processes that not only could be performed by humans but also go to the general abstract concept of predictive analytics rather than any specific application.” In Ex Parte Lyren, the Patent Office’s Appeals Board, made up of three administrative law judges, rejected a claim directed to customizing video on a computer as being abstract and thus not patent-eligible.  
In doing so, the board disagreed with the inventor, who argued the claimed computer system, which generated and displayed a customized video by evaluating a user’s intention to purchase a product and information in the user’s profile, was an improvement in the technical field of generating videos. The claimed customized video, the Board found, could be any video modified in any way.  That is, the rejected claims were not directed to the details of how the video was modified, but rather to the result of modifying the video.  Citing precedent, the board reiterated that “[i]n applying the principles emerging from the developing body of law on abstract ideas under section 101, … claims that are ‘so result-focused, so functional, as to effectively cover any solution to an identified problem’ are frequently held ineligible under section 101.”  Ex Parte Lyren, No. 2016-008571 (PTAB June 25, 2018) (citing Affinity Labs of Texas, LLC v. DirecTV, LLC, 838 F.3d 1253, 1265 (Fed. Cir. 2016) (quoting Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016)); see also Ex parte Colcernian et al., No. 2018-002705 (PTAB Oct. 1, 2018) (rejecting claims that use result-oriented language as not reciting the specificity necessary to show how the claimed computer processor’s operations differ from prior human methods, and thus are not directed to a technological improvement but rather are directed to an abstract idea). Notably, the claims in Ex Parte Lyren were also initially rejected as failing to satisfy a different patentability test–the written description requirement.  35 USC § 112.  
In rejecting the claims as lacking a sufficient description of the invention, the Patent Office Examiner found that the algorithmic features of the inventor’s claim were “all implemented inside a computer, and therefore all require artificial intelligence [(AI)] at some level” and thus require extensive implementation details “subject of cutting-edge research, e.g.[,] natural language processing and autonomous software agents exhibiting intelligent behavior.” The Examiner concluded that “one skilled in the art would not be persuaded that Applicant possessed the invention” because “it is not readily apparent how to make a device [to] analyze natural language.”  The Appeals Board disagreed and sided with the inventor, who argued that his invention description was comprehensive and went beyond just artificial intelligence implementations.  Thus, while the description of how the invention worked was sufficiently set forth, Lyren’s claims focused too much on the results or application of the technology and thus were found to be abstract. In Ex Parte Homere, involving claims directed to “a computer-implemented method” that included “establishing a communication session between a user of a computer-implemented marketplace and a computer-implemented conversational agent associated with the market-place that is designed to simulate a conversation with the user to gather listing information,” the Appeals Board affirmed an Examiner’s rejection of the claims as being abstract.  Ex Parte Homere, Appeal No. 2016-003447 (PTAB Mar. 29, 2018).  In doing so, the Appeals Board noted that the inventor had not identified anything in the claim or in the written description that would suggest the computer-related elements of the claimed invention represent anything more than “routine and conventional” technologies.  
The most advanced technologies alluded to, the Board found, seemed to be embodiments in which “a program implementing a conversational agent may use other principles, including complex trained Artificial Intelligence (AI) algorithms.”  However, the claimed conversational agent was not so limited.  Instead, the Board concluded that the claims were directed to merely using the recited computer-related elements to implement the underlying abstract idea, rather than being limited to any particular advances in those elements. In Ex Parte Hamilton, a rejection of a claim directed to “a method of planning and paying for advertisements in a virtual universe (VU), comprising…determining, via the analysis module, a set of agents controlled by an Artificial Intelligence…,” was affirmed as being patent ineligible.  Ex Parte Hamilton et al., Appeal No. 2017-008577 (PTAB Nov. 20, 2018).  The Appeals Board found that the “determining” step was insufficient to transform the abstract idea of planning and paying for advertisements into patent-eligible subject matter because the step represented an insignificant data-gathering step and thus added nothing of practical significance to the underlying abstract idea. In Ex Parte Pizzorno, the Appeals Board affirmed a rejection of a claim directed to “a computer implemented method useful for improving artificial intelligence technology” as abstract.  Ex Parte Pizzorno, Appeal No. 2017-002355 (PTAB Sep. 21, 2018).  
In doing so, the Board determined that the claim was directed to the concept of using stored health care information for a user to generate personalized health care recommendations based on Bayesian probabilities, which the Board said involved “organizing human activities and an idea in itself, and is an abstract idea beyond the scope of § 101.”  Considering each of the claim elements in turn, the Board also found that the function performed by the computer system at each step of the process was purely conventional in that each step did nothing more than require a generic computer to perform a generic computer function. Finally, in Ex Parte McAfee, the Appeals Board affirmed a rejection of a claim on the basis that it was “directed to the abstract idea of receiving, analyzing, and transmitting data.”  Ex Parte McAfee, Appeal No. 2016-006896 (PTAB May 22, 2018).  At issue was a method that included “estimating, by the ad service circuitry, a probability of a desired user event from the received user information, and the estimate of the probability of the desired user event incorporating artificial intelligence configured to learn from historical browsing information in the received user information, the desired user event including at least one of a conversion or a click-through, and the artificial intelligence including regression modeling.”  In affirming the rejection, the Board found that the functions performed by the computer at each step of the claimed process were purely conventional and did not transform the abstract method into a patent-eligible one. 
In particular, the step of estimating the probability of the desired user event incorporating artificial intelligence was found to be merely “a recitation of factors to be somehow incorporated, which is aspirational rather than functional and does not narrow the manner of incorporation, so it may include no more than incorporating results from some artificial intelligence outside the scope of the recited steps.” The above and other Alice decisions have led to a few general legal axioms, such as: a claim for a new abstract idea is still an abstract idea; a claim for a beneficial abstract idea is still an abstract idea; abstract ideas do not become patent-eligible because they are new ideas, are not previously well known, and are not routine activity; and, the “mere automation of manual processes using generic computers does not constitute a patentable improvement in computer technology.”  Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016); Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379-80 (Fed. Cir. 2015); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 715-16 (Fed. Cir. 2014); Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044, 1055 (Fed. Cir. 2017); see also SAP Am., Inc. v. Investpic, LLC, slip op. No. 2017-2081, 2018 WL 2207254, at *2, 4-5 (Fed. Cir. May 15, 2018) (finding financial software patent claims abstract because they were directed to “nothing but a series of mathematical calculations based on selected information and the presentation of the results of those calculations (in the plot of a probability distribution function)”); but see Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1241 (Fed. Cir. 2016) (noting that “[t]he Supreme Court has recognized that all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas[ ] but not all claims are directed to an abstract idea.”). 
The Focus on How, not the Results Following Alice, patent claims directed to an AI technology must recite features of the algorithm-based system that represent how the algorithm improves a computer-related technology and is not previously well-understood, routine, and conventional.  In PurePredictive, for example, the Northern California district court, which sees many software-related cases due to its proximity to the Bay Area and Silicon Valley, found that the claims of a machine learning ensemble invention were not directed to an invention that “provide[s] a specific improvement on a computer-related technology.”  See also Neochloris, Inc. v. Emerson Process Mgmt LLLP, 140 F. Supp. 3d 763, 773 (N.D. Ill. 2015) (explaining that patent claims including “an artificial neural network module” were invalid under § 101 because neural network modules were described as no more than “a central processing unit – a basic computer’s brain”). Satisfying Alice, thus, requires claims focusing on a narrow application of how an AI algorithmic model works, rather than the broader and result-oriented nature of what the model is used for.  This is necessary where the idea behind the algorithm itself could be used to achieve many different results.  For example, a claim directed to a mathematical process (even one that is said to be “computer-implemented”), and that could be performed by humans (even if it takes a long time), and that is directed to a result achieved instead of a specific application, will seemingly be patent-ineligible under today’s Alice legal framework. To illustrate, consider an image classification system, one that is based on a convolutional neural network.  Such a system may be patentable if the claimed system improves the field of computer vision technology. 
Claiming the invention in terms of how the elements of the computer are technically improved by its deep learning architecture and algorithm, rather than simply claiming a deep learning model using results-oriented language, may survive an Alice challenge, provided the claim does not merely cover an automated process that humans used to perform.  Moreover, the multiple hidden layers, convolutions, recurrent connections, hyperparameters, and weights could also be claimed. By way of another example, a claim reciting “a computer-implemented process using artificial intelligence to generate an image of a person” is likely abstract if it does not explain how the image is generated and merely claims a computerized process a human could perform.  But a claim that describes a unique AI system and specifies how it generates the image, including the details of a generative adversarial network architecture and its various inputs provided by physical devices (not routine data collection), its connections, and its hyperparameters, has a better chance of passing muster (keeping in mind, this only addresses the question of whether the claimed invention is eligible to be patented, not whether it is, in fact, patentable, which is an entirely different analysis and requires comparing the claim to the prior art). Uncertainty Remains Although the issue of explaining how an AI system works in the context of patent law is still in flux, the number of US patents issued by the Patent Office mentioning “machine learning,” or the broader term “artificial intelligence,” has jumped in recent years. This year alone, US machine learning patents are up 27% compared to the same year-to-date period in 2017 (through the end of November), according to available Patent Office records.  Even if machine learning is not the focus of many of them, the annual upward trend in patenting AI over the last several years appears unmistakable. 
But with so many patents invoking AI concepts being issued, questions about their validity may arise.  As the Federal Circuit has stated, “great uncertainty yet remains” when it comes to the test for deciding whether an invention like AI is patent-eligible under Alice, this despite the large number of cases that have “attempted to provide practical guidance.”  Smart Systems Innovations, LLC v. Chicago Transit Authority, slip op. No. 2016-1233 (Fed. Cir. Oct. 18, 2017).  Calling the uncertainty “dangerous” for some of today’s “most important inventions in computing,” specifically mentioning AI, the Federal Circuit expressed concern that perhaps the application of the Alice test has gone too far, a concern mirrored in testimony by Andrei Iancu, Director of the Patent Office, before Congress in April 2018 (stating, in response to Judiciary Committee questions, that Alice and its progeny have introduced a degree of uncertainty into the area of subject matter eligibility, particularly as it relates to medical diagnostics and software-related inventions, and that Alice could be having a negative impact on innovation). Absent legislative changes abolishing or altering Alice, a solution to the uncertainty problem, at least in the context of AI technologies, lies in clarifying existing decisions issued by the Patent Office and courts, including the decisions summarized above.  While it can be challenging to explain why an AI algorithm made a particular decision or took a specific action (due to the black box nature of such algorithms once they are fully trained), it is generally not difficult to describe the structure of a deep learning or machine learning algorithm or how it works. 
Even so, it remains unclear whether and to what extent fully describing how one’s AI technology works, and including “how” features in patent claims, will ever be sufficient to “add[] enough to transform the nature of an abstract algorithm into a patent-eligible [useful process].” If explaining how AI works is to have a meaningful future role in patent law, the courts or Congress will need to provide clarity. Read more »
  • California Appeals Court Denies Defendant Access to Algorithm That Contributed Evidence to His Conviction
    One of the concerns expressed by those studying algorithmic decision-making is the apparent lack of transparency. Those impacted by adverse algorithmic decisions often seek transparency to better understand the basis for those decisions. In the case of software used in legal proceedings, parties who seek explanations about software face a number of obstacles, including those imposed by evidentiary rules, criminal or civil procedural rules, and by software companies that resist discovery requests. The closely-followed issue of algorithmic transparency was recently considered by a California appellate court in People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018), in which the People sought relief from a discovery order requiring the production of software and source code used in the conviction of Florencio Jose Dominguez. Following a hearing and review of the record and amicus briefs in support of Dominguez filed by the American Civil Liberties Union, the American Civil Liberties Union of San Diego and Imperial Counties, the Innocence Project, Inc., the California Innocence Project, the Northern California Innocence Project at Santa Clara University School of Law, Loyola Law School’s Project for the Innocent, and the Legal Aid Society of New York City, the appeals court granted the People the relief they sought. In doing so, the court considered, but was not persuaded by, the defense team’s “black box” and “machine testimony” arguments. At issue on appeal was Dominguez’s motion to compel production of a DNA testing program called STRmix used by local prosecutors in their analysis of forensic evidence (specifically, DNA found on the inside of gloves). STRmix is a “probabilistic genotyping” program that expresses a match between a suspect and DNA evidence in terms of the probability of a match compared to a coincidental match. Probabilistic genotyping is said to reduce subjectivity in the analysis of DNA typing results. 
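The probability comparison at the heart of probabilistic genotyping is conventionally expressed as a likelihood ratio: how much better one hypothesis (the suspect contributed the DNA) explains the evidence than the alternative (a coincidental match). The sketch below illustrates only that concept; it is not STRmix’s actual algorithm, which models peak heights, allele drop-in and drop-out, and multiple contributors with far more elaborate statistical machinery, and the numbers here are hypothetical.

```python
# Simplified illustration of the likelihood-ratio concept behind
# probabilistic genotyping. NOT STRmix's algorithm; real tools estimate
# these probabilities from complex models of the DNA profile itself.

def likelihood_ratio(p_evidence_if_suspect: float,
                     p_evidence_if_coincidence: float) -> float:
    """Ratio of how well two competing hypotheses explain the evidence."""
    return p_evidence_if_suspect / p_evidence_if_coincidence

# Hypothetical inputs: the evidence is well explained if the suspect is a
# contributor (0.8), but a coincidental match requires a random person to
# share the genotype, which occurs with frequency 1 in 100,000.
lr = likelihood_ratio(0.8, 0.8 * 1e-5)
print(f"The evidence is about {lr:,.0f} times more likely if the suspect "
      f"is a contributor than if the match is coincidental.")
```

A likelihood ratio of this kind is what the analyst ultimately reports at trial, which is why the defense argued the software, not the analyst, was the real source of the expert opinion.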
Dominguez’s counsel moved the trial court for an order compelling the People to produce the STRmix software program and related updates as well as its source code, arguing that the defendant had a right to look inside the software’s “black box.” The trial court granted the motion and the People sought writ relief from the appellate court. On appeal, the appellate court noted that “computer software programs are written in specialized languages called source code” and “source code, which humans can read, is then translated into [a] language that computers can read.” Cadence Design Systems, Inc. v. Avant! Corp., 29 Cal. 4th 215, 218 at fn.3 (2002). The lab that used STRmix testified that it had no way to access the source code, which it licensed from an authorized software seller.  Thus, the court considered whether the company that created the software should produce it. In concluding that the company was not obligated to produce the software and source code, the court, citing precedent, found that the company would have had no knowledge of the case but for the defendant’s subpoena duces tecum, and it did not act as part of the prosecutorial team such that it was obligated to turn over exculpatory evidence (assuming the software itself is exculpatory, which the court was reluctant to find). With regard to the defense team’s “black box” argument, the appellate court found nothing in the record to indicate that the STRmix software suffered from a problem, as the defense team argued, that might have affected its results. Calling this allegation speculative, the court concluded that the “black box” nature of STRmix was not itself sufficient to warrant its production. Moreover, the court was unpersuaded by the defense team’s argument that the STRmix program essentially usurped the lab analyst’s role in providing the final statistical comparison, and so the software program—not the analyst using the software—was effectively the source of the expert opinion rendered at trial. 
The lab, the defense argued, merely acted in a scrivener’s capacity for STRmix’s analysis, and since the machine was providing testimony, Dominguez should be able to evaluate the software to defend against the prosecution’s case against him. The appellate court disagreed. While acknowledging the “creativity” of the defense team’s “machine testimony” argument (which relied heavily on Berkeley law professor Andrea Roth’s “Machine Testimony” article (126 Yale L.J. 1972 (2017))), the panel noted the testimony that STRmix did not act alone, that there were humans in the loop: “[t]here are still decisions that an analyst has to make on the front end in terms of determining the number of contributors to a particular sample and determin[ing] which peaks are from DNA or from potentially artifacts” and that the program then performs a “robust breakdown of the DNA samples,” based at least in part on “parameters [the lab] set during validation.” Moreover, after STRmix renders “the diagnostics,” the lab “evaluate[s] … the genotype combinations … to see if that makes sense, given the data [it’s] looking at.” After the lab “determine[s] that all of the diagnostics indicate that the STRmix run has finished appropriately,” it can then “make comparisons to any person of interest or … database that [it’s] looking at.” While the appellate court’s decision mostly followed precedent and established procedure, it could easily have gone the other way and affirmed the trial judge’s decision granting Defendant’s motion to compel the STRmix software and source code, which would have given Dominguez better insight into the nature of the software’s algorithms, its parameters and limitations in view of validation studies, and the various possible outputs the model could have produced given a set of inputs. 
In particular, the court might have affirmed the trial judge’s decision to grant access to the STRmix software if the policy of imposing transparency on STRmix’s algorithmic decisions were given more consideration from the perspective of the actual harm that might occur if software and source code are produced. Here, the source code owner’s objection to production was based in part on trade secret and other confidentiality concerns; however, procedures already exist to handle those concerns. Indeed, source code reviews happen all the time in the civil context, such as in patent infringement matters involving software technologies. While software makers are right to be concerned about the harm to their businesses if their code ends up in the wild, the real risk of this happening can be low if proper procedures, embodied in a suitable court-issued Protective Order, are followed by lawyers on both sides of a matter and if the court maintains oversight and demands status updates from the parties to ensure compliance and integrity in the review process. Instead of following the trial court’s approach, however, the appellate court conditioned access to STRmix’s “black box” on the demonstration of specific errors in the program’s results, which seems intractable: only by looking into the black box in the first place is a party able to understand whether problems exist that affect the result. 
Interestingly, artificial intelligence had nothing to do with the outcome of the appellate court’s decision, yet the panel noted that “We do not underestimate the challenges facing the legal system as it confronts developments in the field of artificial intelligence.” The judges acknowledged that the notion of “machine testimony” in algorithmic decision-making matters is a subject about which there are widely divergent viewpoints in the legal community, a possible prelude to what is ahead when artificial intelligence software cases make their way through the courts in criminal or non-criminal cases.  To that, the judges cautioned, “when faced with a novel method of scientific proof, we have required a preliminary showing of general acceptance of the new technique in the relevant scientific community before the scientific evidence may be admitted at trial.” Lawyers in future artificial intelligence cases should consider how best to frame arguments concerning machine testimony in both civil and criminal contexts to improve their chances of overcoming evidentiary obstacles. Lawyers will need to effectively articulate the nature of artificial intelligence decision-making algorithms, as well as the relative roles of data scientists and model developers who make decisions about artificial intelligence model architecture, hyperparameters, data sets, model inputs, training and testing procedures, and the interpretation of results. Today’s artificial intelligence systems do not operate autonomously; there will always be humans associated with a model’s output or result and those persons may need to provide expert testimony beyond the machine’s testimony.  Even so, transparency will be important to understanding algorithmic decisions and for developing an evidentiary record in artificial intelligence cases. Read more »
  • Thanks to Bots, Transparency Emerges as Lawmakers’ Choice for Regulating Algorithmic Harm
    Digital conversational agents, like Amazon’s Alexa and Apple’s Siri, and communications agents, like those found on customer service website pages, seem to be everywhere.  The remarkable increase in the use of these and other artificial intelligence-powered “bots” in everyday customer-facing devices like smartphones, websites, desktop speakers, and toys has been exceeded only by the bots in the background that account for over half of the traffic visiting some websites.  Recently reported harms caused by certain bots have caught the attention of state and federal lawmakers.  This post briefly describes those bots and their uses, and suggests reasons why new legislative efforts aimed at reducing harms caused by bad bots have so far been limited to arguably one of the least onerous tools in the lawmaker’s toolbox: transparency. Bots Explained Bots are software programmed to receive percepts from their environment, make decisions based on those percepts, and then take (preferably rational) action in their environment.  Social media bots, for example, may use machine learning algorithms to classify and “understand” incoming content, which is subsequently posted and amplified via a social media account.  Companies like Netflix use bots on social media platforms like Facebook and Twitter to automatically communicate information about their products and services. While not all bots use machine learning and other artificial intelligence (AI) technologies, many do, such as digital conversational agents, web crawlers, and website content scrapers, the latter being programmed to “understand” content on websites using semantic natural language processing and image classifiers.  Bots that use complex human behavioral data to identify and influence or manipulate people’s attitudes or behavior (such as clicking on advertisements) often use the latest AI tech. One attribute many bots have in common is that their functionality resides in a black box.  
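Stripped of any machine learning, the perceive-decide-act cycle described above can be sketched in a few lines of Python. Every name here is invented for illustration, and the trivial keyword rule stands in for whatever (possibly opaque) decision model a real bot would run:

```python
# Minimal sketch of a bot's perceive -> decide -> act loop.
# All names are invented for illustration; the keyword rule below is a
# stand-in for the (often opaque) model a real bot would use to decide.

def perceive(environment):
    """Receive the next percept (here, a message) from the environment."""
    return environment.pop(0) if environment else None

def decide(percept):
    """Decision policy: a trivial, transparent keyword rule."""
    return "amplify" if percept and "trending" in percept else "ignore"

def act(action, percept, log):
    """Act on the environment (here, just record what the bot did)."""
    if action == "amplify":
        log.append(f"reposted: {percept}")

environment = ["trending: election news", "cat photo", "trending: new meme"]
log = []
while environment:
    percept = perceive(environment)
    act(decide(percept), percept, log)

print(log)  # only the "trending" messages get reposted
```

In a deployed social media bot, the decide step would instead be a learned classifier, which is precisely the part whose inner workings sit in a black box.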
As a result, it can be challenging (if not impossible) for an observer to explain why a bot made a particular decision or took a specific action.  While intuition can be used to infer what happens, secrets inside a black box often remain secret. Depending on their uses and characteristics, bots are often categorized by type, such as “chatbot,” which generally describes an AI technology that engages with users by replicating natural language conversations, and “helper bot,” which is sometimes used when referring to a bot that performs useful or beneficial tasks.  The term “messenger bot” may refer to a bot that communicates information, while “cyborg” is sometimes used when referring to a person who uses bot technology. Regardless of their name, complexity, or use of AI, one characteristic common to most bots is their use as agents to accomplish tasks for or on behalf of a real person, often without revealing that person’s identity.  This anonymity of agent bots makes them attractive tools for malicious purposes. Lawmakers React to Bad Bots While the spread of beneficial bots has been impressive, bots with questionable purposes have also proliferated, such as those behind the disinformation campaigns used during the 2016 presidential election.  Disinformation bots, which operate social media accounts on behalf of a real person or organization, can post content to public-facing accounts.  Used extensively in marketing, these bots can receive content, either automatically or from a principal behind the scenes, related to such things as brands, campaigns, politicians, and trending topics.  When organizations create multiple accounts and use bots across those accounts to amplify each account’s content, the content can appear viral and attract attention, which may be problematic if the content is false, misleading, or biased. The success of social media bots in spreading disinformation is evident in the degree to which they have proliferated.  
Twitter recently produced data showing thousands of bot-run Twitter accounts (“Twitter bots”) were created before and during the 2016 US presidential campaign by foreign actors to amplify and spread disinformation about the campaign, candidates, and related hot-button campaign issues.  Users who received content from one of these bots would have had no apparent reason to know that it came from a foreign actor. Thus, it’s easy to understand why lawmakers and stakeholders would want to target social media bots and those who use them.  A recent Pew Research Center poll found that most Americans know about social media bots and that, of those who have heard of them, an overwhelming 80% believe such bots are used for malicious purposes; with technologies to detect fake content at its source, or the bias of a news source, standing at only about 65-70 percent accuracy, politicians have plenty of cover to go after bots and their owners. Why Use Transparency to Address Bot Harms? The range of options for regulating disinformation bots to prevent or reduce harm could include any number of traditional legislative approaches.  These include imposing on individuals and organizations various specific criminal and civil liability standards related to the performance and uses of their technologies; establishing requirements for regular recordkeeping and reporting to authorities (which could lead to public summaries); setting thresholds for knowledge, awareness, or intent (or use of strict liability) applied to regulated activities; providing private rights of action to sue for harms caused by a regulated person’s actions, inactions, or omissions; imposing monetary remedies and incarceration for violations; and other often-seen command-and-control style governance approaches.  
Transparency, another tool lawmakers could deploy, could require certain regulated persons and entities to provide information, publicly or privately, to an organization’s users or customers through a mechanism of notice, disclosure, and/or disclaimer (among other techniques). Transparency is a long-used principle of democratic institutions that try to balance open and accountable government action and the notion of free enterprise with the public’s right to be informed.  Examples of transparency may be found in the form of information labels on consumer products and services under consumer laws, disclosure of product endorsement interests under FTC rules, notice and disclosures in financial and real estate transactions under various related laws, employee benefits disclosures under labor and tax laws, public review disclosures in connection with laws related to government decision-making, property ownership public records disclosures under various tax and land ownership/use laws, various healthcare disclosures under state and federal health care laws, and laws covering many other areas of public life.  Of particular relevance to the disinformation problem noted above, and why transparency seems well-suited to social media bots, are current federal campaign finance laws that require those behind political ads to reveal themselves.  See 52 USC §30120 (Federal Campaign Finance Law; publication and distribution of statements and solicitations; disclaimer requirements). A recent example of a transparency rule affecting certain bot use cases is California’s bot law (SB-1001; signed by Gov. Brown on September 28, 2018).  
The law, which goes into effect July 2019, will, with certain exceptions, make it unlawful for any person (including corporations or government agencies) to use a bot to communicate or interact with another person in California online with the intent to mislead that person about the bot’s artificial identity, where the deception is intended to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.  A person using a bot will not be liable, however, if the person discloses, using clear, conspicuous, and reasonably designed notice, to the persons with whom the bot communicates or interacts that it is a bot.  Similar federal legislation may follow, especially if legislation proposed this summer by Sen. Dianne Feinstein (D-CA) and legislative proposals by Sen. Warner and others gain traction in Congress. So why would lawmakers choose transparency to regulate malicious bot technology use cases rather than an approach that is arguably more onerous?  One possibility is that transparency is seen as minimally controversial, and therefore less likely to cause push-back from those with ties to special interests that might respond negatively to lawmakers who advocate for tougher measures.  Or perhaps lawmakers are choosing a minimalist approach just to demonstrate that they are taking action (versus the optics associated with doing nothing).  Maybe transparency is a shot across the bow, warning industry leaders to police themselves and those who use their platforms by finding technological solutions to prevent the harms caused by bots, or else face a harsher regulatory spotlight.  Whatever the reason(s), even something viewed as relatively easy to implement, like transparency, is not immune from controversy. 
Transparency Concerns The arguments against the use of transparency applied to bots include loss of privacy, unfairness, unnecessary disclosure, and constitutional concerns, among others. Imposing transparency requirements can potentially infringe upon First Amendment protections if drafted with one-size-fits-all applicability.  Even before California’s bot measure was signed into law, for example, critics warned of the potential chilling effect on protected speech if anonymity is lifted in the case of social media bots. Moreover, transparency may be seen as unfairly elevating the principles of openness and accountability over notions of secrecy and privacy.  Owners of agent bots, for example, would prefer not to give up anonymity when doing so could expose them to attacks by those with opposing viewpoints and cause more harm than the law prevents. Both concerns could be addressed by imposing transparency in a narrow set of use cases and, as in California’s bot law, using “intent to mislead” and “knowingly deceiving” thresholds to tailor the law to specific instances of certain bad behaviors. Others might argue that transparency places too much of the burden on users to understand the information being disclosed to them and to take appropriate responsive actions.  Just ask someone who has tried to read a financial transaction disclosure or a complex Federal Register rule-making analysis whether the transparency, openness, and accountability actually made a substantive impact on their follow-up actions.  Similarly, it’s questionable whether a recipient of bot-generated content would investigate the ownership and propriety of every new posting before deciding whether to accept the content’s veracity, or whether a person engaging with an AI chatbot would forgo further engagement if he or she were informed of the artificial nature of the engagement. 
Conclusion The likelihood that federal transparency laws will be enacted to address the malicious use of social media bots seems low given the current political situation in the US.  And with California’s bot disclosure requirement not becoming effective until mid-2019, only time will tell whether it will succeed as a legislative tool in addressing existing bot harms or whether the delay will simply give malicious actors time to find alternative technologies to achieve their goals. Even so, transparency appears to be a leading governance approach, at least in the area of algorithmic harm, and could become a go-to approach to governing harms caused by other AI and non-AI algorithmic technologies due to its relative simplicity and ability to be narrowly tailored.  Transparency might be a suitable approach to regulating certain actions by those who publish face images created using generative adversarial networks (GANs), those who create and distribute so-called “deep fake” videos, and those who provide humanistic digital communications agents, all of which involve highly realistic content and engagements in which a user could easily be fooled into believing the content or engagement involves a person and not an artificial intelligence.
  • AI’s Problems Attract More Congressional Attention
    As contentious political issues continue to distract Congress before the November midterm elections, federal legislative proposals aimed at governing artificial intelligence (AI) have largely stalled in the Senate and House.  Since December 2017, nine AI-focused bills, such as the AI Reporting Act of 2018 (AIR Act) and the AI in Government Act of 2018, have been waiting for congressional committee attention.  Even so, there has been a noticeable uptick in the number of individual federal lawmakers looking at AI’s problems, a sign that the pendulum may be swinging in the direction favoring regulation of AI technologies. Those lawmakers taking a serious look at AI recently include Mark Warner (D-VA) and Kamala Harris (D-CA) in the Senate, and Will Hurd (R-TX) and Robin Kelly (D-IL) in the House.  Along with others in Congress, they are meeting with AI experts, issuing new policy proposals, publishing reports, and pressing federal officials for information about how government agencies are addressing AI problems, especially in hot topic areas like AI model bias, privacy, and malicious uses of AI. Sen. Warner, for example, the Senate Intelligence Committee Vice Chairman, is examining how AI technologies power disinformation.  In a draft white paper first obtained by Axios, Warner’s “Potential Policy Proposals for Regulation of Social Media and Technology Firms” raises concerns about machine learning and data collection, mentioning “deep fake” disinformation tools as one example.  Deep fakes are neural network models that can take images and video of people containing one type of content and superimpose them over different images and videos of other (or the same) people in a way that changes the original’s content and meaning.  To the viewer, the altered images and videos look like the real thing, and many who view them may be fooled into accepting the false content’s message as truth. 
Warner’s “suite of options” for regulating AI includes one that would require platforms to provide notice when users engage with AI-based digital conversational assistants (chatbots) or visit a website that publishes content provided by content-amplification algorithms like those used during the 2016 elections.  Another Warner proposal would modify the Communications Decency Act’s safe harbor provisions, which currently protect social media platforms that publish offending third-party content, including the aforementioned deep fakes.  This proposal would allow private rights of action against platforms that fail to take steps, after notice from victims, to prevent offending content from reappearing on their sites. Another proposal would require certain platforms to make their customers’ activity data (sufficiently anonymized) available to public interest researchers as a way to generate insight from the data that could “inform actions by regulators and Congress.”  An area of concern is the commercial use, by private tech companies, of their users’ behavior-based data (online habits) without proper research controls.  The suggestion is that public interest researchers would evaluate a platform’s behavioral data in a way that is not driven by an underlying for-profit business model. Warner’s privacy-centered proposals include granting the Federal Trade Commission rulemaking authority, adopting GDPR-like regulations recently implemented across the European Union states, and setting mandatory standards for algorithmic transparency (auditability and fairness). Repeating a theme in Warner’s white paper, Representatives Hurd and Kelly conclude that, even if AI technologies are immature, they have the potential to disrupt every sector of society in both anticipated and unanticipated ways.  In their “Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. 
Policy” report, the co-chairs of the House Oversight and Government Reform Committee make several observations and recommendations, including the need for political leadership from both Congress and the White House to achieve US global dominance in AI, the need for increased federal spending on AI research and development, means to address algorithmic accountability and transparency to remove bias in AI models, and examination of whether existing regulations can address public safety and consumer risks from AI.  The challenges facing society, the lawmakers found, include the potential for job loss due to automation, privacy, model bias, and malicious use of AI technologies. Separately, Representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL), in a September 13, 2018, letter, are requesting that the Director of National Intelligence provide Congress with a report on the spread of deep fakes (aka “hyper-realistic digital forgeries”), which they contend are allowing “malicious actors” to create depictions of individuals doing or saying things they never did, without those individuals’ consent or knowledge.  They want the intelligence agency’s report to cover everything from how foreign governments could use the technology to harm US national interests, to what counter-measures could be deployed to detect and deter actors from disseminating deep fakes, to whether the agency needs additional legal authority to combat the problem. In a September 17, 2018, letter to the Equal Employment Opportunity Commission, Senators Harris, Patty Murray (D-WA), and Elizabeth Warren (D-MA) ask the EEOC Director to address the potentially discriminatory impacts of facial analysis technologies in the enforcement of workplace anti-discrimination laws.  
As reported on this website and elsewhere, machine learning models behind facial recognition may perform poorly if they have been trained on data that is unrepresentative of the data the model sees in the wild.  For example, if the training data for a facial recognition model contains primarily white male faces, the model may perform well when it sees new white male faces, but poorly when it sees faces that are not white and male.  The Senators want to know if such technologies amplify bias against racial, gender, disadvantaged, and vulnerable groups, and they have asked the EEOC to develop guidelines for employers concerning fair uses of facial analysis technologies in the workplace. Also on September 17, 2018, Senators Harris, Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR) sent a similar letter to the Federal Trade Commission, expressing concerns that the bias in facial analysis technologies could be considered unfair or deceptive practices under the Federal Trade Commission Act.  Stating that “we cannot wait any longer to have a serious conversation about how we can create sound policy to address these concerns,” the Senators urge the FTC to commit to developing a set of best practices for the lawful, fair, and transparent use of facial analysis. Senators Harris and Booker, joined by Representative Cedric Richmond (D-LA), also sent a letter on September 17, 2018, to FBI Director Christopher Wray asking for the status of the FBI’s response to a 2016 Government Accountability Office (GAO) comprehensive report detailing the FBI’s use of face recognition technology. The increasing attention directed toward AI by individual federal lawmakers in 2018 may merely reflect the politics of the moment rather than signal a momentum shift toward substantive federal command and control-style regulations.  
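The representativeness problem described above, where an aggregate accuracy number masks poor performance on an underrepresented group, can be made concrete with a toy per-group evaluation. The predictions, labels, and group names below are entirely synthetic:

```python
# Illustrative sketch: evaluating a classifier per demographic group rather
# than in aggregate can reveal disparities an overall score hides.
# All data here is synthetic, invented purely for illustration.

def accuracy(triples):
    """Fraction of (prediction, ground_truth, group) triples that match."""
    return sum(pred == truth for pred, truth, _ in triples) / len(triples)

# (prediction, ground_truth, group) from a hypothetical face matcher
results = [
    ("match", "match", "A"), ("match", "match", "A"),
    ("no_match", "no_match", "A"), ("match", "match", "A"),
    ("match", "no_match", "B"), ("no_match", "match", "B"),
    ("match", "match", "B"), ("no_match", "no_match", "B"),
]

overall = accuracy(results)
by_group = {g: accuracy([r for r in results if r[2] == g]) for g in {"A", "B"}}
print(overall, by_group)  # aggregate 0.75 hides the 1.0 vs 0.5 gap
```

An employer or regulator looking only at the 75% aggregate score would miss that the hypothetical matcher is wrong half the time for group B.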
But as more states join those that have begun enacting, in the absence of federal rules, their own laws addressing AI technology use cases, federal action may inevitably follow, especially if more reports of malicious uses of AI, like election disinformation, reach more receptive ears in Congress.
  • Generative Adversarial Networks and the Rise of Fake Faces: an Intellectual Property Perspective
    The tremendous growth in the artificial intelligence (AI) sector over the last several years may be attributed in large part to the proliferation of so-called big data.  But even today, data sets of sufficient size and quality are not always available for certain applications.  That’s where a technology called generative adversarial networks (GANs) comes in.  GANs, which are neural networks comprising two separate networks (a generator and a discriminator network that face off against each other), are useful for creating new (“synthetic” or “fake”) data samples.  As a result, one of the hottest areas for AI research today involves GANs, their ever-growing use cases, and the tools to identify their fake samples in the wild.  Face image-generating GANs, in particular, have received much of the attention due to their ability to generate highly realistic faces. One of the notable features of face image-generating GANs is their ability to generate synthetic faces having particular attributes, such as desired eye and hair color, skin tone, gender, and a certain degree of “attractiveness,” among others, that by appearance are nearly indistinguishable from reality.  These fake designer face images can be combined (using feature vectors) to produce even more highly sculpted face images having custom genetic features.  A similar process using celebrity images can be used to generate fake images well-suited to targeted online or print advertisements and other purposes.  Imagine the face of someone selling you a product or service whose persona is customized to match your particular likes and dislikes (after all, market researchers know all about you) and bears a vague resemblance to a favorite athlete, historical figure, or celebrity.  Even though endorsements from family, friends, and celebrities are seen as the best way for companies to achieve high marketing conversion rates, a highly tailored GAN-generated face may one day rival those techniques. 
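The generator-versus-discriminator face-off described above can be illustrated with a deliberately tiny sketch. Real face-generating GANs train deep convolutional networks on images; here, purely for illustration, the generator is a linear map and the discriminator a logistic regression over 1-D samples, with hand-derived gradient updates, so the adversarial dynamic fits in a few lines (all constants below are invented for the sketch):

```python
# Toy 1-D GAN: a linear generator g(z) = a*z + b tries to imitate samples
# from N(4, 1), while a logistic discriminator D(x) = sigmoid(w*x + c)
# tries to tell real samples from generated ones. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.0          # the "real data" to imitate
a, b = 1.0, 0.0                          # generator parameters
w, c = 0.0, 0.0                          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    c += lr * np.mean((1 - p_real) - p_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating objective)
    p_fake = sigmoid(w * fake + c)
    grad_out = (1 - p_fake) * w          # d log D(fake) / d fake
    a += lr * np.mean(grad_out * z)
    b += lr * np.mean(grad_out)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(samples)), 2))  # should drift toward the real mean
```

With enough updates the generated mean should drift toward the real mean of 4, mimicking in miniature how a GAN’s generator learns to produce samples the discriminator cannot distinguish from real data.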
As previously discussed on this website, AI technologies involving any use of human face data, such as face detection, facial recognition, face swapping, deep fakes, and now synthetic face generation technologies, raise a number of legal (and ethical) issues.  Facial recognition (a type of regulated biometric information in some states), for example, has become a lightning rod for privacy-related laws and lawsuits.  Proponents of face image-generating GANs seem to recognize the potential legal risk posed by their technology when they argue that generating synthetic faces avoids copyright restrictions (an argument that at least implicitly acknowledges that data sets found online may contain copyrighted images scraped from the Internet).  But the copyright issue may not be so clear-cut in the case of GANs.  And even if copyrights are avoided, a GAN developer may face other potential legal issues, such as those involving publicity and privacy rights. Consider the following hypothetical: GAN Developer’s face image-generating model is used to create a synthetic persona with combined features from at least two well-known public figures: Celebrity and Athlete, who own their respective publicity rights, i.e., the right to control the use of their names and likenesses, which they control through their publicity, management, legal, and/or agency teams.  Advert Co. acquires the synthetic face image sample and uses it in a national print advertising campaign that appears in leading fitness, adventure, and style magazines.  All of the real celebrity, athlete, and other images used in GAN Developer’s discriminator network are the property of Image Co.  GAN Developer did not obtain permission to use Image Co.’s images, but it also did not retain the images after its model was fully developed and used to create the synthetic face image sample. 
Image Co., which asserts that it owns the exclusive right to copy, reproduce, and distribute the original real images and to make derivatives thereof, sues GAN Developer and Advert Co. for copyright infringement. As a possible defense, GAN Developer might argue that its temporary use of the original copyrighted images, which were not retained after their use, was a “fair use,” and both GAN Developer and Advert Co. might further argue that the synthetic face image is an entirely new work, it is a transformative use of the original images, and it is not a derivative of the originals. With regard to their fair use argument, the Copyright Act provides a non-exhaustive list of factors to consider in deciding whether the use of a copyrighted work was an excusable fair use: “(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.”  17 USC § 107.  Some of the many thoroughly-reasoned and well-cited court opinions concerning the fair use doctrine address its applicability to face images.  In just one example, a court granted summary judgment in favor of a defendant after finding that the defendant’s extracted outline features of a face from an online copyrighted photo of a mayor for use in opposition political ads was an excusable fair use.  Kienitz v. Sconnie Nation LLC, 766 F. 3d 756 (7th Cir. 2014).  Even so, no court has considered the specific fact pattern set forth in the above hypothetical involving GANs, so it remains to be seen how a court might apply the fair use doctrine in such circumstances. As for the other defenses, a derivative work is a work based on or derived from one or more already existing works.  Copyright Office Circular 14 at 1 (2013).  
A derivative work incorporates some or all of a preexisting work and adds new original copyrightable authorship to that work.  A derivative work generally involves transformation of the content of the preexisting work into an altered form, such as the translation of a novel into another language, the adaptation of a novel into a movie or play, the recasting of a novel as an e-book or an audiobook, or a t-shirt version of a print image.  See Authors Guild v. Google, Inc., 804 F. 3d 202, 215 (2nd Cir. 2015).  In the present hypothetical, a court might consider whether GAN Developer’s synthetic image sample is an altered form of Image Co.’s original Celebrity and Athlete images. With regard to the transformative use test, something is sufficiently transformative if it “adds something new, with a further purpose or different character, altering the first with new expression, meaning or message….” Campbell v. Acuff-Rose Music, Inc., 510 US 569, 579 (1994) (citing Leval, 103 Harv. L. Rev. at 1111). “[T]he more transformative the new work,” the more likely it may be viewed as a fair use of the original work. See id.  Thus, a court might consider whether GAN Developer’s synthetic image “is one that serves a new and different function from the original work and is not a substitute for it.”  Authors Guild, Inc. v. HathiTrust, 755 F. 3d 87, 96 (2nd Cir. 2014).  Depending on the “closeness” of the synthetic face to Celebrity’s and Athlete’s, whose features were used to design the synthetic face, a court might find that the new face is not a substitute for the originals, at least from a commercial perspective, and therefore is sufficiently transformative.  Again, no court has considered the hypothetical GAN fact pattern, so it remains to be seen how a court might apply the transformative use test in such circumstances. Even if GAN Developer and Advert Co. 
successfully navigate around the copyright infringement issues, they may not be entirely out of the liability woods.  Getting back to the hypothetical, they still may face one or both of Celebrity’s and Athlete’s claims for misappropriation of publicity rights.  Publicity rights often arise in connection with the use of a person’s name or likeness for advertising purposes.  New York courts, which have a long history of dealing with publicity rights issues, have found that “a name, portrait, or picture is used ‘for advertising purposes’ if it appears in a publication which, taken in its entirety, was distributed for use in, or as part of, an advertisement or solicitation for patronage of a particular product or service.” See Scott v. WorldStarHipHop, Inc., No. 10-cv-9538 (S.D.N.Y. 2012) (citing cases). Right of publicity laws in some states cover not only a person’s persona, but extend to the unauthorized use and exploitation of that person’s voice, sound-alike voice, signature, nicknames, first name, roles or characterizations performed by that person (i.e., celebrity roles), personal catchphrases, identity, and objects closely related to or associated with the persona (i.e., celebrities associated with particular goods).  See Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988) (finding advertiser liable for using sound-alike performers to approximate the vocal sound of actor Bette Midler); Waits v. Frito-Lay, Inc., 978 F.2d 1093 (9th Cir. 1992) (similar facts); Onassis v. Christian Dior, 122 Misc. 2d 603 (NY Supreme Ct. 1984) (finding advertiser liable for impermissibly misappropriating Jacqueline Kennedy Onassis’ identity for the purposes of trade and advertising where the picture used to establish that identity was that of look-alike model Barbara Reynolds); White v. Samsung Electronics Am., Inc., 971 F.2d 1395 (9th Cir. 
1992) (finding liability where defendant employed a robot that looked like and replicated the actions of Vanna White of “Wheel of Fortune” fame); Carson v. Here’s Johnny Portable Toilets, 698 F.2d 831 (6th Cir. 1983) (finding defendant liable where its advertisement associated its products with the well-known “Here’s Johnny” introduction of television personality Johnny Carson); Motschenbacher v. R.J. Reynolds Tobacco Co., 498 F.2d 921 (9th Cir. 1974) (finding defendant liable where its advertisement used a distinctive phrase and race car, and where the public could unequivocally relate the phrase and the car to the famous individual associated with the race car).  Some courts, however, have drawn the line in the case of fictional names, even those closely related to real names.  See Duncan v. Universal Music Group et al., No. 11-cv-5654 (E.D.N.Y. 2012). Thus, Advert Co. might argue that it did not misappropriate Celebrity’s and Athlete’s publicity rights for its own advantage because neither of their likenesses is generally apparent in the synthetic image.  Celebrity or Athlete might counter with evidence demonstrating that the image contains sufficient genetic features, such as eye shape, to make an observer think of them.  As some of the cases above suggest, a direct use of a name or likeness is not necessary for a finding of misappropriation of another’s persona. On the other hand, the burden of proof increases when identity is established by indirect means, such as through voice, association with objects, or, in the case of a synthetic face, a mere resemblance. A court might also hear additional arguments against misappropriation. Similar to the transformative use test under a fair use inquiry, Advert Co. might argue that its synthetic image adds significant creative elements such that the original images were transformed into something more than a mere likeness or imitation, or that its use of others’ likenesses was merely incidental (5 J. 
Thomas McCarthy, McCarthy on Trademarks and Unfair Competition § 28:7.50 (4th ed. 2014) (“The mere trivial or fleeting use of a person’s name or image in an advertisement will not trigger liability when such a usage will have only a de minimis commercial implication.”)). Other arguments that might be raised include a First Amendment defense and perhaps a novel argument that output from a GAN model cannot constitute misappropriation because, at its core, the model simply learns for itself what features of an image’s pixel values are most useful for the purpose of characterizing images of human faces, and thus neither the model nor GAN Developer had awareness of a real person’s physical features when generating a fake face.  But see In re Facebook Biometric Information Privacy Litigation, slip op. (Dkt. 302), No. 3:15-cv-03747-JD (N.D. Cal. May 14, 2018) (finding unpersuasive a “learning” by artificial intelligence argument in the context of facial recognition) (more on this case here). This post barely touches the surface of some of the legal issues and types of evidence that might arise in a situation like the above GAN hypothetical.  One can imagine all sorts of other possible scenarios involving synthetic face images and their potential legal risks that GAN developers and others might confront. For more information about one online image data set, visit ImageNet; for an overview of GANs, see these slides (by GANs innovator Ian Goodfellow and others), this tutorial video (at the 51:00 mark), and this ICLR 2018 conference paper by NVIDIA. Read more »
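As an aside on the technology behind the GAN "learning" argument above: a GAN trains a generator and a discriminator against each other, each taking gradient steps against the other's current parameters. The following is a minimal, hedged sketch of that adversarial game on made-up one-dimensional "data" (real GANs use deep networks over image pixels; every number and name here is invented for illustration).

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: scalars near 4.0 (a stand-in for real face images).
real = [random.gauss(4.0, 0.5) for _ in range(64)]

# Generator g(z) = w_g * z + b_g; discriminator D(x) = sigmoid(w_d * x + b_d).
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr = 0.05

for _ in range(200):
    z = [random.gauss(0.0, 1.0) for _ in range(64)]
    fake = [w_g * zi + b_g for zi in z]

    # Discriminator step: one gradient step on binary cross-entropy,
    # pushing D(real) toward 1 and D(fake) toward 0.
    gw = sum((sigmoid(w_d * x + b_d) - 1) * x for x in real) / len(real) \
       + sum(sigmoid(w_d * x + b_d) * x for x in fake) / len(fake)
    gb = sum(sigmoid(w_d * x + b_d) - 1 for x in real) / len(real) \
       + sum(sigmoid(w_d * x + b_d) for x in fake) / len(fake)
    w_d -= lr * gw
    b_d -= lr * gb

    # Generator step: non-saturating loss, pushing D(fake) toward 1
    # (i.e., trying to fool the freshly updated discriminator).
    gwg = sum((sigmoid(w_d * (w_g * zi + b_g) + b_d) - 1) * w_d * zi for zi in z) / len(z)
    gbg = sum((sigmoid(w_d * (w_g * zi + b_g) + b_d) - 1) * w_d for zi in z) / len(z)
    w_g -= lr * gwg
    b_g -= lr * gbg

print(f"generator offset after training: {b_g:.2f}")
```

After a few hundred alternating steps the generator's outputs drift toward the real data; the same game scaled up to convolutional networks over pixels is what produces synthetic faces, with no real person's features ever stored explicitly.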
  • Will “Leaky” Machine Learning Usher in a New Wave of Lawsuits?
A computer science professor at Cornell University has a new twist on Marc Andreessen’s 2011 pronouncement that software is “eating the world.”  According to Vitaly Shmatikov, it is “machine learning [that] is eating the world” today.  The personification is apt: machine learning and other applications of artificial intelligence are disrupting society at a rate that shows little sign of leveling off.  With increasing numbers of companies and individual developers producing customer-facing AI systems, it seems all but inevitable that some of those systems will create unintended and unforeseen consequences, including harm to individuals and society at large.  Researchers like Shmatikov and his colleagues are starting to reveal those consequences, including one (“leaky” machine learning models) that could have serious legal implications. This post explores the causes of action that might be asserted against a developer who publishes a leaky machine learning model, either directly or via a machine learning as a service (MLaaS) cloud platform, along with possible defenses, using the lessons of cybersecurity litigation as a jumping-off point. Over the last decade or more, the plaintiffs bar and the defendants bar have contributed to a body of case law now commonly referred to as cybersecurity law.  This was inevitable, given the estimated 8,000 data breaches involving 11 billion data records made public since 2005. After some well-publicized breaches, lawsuits against companies that reported data thefts began appearing more frequently on court dockets across the country.  Law firms responded by marketing “cybersecurity” practice groups whose attorneys advised clients about managing risks associated with data security and the aftermath of data exfiltrations by cybercriminals.  
Today, with an estimated 70 percent of all data being generated by individuals (often related to those individuals’ activities), and with organizations globally expected to lose over 146 billion more data records between 2018 and 2023 if current cybersecurity tools are not improved (Juniper Research), the number of cybersecurity lawsuits is not expected to level off anytime soon. While data exfiltration lawsuits may be the most prevalent type of cybersecurity lawsuit today, the plaintiffs bar has begun targeting other cyber issues, such as ransomware attacks, especially those affecting healthcare facilities (in ransomware cases, malicious software freezes an organization’s computer systems until a ransom is paid; while frozen, a business may not be able to effectively deliver critical services to customers).  The same litigators who have expanded into ransomware may soon turn their attention to a new kind of cyber-like “breach”: the so-called leaky machine learning models built on thousands of personal data records. In their research, sponsored in part by the National Science Foundation (NSF) and Google, Shmatikov and his colleagues in early 2017 “uncovered multiple privacy and integrity problems in today’s [machine learning] pipelines” that could be exploited by adversaries to infer if a particular person’s data record was used to train machine learning models.  See R. Shokri et al., Membership Inference Attacks Against Machine Learning Models, Proceedings of the 38th IEEE Symposium on Security and Privacy (2017). They describe a health care machine learning model that could reveal to an adversary whether or not a certain patient’s data record was part of the model’s training data.  In another example, a different model trained on location and other data, used to categorize mobile users based on their movement patterns, was found to reveal by way of query whether a particular user’s location data was used. 
These scenarios certainly raise alarms from a privacy perspective, and one can imagine other possible instances of machine learning models revealing the kind of personal information to an attacker that might cause harm to individuals.  While actual user data may not be revealed in these attacks, the mere inference that a person’s data record was included in a data set used to train a model, what Shmatikov and previous researchers refer to as “membership inference,” could cause that person (and the thousands of others whose data records were used) embarrassment and other consequences. Assuming for the sake of argument that a membership inference disclosure of the kind described above becomes legally actionable, it is instructive to consider what businesses facing membership inference lawsuits might expect in terms of statutory and common law causes of action so they can take steps to mitigate problems and avoid contributing more cyber lawsuits to already busy court dockets (and of course avoid leaking confidential and private information).  These causes of action could include invasion of privacy, consumer protection laws, unfair trade practices, negligence, negligent misrepresentation, innocent misrepresentation, negligent omission, breach of warranty, and emotional distress, among others.  See, e.g., Sony Gaming Networks & Cust. Data Sec. Breach Lit., 996 F.Supp. 2d 942 (S.D. Cal. 2014) (evaluating data exfiltration causes of action). Negligence might be alleged, as it often is in cybersecurity cases, if plaintiff (or class action members) can establish evidence of the following four elements: the existence of a legal duty; breach of that duty; causation; and cognizable injury.  
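The membership inference idea described above can be sketched in a few lines of code. This is a deliberately simplified illustration rather than the attack from the Shokri paper: the "model" below is a hypothetical toy that memorizes its training records outright, which makes the confidence gap an attacker exploits easy to see (real attacks use shadow models to learn where to set the threshold).

```python
# Toy membership inference sketch. All records and names are made up.

def train_model(training_records):
    """An extreme overfitter: it memorizes its training data outright."""
    memorized = set(training_records)

    def predict(record):
        # Confidence is high for records seen in training, lower otherwise.
        # This confidence gap is exactly what membership inference exploits.
        return 0.99 if record in memorized else 0.55

    return predict

def infer_membership(model, record, threshold=0.9):
    """The attacker's side: query the model and threshold its confidence."""
    return model(record) >= threshold

# Hypothetical training set of (patient id, feature) records.
train_set = [("patient_a", 42), ("patient_b", 37)]
model = train_model(train_set)

print(infer_membership(model, ("patient_a", 42)))  # True  -> likely a member
print(infer_membership(model, ("patient_z", 99)))  # False -> likely not
```

The legal exposure discussed below turns on precisely this kind of query-only disclosure: the attacker never sees the training data itself, yet learns that a specific person's record was in it.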
Liability might arise where defendant failed to properly safeguard and protect private personal information from unauthorized access, use, and disclosure, where such use and disclosure caused actual money or property loss or the loss of a legally-protected interest in the confidentiality and privacy of plaintiff’s/members’ personal information. Misrepresentation might be alleged if plaintiff/members can establish evidence of a misrepresentation upon which they relied and a pecuniary loss resulting from reliance on the actionable misrepresentation. Liability under such a claim could arise if, for example, plaintiff’s data record has monetary value and a company makes representations about its security and data protection measures in user agreements, terms of service, and/or privacy policies that turn out to be in error (for example, the company’s measures lack robustness and do not prevent an attack on a model that is found to be leaky).  In some cases, actual reliance on statements or omissions may need to be alleged. State consumer protection laws might also be alleged if plaintiff/members can establish (depending on which state law applies) deceptive misrepresentations or omissions regarding the standard, quality, or grade of a particular good or service that causes harm, such as those that mislead plaintiff/members into believing that their personal private information would be safe upon transmission to defendant when defendant knew of vulnerabilities in its data security systems. Liability could arise where defendant was deceptive in omitting notice that its machine learning model could reveal to an attacker the fact that plaintiff’s/members’ data record was used to train the model. In certain situations, plaintiff/members might have to allege with particularity the specific time, place, and content of the misrepresentation or omission if the allegations are based in fraud. 
For their part, defendants in membership inference cases might challenge plaintiff’s/members’ lawsuit on a number of fronts.  As an initial tactic, defendants might challenge plaintiff’s/members’ standing on the basis of failing to establish an actual injury caused by the disclosure (inference) of a data record used to train a machine learning model.  See In re Science App. Intern. Corp. Backup Tape Data, 45 F. Supp. 3d 14 (D.D.C. 2014) (considering “when, exactly, the loss or theft of something as abstract as data becomes a concrete injury”). Defendants might also challenge plaintiff’s/members’ assertions that an injury is imminent or certainly impending.  In data breach cases, defendants might rely on state court decisions that denied standing where injury from a mere potential risk of future identity theft resulting from the loss of personal information was not recognized, which might also apply in a membership inference case. Defendants might also question whether permission and/or consent was given by plaintiffs/members for the collection, storage, and use of personal data records.  This inquiry would likely involve plaintiff’s/members’ awareness and acceptance of membership risks when they allowed their data to be used to train a machine learning model.  Defendants would likely examine whether the permission/consent given extended to and was commensurate in scope with the uses of the data records by defendant or others. Defendants might also consider applicable agreements related to a user’s data records that limited plaintiff’s/members’ choice of forum and which state laws apply, which could affect pleading and proof burdens.  Defendants might rely on language in terms of service and other agreements that provide notice of the possibility of external attacks and the risks of leaks and membership inference.  Many other challenges to a plaintiff’s/members’ allegations could also be explored. 
Apart from challenging causes of action on the merits, companies should also consider taking other measures like those used by companies in traditional data exfiltration cases.  These might include proactively testing their systems (in the case of machine learning models, testing for leakage) and implementing procedures to provide notice of a leaky model.  As Shmatikov and his colleagues suggest, machine learning model developers and MLaaS providers should take into account the risk that their models will leak information about their training data, warn customers about this risk, and “provide more visibility into the model and the methods that can be used to reduce this leakage.”  Machine learning companies should account for foreseeable risks and associated consequences and assess whether they are acceptable compared to the benefits received from their models. If data exfiltration, ransomware, and related cybersecurity litigation are any indication, the plaintiffs bar may one day turn its attention to the leaky machine learning problem.  If machine learning model developers and MLaaS providers want to avoid such attention and the possibility of litigation, they should not delay taking reasonable steps to mitigate the leaky machine learning model problem. Read more »
  • Trump Signs John S. McCain National Defense Authorization Act, Provides Funds for Artificial Intelligence Technologies
    By signing into law the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (H.R.5515; Public Law No: 115-232; Aug. 13, 2018), the Trump Administration has established a strategy for major new national defense and national security-related initiatives involving artificial intelligence (AI) technologies.  Some of the law’s $717 billion spending authorization for fiscal year 2019 includes proposed funding to assess the current state of AI and deploy AI across the Department of Defense (DOD).  The law also recognizes that fundamental AI research is still needed within the tech-heavy military services.  The law encourages coordination between DOD activities and private industry at a time when some Silicon Valley companies are being pressured by their employees to stop engaging with DOD and other government agencies in AI. In Section 238 of the law, the Secretary of Defense is to lead “Joint Artificial Intelligence Research, Development, and Transition Activities” to include developing a set of activities within the DOD involving efforts to develop, mature, and transition AI technologies into operational use.  In Section 1051 of the law, an independent “National Security Commission on Artificial Intelligence” is to be established within the Executive Branch to review advances in AI and associated technologies, with a focus on machine learning (ML). The Commission’s mandate is to review methods and means necessary to advance the development of AI and associated technologies by the US to comprehensively address US national security and defense needs.  The Commission is to review the competitiveness of the US in AI/ML and associated technologies. “Artificial Intelligence” is defined broadly in Sec. 
238 to include the following: (1) any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets; (2) an artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; (3) an artificial system designed to think or act like a human, including cognitive architectures and neural networks; (4) a set of techniques, including machine learning, that is designed to approximate a cognitive task; and (5) an artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.  Section 1051 has a similar definition. The law does not overlook the need for governance of AI development activities, and requires regular meetings of appropriate DOD officials to integrate the functional activities of organizations and elements with respect to AI; ensure there are efficient and effective AI capabilities throughout the DOD; and develop and continuously improve research, innovation, policy, joint processes, and procedures to facilitate the development, acquisition, integration, advancement, oversight, and sustainment of AI throughout the DOD.  The DOD is also tasked with studying AI to make recommendations for legislative action relating to the technology, including recommendations to more effectively fund and organize the DOD in areas of AI. For further details, please see this earlier post. Read more »

Computational Intelligence

  • Complex & Intelligent Systems, Volume 5, Issue 1, March 2019
    1. Rare pattern mining: challenges and future perspectives. Author(s): Anindita Borah, Bhabesh Nath. Pages: 1-23
    2. LSHADE-SPA memetic framework for solving large-scale optimization problems. Author(s): Anas A. Hadi, Ali W. Mohamed, Kamal M. Jambi. Pages: 25-40
    3. Interval-valued Pythagorean fuzzy Einstein hybrid weighted averaging aggregation operator and their application to group decision making. Author(s): Khaista Rahman, Saleem Abdullah, Asad Ali, Fazli Amin. Pages: 41-52
    4. Evaluation of firms applying to Malcolm Baldrige National Quality Award: a modified fuzzy AHP method. Author(s): Serhat Aydın, Cengiz Kahraman. Pages: 53-63
    5. Intuitionistic trapezoidal fuzzy multi-numbers and its application to multi-criteria decision-making problems. Author(s): Vakkas Uluçay, Irfan Deli, Mehmet Şahin. Pages: 65-78
    6. Controlling disturbances of islanding in a gas power plant via fuzzy-based neural network approach with a focus on load-shedding system. Author(s): M. Moloudi, A. H. Mazinan. Pages: 79-89
    Read more »
  • Soft Computing, Volume 23, Issue 6, March 2019
    1. Editorial to image processing with soft computing techniques. Author(s): Irina Perfilieva, Javier Montero, Salvatore Sessa. Pages: 1777-1778
    2. Hyperspectral imaging using notions from type-2 fuzzy sets. Author(s): A. Lopez-Maestresalas, L. De Miguel, C. Lopez-Molina, S. Arazuri. Pages: 1779-1793
    3. Sensitivity analysis for image represented by fuzzy function. Author(s): Petr Hurtik, Nicolás Madrid, Martin Dyba. Pages: 1795-1807
    4. A new edge detection method based on global evaluation using fuzzy clustering. Author(s): Pablo A. Flores-Vidal, Pablo Olaso, Daniel Gómez, Carely Guada. Pages: 1809-1821
    5. An image segmentation technique using nonsubsampled contourlet transform and active contours. Author(s): Lingling Fang. Pages: 1823-1832
    6. Total variation with nonlocal FT-Laplacian for patch-based inpainting. Author(s): Irina Perfilieva, Pavel Vlašánek. Pages: 1833-1841
    7. Biometric recognition using finger and palm vein images. Author(s): S Bharathi, R Sudhakar. Pages: 1843-1855
    8. Design of meteorological pattern classification system based on FCM-based radial basis function neural networks using meteorological radar data. Author(s): Eun-Hu Kim, Jun-Hyun Ko, Sung-Kwun Oh, Kisung Seo. Pages: 1857-1872
    9. Zadeh max–min composition fuzzy rule for dominated pixel values in iris localization. Author(s): S. G. Gino Sophia, V. Ceronmani Sharmila. Pages: 1873-1889
    10. Fusion linear representation-based classification. Author(s): Zhonghua Liu, Guosen Xie, Lin Zhang, Jiexin Pu. Pages: 1891-1899
    11. A novel optimization algorithm for recommender system using modified fuzzy c-means clustering approach. Author(s): C. Selvi, E. Sivasankar. Pages: 1901-1916
    12. Energy-aware virtual machine allocation and selection in cloud data centers. Author(s): V. Dinesh Reddy, G. R. Gangadharan, G. Subrahmanya V. R. K. Rao. Pages: 1917-1932
    13. An extensive evaluation of search-based software testing: a review. Author(s): Manju Khari, Prabhat Kumar. Pages: 1933-1946
    14. A parallel hybrid optimization algorithm for some network design problems. Author(s): Ibrahima Diarrassouba, Mohamed Khalil Labidi, A. Ridha Mahjoub. Pages: 1947-1964
    15. Structure evolution-based design for low-pass IIR digital filters with the sharp transition band and the linear phase passband. Author(s): Lijia Chen, Mingguo Liu, Jing Wu, Jianfeng Yang, Zhen Dai. Pages: 1965-1984
    16. A new approach to construct similarity measure for intuitionistic fuzzy sets. Author(s): Yafei Song, Xiaodan Wang, Wen Quan, Wenlong Huang. Pages: 1985-1998
    17. Similarity measures of generalized trapezoidal fuzzy numbers for fault diagnosis. Author(s): Jianjun Xie, Wenyi Zeng, Junhong Li, Qian Yin. Pages: 1999-2014
    18. Discussing incomplete 2-tuple fuzzy linguistic preference relations in multi-granular linguistic MCGDM with unknown weight information. Author(s): Xue-yang Zhang, Hong-yu Zhang, Jian-qiang Wang. Pages: 2015-2032
    19. A hybrid biogeography-based optimization and fuzzy C-means algorithm for image segmentation. Author(s): Minxia Zhang, Weixuan Jiang, Xiaohan Zhou, Yu Xue, Shengyong Chen. Pages: 2033-2046
    20. Vibration control of a structure using sliding-mode hedge-algebras-based controller. Author(s): Duc-Trung Tran, Van-Binh Bui, Tung-Anh Le, Hai-Le Bui. Pages: 2047-2059
    21. Efficient obfuscation for CNF circuits and applications in cloud computing. Author(s): Huang Zhang, Fangguo Zhang, Rong Cheng, Haibo Tian. Pages: 2061-2072
    22. Modeling and comparison of the series systems with imperfect coverage for an unreliable server. Author(s): Ching-Chang Kuo, Jau-Chuan Ke. Pages: 2073-2082
    23. Gravitational search algorithm and K-means for simultaneous feature selection and data clustering: a multi-objective approach. Author(s): Jay Prakash, Pramod Kumar Singh. Pages: 2083-2100
    24. Towards efficient privacy-preserving encrypted image search in cloud computing. Author(s): Yuan Wang, Meixia Miao, Jian Shen, Jianfeng Wang. Pages: 2101-2112
    25. Complete image fusion method based on fuzzy transforms. Author(s): Ferdinando Di Martino, Salvatore Sessa. Pages: 2113-2123
    Read more »
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 30, Issue 3, March 2019
    1. NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps. Author(s): Alessandro Aimar; Hesham Mostafa; Enrico Calabrese; Antonio Rios-Navarro; Ricardo Tapiador-Morales; Iulia-Alexandra Lungu; Moritz B. Milde; Federico Corradi; Alejandro Linares-Barranco; Shih-Chii Liu; Tobi Delbruck. Pages: 644-656
    2. Robust Dimension Reduction for Clustering With Local Adaptive Learning. Author(s): Xiao-Dong Wang; Rung-Ching Chen; Zhi-Qiang Zeng; Chao-Qun Hong; Fei Yan. Pages: 657-669
    3. Learning of a Decision-Maker’s Preference Zone With an Evolutionary Approach. Author(s): Manish Aggarwal. Pages: 670-682
    4. Fine-Grained Image Classification Using Modified DCNNs Trained by Cascaded Softmax and Generalized Large-Margin Losses. Author(s): Weiwei Shi; Yihong Gong; Xiaoyu Tao; De Cheng; Nanning Zheng. Pages: 683-694
    5. Distributed Generalized Nash Equilibrium Seeking Algorithm Design for Aggregative Games Over Weight-Balanced Digraphs. Author(s): Zhenhua Deng; Xiaohong Nian. Pages: 695-706
    6. DBSDA: Lowering the Bound of Misclassification Rate for Sparse Linear Discriminant Analysis via Model Debiasing. Author(s): Haoyi Xiong; Wei Cheng; Jiang Bian; Wenqing Hu; Zeyi Sun; Zhishan Guo. Pages: 707-717
    7. Modulation Classification Based on Signal Constellation Diagrams and Deep Learning. Author(s): Shengliang Peng; Hanyu Jiang; Huaxia Wang; Hathal Alwageed; Yu Zhou; Marjan Mazrouei Sebdani; Yu-Dong Yao. Pages: 718-727
    8. ICFS Clustering With Multiple Representatives for Large Data. Author(s): Liang Zhao; Zhikui Chen; Yi Yang; Liang Zou; Z. Jane Wang. Pages: 728-738
    9. Exponential Stabilization of Fuzzy Memristive Neural Networks With Hybrid Unbounded Time-Varying Delays. Author(s): Yin Sheng; Frank L. Lewis; Zhigang Zeng. Pages: 739-750
    10. A Fused CP Factorization Method for Incomplete Tensors. Author(s): Yuankai Wu; Huachun Tan; Yong Li; Jian Zhang; Xiaoxuan Chen. Pages: 751-764
    11. Indefinite Kernel Logistic Regression With Concave-Inexact-Convex Procedure. Author(s): Fanghui Liu; Xiaolin Huang; Chen Gong; Jie Yang; Johan A. K. Suykens. Pages: 765-776
    12. Robot Learning System Based on Adaptive Neural Control and Dynamic Movement Primitives. Author(s): Chenguang Yang; Chuize Chen; Wei He; Rongxin Cui; Zhijun Li. Pages: 777-787
    13. Discriminative Feature Selection via Employing Smooth and Robust Hinge Loss. Author(s): Hanyang Peng; Cheng-Lin Liu. Pages: 788-802
    14. A Fast and Accurate Matrix Completion Method Based on QR Decomposition and L2,1 Norm Minimization. Author(s): Qing Liu; Franck Davoine; Jian Yang; Ying Cui; Zhong Jin; Fei Han. Pages: 803-817
    15. Local Restricted Convolutional Neural Network for Change Detection in Polarimetric SAR Images. Author(s): Fang Liu; Licheng Jiao; Xu Tang; Shuyuan Yang; Wenping Ma; Biao Hou. Pages: 818-833
    16. Cost-Effective Object Detection: Active Sample Mining With Switchable Selection Criteria. Author(s): Keze Wang; Liang Lin; Xiaopeng Yan; Ziliang Chen; Dongyu Zhang; Lei Zhang. Pages: 834-850
    17. Multiview Subspace Clustering via Tensorial t-Product Representation. Author(s): Ming Yin; Junbin Gao; Shengli Xie; Yi Guo. Pages: 851-864
    18. Exploring Self-Repair in a Coupled Spiking Astrocyte Neural Network. Author(s): Junxiu Liu; Liam J. Mcdaid; Jim Harkin; Shvan Karim; Anju P. Johnson; Alan G. Millard; James Hilder; David M. Halliday; Andy M. Tyrrell; Jon Timmis. Pages: 865-875
    19. Fast and Accurate Hierarchical Clustering Based on Growing Multilayer Topology Training. Author(s): Yiu-ming Cheung; Yiqun Zhang. Pages: 876-890
    20. General Square-Pattern Discretization Formulas via Second-Order Derivative Elimination for Zeroing Neural Network Illustrated by Future Optimization. Author(s): Jian Li; Yunong Zhang; Mingzhi Mao. Pages: 891-901
    21. Optimal Control of Propagating Fronts by Using Level Set Methods and Neural Approximations. Author(s): Angelo Alessandri; Patrizia Bagnerini; Mauro Gaggero. Pages: 902-912
    22. Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach. Author(s): Ramasamy Saravanakumar; Hyung Soo Kang; Choon Ki Ahn; Xiaojie Su; Hamid Reza Karimi. Pages: 913-922
    23. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures. Author(s): Mohammadreza Mohaghegh Neyshabouri; Kaan Gokcesu; Hakan Gokcesu; Huseyin Ozkan; Suleyman Serdar Kozat. Pages: 923-937
    24. Adaptive Optimal Output Regulation of Time-Delay Systems via Measurement Feedback. Author(s): Weinan Gao; Zhong-Ping Jiang. Pages: 938-945
    25. Unsupervised Knowledge Transfer Using Similarity Embeddings. Author(s): Nikolaos Passalis; Anastasios Tefas. Pages: 946-950
    26. Synchronization of Coupled Markovian Reaction–Diffusion Neural Networks With Proportional Delays Via Quantized Control. Author(s): Xinsong Yang; Qiang Song; Jinde Cao; Jianquan Lu. Pages: 951-958
    27. Stepsize Range and Optimal Value for Taylor–Zhang Discretization Formula Applied to Zeroing Neurodynamics Illustrated via Future Equality-Constrained Quadratic Programming. Author(s): Yunong Zhang; Huihui Gong; Min Yang; Jian Li; Xuyun Yang. Pages: 959-966
    Read more »
  • IEEE Transactions on Fuzzy Systems, Volume 27, Issue 1, Jan. 2019
    1) A Nested Tensor Product Model Transformation. Author(s): Y Yu, Z Li, X Liu, K Hirota, X Chen, T Fernando, H H C Lu. Pages: 1-15
    2) High-order Intuitionistic Fuzzy Cognitive Map Based on Evidential Reasoning Theory. Author(s): Y Zhang, J Qin, P Shi, Y Kang. Pages: 16-30
    3) Joint Learning of Spectral Clustering Structure and Fuzzy Similarity Matrix of Data. Author(s): Z Bian, H Ishibuchi, S Wang. Pages: 31-44
    4) Fuzzy Optimal Energy Management for Fuel Cell and Supercapacitor Systems Using Neural Network Based Driving Pattern Recognition. Author(s): R Zhang, J Tao, H Zhou. Pages: 45-57
    5) Comparing the Performance Potentials of Interval and General Type-2 Rule-Based Fuzzy Systems in Terms of Sculpting the State Space. Author(s): J M Mendel. Pages: 58-71
    6) LDS-FCM: A Linear Dynamical System Based Fuzzy C-Means Method for Tactile Recognition. Author(s): C Liu, W Huang, F Sun, M Luo, C Tan. Pages: 72-83
    7) Improving Risk Evaluation in FMEA With Cloud Model and Hierarchical TOPSIS Method. Author(s): H-C Liu, L-E Wang, Z-W Li, Y-P Hu. Pages: 84-95
    8) Finite-Time Adaptive Fuzzy Output Feedback Dynamic Surface Control for MIMO Nonstrict Feedback Systems. Author(s): Y Li, K Li, S Tong. Pages: 96-110
    9) BPEC: Belief-Peaks Evidential Clustering. Author(s): Z-G Su, T Denoeux. Pages: 111-123
    10) Improving the Performance of Fuzzy Rule-Based Classification Systems Based on a Nonaveraging Generalization of CC-Integrals Named CF1F2-Integrals. Author(s): G Lucca, G P Dimuro, J Fernandez, H Bustince, B Bedregal, J A Sanz. Pages: 124-134
    11) The Negation of a Basic Probability Assignment. Author(s): L Yin, X Deng, Y Deng. Pages: 135-143
    12) Event Triggered Adaptive Fuzzy Consensus for Interconnected Switched Multiagent Systems. Author(s): S Zheng, P Shi, S Wang, Y Shi. Pages: 144-158
    13) Alternative Ranking-Based Clustering and Reliability Index-Based Consensus Reaching Process for Hesitant Fuzzy Large Scale Group Decision Making. Author(s): X Liu, Y Xu, R Montes, R-X Ding, F Herrera. Pages: 159-171
    14) Fuzzy Adaptive Finite-Time Control Design for Nontriangular Stochastic Nonlinear Systems. Author(s): S Sui, C L P Chen, S Tong. Pages: 172-184
    15) Deviation-Sparse Fuzzy C-Means With Neighbor Information Constraint. Author(s): Y Zhang, X Bai, R Fan, Z Wang. Pages: 185-199
    16) Sampled-Data Adaptive Output Feedback Fuzzy Stabilization for Switched Nonlinear Systems With Asynchronous Switching. Author(s): S Li, C K Ahn, Z Xiang. Pages: 200-205
    Read more »
  • Soft Computing, Volume 23, Issue 5, March 2019
    Special issue on Theory and Practice of Natural Computing: Fifth Edition
    1) Theory and practice of natural computing: fifth edition. Author(s): Carlos Martín-Vide, Miguel A. Vega-Rodríguez. Pages: 1421
    2) A multi-objective evolutionary approach to Pareto-optimal model trees. Author(s): Marcin Czajkowski, Marek Kretowski. Pages: 1423-1437
    3) Fuel-efficient truck platooning by a novel meta-heuristic inspired from ant colony optimisation. Author(s): Abtin Nourmohammadzadeh, Sven Hartmann. Pages: 1439-1452
    4) A heuristic survivable virtual network mapping algorithm. Author(s): Xiangwei Zheng, Jie Tian, Xiancui Xiao, Xinchun Cui, Xiaomei Yu. Pages: 1453-1463
    5) A semiring-like representation of lattice pseudoeffect algebras. Author(s): Ivan Chajda, Davide Fazio, Antonio Ledda. Pages: 1465-1475
    6) Twenty years of Soft Computing: a bibliometric overview. Author(s): José M. Merigó, Manuel J. Cobo, Sigifredo Laengle, Daniela Rivas. Pages: 1477-1497
    7) Monadic pseudo BCI-algebras and corresponding logics. Author(s): Xiaolong Xin, Yulong Fu, Yanyan Lai, Juntao Wang. Pages: 1499-1510
    8) Group decision making with compatibility measures of hesitant fuzzy linguistic preference relations. Author(s): Xunjie Gou, Zeshui Xu, Huchang Liao. Pages: 1511-1527
    9) Semi-multifractal optimization algorithm. Author(s): Ireneusz Gosciniak. Pages: 1529-1539
    10) Time series interval forecast using GM(1,1) and NGBM(1, 1) models. Author(s): Ying-Yuan Chen, Hao-Tien Liu, Hsiow-Ling Hsieh. Pages: 1541-1555
    11) Tri-partition cost-sensitive active learning through kNN. Author(s): Fan Min, Fu-Lun Liu, Liu-Ying Wen, Zhi-Heng Zhang. Pages: 1557-1572
    12) New ranked set sampling schemes for range charts limits under bivariate skewed distributions. Author(s): Derya Karagöz, Nursel Koyuncu. Pages: 1573-1587
    13) Gist: general integrated summarization of text and reviews. Author(s): Justin Lovinger, Iren Valova, Chad Clough. Pages: 1589-1601
    14) Rough fuzzy bipolar soft sets and application in decision-making problems. Author(s): Nosheen Malik, Muhammad Shabir. Pages: 1603-1614
    15) Differential evolution with Gaussian mutation and dynamic parameter adjustment. Author(s): Gaoji Sun, Yanfei Lan, Ruiqing Zhao. Pages: 1615-1642
    16) Using Covariance Matrix Adaptation Evolution Strategies for solving different types of differential equations. Author(s): Jose M. Chaquet, Enrique J. Carmona. Pages: 1643-1666
    17) A direct solution approach based on constrained fuzzy arithmetic and metaheuristic for fuzzy transportation problems. Author(s): Adil Baykasoğlu, Kemal Subulan. Pages: 1667-1698
    18) An efficient hybrid algorithm based on Water Cycle and Moth-Flame Optimization algorithms for solving numerical and constrained engineering optimization problems. Author(s): Soheyl Khalilpourazari, Saman Khalilpourazary. Pages: 1699-1722
    19) An evolutionary approach to constrained path planning of an autonomous surface vehicle for maximizing the covered area of Ypacarai Lake. Author(s): Mario Arzamendia, Derlis Gregor, Daniel Gutierrez Reina. Pages: 1723-1734
    20) A multi-key SMC protocol and multi-key FHE based on some-are-errorless LWE. Author(s): Huiyong Wang, Yong Feng, Yong Ding, Shijie Tang. Pages: 1735-1744
    21) Smart PSO-based secured scheduling approaches for scientific workflows in cloud computing. Author(s): J. Angela Jennifa Sujana, T. Revathi, T. S. Siva Priya, K. Muneeswaran. Pages: 1745-1762
    22) ARD-PRED: an in silico tool for predicting age-related-disorder-associated proteins. Author(s): Kirti Bhadhadhara, Yasha Hasija. Pages: 1767-1776
    Read more »
  • Soft Computing, Volume 23, Issue 4, February 2019
    1) Direct limits of generalized pseudo-effect algebras with the Riesz decomposition properties. Author(s): Yanan Guo, Yongjian Xie. Pages: 1071-1078
    2) Geometric structure information based multi-objective function to increase fuzzy clustering performance with artificial and real-life data. Author(s): M. M. Gowthul Alam, S. Baulkani. Pages: 1079-1098
    3) A Gould-type integral of fuzzy functions II. Author(s): Alina Gavriluţ, Alina Iosif. Pages: 1099-1107
    4) Sorting of decision-making methods based on their outcomes using dominance-vector hesitant fuzzy-based distance. Author(s): Bahram Farhadinia, Enrique Herrera-Viedma. Pages: 1109-1121
    5) On characterization of fuzzy tree pushdown automata. Author(s): M. Ghorani. Pages: 1123-1131
    6) BIAM: a new bio-inspired analysis methodology for digital ecosystems based on a scale-free architecture. Author(s): Vincenzo Conti, Simone Sante Ruffo, Salvatore Vitabile, Leonard Barolli. Pages: 1133-1150
    7) Self-feedback differential evolution adapting to fitness landscape characteristics. Author(s): Wei Li, Shanni Li, Zhangxin Chen, Liang Zhong, Chengtian Ouyang. Pages: 1151-1163
    8) Mining stock category association on Tehran stock market. Author(s): Zahra Hoseyni Masum. Pages: 1165-1177
    9) A comparative study of optimization models in genetic programming-based rule extraction problems. Author(s): Marconi de Arruda Pereira, Eduardo Gontijo Carrano… Pages: 1179-1197
    10) Distributed task allocation in multi-agent environments using cellular learning automata. Author(s): Maryam Khani, Ali Ahmadi, Hajar Hajary. Pages: 1199-1218
    11) MOEA3D: a MOEA based on dominance and decomposition with probability distribution model. Author(s): Ziyu Hu, Jingming Yang, Huihui Cui, Lixin Wei, Rui Fan. Pages: 1219-1237
    12) CCODM: conditional co-occurrence degree matrix document representation method. Author(s): Wei Wei, Chonghui Guo, Jingfeng Chen, Lin Tang, Leilei Sun. Pages: 1239-1255
    13) Minimization of reliability indices and cost of power distribution systems in urban areas using an efficient hybrid meta-heuristic algorithm. Author(s): Avishek Banerjee, Samiran Chattopadhyay, Grigoras Gheorghe… Pages: 1257-1281
    14) A robust method to discover influential users in social networks. Author(s): Qian Ma, Jun Ma. Pages: 1283-1295
    15) Chance-constrained random fuzzy CCR model in presence of skew-normal distribution. Author(s): Behrokh Mehrasa, Mohammad Hassan Behzadi. Pages: 1297-1308
    16) A fuzzy AHP-based methodology for project prioritization and selection. Author(s): Amir Shaygan, Özlem Müge Testik. Pages: 1309-1319
    17) A multi-objective evolutionary fuzzy system to obtain a broad and accurate set of solutions in intrusion detection systems. Author(s): Salma Elhag, Alberto Fernández, Abdulrahman Altalhi, Saleh Alshomrani… Pages: 1321-1336
    18) Uncertain vertex coloring problem. Author(s): Lin Chen, Jin Peng, Dan A. Ralescu. Pages: 1337-1346
    19) Local, global and decentralized fuzzy-based computing paradigms for coordinated voltage control of grid-connected photovoltaic systems. Author(s): Alfredo Vaccaro, Hafsa Qamar, Haleema Qamar. Pages: 1347-1356
    20) Combining user preferences and expert opinions: a criteria synergy-based model for decision making on the Web. Author(s): Marcelo Karanik, Rubén Bernal, José Ignacio Peláez… Pages: 1357-1373
    21) A bi-objective fleet size and mix green inventory routing problem, model and solution method. Author(s): Mehdi Alinaghian, Mohsen Zamani. Pages: 1375-1391
    22) A genetic algorithm approach to the smart grid tariff design problem. Author(s): Will Rogers, Paula Carroll, James McDermott. Pages: 1393-1405
    23) Lyapunov–Krasovskii stable T2FNN controller for a class of nonlinear time-delay systems. Author(s): Sehraneh Ghaemi, Kamel Sabahi, Mohammad Ali Badamchizadeh. Pages: 1407-1419
  • Soft Computing, Volume 23, Issue 3, February 2019
    1) Butterfly optimization algorithm: a novel approach for global optimization. Author(s): Sankalap Arora, Satvir Singh. Pages: 715-734
    2) Congruences and ideals in generalized pseudoeffect algebras revisited. Author(s): S. Pulmannová. Pages: 735-745
    3) An efficient online/offline ID-based short signature procedure using extended chaotic maps. Author(s): Chandrashekhar Meshram, Chun-Ta Li, Sarita Gajbhiye Meshram. Pages: 747-753
    4) Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Author(s): Qiuhong Xiang, Bolin Liao, Lin Xiao, Long Lin, Shuai Li. Pages: 755-766
    5) Post-training discriminative pruning for RBMs. Author(s): Máximo Sánchez-Gutiérrez, Enrique M. Albornoz, Hugo L. Rufiner. Pages: 767-781
    6) Gravitational search algorithm with both attractive and repulsive forces. Author(s): Hamed Zandevakili, Esmat Rashedi, Ali Mahani. Pages: 783-825
    7) A statistic approach for power analysis of integrated GPU. Author(s): Qiong Wang, Ning Li, Li Shen, Zhiying Wang. Pages: 827-836
    8) Resolution of single-variable fuzzy polynomial equations and an upper bound on the number of solutions. Author(s): Hamed Farahani, Mahmoud Paripour, Saeid Abbasbandy. Pages: 837-845
    9) Rescheduling-based congestion management scheme using particle swarm optimization with distributed acceleration constants. Author(s): Naresh Kumar Yadav. Pages: 847-857
    10) Dual buffer rotation four-stage pipeline for CPU–GPU cooperative computing. Author(s): Tao Li, Qiankun Dong, Yifeng Wang, Xiaoli Gong, Yulu Yang. Pages: 859-869
    11) Physarum-energy optimization algorithm. Author(s): Xiang Feng, Yang Liu, Huiqun Yu, Fei Luo. Pages: 871-888
    12) An adaptive control study for the DC motor using meta-heuristic algorithms. Author(s): Alejandro Rodríguez-Molina, Miguel Gabriel Villarreal-Cervantes. Pages: 889-906
    13) Interval valued L-fuzzy prime ideals, triangular norms and partially ordered groups. Author(s): Babushri Srinivas Kedukodi, Syam Prasad Kuncham, B. Jagadeesha. Pages: 907-920
    14) An interpretable neuro-fuzzy approach to stock price forecasting. Author(s): Sharifa Rajab, Vinod Sharma. Pages: 921-936
    15) Collaborative multi-view K-means clustering. Author(s): Safa Bettoumi, Chiraz Jlassi, Najet Arous. Pages: 937-945
    16) New types of generalized Bosbach states on non-commutative residuated lattices. Author(s): Weibing Zuo. Pages: 947-959
    17) Bi-objective corridor allocation problem using a permutation-based genetic algorithm hybridized with a local search technique. Author(s): Zahnupriya Kalita, Dilip Datta, Gintaras Palubeckis. Pages: 961-986
    18) Derivative-based acceleration of general vector machine. Author(s): Binbin Yong, Fucun Li, Qingquan Lv, Jun Shen, Qingguo Zhou. Pages: 987-995
    19) An approach based on reliability-based possibility degree of interval for solving general interval bilevel linear programming problem. Author(s): Aihong Ren, Yuping Wang. Pages: 997-1006
    20) Emotion-based color transfer of images using adjustable color combinations. Author(s): Yuan-Yuan Su, Hung-Min Sun. Pages: 1007-1020
    21) Improved metaheuristic-based energy-efficient clustering protocol with optimal base station location in wireless sensor networks. Author(s): Palvinder Singh Mann, Satvir Singh. Pages: 1021-1037
    22) Evolving nearest neighbor time series forecasters. Author(s): Juan J. Flores, José R. Cedeño González, Rodrigo Lopez Farias. Pages: 1039-1048
    23) On separating axioms and similarity of soft topological spaces. Author(s): Małgorzata Terepeta. Pages: 1049-1057
    24) Optimal platform design with modularity strategy under fuzzy environment. Author(s): Qinyu Song, Yaodong Ni. Pages: 1059-1070
  • Complex & Intelligent Systems, Volume 4, Issue 4, December 2018
    1) A robust system maturity model for complex systems utilizing system readiness level and Petri nets. Author(s): Brent Thal, Bill Olson, Paul Blessner. Pages: 241-250
    2) Circuit design and simulation for the fractional-order chaotic behavior in a new dynamical system. Author(s): Z. Hammouch, T. Mekkaoui. Pages: 251-260
    3) Priority ranking for energy resources in Turkey and investment planning for renewable energy resources. Author(s): Mehmet Emin Baysal, Nazli Ceren Cetin. Pages: 261-269
    4) Towards online data-driven prognostics system. Author(s): Hatem M. Elattar, Hamdy K. Elminir, A. M. Riad. Pages: 271-282
    5) Model-based evolutionary algorithms: a short survey. Author(s): Ran Cheng, Cheng He, Yaochu Jin, Xin Yao. Pages: 283-292
  • IEEE Transactions on Neural Networks and Learning Systems: Volume 30, Issue 1, January 2019
    1. Editorial: Booming of Neural Networks and Learning Systems. Page(s): 2 - 10
    2. Deep CNN-Based Blind Image Quality Predictor. Author(s): Jongyoo Kim; Anh-Duc Nguyen; Sanghoon Lee. Page(s): 11 - 24
    3. Neuro-Adaptive Control With Given Performance Specifications for Strict Feedback Systems Under Full-State Constraints. Author(s): Xiucai Huang; Yongduan Song; Junfeng Lai. Page(s): 25 - 34
    4. Consensus Problems Over Cooperation-Competition Random Switching Networks With Noisy Channels. Author(s): Yonghong Wu; Bin Hu; Zhi-Hong Guan. Page(s): 35 - 43
    5. Estimation of Graphlet Counts in Massive Networks. Author(s): Ryan A. Rossi; Rong Zhou; Nesreen K. Ahmed. Page(s): 44 - 57
    6. Finite-Time Passivity-Based Stability Criteria for Delayed Discrete-Time Neural Networks via New Weighted Summation Inequalities. Author(s): Ramasamy Saravanakumar; Sreten B. Stojanovic; Damnjan D. Radosavljevic; Choon Ki Ahn; Hamid Reza Karimi. Page(s): 58 - 71
    7. Multiple-Model Adaptive Estimation for 3-D and 4-D Signals: A Widely Linear Quaternion Approach. Author(s): Min Xiang; Bruno Scalzo Dees; Danilo P. Mandic. Page(s): 72 - 84
    8. Optimal Synchronization Control of Multiagent Systems With Input Saturation via Off-Policy Reinforcement Learning. Author(s): Jiahu Qin; Man Li; Yang Shi; Qichao Ma; Wei Xing Zheng. Page(s): 85 - 96
    9. Design and Adaptive Control for an Upper Limb Robotic Exoskeleton in Presence of Input Saturation. Author(s): Wei He; Zhijun Li; Yiting Dong; Ting Zhao. Page(s): 97 - 108
    10. A Cost-Sensitive Deep Belief Network for Imbalanced Classification. Author(s): Chong Zhang; Kay Chen Tan; Haizhou Li; Geok Soon Hong. Page(s): 109 - 122
    11. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons. Author(s): Malu Zhang; Hong Qu; Ammar Belatreche; Yi Chen; Zhang Yi. Page(s): 123 - 137
    12. Enhanced Robot Speech Recognition Using Biomimetic Binaural Sound Source Localization. Author(s): Jorge Dávila-Chacón; Jindong Liu; Stefan Wermter. Page(s): 138 - 150
    13. A Discrete-Time Projection Neural Network for Sparse Signal Reconstruction With Application to Face Recognition. Author(s): Bingrong Xu; Qingshan Liu; Tingwen Huang. Page(s): 151 - 162
    14. Domain-Weighted Majority Voting for Crowdsourcing. Author(s): Dapeng Tao; Jun Cheng; Zhengtao Yu; Kun Yue; Lizhen Wang. Page(s): 163 - 174
    15. Reconstructible Nonlinear Dimensionality Reduction via Joint Dictionary Learning. Author(s): Xian Wei; Hao Shen; Yuanxiang Li; Xuan Tang; Fengxiang Wang; Martin Kleinsteuber; Yi Lu Murphey. Page(s): 175 - 189
    16. On the Duality Between Belief Networks and Feed-Forward Neural Networks. Author(s): Paul M. Baggenstoss. Page(s): 190 - 200
    17. Exploiting Combination Effect for Unsupervised Feature Selection by L2,0 Norm. Author(s): Xingzhong Du; Feiping Nie; Weiqing Wang; Yi Yang; Xiaofang Zhou. Page(s): 201 - 214
    18. Leader-Following Practical Cluster Synchronization for Networks of Generic Linear Systems: An Event-Based Approach. Author(s): Jiahu Qin; Weiming Fu; Yang Shi; Huijun Gao; Yu Kang. Page(s): 215 - 224
    19. Semisupervised Learning Based on a Novel Iterative Optimization Model for Saliency Detection. Author(s): Shuwei Huo; Yuan Zhou; Wei Xiang; Sun-Yuan Kung. Page(s): 225 - 241
    20. Augmented Real-Valued Time-Delay Neural Network for Compensation of Distortions and Impairments in Wireless Transmitters. Author(s): Dongming Wang; Mohsin Aziz; Mohamed Helaoui; Fadhel M. Ghannouchi. Page(s): 242 - 254
    21. UCFTS: A Unilateral Coupling Finite-Time Synchronization Scheme for Complex Networks. Author(s): Min Han; Meng Zhang; Tie Qiu; Meiling Xu. Page(s): 255 - 268
    22. A Semisupervised Classification Approach for Multidomain Networks With Domain Selection. Author(s): Chuan Chen; Jingxue Xin; Yong Wang; Luonan Chen; Michael K. Ng. Page(s): 269 - 283
    23. Neurons With Paraboloid Decision Boundaries for Improved Neural Network Classification Performance. Author(s): Nikolaos Tsapanos; Anastasios Tefas; Nikolaos Nikolaidis; Ioannis Pitas. Page(s): 284 - 294
    24. Adaptive Reinforcement Learning Control Based on Neural Approximation for Nonlinear Discrete-Time Systems With Unknown Nonaffine Dead-Zone Input. Author(s): Yan-Jun Liu; Shu Li; Shaocheng Tong; C. L. Philip Chen. Page(s): 295 - 305
    25. Filippov Hindmarsh–Rose Neuronal Model With Threshold Policy Control. Author(s): Yi Yang; Xiaofeng Liao. Page(s): 306 - 311
    26. Blind Denoising Autoencoder. Author(s): Angshul Majumdar. Page(s): 312 - 317
    27. Variational Random Function Model for Network Modeling. Author(s): Zenglin Xu; Bin Liu; Shandian Zhe; Haoli Bai; Zihan Wang; Jennifer Neville. Page(s): 318 - 324
  • IEEE Transactions on Neural Networks and Learning Systems: Volume 30, Issue 2, February 2019
    1. fpgaConvNet: Mapping Regular and Irregular Convolutional Neural Networks on FPGAs. Author(s): Stylianos I. Venieris; Christos-Savvas Bouganis. Page(s): 326 - 342
    2. A Novel Neural Networks Ensemble Approach for Modeling Electrochemical Cells. Author(s): Massimiliano Luzi; Maurizio Paschero; Antonello Rizzi; Enrico Maiorino; Fabio Massimo Frattale Mascioli. Page(s): 343 - 354
    3. Exploring Correlations Among Tasks, Clusters, and Features for Multitask Clustering. Author(s): Wenming Cao; Si Wu; Zhiwen Yu; Hau-San Wong. Page(s): 355 - 368
    4. Scaling Up Kernel SVM on Limited Resources: A Low-Rank Linearization Approach. Author(s): Liang Lan; Zhuang Wang; Shandian Zhe; Wei Cheng; Jun Wang; Kai Zhang. Page(s): 369 - 378
    5. Optimized Neural Network Parameters Using Stochastic Fractal Technique to Compensate Kalman Filter for Power System-Tracking-State Estimation. Author(s): Hossam Mosbah; Mohamed E. El-Hawary. Page(s): 379 - 388
    6. Online Identification of Nonlinear Stochastic Spatiotemporal System With Multiplicative Noise by Robust Optimal Control-Based Kernel Learning Method. Author(s): Hanwen Ning; Guangyan Qing; Tianhai Tian; Xingjian Jing. Page(s): 389 - 404
    7. Semisupervised Learning With Parameter-Free Similarity of Label and Side Information. Author(s): Rui Zhang; Feiping Nie; Xuelong Li. Page(s): 405 - 414
    8. H-infinity State Estimation for Discrete-Time Nonlinear Singularly Perturbed Complex Networks Under the Round-Robin Protocol. Author(s): Xiongbo Wan; Zidong Wang; Min Wu; Xiaohui Liu. Page(s): 415 - 426
    9. Temporal Self-Organization: A Reaction–Diffusion Framework for Spatiotemporal Memories. Author(s): Prayag Gowgi; Shayan Srinivasa Garani. Page(s): 427 - 448
    10. Variational Bayesian Learning for Dirichlet Process Mixture of Inverted Dirichlet Distributions in Non-Gaussian Image Feature Modeling. Author(s): Zhanyu Ma; Yuping Lai; W. Bastiaan Kleijn; Yi-Zhe Song; Liang Wang; Jun Guo. Page(s): 449 - 463
    11. Hierarchical Decision and Control for Continuous Multitarget Problem: Policy Evaluation With Action Delay. Author(s): Jiangcheng Zhu; Jun Zhu; Zhepei Wang; Shan Guo; Chao Xu. Page(s): 464 - 473
    12. Unified Low-Rank Matrix Estimate via Penalized Matrix Least Squares Approximation. Author(s): Xiangyu Chang; Yan Zhong; Yao Wang; Shaobo Lin. Page(s): 474 - 485
    13. Online Active Learning Ensemble Framework for Drifted Data Streams. Author(s): Jicheng Shan; Hang Zhang; Weike Liu; Qingbao Liu. Page(s): 486 - 498
    14. A New Approach to Stochastic Stability of Markovian Neural Networks With Generalized Transition Rates. Author(s): Ruimei Zhang; Deqiang Zeng; Ju H. Park; Yajuan Liu; Shouming Zhong. Page(s): 499 - 510
    15. Optimization of Distributions Differences for Classification. Author(s): Mohammad Reza Bonyadi; Quang M. Tieng; David C. Reutens. Page(s): 511 - 523
    16. Deep Convolutional Identifier for Dynamic Modeling and Adaptive Control of Unmanned Helicopter. Author(s): Yu Kang; Shaofeng Chen; Xuefeng Wang; Yang Cao. Page(s): 524 - 538
    17. Neural-Response-Based Extreme Learning Machine for Image Classification. Author(s): Hongfeng Li; Hongkai Zhao; Hong Li. Page(s): 539 - 552
    18. Deep Ensemble Machine for Video Classification. Author(s): Jiewan Zheng; Xianbin Cao; Baochang Zhang; Xiantong Zhen; Xiangbo Su. Page(s): 553 - 565
    19. Multiple ψ-Type Stability of Cohen–Grossberg Neural Networks With Both Time-Varying Discrete Delays and Distributed Delays. Author(s): Fanghai Zhang; Zhigang Zeng. Page(s): 566 - 579
    20. Neural Network Training With Levenberg–Marquardt and Adaptable Weight Compression. Author(s): James S. Smith; Bo Wu; Bogdan M. Wilamowski. Page(s): 580 - 587
    21. Solving Partial Least Squares Regression via Manifold Optimization Approaches. Author(s): Haoran Chen; Yanfeng Sun; Junbin Gao; Yongli Hu; Baocai Yin. Page(s): 588 - 600
    22. Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction. Author(s): Shangce Gao; Mengchu Zhou; Yirui Wang; Jiujun Cheng; Hanaki Yachi; Jiahai Wang. Page(s): 601 - 614
    23. Multiclass Nonnegative Matrix Factorization for Comprehensive Feature Pattern Discovery. Author(s): Yifeng Li; Youlian Pan; Ziying Liu. Page(s): 615 - 629
    24. Self-Paced Learning-Based Probability Subspace Projection for Hyperspectral Image Classification. Author(s): Shuyuan Yang; Zhixi Feng; Min Wang; Kai Zhang. Page(s): 630 - 635
    25. Hierarchical Stability Conditions for a Class of Generalized Neural Networks With Multiple Discrete and Distributed Delays. Author(s): Lei Song; Sing Kiong Nguang; Dan Huang. Page(s): 636 - 642
  • Full UK PhD Scholarships
    Full UK PhD scholarships in evolutionary computation / computational intelligence / data analytics / operations research / optimisation / simulation
    An opportunity has arisen for the Operational Research (OR) group at Liverpool John Moores University (United Kingdom) to offer a small number of PhD scholarships (full or tuition-fees-only, depending on the quality of the candidate). There are two types of scholarships.
    For UK/EU/settled students:
    - Deadline 3rd March; results to be known by the end of March.
    - Full tuition fees for three years, plus living expenses and running costs of about £16,500 per year (to be determined) for 3 years.
    - Students must enrol in Sept-Oct 2019.
    - Brexit will not have any impact on these scholarships.
    For international students:
    - About £20,000 per year (to be determined), which students can put toward their tuition fees and living expenses.
    If the successful candidate joins one of the projects currently run by the OR group, he/she may receive additional scholarships depending on research performance and level of contribution. Regarding the research topic, any area in evolutionary computation, computational intelligence, data analytics, or operations research would be acceptable. However, preference is given to topics that relate to one of our existing projects, which are in the following areas:
    - OR techniques to study and mitigate the impact of climate change on transportation. For example, we have a project (with Merseyrail and Network Rail) on using data analytics and optimisation to anticipate and mitigate the impact of leaves falling on train tracks.
    - Evolutionary computation or meta-heuristics.
    - OR/data analytics applications in rail, in partnership with Merseyrail, Network Rail, and Rail Delivery Group.
    - OR applications in maritime, in partnership with UK, EU and overseas ports.
    - OR applications in sustainable transportation, e.g. bicycles, e-bikes, walking, buses, emission/congestion reduction etc., in partnership with local authorities and transport authorities (e.g. those in Liverpool and Manchester).
    - OR applications in logistics (e.g. bin packing, vehicle routing etc.), in partnership with logistics companies, especially those in airports, ports, and manufacturing plants (especially those in Liverpool).
    - OR applications in manufacturing, in partnership with car manufacturers, e.g. Vauxhall and Jaguar Land Rover.
    Interested candidates should contact Dr. Trung Thanh Nguyen with a full CV and transcripts. Please get in touch as soon as possible so that applications can be prepared in the best way to maximise your chance before the deadline of 3rd March.
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 29, Issue 9, September 2018
    1. Continuous Dropout. Author(s): Xu Shen; Xinmei Tian; Tongliang Liu; Fang Xu; Dacheng Tao. Pages: 3926 - 3937
    2. Deep Manifold Learning Combined With Convolutional Neural Networks for Action Recognition. Author(s): Xin Chen; Jian Weng; Wei Lu; Jiaming Xu; Jiasi Weng. Pages: 3938 - 3952
    3. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data. Author(s): Ava Bargi; Richard Yi Da Xu; Massimo Piccardi. Pages: 3953 - 3968
    4. Deep Learning of Constrained Autoencoders for Enhanced Understanding of Data. Author(s): Babajide O. Ayinde; Jacek M. Zurada. Pages: 3969 - 3979
    5. Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis. Author(s): Olga Isupova; Danil Kuzin; Lyudmila Mihaylova. Pages: 3980 - 3993
    6. Support Vector Data Descriptions and k-Means Clustering: One Class? Author(s): Nico Görnitz; Luiz Alberto Lima; Klaus-Robert Müller; Marius Kloft; Shinichi Nakajima. Pages: 3994 - 4006
    7. Data-Driven Robust M-LS-SVR-Based NARX Modeling for Estimation and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking. Author(s): Ping Zhou; Dongwei Guo; Hong Wang; Tianyou Chai. Pages: 4007 - 4021
    8. Detection of Sources in Non-Negative Blind Source Separation by Minimum Description Length Criterion. Author(s): Chia-Hsiang Lin; Chong-Yung Chi; Lulu Chen; David J. Miller; Yue Wang. Pages: 4022 - 4037
    9. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification. Author(s): Naveed Akhtar; Ajmal Mian. Pages: 4038 - 4050
    10. Heterogeneous Multitask Metric Learning Across Multiple Domains. Author(s): Yong Luo; Yonggang Wen; Dacheng Tao. Pages: 4051 - 4064
    11. Classification of Imbalanced Data by Oversampling in Kernel Space of Support Vector Machines. Author(s): Josey Mathew; Chee Khiang Pang; Ming Luo; Weng Hoe Leong. Pages: 4065 - 4076
    12. A Novel Error-Compensation Control for a Class of High-Order Nonlinear Systems With Input Delay. Author(s): Chao Shi; Zongcheng Liu; Xinmin Dong; Yong Chen. Pages: 4077 - 4087
    13. Dimensionality Reduction in Multiple Ordinal Regression. Author(s): Jiabei Zeng; Yang Liu; Biao Leng; Zhang Xiong; Yiu-ming Cheung. Pages: 4088 - 4101
    14. A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network. Author(s): Arash Gharehbaghi; Maria Lindén. Pages: 4102 - 4115
    15. Transductive Zero-Shot Learning With Adaptive Structural Embedding. Author(s): Yunlong Yu; Zhong Ji; Jichang Guo; Yanwei Pang. Pages: 4116 - 4127
    16. Bayesian Nonparametric Regression Modeling of Panel Data for Sequential Classification. Author(s): Sihan Xiong; Yiwei Fu; Asok Ray. Pages: 4128 - 4139
    17. Symmetric Predictive Estimator for Biologically Plausible Neural Learning. Author(s): David Xu; Andrew Clappison; Cameron Seth; Jeff Orchard. Pages: 4140 - 4151
    18. A Distance-Based Weighted Undersampling Scheme for Support Vector Machines and its Application to Imbalanced Classification. Author(s): Qi Kang; Lei Shi; MengChu Zhou; XueSong Wang; QiDi Wu; Zhi Wei. Pages: 4152 - 4165
    19. Learning With Coefficient-Based Regularized Regression on Markov Resampling. Author(s): Luoqing Li; Weifu Li; Bin Zou; Yulong Wang; Yuan Yan Tang; Hua Han. Pages: 4166 - 4176
    20. Sequential Labeling With Structural SVM Under Nondecomposable Losses. Author(s): Guopeng Zhang; Massimo Piccardi; Ehsan Zare Borzeshi. Pages: 4177 - 4188
    21. The Stability of Stochastic Coupled Systems With Time-Varying Coupling and General Topology Structure. Author(s): Yan Liu; Wenxue Li; Jiqiang Feng. Pages: 4189 - 4200
    22. Stability Analysis of Quaternion-Valued Neural Networks: Decomposition and Direct Approaches. Author(s): Yang Liu; Dandan Zhang; Jungang Lou; Jianquan Lu; Jinde Cao. Pages: 4201 - 4211
    23. On Wang k WTA With Input Noise, Output Node Stochastic, and Recurrent State Noise. Author(s): John Sum; Chi-Sing Leung; Kevin I.-J. Ho. Pages: 4212 - 4222
    24. Event-Driven Stereo Visual Tracking Algorithm to Solve Object Occlusion. Author(s): Luis A. Camuñas-Mesa; Teresa Serrano-Gotarredona; Sio-Hoi Ieng; Ryad Benosman; Bernabé Linares-Barranco. Pages: 4223 - 4237
    25. Stability Analysis of Neural Networks With Time-Varying Delay by Constructing Novel Lyapunov Functionals. Author(s): Tae H. Lee; Hieu M. Trinh; Ju H. Park. Pages: 4238 - 4247
    26. Design, Analysis, and Representation of Novel Five-Step DTZD Algorithm for Time-Varying Nonlinear Optimization. Author(s): Dongsheng Guo; Laicheng Yan; Zhuoyun Nie. Pages: 4248 - 4260
    27. Neural Observer and Adaptive Neural Control Design for a Class of Nonlinear Systems. Author(s): Bing Chen; Huaguang Zhang; Xiaoping Liu; Chong Lin. Pages: 4261 - 4271
    28. Shared Autoencoder Gaussian Process Latent Variable Model for Visual Classification. Author(s): Jinxing Li; Bob Zhang; David Zhang. Pages: 4272 - 4286
    29. Online Supervised Learning for Hardware-Based Multilayer Spiking Neural Networks Through the Modulation of Weight-Dependent Spike-Timing-Dependent Plasticity. Author(s): Nan Zheng; Pinaki Mazumder. Pages: 4287 - 4302
    30. Neural-Network-Based Adaptive Backstepping Control With Application to Spacecraft Attitude Regulation. Author(s): Xibin Cao; Peng Shi; Zhuoshi Li; Ming Liu. Pages: 4303 - 4313
    31. Recursive Adaptive Sparse Exponential Functional Link Neural Network for Nonlinear AEC in Impulsive Noise Environment. Author(s): Sheng Zhang; Wei Xing Zheng. Pages: 4314 - 4323
    32. Multilabel Prediction via Cross-View Search. Author(s): Xiaobo Shen; Weiwei Liu; Ivor W. Tsang; Quan-Sen Sun; Yew-Soon Ong. Pages: 4324 - 4338
    33. Large-Scale Metric Learning: A Voyage From Shallow to Deep. Author(s): Masoud Faraki; Mehrtash T. Harandi; Fatih Porikli. Pages: 4339 - 4346
    34. Distributed Event-Triggered Adaptive Control for Cooperative Output Regulation of Heterogeneous Multiagent Systems Under Switching Topology. Author(s): Ruohan Yang; Hao Zhang; Gang Feng; Huaicheng Yan. Pages: 4347 - 4358
    35. Event-Based Adaptive NN Tracking Control of Nonlinear Discrete-Time Systems. Author(s): Yuan-Xin Li; Guang-Hong Yang. Pages: 4359 - 4369
    36. Dynamic Analysis of Hybrid Impulsive Delayed Neural Networks With Uncertainties. Author(s): Bin Hu; Zhi-Hong Guan; Tong-Hui Qian; Guanrong Chen. Pages: 4370 - 4384
    37. Robust Zeroing Neural-Dynamics and Its Time-Varying Disturbances Suppression Model Applied to Mobile Robot Manipulators. Author(s): Dechao Chen; Yunong Zhang. Pages: 4385 - 4397
    38. Multiple-Instance Ordinal Regression. Author(s): Yanshan Xiao; Bo Liu; Zhifeng Hao. Pages: 4398 - 4413
    39. Neuroadaptive Control With Given Performance Specifications for MIMO Strict-Feedback Systems Under Nonsmooth Actuation and Output Constraints. Author(s): Yongduan Song; Shuyan Zhou. Pages: 4414 - 4425
    40. Lagrangean-Based Combinatorial Optimization for Large-Scale S3VMs. Author(s): Francesco Bagattini; Paola Cappanera; Fabio Schoen. Pages: 4426 - 4435
    41. Adaptive Fault-Tolerant Control for Nonlinear Systems With Multiple Sensor Faults and Unknown Control Directions. Author(s): Ding Zhai; Liwei An; Xiaojian Li; Qingling Zhang. Pages: 4436 - 4446
    42. Design of Distributed Observers in the Presence of Arbitrarily Large Communication Delays. Author(s): Kexin Liu; Jinhu Lü; Zongli Lin. Pages: 4447 - 4461
    43. A Solution Path Algorithm for General Parametric Quadratic Programming Problem. Author(s): Bin Gu; Victor S. Sheng. Pages: 4462 - 4472
    44. Online Density Estimation of Nonstationary Sources Using Exponential Family of Distributions. Author(s): Kaan Gokcesu; Suleyman S. Kozat. Pages: 4473 - 4478
    45. Image-Specific Classification With Local and Global Discriminations. Author(s): Chunjie Zhang; Jian Cheng; Changsheng Li; Qi Tian. Pages: 4479 - 4486
    46. Global Asymptotic Stability for Delayed Neural Networks Using an Integral Inequality Based on Nonorthogonal Polynomials. Author(s): Xian-Ming Zhang; Wen-Juan Lin; Qing-Long Han; Yong He; Min Wu. Pages: 4487 - 4493
    47. L1-Norm Distance Minimization-Based Fast Robust Twin Support Vector k-Plane Clustering. Author(s): Qiaolin Ye; Henghao Zhao; Zechao Li; Xubing Yang; Shangbing Gao; Tongming Yin; Ning Ye. Pages: 4494 - 4503
    48. Extensions to Online Feature Selection Using Bagging and Boosting. Author(s): Gregory Ditzler; Joseph LaBarck; James Ritchie; Gail Rosen; Robi Polikar. Pages: 4504 - 4509
    49. On Adaptive Boosting for System Identification. Author(s): Johan Bjurgert; Patricio E. Valenzuela; Cristian R. Rojas. Pages: 4510 - 4514
    50. Universal Approximation by Using the Correntropy Objective Function. Author(s): Mojtaba Nayyeri; Hadi Sadoghi Yazdi; Alaleh Maskooki; Modjtaba Rouhani. Pages: 4515 - 4521
    51. Stability Analysis of Optimal Adaptive Control Under Value Iteration Using a Stabilizing Initial Policy. Author(s): Ali Heydari. Pages: 4522 - 4527
    52. Object Categorization Using Class-Specific Representations. Author(s): Chunjie Zhang; Jian Cheng; Liang Li; Changsheng Li; Qi Tian. Pages: 4528 - 4534
    53. Improved Stability Analysis for Delayed Neural Networks. Author(s): Zhichen Li; Yan Bai; Congzhi Huang; Huaicheng Yan; Shicai Mu. Pages: 4535 - 4541
    54. Connectivity-Preserving Consensus Tracking of Uncertain Nonlinear Strict-Feedback Multiagent Systems: An Error Transformation Approach. Author(s): Sung Jin Yoo. Pages: 4542 - 4548
  • Soft Computing, Volume 22, Issue 16, August 2018
    1. Optimization and decision-making with big data. Author(s): Xiang Li, Xiaofeng Xu. Pages: 5197-5199
    2. Impact of healthcare insurance on medical expense in China: new evidence from meta-analysis. Author(s): Jian Chai, Limin Xing, Youhong Zhou, Shuo Li, K. K. Lai, Shouyang Wang. Pages: 5201-5213
    3. Modified bat algorithm based on covariance adaptive evolution for global optimization problems. Author(s): Xian Shan, Huijin Cheng. Pages: 5215-5230
    4. The impacts of private risk aversion magnitude and moral hazard in R&D project under uncertain environment. Author(s): Yiping Fu, Zhihua Chen, Yanfei Lan. Pages: 5231-5246
    5. Supporting consumer’s purchase decision: a method for ranking products based on online multi-attribute product ratings. Author(s): Zhi-Ping Fan, Yang Xi, Yang Liu. Pages: 5247-5261
    6. Effect of risk attitude on outsourcing leadership preferences with demand uncertainty. Author(s): Huiru Chen, Yingchen Yan, Zhibing Liu, Tiantian Xing. Pages: 5263-5278
    7. Value-at-risk forecasts by dynamic spatial panel GJR-GARCH model for international stock indices portfolio. Author(s): Wei-Guo Zhang, Guo-Li Mo, Fang Liu, Yong-Jun Liu. Pages: 5279-5297
    8. Case-based reasoning with optimized weight derived by particle swarm optimization for software effort estimation. Author(s): Dengsheng Wu, Jianping Li, Chunbing Bao. Pages: 5299-5310
    9. Dynamic analysis for Governance–Pollution model with education promoting control. Author(s): Jiaorui Li, Siqi Yu. Pages: 5311-5321
    10. A combined neural network model for commodity price forecasting with SSA. Author(s): Jue Wang, Xiang Li. Pages: 5323-5333
    11. International investing in uncertain financial market. Author(s): Yi Zhang, Jinwu Gao, Qi An. Pages: 5335-5346
    12. A novel multi-attribute group decision-making method based on the MULTIMOORA with linguistic evaluations. Author(s): Xi Chen, Liu Zhao, Haiming Liang. Pages: 5347-5361
    13. Evaluation research on commercial bank counterparty credit risk management based on new intuitionistic fuzzy method. Author(s): Qian Liu, Chong Wu, Lingyan Lou. Pages: 5363-5375
    14. A novel hybrid decision support system for thyroid disease forecasting. Author(s): Waheed Ahmad, Ayaz Ahmad, Chuncheng Lu, Barkat Ali Khoso, Lican Huang. Pages: 5377-5383
    15. A bi-level optimization model of LRP in collaborative logistics network considered backhaul no-load cost. Author(s): Xiaofeng Xu, Yao Zheng, Lean Yu. Pages: 5385-5393
    16. Measuring and forecasting the volatility of USD/CNY exchange rate with multi-fractal theory. Author(s): Limei Sun, Lina Zhu, Alec Stephenson, Jinyu Wang. Pages: 5395-5406
    17. Affordable levels of house prices using fuzzy linear regression analysis: the case of Shanghai. Author(s): Jian Zhou, Hui Zhang, Yujie Gu, Athanasios A. Pantelous. Pages: 5407-5418
    18. What is the value of an online retailer sharing demand forecast information? Author(s): Jinlou Zhao, Hui Zhu, Shuang Zheng. Pages: 5419-5428
    19. Credibility support vector machines based on fuzzy outputs. Author(s): Chao Wang, Xiaowei Liu, Minghu Ha, Ting Zhao. Pages: 5429-5437
    20. A novel two-sided matching decision method for technological knowledge supplier and demander considering the network collaboration effect. Author(s): Jing Han, Bin Li, Haiming Liang, Kin Keung Lai. Pages: 5439-5451
    21. Managerial compensation and research and development investment in a two-period agency setting. Author(s): Zhiying Zhao, Guoqing Yang, Jianmin Xu. Pages: 5453-5465
    22. Intervention strategies for false information on two-layered networks in public crisis by numerical simulations. Author(s): Xiaoxia Zhu, Mengmeng Liu. Pages: 5467-5477
    23. The coordination mechanisms of emergency inventory model under supply disruptions. Author(s): Jiaguo Liu, Huan Zhou, Junjin Wang. Pages: 5479-5489
    24. Evolutionary many-objective optimization based on linear assignment problem transformations. Author(s): Luis Miguel Antonio, José A. Molinet Berenguer, Carlos A. Coello Coello. Pages: 5491-5512
    25. Multiple-attribute decision-making method based on hesitant fuzzy linguistic Muirhead mean aggregation operators. Author(s): Peide Liu, Ying Li, Maocong Zhang, Li Zhang, Juan Zhao. Pages: 5513-5524
    26. Sustainability evaluation of the supply chain with undesired outputs and dual-role factors based on double frontier network DEA. Author(s): Yi Su, Wei Sun. Pages: 5525-5533
    27. Project portfolio selection based on synergy degree of composite system. Author(s): LiBiao Bai, Hongliang Chen, Qi Gao, Wei Luo. Pages: 5535-5545
    28. Balancing strategic contributions and financial returns: a project portfolio selection model under uncertainty. Author(s): Yuntao Guo, Lin Wang, Suike Li, Zhi Chen, Yin Cheng. Pages: 5547-5559
    29. An optimal model using data envelopment analysis for uncertainty metrics in reliability. Author(s): Tianpei Zu, Rui Kang, Meilin Wen, Yi Yang. Pages: 5561-5568
  • Soft Computing, Volume 22, Issue 15, August 2018
    Special Issue on Extensions of Fuzzy Sets in Decision Making
    1. A special issue on extensions of fuzzy sets in decision-making. Author(s): Cengiz Kahraman. Pages: 4851-4853
    2. Probabilistic OWA distances applied to asset management. Author(s): José M. Merigó, Ligang Zhou, Dejian Yu, Nabil Alrajeh, Khalid Alnowibet. Pages: 4855-4878
    3. Solid waste collection system selection for smart cities based on a type-2 fuzzy multi-criteria decision technique. Author(s): Murside Topaloglu, Ferhat Yarkin, Tolga Kaya. Pages: 4879-4890
    4. A novel interval-valued neutrosophic EDAS method: prioritization of the United Nations national sustainable development goals. Author(s): Ali Karaşan, Cengiz Kahraman. Pages: 4891-4906
    5. A new optimization meta-heuristic algorithm based on self-defense mechanism of the plants with three reproduction operators. Author(s): Camilo Caraveo, Fevrier Valdez, Oscar Castillo. Pages: 4907-4920
    6. Interval type-2 fuzzy c-control charts using likelihood and reduction methods. Author(s): Hatice Ercan-Teksen, Ahmet Sermet Anagün. Pages: 4921-4934
    7. Comments on crucial and unsolved problems on Atanassov's intuitionistic fuzzy sets. Author(s): Piotr Dworniczak. Pages: 4935-4939
    8. A novel interval-valued neutrosophic AHP with cosine similarity measure. Author(s): Eda Bolturk, Cengiz Kahraman. Pages: 4941-4958
    9. An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set pair analysis theory and their application in decision making. Author(s): Harish Garg, Kamal Kumar. Pages: 4959-4970
    10. An interval type 2 hesitant fuzzy MCDM approach and a fuzzy c means clustering for retailer clustering. Author(s): Sultan Ceren Oner, Başar Oztaysi. Pages: 4971-4987
    11. Criteria evaluation for pricing decisions in strategic marketing management using an intuitionistic cognitive map approach. Author(s): Elif Dogu, Y. Esra Albayrak. Pages: 4989-5005
    12. Pythagorean fuzzy engineering economic analysis of solar power plants. Author(s): Veysel Çoban, Sezi Çevik Onar. Pages: 5007-5020
    13. Multi-objective evolutionary algorithm for tuning the Type-2 inference engine on classification task. Author(s): Edward C. Hinojosa, Heloisa A. Camargo. Pages: 5021-5031
    14. Modeling attribute control charts by interval type-2 fuzzy sets. Author(s): Nihal Erginel, Sevil Şentürk, Gülay Yıldız. Pages: 5033-5041
    15. On invariant IF-state. Author(s): Alžbeta Michalíková, Beloslav Riečan. Pages: 5043-5049
    16. Entropy measures for Atanassov intuitionistic fuzzy sets based on divergence. Author(s): Ignacio Montes, Nikhil R. Pal, Susana Montes. Pages: 5051-5071
    17. An enhanced fuzzy evidential DEMATEL method with its application to identify critical success factors. Author(s): Yuzhen Han, Yong Deng. Pages: 5073-5090
    18. Cloud computing technology selection based on interval-valued intuitionistic fuzzy MCDM methods. Author(s): Gülçin Büyüközkan, Fethullah Göçer, Orhan Feyzioğlu. Pages: 5091-5114
    19. Scaled aggregation operations over two- and three-dimensional index matrices. Author(s): Velichka Traneva, Stoyan Tranev, Miroslav Stoenchev, Krassimir Atanassov. Pages: 5115-5120
    20. A bipolar knowledge representation model to improve supervised fuzzy classification algorithms. Author(s): Guillermo Villarino, Daniel Gómez, J. Tinguaro Rodríguez, Javier Montero. Pages: 5121-5146
    21. Modeling and analysis of the simplest fuzzy PID controller of Takagi–Sugeno type with modified rule base. Author(s): Ritu Raj, B. M. Mohan. Pages: 5147-5161
    22. Expressive attribute-based keyword search with constant-size ciphertext. Author(s): Jinguang Han, Ye Yang, Joseph K. Liu, Jiguo Li, Kaitai Liang, Jian Shen. Pages: 5163-5177
    23. Task scheduling using Ant Colony Optimization in multicore architectures: a survey. Author(s): G. Umarani Srikanth, R. Geetha. Pages: 5179-5196
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 29, Issue 7, July 2018
    1. Driving Under the Influence (of Language). Author(s): Daniel Paul Barrett; Scott Alan Bronikowski; Haonan Yu; Jeffrey Mark Siskind. Pages: 2668-2683
    2. Cascaded Subpatch Networks for Effective CNNs. Author(s): Xiaoheng Jiang; Yanwei Pang; Manli Sun; Xuelong Li. Pages: 2684-2694
    3. Neighborhood-Based Stopping Criterion for Contrastive Divergence. Author(s): Enrique Romero Merino; Ferran Mazzanti Castrillejo; Jordi Delgado Pin. Pages: 2695-2704
    4. Neural AILC for Error Tracking Against Arbitrary Initial Shifts. Author(s): Mingxuan Sun; Tao Wu; Lejian Chen; Guofeng Zhang. Pages: 2705-2716
    5. RankMap: A Framework for Distributed Learning From Dense Data Sets. Author(s): Azalia Mirhoseini; Eva L. Dyer; Ebrahim M. Songhori; Richard Baraniuk; Farinaz Koushanfar. Pages: 2717-2730
    6. Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning. Author(s): Shihui Ying; Zhijie Wen; Jun Shi; Yaxin Peng; Jigen Peng; Hong Qiao. Pages: 2731-2742
    7. Transductive Regression for Data With Latent Dependence Structure. Author(s): Nico Görnitz; Luiz Alberto Lima; Luiz Eduardo Varella; Klaus-Robert Müller; Shinichi Nakajima. Pages: 2743-2756
    8. Variance-Constrained State Estimation for Complex Networks With Randomly Varying Topologies. Author(s): Hongli Dong; Nan Hou; Zidong Wang; Weijian Ren. Pages: 2757-2768
    9. Stability Analysis of Continuous-Time and Discrete-Time Quaternion-Valued Neural Networks With Linear Threshold Neurons. Author(s): Xiaofeng Chen; Qiankun Song; Zhongshan Li; Zhenjiang Zhao; Yurong Liu. Pages: 2769-2781
    10. Improving Sparsity and Scalability in Regularized Nonconvex Truncated-Loss Learning Problems. Author(s): Qing Tao; Gaowei Wu; Dejun Chu. Pages: 2782-2793
    11. Policy Approximation in Policy Iteration Approximate Dynamic Programming for Discrete-Time Nonlinear Systems. Author(s): Wentao Guo; Jennie Si; Feng Liu; Shengwei Mei. Pages: 2794-2807
    12. Multilateral Telecoordinated Control of Multiple Robots With Uncertain Kinematics. Author(s): Di-Hua Zhai; Yuanqing Xia. Pages: 2808-2822
    13. A Peak Price Tracking-Based Learning System for Portfolio Selection. Author(s): Zhao-Rong Lai; Dao-Qing Dai; Chuan-Xian Ren; Ke-Kun Huang. Pages: 2823-2832
    14. Generalized Self-Organizing Maps for Automatic Determination of the Number of Clusters and Their Multiprototypes in Cluster Analysis. Author(s): Marian B. Gorzałczany; Filip Rudziński. Pages: 2833-2845
    15. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems. Author(s): Vignesh Narayanan; Sarangapani Jagannathan. Pages: 2846-2856
    16. Online Feature Transformation Learning for Cross-Domain Object Category Recognition. Author(s): Xuesong Zhang; Yan Zhuang; Wei Wang; Witold Pedrycz. Pages: 2857-2871
    17. Improving CNN Performance Accuracies With Min–Max Objective. Author(s): Weiwei Shi; Yihong Gong; Xiaoyu Tao; Jinjun Wang; Nanning Zheng. Pages: 2872-2885
    18. Distribution-Preserving Stratified Sampling for Learning Problems. Author(s): Cristiano Cervellera; Danilo Macciò. Pages: 2886-2895
    19. Training DCNN by Combining Max-Margin, Max-Correlation Objectives, and Correntropy Loss for Multilabel Image Classification. Author(s): Weiwei Shi; Yihong Gong; Xiaoyu Tao; Nanning Zheng. Pages: 2896-2908
    20. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error. Author(s): Xinjiang Lu; Wenbo Liu; Chuang Zhou; Minghui Huang. Pages: 2909-2920
    21. Joint Attributes and Event Analysis for Multimedia Event Detection. Author(s): Zhigang Ma; Xiaojun Chang; Zhongwen Xu; Nicu Sebe; Alexander G. Hauptmann. Pages: 2921-2930
    22. Aggregation Analysis for Competitive Multiagent Systems With Saddle Points via Switching Strategies. Author(s): Liying Zhu; Zhengrong Xiang. Pages: 2931-2943
    23. Learning Multimodal Parameters: A Bare-Bones Niching Differential Evolution Approach. Author(s): Yue-Jiao Gong; Jun Zhang; Yicong Zhou. Pages: 2944-2959
    24. Time-Varying System Identification Using an Ultra-Orthogonal Forward Regression and Multiwavelet Basis Functions With Applications to EEG. Author(s): Yang Li; Wei-Gang Cui; Yu-Zhu Guo; Tingwen Huang; Xiao-Feng Yang; Hua-Liang Wei. Pages: 2960-2972
    25. Neural Decomposition of Time-Series Data for Effective Generalization. Author(s): Luke B. Godfrey; Michael S. Gashler. Pages: 2973-2985
    26. Feature Selection Based on Neighborhood Discrimination Index. Author(s): Changzhong Wang; Qinghua Hu; Xizhao Wang; Degang Chen; Yuhua Qian; Zhe Dong. Pages: 2986-2999
    27. Multistability of Recurrent Neural Networks With Nonmonotonic Activation Functions and Unbounded Time-Varying Delays. Author(s): Peng Liu; Zhigang Zeng; Jun Wang. Pages: 3000-3010
    28. Optimal Triggering of Networked Control Systems. Author(s): Ali Heydari. Pages: 3011-3021
    29. Observer-Based Adaptive Fault-Tolerant Tracking Control of Nonlinear Nonstrict-Feedback Systems. Author(s): Chengwei Wu; Jianxing Liu; Yongyang Xiong; Ligang Wu. Pages: 3022-3033
    30. Joint Estimation of Multiple Conditional Gaussian Graphical Models. Author(s): Feihu Huang; Songcan Chen; Sheng-Jun Huang. Pages: 3034-3046
    31. Stability Analysis of Genetic Regulatory Networks With Switching Parameters and Time Delays. Author(s): Tingting Yu; Jianxing Liu; Yi Zeng; Xian Zhang; Qingshuang Zeng; Ligang Wu. Pages: 3047-3058
    32. Adaptive Neural Networks Prescribed Performance Control Design for Switched Interconnected Uncertain Nonlinear Systems. Author(s): Yongming Li; Shaocheng Tong. Pages: 3059-3068
    33. Robust and Efficient Boosting Method Using the Conditional Risk. Author(s): Zhi Xiao; Zhe Luo; Bo Zhong; Xin Dang. Pages: 3069-3083
    34. Learning Deep Generative Models With Doubly Stochastic Gradient MCMC. Author(s): Chao Du; Jun Zhu; Bo Zhang. Pages: 3084-3096
    35. Discriminative Transfer Learning Using Similarities and Dissimilarities. Author(s): Ying Lu; Liming Chen; Alexandre Saidi; Emmanuel Dellandrea; Yunhong Wang. Pages: 3097-3110
    36. Discriminative Block-Diagonal Representation Learning for Image Recognition. Author(s): Zheng Zhang; Yong Xu; Ling Shao; Jian Yang. Pages: 3111-3125
    37. Robustness to Training Disturbances in SpikeProp Learning. Author(s): Sumit Bam Shrestha; Qing Song. Pages: 3126-3139
    38. Bayesian Neighborhood Component Analysis. Author(s): Dong Wang; Xiaoyang Tan. Pages: 3140-3151
    39. pth Moment Exponential Input-to-State Stability of Delayed Recurrent Neural Networks With Markovian Switching via Vector Lyapunov Function. Author(s): Lei Liu; Jinde Cao; Cheng Qian. Pages: 3152-3163
    40. Distributed Adaptive Finite-Time Approach for Formation–Containment Control of Networked Nonlinear Systems Under Directed Topology. Author(s): Yujuan Wang; Yongduan Song; Wei Ren. Pages: 3164-3175
    41. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip. Author(s): Xichuan Zhou; Shengli Li; Fang Tang; Shengdong Hu; Zhi Lin; Lei Zhang. Pages: 3176-3187
    42. Causal Inference on Multidimensional Data Using Free Probability Theory. Author(s): Furui Liu; Lai-Wan Chan. Pages: 3188-3198
    43. Regularized Semipaired Kernel CCA for Domain Adaptation. Author(s): Siamak Mehrkanoon; Johan A. K. Suykens. Pages: 3199-3213
    44. Patch Alignment Manifold Matting. Author(s): Xuelong Li; Kang Liu; Yongsheng Dong; Dacheng Tao. Pages: 3214-3226
    45. Supervised Learning Based on Temporal Coding in Spiking Neural Networks. Author(s): Hesham Mostafa. Pages: 3227-3235
    46. Multiple Structure-View Learning for Graph Classification. Author(s): Jia Wu; Shirui Pan; Xingquan Zhu; Chengqi Zhang; Philip S. Yu. Pages: 3236-3251
    47. Online Heterogeneous Transfer by Hedge Ensemble of Offline and Online Decisions. Author(s): Yuguang Yan; Qingyao Wu; Mingkui Tan; Michael K. Ng; Huaqing Min; Ivor W. Tsang. Pages: 3252-3263
    48. Single-Input Pinning Controller Design for Reachability of Boolean Networks. Author(s): Fangfei Li; Huaicheng Yan; Hamid Reza Karimi. Pages: 3264-3269
    49. Tree-Based Kernel for Graphs With Continuous Attributes. Author(s): Giovanni Da San Martino; Nicolò Navarin; Alessandro Sperduti. Pages: 3270-3276
    50. Sufficient Condition for the Existence of the Compact Set in the RBF Neural Network Control. Author(s): Jiaming Zhu; Zhiqiang Cao; Tianping Zhang; Yuequan Yang; Yang Yi. Pages: 3277-3282
    51. Delayed Feedback Control for Stabilization of Boolean Control Networks With State Delay. Author(s): Rongjian Liu; Jianquan Lu; Yang Liu; Jinde Cao; Zheng-Guang Wu. Pages: 3283-3288
    52. Convolutional Sparse Autoencoders for Image Classification. Author(s): Wei Luo; Jun Li; Jian Yang; Wei Xu; Jian Zhang. Pages: 3289-3294
    53. An Algorithm for Finding the Most Similar Given Sized Subgraphs in Two Weighted Graphs. Author(s): Xu Yang; Hong Qiao; Zhi-Yong Liu. Pages: 3295-3300
    54. Normalization and Solvability of Dynamic-Algebraic Boolean Networks. Author(s): Yang Liu; Jinde Cao; Bowen Li; Jianquan Lu. Pages: 3301-3306
  • IEEE Transactions on Fuzzy Systems, Volume 26, Issue 3, June 2018
    1. Lagrange Stability for T–S Fuzzy Memristive Neural Networks with Time-Varying Delays on Time Scales. Author(s): Q. Xiao and Z. Zeng. Pages: 1091-1103
    2. A T–S Fuzzy Model Identification Approach Based on a Modified Inter Type-2 FRCM Algorithm. Author(s): W. Zou, C. Li and N. Zhang. Pages: 1104-1113
    3. Sensor Fault Estimation of Switched Fuzzy Systems With Unknown Input. Author(s): H. Zhang, J. Han, Y. Wang and X. Liu. Pages: 1114-1124
    4. Fuzzy Remote Tracking Control for Randomly Varying Local Nonlinear Models Under Fading and Missing Measurements. Author(s): J. Song, Y. Niu, J. Lam and H. K. Lam. Pages: 1125-1137
    5. Distributed Adaptive Fuzzy Control for Output Consensus of Heterogeneous Stochastic Nonlinear Multiagent Systems. Author(s): S. Li, M. J. Er and J. Zhang. Pages: 1138-1152
    6. Adaptive Fuzzy Control With Prescribed Performance for Block-Triangular-Structured Nonlinear Systems. Author(s): Y. Li and S. Tong. Pages: 1153-1163
    7. Dissipativity-Based Fuzzy Integral Sliding Mode Control of Continuous-Time T-S Fuzzy Systems. Author(s): Y. Wang, H. Shen, H. R. Karimi and D. Duan. Pages: 1164-1176
    8. A Layered-Coevolution-Based Attribute-Boosted Reduction Using Adaptive Quantum-Behavior PSO and Its Consistent Segmentation for Neonates Brain Tissue. Author(s): W. Ding, C. T. Lin, M. Prasad, Z. Cao and J. Wang. Pages: 1177-1191
    9. Fuzzy Model Predictive Control of Discrete-Time Systems with Time-Varying Delay and Disturbances. Author(s): L. Teng, Y. Wang, W. Cai and H. Li. Pages: 1192-1206
    10. Finite-Time Adaptive Fuzzy Tracking Control Design for Nonlinear Systems. Author(s): F. Wang, B. Chen, X. Liu and C. Lin. Pages: 1207-1216
    11. Combination of Classifiers With Optimal Weight Based on Evidential Reasoning. Author(s): Z. G. Liu, Q. Pan, J. Dezert and A. Martin. Pages: 1217-1230
    12. Evaluating and Comparing Soft Partitions: An Approach Based on Dempster–Shafer Theory. Author(s): T. Denœux, S. Li and S. Sriboonchitta. Pages: 1231-1244
    13. Adaptive Tracking Control for a Class of Switched Nonlinear Systems Under Asynchronous Switching. Author(s): D. Zhai, A. Y. Lu, J. Dong and Q. Zhang. Pages: 1245-1256
    14. Incremental Perspective for Feature Selection Based on Fuzzy Rough Sets. Author(s): Y. Yang, D. Chen, H. Wang and X. Wang. Pages: 1257-1273
    15. Galois Connections Between a Fuzzy Preordered Structure and a General Fuzzy Structure. Author(s): I. P. Cabrera, P. Cordero, F. García-Pardo, M. Ojeda-Aciego and B. De Baets. Pages: 1274-1287
    16. IC-FNN: A Novel Fuzzy Neural Network With Interpretable, Intuitive, and Correlated-Contours Fuzzy Rules for Function Approximation. Author(s): M. M. Ebadzadeh and A. Salimi-Badr. Pages: 1288-1302
    17. On Using the Shapley Value to Approximate the Choquet Integral in Cases of Uncertain Arguments. Author(s): R. R. Yager. Pages: 1303-1310
    18. Adaptive Fuzzy Sliding Mode Control for Network-Based Nonlinear Systems With Actuator Failures. Author(s): L. Chen, M. Liu, X. Huang, S. Fu and J. Qiu. Pages: 1311-1323
    19. Correntropy-Based Evolving Fuzzy Neural System. Author(s): R. J. Bao, H. J. Rong, P. P. Angelov, B. Chen and P. K. Wong. Pages: 1324-1338
    20. Multiobjective Reliability Redundancy Allocation Problem With Interval Type-2 Fuzzy Uncertainty. Author(s): P. K. Muhuri, Z. Ashraf and Q. M. D. Lohani. Pages: 1339-1355
    21. Distributed Adaptive Fuzzy Control For Nonlinear Multiagent Systems Under Directed Graphs. Author(s): C. Deng and G. H. Yang. Pages: 1356-1366
    22. Probability Calculation and Element Optimization of Probabilistic Hesitant Fuzzy Preference Relations Based on Expected Consistency. Author(s): W. Zhou and Z. Xu. Pages: 1367-1378
    23. Solving High-Order Uncertain Differential Equations via Runge–Kutta Method. Author(s): X. Ji and J. Zhou. Pages: 1379-1386
    24. On Non-commutative Residuated Lattices With Internal States. Author(s): B. Zhao and P. He. Pages: 1387-1400
    25. Robust $L_1$ Observer-Based Non-PDC Controller Design for Persistent Bounded Disturbed TS Fuzzy Systems. Author(s): N. Vafamand, M. H. Asemani and A. Khayatian. Pages: 1401-1413
    26. Decentralized Fault Detection for Affine T–S Fuzzy Large-Scale Systems With Quantized Measurements. Author(s): H. Wang and G. H. Yang. Pages: 1414-1426
    27. Convergence in Distribution for Uncertain Random Variables. Author(s): R. Gao and D. A. Ralescu. Pages: 1427-1434
    28. Line Integrals of Intuitionistic Fuzzy Calculus and Their Properties. Author(s): Z. Ai and Z. Xu. Pages: 1435-1446
    29. Unknown Input-Based Observer Synthesis for a Polynomial T–S Fuzzy Model System With Uncertainties. Author(s): V. P. Vu, W. J. Wang, H. C. Chen and J. M. Zurada. Pages: 1447-1458
    30. Distributed Filtering for Discrete-Time T–S Fuzzy Systems With Incomplete Measurements. Author(s): D. Zhang, S. K. Nguang, D. Srinivasan and L. Yu. Pages: 1459-1471
    31. Multi-ANFIS Model Based Synchronous Tracking Control of High-Speed Electric Multiple Unit. Author(s): H. Yang, Y. Fu and D. Wang. Pages: 1472-1484
    32. A New Self-Regulated Neuro-Fuzzy Framework for Classification of EEG Signals in Motor Imagery BCI. Author(s): A. Jafarifarmand, M. A. Badamchizadeh, S. Khanmohammadi, M. A. Nazari and B. M. Tazehkand. Pages: 1485-1497
    33. $H_\infty$ LMI-Based Observer Design for Nonlinear Systems via Takagi–Sugeno Models With Unmeasured Premise Variables. Author(s): T. M. Guerra, R. Márquez, A. Kruszewski and M. Bernal. Pages: 1498-1509
    34. Ensemble Fuzzy Clustering Using Cumulative Aggregation on Random Projections. Author(s): P. Rathore, J. C. Bezdek, S. M. Erfani, S. Rajasegarar and M. Palaniswami. Pages: 1510-1524
    35. Lattice-Valued Interval Operators and Its Induced Lattice-Valued Convex Structures. Author(s): B. Pang and Z. Y. Xiu. Pages: 1525-1534
    36. Deep Takagi–Sugeno–Kang Fuzzy Classifier With Shared Linguistic Fuzzy Rules. Author(s): Y. Zhang, H. Ishibuchi and S. Wang. Pages: 1535-1549
    37. Stability Analysis and Control of Two-Dimensional Fuzzy Systems With Directional Time-Varying Delays. Author(s): L. V. Hien and H. Trinh. Pages: 1550-1564
    38. A New Fuzzy Modeling Framework for Integrated Risk Prognosis and Therapy of Bladder Cancer Patients. Author(s): O. Obajemu, M. Mahfouf and J. W. F. Catto. Pages: 1565-1577
    39. Resolution Principle in Uncertain Random Environment. Author(s): X. Yang, J. Gao and Y. Ni. Pages: 1578-1588
    40. Observer-Based Fuzzy Adaptive Event-Triggered Control Codesign for a Class of Uncertain Nonlinear Systems. Author(s): Y. X. Li and G. H. Yang. Pages: 1589-1599
    41. Static Output Feedback Stabilization of Positive Polynomial Fuzzy Systems. Author(s): A. Meng, H. K. Lam, Y. Yu, X. Li and F. Liu. Pages: 1600-1612
    42. Global Asymptotic Model-Free Trajectory-Independent Tracking Control of an Uncertain Marine Vehicle: An Adaptive Universe-Based Fuzzy Control Approach. Author(s): N. Wang, S. F. Su, J. Yin, Z. Zheng and M. J. Er. Pages: 1613-1625
    43. Information Measures in the Intuitionistic Fuzzy Framework and Their Relationships. Author(s): S. Das, D. Guha and R. Mesiar. Pages: 1626-1637
    44. A Random Fuzzy Accelerated Degradation Model and Statistical Analysis. Author(s): X. Y. Li, J. P. Wu, H. G. Ma, X. Li and R. Kang. Pages: 1638-1650
    45. Measures of Probabilistic Interval-Valued Intuitionistic Hesitant Fuzzy Sets and the Application in Reducing Excessive Medical Examinations. Author(s): Y. Zhai, Z. Xu and H. Liao. Pages: 1651-1670
    46. A Unified Collaborative Multikernel Fuzzy Clustering for Multiview Data. Author(s): S. Zeng, X. Wang, H. Cui, C. Zheng and D. Feng. Pages: 1671-1687
    47. Asynchronous Piecewise Output-Feedback Control for Large-Scale Fuzzy Systems via Distributed Event-Triggering Schemes. Author(s): Z. Zhong, Y. Zhu and H. K. Lam. Pages: 1688-1703
    48. Fuzzy Group Decision Making With Incomplete Information Guided by Social Influence. Author(s): N. Capuano, F. Chiclana, H. Fujita, E. Herrera-Viedma and V. Loia. Pages: 1704-1718
    49. Fuzzy Bayesian Learning. Author(s): I. Pan and D. Bester. Pages: 1719-1731
    50. Observer and Adaptive Fuzzy Control Design for Nonlinear Strict-Feedback Systems With Unknown Virtual Control Coefficients. Author(s): B. Chen, X. Liu and C. Lin. Pages: 1732-1743
    51. Controllable-Domain-Based Fuzzy Rule Extraction for Copper Removal Process Control. Author(s): B. Zhang, C. Yang, H. Zhu, P. Shi and W. Gui. Pages: 1744-1756
    52. Renewal Reward Process With Uncertain Interarrival Times and Random Rewards. Author(s): K. Yao and J. Zhou. Pages: 1757-1762
    53. Uncertainty Measures of Extended Hesitant Fuzzy Linguistic Term Sets. Author(s): C. Wei, R. M. Rodríguez and L. Martínez. Pages: 1763-1768
    54. Correction to "Detection of Resource Overload in Conditions of Project Ambiguity" [Aug 17 868-877]. Author(s): M. Pelikán, H. Štiková and I. Vrana. Pages: 1769-1769
  • IEEE Transactions on Cognitive and Developmental Systems, Volume 10, Number 2, June 2018
    1. Guest Editorial Special Issue on Neuromorphic Computing and Cognitive Systems. H. Tang, T. Huang, J. L. Krichmar, G. Orchard and A. Basu
    2. Adaptive Robot Path Planning Using a Spiking Neuron Algorithm With Axonal Delays. T. Hwu, A. Y. Wang, N. Oros and J. L. Krichmar
    3. Neuro-Activity-Based Dynamic Path Planner for 3-D Rough Terrain. A. A. Saputra, Y. Toda, J. Botzheim and N. Kubota
    4. EMPD: An Efficient Membrane Potential Driven Supervised Learning Algorithm for Spiking Neurons. M. Zhang, H. Qu, A. Belatreche and X. Xie
    5. Robotic Homunculus: Learning of Artificial Skin Representation in a Humanoid Robot Motivated by Primary Somatosensory Cortex. M. Hoffmann, Z. Straka, I. Farkas, M. Vavrecka and G. Metta
    6. A Novel Parsimonious Cause-Effect Reasoning Algorithm for Robot Imitation and Plan Recognition. G. Katz, D. W. Huang, T. Hauge, R. Gentili and J. Reggia
    7. Predicting Spike Trains from PMd to M1 Using Discrete Time Rescaling Targeted GLM. D. Xing, C. Qian, H. Li, S. Zhang, Q. Zhang, Y. Hao, X. Zheng, Z. Wu, Y. Wang, G. Pan
    8. Visual Pattern Recognition Using Enhanced Visual Features and PSD-Based Learning Rule. X. Xu, X. Jin, R. Yan, Q. Fang and W. Lu
    9. Multimodal Functional and Structural Brain Connectivity Analysis in Autism: A Preliminary Integrated Approach With EEG, fMRI, and DTI. B. A. Cociu, S. Das, L. Billeci, W. Jamal, K. Maharatna, S. Calderoni, A. Narzisi, F. Muratori
    10. Observing and Modeling Developing Knowledge and Uncertainty During Cross-Situational Word Learning. G. Kachergis and C. Yu
    11. Prediction Error in the PMd As a Criterion for Biological Motion Discrimination: A Computational Account. Y. Kawai, Y. Nagai and M. Asada
    12. Learning 4-D Spatial Representations Through Perceptual Experience With Hypercubes. T. Miwa, Y. Sakai and S. Hashimoto
    13. Fuzzy Feature Extraction for Multichannel EEG Classification. P. Y. Zhou and K. C. C. Chan
    14. Orthogonal Principal Coefficients Embedding for Unsupervised Subspace Learning. X. Xu, S. Xiao, Z. Yi, X. Peng and Y. Liu
    15. A Basal Ganglia Network Centric Reinforcement Learning Model and Its Application in Unmanned Aerial Vehicle. Y. Zeng, G. Wang and B. Xu
    16. Biologically Inspired Self-Organizing Map Applied to Task Assignment and Path Planning of an AUV System. D. Zhu, X. Cao, B. Sun and C. Luo
    17. Autonomous Discovery of Motor Constraints in an Intrinsically Motivated Vocal Learner. J. M. Acevedo-Valle, C. Angulo and C. Moulin-Frier
    18. Bio-Inspired Model Learning Visual Goals and Attention Skills Through Contingencies and Intrinsic Motivations. V. Sperati and G. Baldassarre
    19. Seamless Integration and Coordination of Cognitive Skills in Humanoid Robots: A Deep Learning Approach. J. Hwang and J. Tani
    20. Learning Temporal Intervals in Neural Dynamics. B. Duran and Y. Sandamirskaya
    21. Quantifying Cognitive Workload in Simulated Flight Using Passive, Dry EEG Measurements. J. A. Blanco, M. K. Johnson, K. J. Jaquess, H. Oh, L. Lo, R. J. Gentili, B. D. Hatfield
    22. Enhanced Robotic Hand–Eye Coordination Inspired From Human-Like Behavioral Patterns. F. Chao, Z. Zhu, C. Lin, H. Hu, L. Yang, C. Shang, C. Zhou
    23. Covariate Conscious Approach for Gait Recognition Based Upon Zernike Moment Invariants. H. Aggarwal and D. K. Vishwakarma
    24. EEG-Based Emotion Recognition Using Hierarchical Network With Subnetwork Nodes. Y. Yang, Q. M. J. Wu, W. L. Zheng and B. L. Lu
    26. A Novel Biologically Inspired Visual Cognition Model: Automatic Extraction of Semantics, Formation of Integrated Concepts, and Reselection Features for Ambiguity. P. Yin, H. Qiao, W. Wu, L. Qi, Y. Li, S. Zhong, B. Zhang
    27. Zero-Shot Image Classification Based on Deep Feature Extraction. X. Wang, C. Chen, Y. Cheng and Z. J. Wang
    28. A Hormone-Driven Epigenetic Mechanism for Adaptation in Autonomous Robots. J. Lones, M. Lewis and L. Cañamero
    29. Heteroscedastic Regression and Active Learning for Modeling Affordances in Humanoids. F. Stramandinoli, V. Tikhanoff, U. Pattacini and F. Nori
    30. Artificial Cognitive Systems That Can Answer Human Creativity Tests: An Approach and Two Case Studies. A. M. Olteţeanu, Z. Falomir and C. Freksa
    31. A Practical SSVEP-Based Algorithm for Perceptual Dominance Estimation in Binocular Rivalry. K. Tanaka, M. Tanaka, T. Kajiwara and H. O. Wang
  • Evolving Systems, Volume 9, Issue 2, June 2018, Special Section on Evolving Soft Sensors
    1. Evolving sensor systems. Author(s): Chrisina Jayne, Nirmalie Wiratunga. Pages: 93-94
    2. Predictive intelligence to the edge: impact on edge analytics. Author(s): Natascha Harth, Christos Anagnostopoulos, Dimitrios Pezaros. Pages: 95-118
    3. Evolving ANN-based sensors for a context-aware cyber physical system of an offshore gas turbine. Author(s): Farzan Majdani, Andrei Petrovski, Daniel Doolan. Pages: 119-133
    4. Multistatic radar classification of armed vs unarmed personnel using neural networks. Author(s): Jarez S. Patel, Francesco Fioranelli, Matthew Ritchie, Hugh Griffiths. Pages: 135-144
    5. An evolving spatio-temporal approach for gender and age group classification with Spiking Neural Networks. Author(s): Fahad Bashir Alvi, Russel Pears, Nikola Kasabov. Pages: 145-156
    6. Devolutionary genetic algorithms with application to the minimum labeling Steiner tree problem. Author(s): Nassim Dehouche. Pages: 157-168
    7. Modality of teaching learning based optimization algorithm to reduce the consistency ratio of the pair-wise comparison matrix in analytical hierarchy processing. Author(s): Prashant Borkar, M. V. Sarode. Pages: 169-180
  • IEEE Transactions on Evolutionary Computation, Volume 22, Issue 3, June 2018
    1. Guest Editorial Special Issue on Search-Based Software Engineering. Author(s): Federica Sarro, Marouane Kessentini, Kalyanmoy Deb. Pages: 333
    2. Constructing Cost-Aware Functional Test-Suites Using Nested Differential Evolution Algorithm. Author(s): Yuexing Wang, Min Zhou, Xiaoyu Song, Ming Gu, Jiaguang Sun. Pages: 334-346
    3. Multiobjective Testing Resource Allocation Under Uncertainty. Author(s): Roberto Pietrantuono, Pasqualina Potena, Antonio Pecchia, Daniel Rodriguez, Stefano Russo, Luis Fernández-Sanz. Pages: 347-362
    4. Achieving Feature Location in Families of Models Through the Use of Search-Based Software Engineering. Author(s): Jaime Font, Lorena Arcega, Øystein Haugen, Carlos Cetina. Pages: 363-377
    5. Integrating Weight Assignment Strategies With NSGA-II for Supporting User Preference Multiobjective Optimization. Author(s): Shuai Wang, Shaukat Ali, Tao Yue, Marius Liaaen. Pages: 378-393
    6. An Empirical Study of Cohesion and Coupling: Balancing Optimization and Disruption. Author(s): Matheus Paixao, Mark Harman, Yuanyuan Zhang, Yijun Yu. Pages: 394-414
    7. Genetic Improvement of Software: A Comprehensive Survey. Author(s): Justyna Petke, Saemundur O. Haraldsson, Mark Harman, William B. Langdon, David R. White, John R. Woodward. Pages: 415-432
    8. Adaptively Allocating Search Effort in Challenging Many-Objective Optimization Problems. Author(s): Hai-Lin Liu, Lei Chen, Qingfu Zhang, Kalyanmoy Deb. Pages: 433-448
    9. Computing and Updating Hypervolume Contributions in Up to Four Dimensions. Author(s): Andreia P. Guerreiro, Carlos M. Fonseca. Pages: 449-463
    10. Evolutionary Computation for Community Detection in Networks: A Review. Author(s): Clara Pizzuti. Pages: 464-483
    11. Escaping Local Optima Using Crossover With Emergent Diversity. Author(s): Duc-Cuong Dang, Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Per Kristian Lehre, Pietro S. Oliveto, Dirk Sudholt, Andrew M. Sutton. Pages: 484-497
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 29, Issue 6, June 2018
    1. Special Issue on Deep Reinforcement Learning and Adaptive Dynamic Programming. Author(s): Dongbin Zhao, Derong Liu, F. L. Lewis, Jose C. Principe, Stefano Squartini. Pages: 2038-2041
    2. Optimal and Autonomous Control Using Reinforcement Learning: A Survey. Author(s): Bahare Kiumarsi, Kyriakos G. Vamvoudakis, Hamidreza Modares, Frank L. Lewis. Pages: 2042-2062
    3. Applications of Deep Learning and Reinforcement Learning to Biological Data. Author(s): Mufti Mahmud, Mohammed Shamim Kaiser, Amir Hussain, Stefano Vassanelli. Pages: 2063-2079
    4. Guided Policy Exploration for Markov Decision Processes Using an Uncertainty-Based Value-of-Information Criterion. Author(s): Isaac J. Sledge, Matthew S. Emigh, José C. Príncipe. Pages: 2080-2098
    5. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure. Author(s): Biao Luo, Derong Liu, Huai-Ning Wu. Pages: 2099-2111
    6. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances. Author(s): Huaguang Zhang, Qiuxia Qu, Geyang Xiao, Yang Cui. Pages: 2112-2126
    7. Robust ADP Design for Continuous-Time Nonlinear Systems With Output Constraints. Author(s): Bo Fan, Qinmin Yang, Xiaoyu Tang, Youxian Sun. Pages: 2127-2138
    8. Leader–Follower Output Synchronization of Linear Heterogeneous Systems With Active Leader Using Reinforcement Learning. Author(s): Yongliang Yang, Hamidreza Modares, Donald C. Wunsch, Yixin Yin. Pages: 2139-2153
    9. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations. Author(s): Patryk Deptula, Joel A. Rosenfeld, Rushikesh Kamalapurkar, Warren E. Dixon. Pages: 2154-2166
    10. Suboptimal Scheduling in Switched Systems With Continuous-Time Dynamics: A Least Squares Approach. Author(s): Tohid Sardarmehni, Ali Heydari. Pages: 2167-2178
    11. Optimal Fault-Tolerant Control for Discrete-Time Nonlinear Strict-Feedback Systems Based on Adaptive Critic Design. Author(s): Zhanshan Wang, Lei Liu, Yanming Wu, Huaguang Zhang. Pages: 2179-2191
    12. Distributed Economic Dispatch in Microgrids Based on Cooperative Reinforcement Learning. Author(s): Weirong Liu, Peng Zhuang, Hao Liang, Jun Peng, Zhiwu Huang. Pages: 2192-2203
    13. Reusable Reinforcement Learning via Shallow Trails. Author(s): Yang Yu, Shi-Yong Chen, Qing Da, Zhi-Hua Zhou. Pages: 2204-2215
    14. Self-Paced Prioritized Curriculum Learning With Coverage Penalty in Deep Reinforcement Learning. Author(s): Zhipeng Ren, Daoyi Dong, Huaxiong Li, Chunlin Chen. Pages: 2216-2226
    15. Multisource Transfer Double DQN Based on Actor Learning. Author(s): Jie Pan, Xuesong Wang, Yuhu Cheng, Qiang Yu. Pages: 2227-2238
    16. Action-Driven Visual Object Tracking With Deep Reinforcement Learning. Author(s): Sangdoo Yun, Jongwon Choi, Youngjoon Yoo, Kimin Yun, Jin Young Choi. Pages: 2239-2252
    17. Extreme Trust Region Policy Optimization for Active Object Recognition. Author(s): Huaping Liu, Yupei Wu, Fuchun Sun. Pages: 2253-2258
    18. Learning to Predict Consequences as a Method of Knowledge Transfer in Reinforcement Learning. Author(s): Eric Chalmers, Edgar Bermudez Contreras, Brandon Robertson, Artur Luczak, Aaron Gruber. Pages: 2259-2270
    19. A Discrete-Time Recurrent Neural Network for Solving Rank-Deficient Matrix Equations With an Application to Output Regulation of Linear Systems. Author(s): Tao Liu, Jie Huang. Pages: 2271-2277
    20. Online Learning Algorithm Based on Adaptive Control Theory. Author(s): Jian-Wei Liu, Jia-Jia Zhou, Mohamed S. Kamel, Xiong-Lin Luo. Pages: 2278-2293
    21. User Preference-Based Dual-Memory Neural Model With Memory Consolidation Approach. Author(s): Jauwairia Nasir, Yong-Ho Yoo, Deok-Hwa Kim, Jong-Hwan Kim. Pages: 2294-2308
    22. Online Hashing. Author(s): Long-Kai Huang, Qiang Yang, Wei-Shi Zheng. Pages: 2309-2322
    23. GoDec+: Fast and Robust Low-Rank Matrix Decomposition Based on Maximum Correntropy. Author(s): Kailing Guo, Liu Liu, Xiangmin Xu, Dong Xu, Dacheng Tao. Pages: 2323-2336
    24. A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine. Author(s): Mingxing Duan, Kenli Li, Xiangke Liao, Keqin Li. Pages: 2337-2351
    25. Nonlinear Decoupling Control With ANFIS-Based Unmodeled Dynamics Compensation for a Class of Complex Industrial Processes. Author(s): Yajun Zhang, Tianyou Chai, Hong Wang, Dianhui Wang, Xinkai Chen. Pages: 2352-2366
    26. Online Learning Algorithms Can Converge Comparably Fast as Batch Learning. Author(s): Junhong Lin, Ding-Xuan Zhou. Pages: 2367-2378
    27. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons. Author(s): Ahana Gangopadhyay, Shantanu Chakrabartty. Pages: 2379-2391
    28. Uncertain Data Clustering in Distributed Peer-to-Peer Networks. Author(s): Jin Zhou, Long Chen, C. L. Philip Chen, Yingxu Wang, Han-Xiong Li. Pages: 2392-2406
    29. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch. Author(s): Chaojie Li, Xinghuo Yu, Tingwen Huang, Xing He. Pages: 2407-2418
    30. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization. Author(s): Chenliang Wang, Changyun Wen, Qinglei Hu, Wei Wang, Xiuyu Zhang. Pages: 2419-2428
    31. Data-Driven Learning Control for Stochastic Nonlinear Systems: Multiple Communication Constraints and Limited Storage. Author(s): Dong Shen. Pages: 2429-2440
    32. Reversed Spectral Hashing. Author(s): Qingshan Liu, Guangcan Liu, Lai Li, Xiao-Tong Yuan, Meng Wang, Wei Liu. Pages: 2441-2449
    33. Structure Learning for Deep Neural Networks Based on Multiobjective Optimization. Author(s): Jia Liu, Maoguo Gong, Qiguang Miao, Xiaogang Wang, Hao Li. Pages: 2450-2463
    34. On the Dynamics of Hopfield Neural Networks on Unit Quaternions. Author(s): Marcos Eduardo Valle, Fidelis Zanetti de Castro. Pages: 2464-2471
    35.
End-to-End Feature-Aware Label Space Encoding for Multilabel Classification With Many ClassesAuthor(s): Zijia Lin, Guiguang Ding, Jungong Han, Ling ShaoPages: 2472 - 248736. Improved Stability and Stabilization Results for Stochastic Synchronization of Continuous-Time Semi-Markovian Jump Neural Networks With Time-Varying DelayAuthor(s): Yanling Wei, Ju H. Park, Hamid Reza Karimi, Yu-Chu Tian, Hoyoul JungPages: 2488 - 250137. Robust Latent Subspace Learning for Image ClassificationAuthor(s): Xiaozhao Fang, Shaohua Teng, Zhihui Lai, Zhaoshui He, Shengli Xie, Wai Keung WongPages: 2502 - 251538. New Splitting Criteria for Decision Trees in Stationary Data StreamsAuthor(s): Maciej Jaworski, Piotr Duda, Leszek RutkowskiPages: 2516 - 252939. A Sequential Learning Approach for Scaling Up Filter-Based Feature Subset SelectionAuthor(s): Gregory Ditzler, Robi Polikar, Gail RosenPages: 2530 - 254440. Substructural Regularization With Data-Sensitive Granularity for Sequence Transfer LearningAuthor(s): Shichang Sun, Hongbo Liu, Jiana Meng, C. L. Philip Chen, Yu YangPages: 2545 - 255741. Exponential Synchronization of Networked Chaotic Delayed Neural Network by a Hybrid Event Trigger SchemeAuthor(s): Zhongyang Fei, Chaoxu Guan, Huijun GaoPages: 2558 - 256742. Multiclass Learning With Partially Corrupted LabelsAuthor(s): Ruxin Wang, Tongliang Liu, Dacheng TaoPages: 2568 - 258043. Boundary-Eliminated Pseudoinverse Linear Discriminant for Imbalanced ProblemsAuthor(s): Yujin Zhu, Zhe Wang, Hongyuan Zha, Daqi GaoPages: 2581 - 259444. An Information-Theoretic-Cluster Visualization for Self-Organizing MapsAuthor(s): Leonardo Enzo Brito da Silva, Donald C. WunschPages: 2595 - 261345. Learning-Based Adaptive Optimal Tracking Control of Strict-Feedback Nonlinear SystemsAuthor(s): Weinan Gao, Zhong-Ping JiangPages: 2614 - 262446. On the Impact of Regularization Variation on Localized Multiple Kernel LearningAuthor(s): Yina Han, Kunde Yang, Yixin Yang, Yuanliang MaPages: 2625 - 263047. 
Structured Learning of Tree Potentials in CRF for Image SegmentationAuthor(s): Fayao Liu, Guosheng Lin, Ruizhi Qiao, Chunhua ShenPages: 2631 - 263748. Adaptive Backstepping-Based Neural Tracking Control for MIMO Nonlinear Switched Systems Subject to Input DelaysAuthor(s): Ben Niu, Lu LiPages: 2638 - 264449. Memcomputing Numerical Inversion With Self-Organizing Logic GatesAuthor(s): Haik Manukian, Fabio L. Traversa, Massimiliano Di VentraPages: 2645 - 265050. Graph Regularized Restricted Boltzmann MachineAuthor(s): Dongdong Chen, Jiancheng Lv, Zhang YiPages: 2651 - 265951. A Self-Paced Regularization Framework for Multilabel LearningAuthor(s): Changsheng Li, Fan Wei, Junchi Yan, Xiaoyu Zhang, Qingshan Liu, Hongyuan ZhaPages: 2660 - 2666 Read more »
  • Neural Networks, Volume 104, Pages 1-124, August 2018
    1. Design of double fuzzy clustering-driven context neural networks. Author(s): Eun-Hu Kim, Sung-Kwun Oh, Witold Pedrycz. Pages: 1-14
    2. Bio-inspired spiking neural network for nonlinear systems control. Author(s): Javier Pérez, Juan A. Cabrera, Juan J. Castillo, Juan M. Velasco. Pages: 15-25
    3. A frequency-domain approach to improve ANNs generalization quality via proper initialization. Author(s): Majdi Chaari, Afef Fekih, Abdennour C. Seibi, Jalel Ben Hmida. Pages: 26-39
    4. Using a model of human visual perception to improve deep learning. Author(s): Michael Stettler, Gregory Francis. Pages: 40-49
    5. Effect of dilution in asymmetric recurrent neural networks. Author(s): Viola Folli, Giorgio Gosti, Marco Leonetti, Giancarlo Ruocco. Pages: 50-59
    6. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network. Author(s): Alvin Poernomo, Dae-Ki Kang. Pages: 60-67
    7. A deep belief network with PLSR for nonlinear system modeling. Author(s): Junfei Qiao, Gongming Wang, Wenjing Li, Xiaoli Li. Pages: 68-79
    8. Generalized pinning synchronization of delayed Cohen–Grossberg neural networks with discontinuous activations. Author(s): Dongshu Wang, Lihong Huang, Longkun Tang, Jinsen Zhuang. Pages: 80-92
    9. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control. Author(s): Wanli Zhang, Shiju Yang, Chuandong Li, Wei Zhang, Xinsong Yang. Pages: 93-103
    10. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks. Author(s): Shuai Yang, Juan Yu, Cheng Hu, Haijun Jiang. Pages: 104-113
    11. Electrical resistivity imaging inversion: An ISFLA trained kernel principal component wavelet neural network approach. Author(s): Feibo Jiang, Li Dong, Qianwei Dai. Pages: 114-123
  • Soft Computing, Volume 22, Issue 12, June 2018
    1. Solving a nonhomogeneous linear system of interval differential equations. Author(s): Nizami A. Gasilov, Şahin Emrah Amrahov. Pages: 3817-3828
    2. N-soft sets and their decision making algorithms. Author(s): Fatia Fatimah, Dedi Rosadi, R. B. Fajriya Hakim, José Carlos R. Alcantud. Pages: 3829-3842
    3. On the measure of M-rough approximation of L-fuzzy sets. Author(s): Sang-Eon Han, Alexander Šostak. Pages: 3843-3855
    4. A new metaheuristic algorithm: car tracking optimization algorithm. Author(s): Jian Chen, Hui Cai, Wei Wang. Pages: 3857-3878
    5. Ideals and congruences in quasi-pseudo-MV algebras. Author(s): Wenjuan Chen, Wieslaw A. Dudek. Pages: 3879-3889
    6. Solving maximal covering location problem using genetic algorithm with local refinement. Author(s): Soumen Atta, Priya Ranjan Sinha Mahapatra, Anirban Mukhopadhyay. Pages: 3891-3906
    7. A novel fuzzy time series forecasting method based on the improved artificial fish swarm optimization algorithm. Author(s): Sidong Xian, Jianfeng Zhang, Yue Xiao, Jia Pang. Pages: 3907-3917
    8. A novel constraint-handling technique based on dynamic weights for constrained optimization problems. Author(s): Chaoda Peng, Hai-Lin Liu, Fangqing Gu. Pages: 3919-3935
    9. Recognizing the human attention state using cardiac pulse from the noncontact and automatic-based measurements. Author(s): Dazhi Jiang, Bo Hu, Yifei Chen, Yu Xue, Wei Li, Zhengping Liang. Pages: 3937-3949
    10. Tauberian theorems for weighted mean summability method of improper Riemann integrals of fuzzy-number-valued functions. Author(s): Cemal Belen. Pages: 3951-3957
    11. A fuzzy decision support system for multifactor authentication. Author(s): Arunava Roy, Dipankar Dasgupta. Pages: 3959-3981
    12. A fast weak-supervised pulmonary nodule segmentation method based on modified self-adaptive FCM algorithm. Author(s): Hui Liu, Fenghuan Geng, Qiang Guo, Caiqing Zhang, Caiming Zhang. Pages: 3983-3995
    13. Adjust weight vectors in MOEA/D for bi-objective optimization problems with discontinuous Pareto fronts. Author(s): Chunjiang Zhang, Kay Chen Tan, Loo Hay Lee, Liang Gao. Pages: 3997-4012
    14. An improved method of automatic text summarization for web contents using lexical chain with semantic-related terms. Author(s): Htet Myet Lynn, Chang Choi, Pankoo Kim. Pages: 4013-4023
    15. A composite particle swarm optimization approach for the composite SaaS placement in cloud environment. Author(s): Mohamed Amin Hajji, Haithem Mezni. Pages: 4025-4045
    16. Convergence analysis of standard particle swarm optimization algorithm and its improvement. Author(s): Weiyi Qian, Ming Li. Pages: 4047-4070
    17. Attribute-based fuzzy identity access control in multicloud computing environments. Author(s): Wenmin Li, Qiaoyan Wen, Xuelei Li, Debiao He. Pages: 4071-4082
    18. A predictive model-based image watermarking scheme using Regression Tree and Firefly algorithm. Author(s): Behnam Kazemivash, Mohsen Ebrahimi Moghaddam. Pages: 4083-4098
    19. A multiple time series-based recurrent neural network for short-term load forecasting. Author(s): Bing Zhang, Jhen-Long Wu, Pei-Chann Chang. Pages: 4099-4112
    20. Novel ranking method of interval numbers based on the Boolean matrix. Author(s): Deqing Li, Wenyi Zeng, Qian Yin. Pages: 4113-4122
    21. The mean chance of ultimate ruin time in random fuzzy insurance risk model. Author(s): Sara Ghasemalipour, Behrouz Fathi-Vajargah. Pages: 4123-4131
    22. A self-adaptive and stagnation-aware breakout local search algorithm on the grid for the Steiner tree problem with revenue, budget and hop constraints. Author(s): Tansel Dokeroglu, Erhan Mengusoglu. Pages: 4133-4151
    23. Valuation of European option under uncertain volatility model. Author(s): Sabahat Hassanzadeh, Farshid Mehrdoust. Pages: 4153-4163
  • Complex & Intelligent Systems, Volume 4, Issue 2, June 2018
    1. Hybrid fuzzy-based sliding-mode control approach, optimized by genetic algorithm for quadrotor unmanned aerial vehicles. Author(s): M. Pazooki, A. H. Mazinan. Pages: 79-93
    2. Forecasting of financial data: a novel fuzzy logic neural network based on error-correction concept and statistics. Author(s): Dusan Marcek. Pages: 95-104
    3. EFS-MI: an ensemble feature selection method for classification. Author(s): Nazrul Hoque, Mihir Singh, Dhruba K. Bhattacharyya. Pages: 105-118
    4. Deep neural architectures for prediction in healthcare. Author(s): Dimitrios Kollias, Athanasios Tagaris… Pages: 119-131
    5. A hybrid decision support model using axiomatic fuzzy set theory in AHP and TOPSIS for multicriteria route selection. Author(s): Sunil Pratap Singh, Preetvanti Singh. Pages: 133-143
    6. Risk prediction in life insurance industry using supervised learning algorithms. Author(s): Noorhannah Boodhun, Manoj Jayabalan. Pages: 145-154
  • Neural Networks, Volume 103, Pages 1-150, July 2018
    1. Beyond Low-Rank Representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering. Author(s): Yang Wang, Lin Wu. Pages: 1-8
    2. Learning from label proportions on high-dimensional data. Author(s): Yong Shi, Jiabin Liu, Zhiquan Qi, Bo Wang. Pages: 9-18
    3. The convergence analysis of SpikeProp algorithm with smoothing regularization. Author(s): Junhong Zhao, Jacek M. Zurada, Jie Yang, Wei Wu. Pages: 19-28
    4. Multilayer bootstrap networks. Author(s): Xiao-Lei Zhang. Pages: 29-43
    5. A multivariate additive noise model for complete causal discovery. Author(s): Pramod Kumar Parida, Tshilidzi Marwala, Snehashish Chakraverty. Pages: 44-54
    6. Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties. Author(s): Qiankun Song, Qinqin Yu, Zhenjiang Zhao, Yurong Liu, Fuad E. Alsaadi. Pages: 55-62
    7. A nonnegative matrix factorization algorithm based on a discrete-time projection neural network. Author(s): Hangjun Che, Jun Wang. Pages: 63-71
    8. Personalized response generation by Dual-learning based domain adaptation. Author(s): Min Yang, Wenting Tu, Qiang Qu, Zhou Zhao, Jia Zhu. Pages: 72-82
    9. Impulsive synchronization of stochastic reaction–diffusion neural networks with mixed time delays. Author(s): Yin Sheng, Zhigang Zeng. Pages: 83-93
    10. Information-theoretic decomposition of embodied and situated systems. Author(s): Federico Da Rold. Pages: 94-107
    11. The Growing Curvilinear Component Analysis (GCCA) neural network. Author(s): Giansalvo Cirrincione, Vincenzo Randazzo, Eros Pasero. Pages: 108-117
    12. Spiking neural networks for handwritten digit recognition—Supervised learning and network optimization. Author(s): Shruti R. Kulkarni, Bipin Rajendran. Pages: 118-127
    13. Robust generalized Mittag-Leffler synchronization of fractional order neural networks with discontinuous activation and impulses. Author(s): A. Pratap, R. Raja, C. Sowmiya, O. Bagdasar, ... G. Rajchakit. Pages: 128-141
    14. General memristor with applications in multilayer neural networks. Author(s): Shiping Wen, Xudong Xie, Zheng Yan, Tingwen Huang, Zhigang Zeng. Pages: 142-149

AI ML MarketPlace

  • Big data AI startup Noble.AI raises a second seed round from a chemical giant
    Noble.AI, an SF/French AI company that claims to accelerate decision making in R&D, has raised a new round of funding from Solvay Ventures, the VC arm of a large chemical company, Solvay SA. Although the round was undisclosed, TechCrunch understands it to be a second seed round, and we know the company has closed a total of $8.6 million to date. Solvay was an early customer of the platform prior to this investment. The joint announcement was made at the Hello Tomorrow conference in Paris this week. Solvay’s research arm generates huge volumes of data from various sources, which the firm confirmed is part of the reason for the investment. Noble.AI’s “Universal Ingestion Engine” and “Intelligent Recommendation Engine” claim to enable the creation of high-quality data assets from these kinds of big data sets that can later be turned into recommendations for decision making inside these large businesses. Founder and CEO of Noble.AI, Dr. Matthew C. Levy, said he is “enthusiastic to see what unfolds in its next phase, tackling the most important and high-value problems in chemistry” via the partnership with Solvay. “Noble.AI has the potential to be a real game changer for Solvay in the way it enables us to utilize data from our 150-year history with new AI tools, resulting in a unique lever to accelerate our innovation,” said Stéphane Roussel, Solvay Ventures’ managing director. Prime Movers led a seed round in Noble.AI in late 2018, which was never previously disclosed to the press. Solvay Ventures is now leading this second seed round. The move comes in the context of booming corporate R&D spending, which in 2018 reached $782 billion among the top 1,000 companies, representing a 14 percent increase relative to 2017 and the largest figure ever deployed to R&D. However, corporate R&D lags behind the startup world, so these strategic investments seem to be picking up pace. Source: TechCrunch
  • 3 Things That Will Help You Leverage AI
    AI is the transformative technology of tomorrow, but leaders need to get it up and running today. Here's how.
    If artificial intelligence isn't at the top of your priority list, it should be. Deloitte's "Tech Trends 2019: Beyond the digital frontier" report shows AI topping the list of tech trends that CIOs are eager to invest in. Deloitte predicts that the next two years will see a growing number of companies transition certain functions, such as insurance claim processing, to fully autonomous operations backed by AI. Terms like "cognitive technologies" and "machine learning" have become buzzwords, but these trends will strengthen--particularly as these systems begin to harness the scads of data available from which they can extract insights. But AI's promise is more general than just data mining. Lu Zhang, founder and managing partner at Fusion Fund, describes the technology as applicable to a broad swath of commerce: "AI's application space has developed. The AI market has great potential across various industry verticals such as manufacturing, retail, healthcare, agriculture, and education." Even with this potential of AI for business, many business leaders feel held back from taking the actions necessary to implement it at their companies. So let's take a look at some things you can do now to overcome those barriers.
    1. Get your C-suite on board the AI train. Any change is hard to create when the top of the organization is not fully on board. IDC found that 49 percent of enterprises surveyed cited problems related to stakeholders' reluctance to buy in as a barrier to AI adoption. The first step in setting up AI at your company is to make sure the members of the C-suite understand the value--particularly in the long term--that AI can bring. The evidence is out there: An Accenture report predicts that AI could increase productivity by up to 40 percent by 2035.
    And when dealing with data, AI really shines, enabling exciting new opportunities to discover valuable business insights. For instance, a McKinsey Global Institute analysis found that when AI combines demographic and past transaction data with information gleaned from social media monitoring, the resulting personalized product recommendations can lead to a doubling of the sales conversion rate. Aside from providing your company's leadership team with industry data proving AI's worth, it's imperative to also show them evidence of the value for your business specifically. You can do this by implementing a small AI project, such as using a chatbot to help answer customer questions online. After seeing the success of one AI use case, your C-suite is more likely to be ready for further AI-driven digital transformations.
    2. Pack quality data onto the train's cargo car. Of course, AI can only create value from data if you have data--and not just any data, but good data. Despite the world generating incomprehensible volumes of data every minute, 23 percent of respondents to a Vanson Bourne/Teradata survey of senior IT and business leaders reported that C-suite executives aren't using data to inform their decisions. Data has to be relevant to a company's business model, and sometimes the systems are not in place to capture the data business leaders need. Nor is it just a matter of access to relevant data; data quality is critical as well. Data that contains many factual errors or omissions needs to be cleaned before it can be fed to AI algorithms so that the insights derived from the data set reflect reality and not just data noise. To prepare your data beforehand, have your team scan it for missing or incomplete records, empty cells and misplaced characters, and data that's entered in a different format from everything else--any or all of which could throw off your algorithms.
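As an illustration of that checklist, here is a minimal pandas sketch (the column names and values are hypothetical, invented for this example) that drops unlinkable records, strips a misplaced character from a numeric field, and normalizes dates:

```python
import pandas as pd

# Hypothetical transaction records showing the problems listed above:
# a missing ID, an empty cell, and a misplaced character in a numeric field.
raw = pd.DataFrame({
    "customer_id": [101, 102, None, 104],
    "amount": ["19.99", "", "42.50", "$7.00"],
    "order_date": ["2019-03-01", "2019-03-02", "2019-03-03", "2019-03-04"],
})

# Drop records with no customer ID: they can't be linked to anything.
clean = raw.dropna(subset=["customer_id"]).copy()

# Strip the misplaced "$", then coerce to numbers; the empty cell
# becomes a proper NaN instead of silently skewing the column.
clean["amount"] = pd.to_numeric(
    clean["amount"].str.replace("$", "", regex=False), errors="coerce"
)

# Parse dates into a single datetime dtype so formats stay consistent.
clean["order_date"] = pd.to_datetime(clean["order_date"])

print(clean)
```

pandas is just one way to run these checks; the same scan can be expressed in SQL or handled by a commercial data-prep tool.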
    There are machine learning platforms with tools to help your team with this task, such as the DataRobot platform, which uses tools like Trifacta to facilitate the data prep process.
    3. Hire--or train--the right crew members. Finally, make sure your team members have what it takes to launch your new AI initiative and keep it aligned with best practices. You'll want to put together a team that includes roles such as a systems architect, data engineer and/or data scientist, and a business analyst, among possible others. The team should be focused on creating scalable solutions that take advantage of the latest approaches in the fields of machine learning, deep learning, big data, SQL and NoSQL databases, and other areas of active development. Not that assembling such a team will be easy: the Vanson Bourne/Teradata survey found about a third of respondents cited talent as the bottleneck to advancing their AI plans. That isn't surprising given there may be only about 3,000 AI professionals who are actively seeking jobs--against about 10,000 available jobs in this country alone. So if you're struggling to find people trained and experienced in working with AI, train members of your current team to fill that talent gap. Your developers can take advantage of the Microsoft Professional Program for Artificial Intelligence, a program the software giant has made available to IT professionals who want to develop skills in AI and data science. And don't neglect the non-IT members of your team. A former AI leader at Google offers online AI training through Coursera that aims to give businesspeople a foundational knowledge of pattern recognition and machine learning. Spreading the AI savvy throughout your organization can only aid your efforts to put this groundbreaking technology to the best use. Companies that get on board with AI now will be at a critical advantage this time next year.
Don't delay further--fire up your executives' enthusiasm, find your best data sets and use cases, and start putting together a first-rate team of AI experts. Done right, AI can be a complete game changer for many companies, from enterprises to small businesses. Read more »
  • 3 ways AI is already changing medicine
    When Dr. Eric Topol joined an experiment on using artificial intelligence to get personalized nutrition advice, he was hopeful. For two weeks, Topol, a cardiologist at Scripps Research, dutifully tracked everything he ate, wore a sensor to monitor his blood-glucose levels, and even collected and mailed off a stool sample for an analysis of his gut microbiome. The diet advice he got back stunned him: Eat Bratwurst, nuts, danishes, strawberries, and cheesecake. Stay away from oatmeal, melon, whole-wheat fig bars, veggie burgers, and grapefruit. “It was crazy stuff,” Topol told me. Bratwurst and cheesecake are foods Topol generally shuns because he considers them “unhealthy.” And strawberries can actually be dangerous for Topol: He’s had kidney stones and has to avoid foods, such as berries, that are high in calcium oxalate, a chemical that can turn into stones. All in all, Topol discovered that most of the companies currently marketing personalized diets can’t actually deliver. It’s just one of the great insights in his new book about artificial intelligence, Deep Medicine. AI for diet is one of the most hyped applications of the technology. But in the book Topol uncovers more promising opportunities for artificial intelligence to improve health — some of which surprised me. He also challenges the most common narrative about AI in health: that radiologists will soon be replaced by machines. Instead of robots coming into medicine and further eroding what’s left of the doctor-patient relationship, Topol argues, AI may actually enhance it. I’ve boiled down three of Topol’s most surprising findings, after reading the book and talking with him.
    1) AI for your eyes and colon
    Diagnosing disease is a notoriously difficult task, and doctors don’t always get it right — which is why there’s been a lot of excitement around the idea that AI might make the task both easier and more precise.
But as the quest to create a medical tricorder — a portable device capable of diagnosing diseases in humans — continues, there’ve been serious developments in automating diagnostics, and even triage, in several pretty specific areas of medicine. Take ophthalmology. The top cause of loss of vision in adults worldwide is diabetic retinopathy, a condition that affects about a third of people with diabetes in the US. Patients should be screened for the condition, but that doesn’t always happen, which can sometimes delay diagnosis and treatment — and lead to more vision loss. Researchers at Google developed a deep learning algorithm that can automatically detect the condition with a great deal of accuracy, Topol found. According to one paper, the software had a sensitivity score of 87 to 90 percent and 98 percent specificity for detecting diabetic retinopathy, which they defined as “moderate or worse diabetic retinopathy or referable macular edema by the majority decision of a panel of at least seven US board-certified ophthalmologists.” Doctors at Moorfields Eye Hospital in London took that work a step further. They trained an algorithm that could recommend the correct treatment approach for more than 50 eye diseases with 94 percent accuracy. “They compared that to eye specialists, and the machine didn’t miss one referral, but the eye doctors did,” Topol said. “The eye doctors were only in agreement about the referrals 65 percent of the time. So that’s the beginning of moving from narrow AI to triage.” In another example, doctors in China used AI to diagnose polyps on the colon during a colonoscopy. In one arm of the randomized trial, the diagnosis was made by AI plus the gastroenterologist. In another arm, just the specialist made the diagnosis. The AI system significantly increased polyp detection (29 percent compared to 20 percent).
And this was mainly because AI spotted what are known as “diminutive adenomas,” or tiny polyps — less than 5 mm in size — that are notoriously easy for doctors to miss. “Machine vision is starting to improve,” Topol said. And while we’re far from having a hand-held machine that can diagnose any disease, these small steps will probably eventually lead there, he added.
    2) Avatars to help anxiety and depression
    When we talk about the impact of computers and the internet on our mental health, we often talk about the negative: that they can be alienating, isolating, anxiety-provoking. Yet Topol found good evidence of just the opposite: They can be comforting in some cases. In one elegant experiment, researchers at USC tested whether people would be willing to reveal their innermost secrets to an avatar named Ellie as compared to another human. “The shocking result — it wasn’t even a contest,” said Topol. “People far more readily would tell an avatar their deepest secret.” That experiment has since been replicated, and researchers are finding chat bots and avatars also seem to help people with symptoms of anxiety and depression. “It’s an interesting finding in the modern era,” said Topol. “I don’t think it would have been predicted. It’s like going to confession — you’re laying it out there and you feel a catharsis.” So why is this so important? “Some think it’s a breakthrough. Others are skeptical it’ll help. But there’s such an absurd mismatch between what we need to support people’s mental health conditions and what’s available,” Topol said. “So if this does work — and it looks promising — this could be a vital step forward to helping [more] people.”
    3) AI could free up time for doctors
    As the average doctor appointment time has dwindled to a few minutes, so too has any intimacy or sense of connection that can develop between doctors and patients.
Topol went into the book thinking AI — and bringing more machines into hospitals and clinics — might further dampen the human side of medicine. But by the end of his research, he ended up seeing a big opportunity: “I realized that as you can augment human performance at both the clinician level and the patient level, at a scale that is unprecedented, you can make time grow.” And giving more time to doctors could, in theory, mean the intimacy can come back. To “make time grow,” Topol said, AI can help with time-consuming tasks, like note-taking by voice. Notes can then be archived for patients to review — and a correction function could be built into the process so patients can flag any errors in their records. “These are all features that can enhance the humanistic encounter we’ve lost over time,” Topol said. AI can also free up time for specialists to meet with patients. Topol argues in the book that instead of AI replacing radiologists — widely viewed as the medical specialists most at risk of becoming extinct — AI will enhance them. “The average radiologist today reads between 50 and 100 films in a day. There’s a significant error rate and a third of radiologists at some point in their career get sued for malpractice,” he said. Enter deep learning. “You then have an amazing ability to scale where a radiologist could read 10 times as many films or 100 times as many films. But is that what we want? Or do we want to use that capability [so radiologists] can start talking to patients, come out of the basement and review the results, sharing an expertise which they never otherwise get to.” So AI could liberate doctors in a tech-heavy specialty, like radiology, to help patients through a diagnosis — something that doesn’t happen now.
    Two big hurdles
    Topol is certainly an optimist about the power of AI to make things better — even about personalized diets. “Our health is not just absence of disease. It’s about the prevention of disease,” he told Vox.
“And if we can use food as a medicine to help us prevent illness, that would be terrific. We’ll get there someday.” But you might still be skeptical — that’s fair. The health care system has been abysmal at doing the very basics of incorporating new technology into medical practice, like digitizing medical records. And Topol makes clear in the book that many of these promising technologies, like avatars for mental health or AI for colonoscopies, need to be further validated and refined in clinical studies, and followed up with as they move beyond the study phase and into the real world. To get there, there are also the privacy and data hurdles to contend with, which could make or break technologies like the avatar shrink. Machine learning is best when lots of data is fed into an algorithm — the more data, the better. “If we’re going to do deep learning and provide feedback, the only way it’ll work well is if we have all a person’s data: sensor data, genome data, microbiome data, [medical records]. It’s a long list.” But “people don’t have their [personal] data today in this country,” Topol said. “They can’t get all their medical records for every time they’ve been to a doctor or hospital. We’d want each person to have all their data from when they’re still in their moms’ womb.” Topol has some ideas for how to fix this too. US policymakers need to move in step with countries like Estonia, which found a way to allow people full control of their personal, including medical, data. Empowering people with their data could also help with security. Our data right now is stored on massive servers and clouds. “The gurus say the best chance of data being secure and maintained privately is to store it in the smallest units possible,” Topol said. “It’ll help guide your health in the times ahead.” Source: Vox
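A footnote on the screening numbers quoted in the retinopathy study above: sensitivity and specificity are simple ratios over a confusion matrix. Here is a minimal sketch with made-up counts (not the actual study data) chosen to land near the quoted figures:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of truly diseased cases the model flags.
    Specificity: share of truly healthy cases the model clears."""
    return tp / (tp + fn), tn / (tn + fp)

# Made-up screening cohort: 1,000 patients, 100 with referable retinopathy.
# The model catches 90 of the 100 cases and wrongly flags 18 of 900 healthy.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=882, fp=18)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
# → sensitivity = 90%, specificity = 98%
```

The two numbers trade off against each other as the model's decision threshold moves, which is why papers report both rather than a single accuracy figure.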
  • Why AI will make healthcare personal
    For generations healthcare has been episodic – someone gets sick or breaks a bone, they see a doctor, and then they might not see another one until the next time they get sick or injured. Now, as emerging technologies such as artificial intelligence open up new possibilities for the healthcare industry in the Fourth Industrial Revolution, policymakers and practitioners are developing new ways to deliver continuous healthcare for better outcomes. Consumers already expect access to healthcare providers to be as smart and easy as online banking, retrieving boarding passes and making restaurant reservations, according to Kaiser Permanente CEO Bernard J Tyson. Nearly three-quarters of Americans with health insurance (72%), for example, say it’s important that their health insurance provider uses modern communication tools, such as instant message and two-way video. Innovative healthcare organizations such as Kaiser Permanente are listening. The company is not only looking at how they harness technology now, to provide patients with better access to care, but how it can be used in the future to diagnose and treat chronic disease early, so people have a better chance of leading longer and healthier lives. A future where personal digital healthcare assistants monitor every aspect of our health and well-being, and screening for and treating disease is tailored to our DNA, isn’t science-fiction. It’s getting closer every day. Achieving the dream of personalized healthcare for everyone won’t be without challenges. Not only will healthcare providers have to develop and adopt new technology, they will also have to collect, aggregate and share the vast amounts of patient data, and organize it into a usable form to train the AI systems to make intelligent diagnoses, advice and predictions. 
They will also have to address the very real privacy concerns raised by high-profile cases such as Google's acquisition of DeepMind Health, which may see the tech giant get access to 1.6 million National Health Service patients’ data in the UK. Digital assistants to provide a 24/7 helping hand Already, digital assistants such as Amazon’s Alexa and Apple’s Siri are using AI to handle routine tasks, from making restaurant reservations to scheduling meetings and returning phone calls. The digital assistants of the future will be full-time healthcare companions, able to monitor a patient’s condition, transmit results to healthcare providers, and arrange virtual and face-to-face appointments. They will help manage the frequency and dosage of medication, and provide reliable medical advice around the clock. They will remind doctors of patients’ details, ranging from previous illnesses to past drug reactions. And they will assist older people to access the care they need as they age, including hospice care, and help to mitigate the fear and loneliness many elderly people feel. Precision medicine to personalize treatment AI is also the driving force behind precision medicine, which uses information about a person’s environment, lifestyle and biology, including in their DNA, to diagnose and treat diseases. By analyzing a patient’s information, doctors are able to prescribe the treatments that are most likely to be effective, as well as minimize drug reactions and unwanted side effects. As the World Economic Forum’s head of precision medicine, Genya Dana, says, it’s “the right treatment for the right person at the right time.” Collecting the genetic information needed for precision medicine is already becoming easier and less expensive. It’s now possible to have your entire genome sequenced for less than $1,000 (and access it via a mobile phone app). In 2007, the cost was $350,000. 
In addition to improving people’s health outcomes, being able to quickly identify effective treatments could also help reduce the cost of healthcare by cutting the number of treatments and procedures doctors prescribe. This will become increasingly crucial as the world’s older population continues to grow. Globally, countries including the US, China, and Japan, as well as pharmaceutical companies, are investing billions in precision medicine research. Reduced costs The great power of harnessing AI is that access to these innovations won’t just be limited to the wealthy few. More of the world’s population will benefit from these advances. In Africa, cancer is now the No 1 cause of death. The Rwandan government is working with the World Economic Forum to increase the country’s diagnostic capacity for detecting cancer. As examples such as Massachusetts General Hospital and Harvard Medical School’s breast cancer screening trial prove, AI can be used to accurately assess scans, make recommendations for treatment, and reduce unnecessary surgeries caused by false positives. With the right kind of policy and infrastructure in place, the potential benefits of AI-driven medicine would be enormous for Rwanda. Removing the opportunity for error AI is already contributing to reducing deaths due to medical errors. After heart disease and cancer, medical errors are the third-leading cause of death. Take prescription drug errors. In the US, around 7,000 people die each year from being given the wrong drug, or the wrong dosage of the correct drug. To help solve the problem, Bainbridge Health has designed a system that uses AI to take the possibility of human error out of the process, ensuring that hospital patients get the right drug at the right dosage. The system tracks the entire process, step-by-step, from the prescription being written to the correct dosage being given to the patient. Health insurance company Humana is using AI to augment its human customer service.
The system can send customer service agents real-time messages about how to improve their interaction with callers. It’s also able to identify those conversations that seem likely to escalate and alert a supervisor so that they’re ready to take the call, if necessary. This means the caller isn’t put on hold, improving the customer experience and helping to resolve issues faster. These are both great examples of the kinds of problems that can be solved with AI. We’re going to be seeing more and more innovations like these. Where do we go from here? AI has the potential to revolutionize healthcare, but if we want to make sure that this leads to better healthcare outcomes for everyone, then we need to do three things. First, governments and other organizations need to develop protocols for safely and sustainably building the emerging personal data economy. Only by allowing people to manage and trade their own data will we calm fears about data security and misuse, and ensure the steady flow of high-quality data that healthcare providers will require to build smarter AI systems. Second, we need to maintain a strong ethical mindset when considering the moral implications of using technology to make decisions about people’s health and well-being. This includes being transparent about the algorithms employed and the data used to feed them, so that patients understand why a decision was made. And finally, we need to dream big. Diseases like polio and smallpox have been virtually eradicated in many parts of the world already, so why can’t we do the same with other diseases? Imagine a world without sickle cell anemia, without cancer! When thinking about the future of healthcare, the potential is immense. Now let’s make it happen. Source: World Economic Forum
  • A.I. Could Help Us Be More Human
    Maricel Cabahug, the chief design officer at German business software giant SAP, says her company likes to think of its A.I.-driven products and services as coworkers for SAP’s clients. But that paradigm has its issues. “How do we make [an A.I. product] so it doesn’t compete with you?” Cabahug asked during Fortune’s Brainstorm Design conference in Singapore last week. The potential for robots to replace humans has already been realized on a large scale in the manufacturing sector, she said, and mundane white-collar jobs are likely targets for automation, too. “Unlike a coworker you might train and who might one day be your boss, this coworker will never be better than you,” Cabahug assured conference attendees, whose companies might call such “coworkers” by another name: virtual assistants. With that premise in mind, SAP developed a “smart” tool called Inscribe, which allows users to interact with SAP’s management software via a stylus and therefore natural handwriting. Through Inscribe, a person can scribble out columns in a spreadsheet, add notes to sections they find interesting, and hand-write directives to the algorithms running the software. Cabahug described the technology as a “conversational experience” because SAP’s A.I. responds to prompts from the stylus and feeds the user information. Inscribe’s purpose, Cabahug said, is to help people be better at their job—not to do their job for them. Besides Inscribe, its answer to the problem of how humans interact with ever-advancing tech, the company also offers voice-activated solutions. “These types of interaction allow us to be more human,” Cabahug said. That’s a similar sentiment to one that Tim Brown, CEO of design consultancy IDEO, expressed during the first day of Brainstorm Design. Brown remarked that A.I. could stand for “augmented” rather than “artificial” intelligence, because its purpose is to help humans achieve more than we could do alone.
Perhaps having a robotic co-worker won’t be so bad after all. Source: Fortune

Artificial Intelligence Weekly News

  • Artificial Intelligence Weekly - Artificial Intelligence News #100 - Mar 13th 2019
    In the News The AI-art gold rush is here An artificial-intelligence “artist” got a solo show at a Chelsea gallery. Will it reinvent art, or destroy it? Also in the news... OpenAI created OpenAI LP, a new “capped-profit” company to invest in projects that align with their vision towards AGI. More Google Duplex rolls out to Pixel phones in 43 US states. More Sponsor Machine Learning for Marketers: We Built a Marketing Tactic Recommendation Engine Ladder’s mission is to remove the guesswork from growth. Machine learning, data science, and automated intelligence are their latest leap forward. Learn how they built a marketing tactic recommendation engine, and how they’re currently working to automate marketing strategy. Learn More Learning Humanity + AI: Better Together Frank Chen on how A.I. should "help humanity": creativity, decision making, understanding etc. Many examples too — mainly companies they invested in. You created a machine learning application. Now make sure it’s secure The software industry has demonstrated, all too clearly, what happens when you don’t pay attention to security Technical details on Facebook Portal's smart camera Software tools & code Lessons learned building natural language processing systems in health care NLP systems in health care are hard—they require broad general and medical knowledge, must handle a large variety of inputs, and need to understand context. Using deep learning to “read your thoughts” With Keras and an EEG sensor Jupyter Lab: Evolution of the Jupyter Notebook An overview of JupyterLab, the next generation of the Jupyter Notebook. Cocktail similarity Fun project to generate a cocktail similarity map based on common ingredients Hardware Launching TensorFlow Lite for Microcontrollers Workplace 12 things I wish I’d known before starting as a Data Scientist Useful and practical advice by someone who has worked as a data scientist for a few years at Airbnb.
Some thoughts Driver Behaviours in a world of Autonomous Mobility These are the behaviours and practices that will mainstream in our self-driving urban landscape. This RSS feed is published on You can also subscribe via email.
  • Artificial Intelligence Weekly - Artificial Intelligence News #99 - Feb 28th 2019
    In the News This is why AI has yet to reshape most businesses For many companies, deploying AI is slower and more expensive than it might seem. We have stumbled into the era of machine psychology Nvidia's got a cunning plan to keep powering the AI revolution Nvidia’s artificial intelligence journey started with cats. Now it's heading to the kitchen Sponsor Partner with Neon to Develop New Conversational AI Apps Take your apps and devices to the next level. Neon’s time-tested, white-label product delivers easy-to-install code with endless possibilities. Our Polylingual AI offers real-time translation, transcription, natural language understanding, home automation and more. Let's build the future together. Learning 14 NLP Research Breakthroughs You Can Apply To Your Business BERT, sequence classification with human attention, SWAG, Meta-learning, Multi-task learning etc. The technology behind OpenAI’s fiction-writing, fake-news-spewing AI, explained The language model can write like a human, but it doesn’t have a clue what it’s saying. Data Versioning "Productionizing machine learning/AI/data science is a challenge. Not only are the outputs of machine-learning algorithms often compiled artifacts that need to be incorporated into existing production services, the languages and techniques used to develop these models are usually very different than those used in building the actual service. In this post, I want to explore how the degrees of freedom in versioning machine learning systems poses a unique challenge. I'll identify four key axes on which machine learning systems have a notion of version, along with some brief recommendations for how to simplify this a bit." Foundations Built for a General Theory of Neural Networks Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural network’s form will influence its function. 
Software tools & code How 20th Century Fox uses ML to predict a movie audience How Hayneedle created its visual search engine Some thoughts Meet 2 women transforming the AI Ecosystem in Africa About This newsletter is a collection of AI news and resources curated by @dlissmyr. If you find it worthwhile, please forward to your friends and colleagues, or share on your favorite network! Share on Twitter · Share on Linkedin Suggestions or comments are more than welcome, just reply to this email. Thanks! This RSS feed is published on You can also subscribe via email.
  • Artificial Intelligence Weekly - Artificial Intelligence News #98 - Feb 21st 2019
    In the News The Rise of the Robot Reporter As reporters and editors find themselves the victims of layoffs at digital publishers and traditional newspaper chains alike, journalism generated by machine is on the rise. Getting smart about the future of AI Artificial intelligence is a primary driver of possibilities and promise as the Fourth Industrial Revolution unfolds. Sponsor Add Audible AI to Any Web Page in Just 5 Lines of HTML Neon adds Audible AI to all your pages quickly and easily! Empower your website users to gather helpful information by using voice commands or by typing. Equip your site so users can ask for real-time Q&A, conversions, math solutions, language translation, transcription & more! Customizable! Watch our Audible AI demo to learn how. Learning Better Language Models and Their Implications OpenAI trained GPT2, a language generation model that achieved surprisingly good results (see article for examples). Seeing this performance, OpenAI decided not to open-source their best model for fear it might be mis-used (online trolling, fake news, cyber bullying, spam...) List of Machine Learning / Deep Learning conferences in 2019 Perspectives on issues in AI Governance Report by Google focusing on 5 areas for clarification: explainability, fairness, safety, human-AI collaboration and liability Software tools & code Introducing PlaNet Instead of using traditional RL approaches, Google has trained an agent to "learn a world model" and thus become more efficient at planning ahead. Troubleshooting Deep Neural Networks A field guide to fixing your model Hardware Edge TPU Devices The Edge TPU is a small ASIC designed by Google that performs ML inferencing on low-power devices. For example, it can execute MobileNet V2 at 100+ fps in a power efficient manner. Facebook is Working on Its Own Custom AI Silicon Workplace Succeeding as a data scientist in small companies/startups Some thoughts Will AI achieve consciousness?
Wrong question When Norbert Wiener, the father of cybernetics, wrote his book The Human Use of Human Beings in 1950, vacuum tubes were still the primary electronic building blocks, and there were only a few actual computers in operation.
  • Artificial Intelligence Weekly - Artificial Intelligence News #97 - Feb 7th 2019
    In the News DeepMind wants to teach AI to play a card game that’s harder than Go Hanabi is a card game that relies on theory of mind and a higher level of reasoning than either Go or chess—no wonder DeepMind’s researchers want to tackle it next. In the news this week... China is said to be worried an AI arms race could lead to accidental war. More Google says it wants rules for the use of AI—kinda, sorta. More Is China’s corruption-busting AI system ‘Zero Trust’ being turned off for being too efficient? More Sponsor Build Better Bots with Our Neon Conversational AI SDK Finally, a white label solution for your polylingual conversational AI needs. Neon provides advanced Natural Language Understanding so you can build custom audio-responsive devices, AI personal assistants, home automation, corporate apps and more…with real-time translated responses! Watch our AI in action. Learning Cameras that understand: portrait mode and Google Lens On-device ML and computer vision advances will help make camera sensors a lot smarter. Today it's all about "computational photography", but tomorrow cameras will be able to anticipate and understand our needs and context. An AI is playing Pictionary to figure out how the world works Forget Go or StarCraft—guessing the phrase behind a drawing will require machines to gain some understanding of the way concepts fit together in the real world. Software tools & code Papers with code Great new resource that provides summaries and links to Machine Learning papers along with the corresponding code and evaluation tables. 
Multi-label Text Classification using BERT Diversity in Faces IBM Research releases ‘Diversity in Faces’ dataset to advance study of fairness in Facial Recognition systems Running TensorFlow at Petascale and Beyond Uber AresDB Introducing Uber’s GPU-powered Open Source, Real-time Analytics Engine Workplace Data Scientist Salaries and Jobs in Europe Here is what a recent report says about job opportunities for Data Scientists across Europe including salaries and benefits, job motivations, programming languages used, tech skills and what people want most from their work. Your AI skills are worth less than you think Some thoughts 10 TED Talks on AI and machine learning How will AI reshape your career? Your health? Your ability to tell real from fake video? Recent TED talks explore some fascinating AI questions
  • Artificial Intelligence Weekly - Artificial Intelligence News #96 - Jan 31st 2019
    In the News Deepmind's AlphaStar beats pro Starcraft player Starcraft has been a focus for AI due to its higher complexity vs chess or go: imperfect information, real-time, a lot more things happening... Granted, Alphastar was maybe not on a level playing field with humans. But this is still an impressive feat. Virtual creators aren’t AI — but AI is coming for them Lil Miquela, the A.I. generated Instagram superstar, may just be the beginning. Amazon's delivery robot Scout Amazon has revealed Scout, a six-wheeled knee-height robot designed to autonomously deliver products to Amazon customers. Sponsor Quickly Enable Your Solutions with Conversational AI Tech Don’t get left in the AI dust. You need fast, sophisticated AI enablement—and we’ve got it. From real-time transcription and language translation to smart alerts and database integration, our patented technologies can put your apps and devices ahead of the pack. Watch our demos to learn how. Learning We analyzed 16,625 papers to figure out where AI is headed next Our study of 25 years of artificial-intelligence research suggests the era of deep learning is coming to an end. Practical Deep Learning for coders 2019 This is the 3rd iteration of this great online learning resource for coders. There are seven lessons, each around 2 hours long, covering Computer vision (classification, localization, key-points), NLP (language modeling, document classification) and tabular data (both categorical and continuous). Why are Machine Learning projects so hard to manage? "I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?" Software tools & code Natural Questions — by Google Google released a dataset containing around 300,000 questions along with human-annotated answers from Wikipedia pages. Useful for Question Answering Research.
The state of the octoverse: machine learning Github's review of the most popular languages, frameworks and tools for Machine Learning. Transformer-XL: Unleashing the Potential of Attention Models Introducing Transformer-XL, a novel architecture that enables natural language understanding beyond a fixed-length context... Hardware This robot can probably beat you at Jenga—thanks to its understanding of the world Industrial machines could be trained to be less clumsy if we gave them a sense of touch and a better sense of real-world physics. Some thoughts The AI threat to open societies — by George Soros In an age of populist nationalism, open societies have increasingly come under strain. But the threat of atavistic ideological movements pales in comparison to that posed by powerful new technologies in the hands of authoritarians. A.I. could worsen health disparities In a health system riddled with inequity, we risk making dangerous biases automated and invisible.
  • Artificial Intelligence Weekly - Artificial Intelligence News #95 - Jan 24th 2019
    In the News AI is sending people to jail—and getting it wrong Using historical data to train risk assessment tools could mean that machines are copying the mistakes of the past. Three charts show how China’s AI industry is propped up by three companies More than half of the country’s major AI players have funding ties that lead back to Baidu, Alibaba, and Tencent. Sponsor Is There Bias in Your AI Model? Do You Know How It Got There? Bias comes in a variety of forms, all of them potentially damaging to the efficacy of your ML algorithm. Our Chief Data Scientist specializes in training data bias, the source of most headlines about AI failures. Our overview – Four Types of AI Bias – is a guide for detecting and mitigating bias. Learning Few-shot learning Thoughts on progress made and challenges ahead in few-shot learning (i.e. learning from tiny datasets). Slides by Hugo Larochelle (Google Brain) What can neural networks learn? "Neural networks are famously difficult to interpret. It’s hard to know what they are actually learning when we train them. Let’s take a closer look and see whether we can build a good picture of what’s going on inside." Looking Back at Google’s AI Research Efforts in 2018 Software tools & code Machine Learning for Kids Web-based list of ML projects aimed at children aged 8 to 16. They cover simple computer vision, NLP, game mechanics, and are available on "scratch", a coding platform for children. Uber Manifold Model-Agnostic Visual Debugging Tool for Machine Learning What’s coming in TensorFlow 2.0 Workplace Demand and salaries for Data Scientists continue to climb Data-science job openings are expanding faster than the number of technologists looking for them Airbnb Data Science Interview Questions Some thoughts How AI will turn us all into Filmmakers
  • Artificial Intelligence Weekly - Artificial Intelligence News #94 - Jan 10th 2019
    In the News Cheaper AI for everyone is the promise with Intel and Facebook’s new chip Companies hoping to use artificial intelligence should benefit from more efficient chip designs. Finland’s grand AI experiment Inside Finland’s plan to train its whole population in artificial intelligence. World's largest AI startup readies $2B fundraising Sponsor The first AI/machine learning course with job guarantee Work with the latest AI applications after completing Springboard's self-paced, online machine learning course. Weekly personal calls with your own AI/machine learning expert. Personalized career coaching. Build a portfolio of meaningful projects that will get you hired. Get a job or your tuition back with the proven Springboard job guarantee. Learning High-performance medicine On the convergence of human and artificial intelligence Does AI make strong tech companies stronger? A.I. needs lots of data to work well, which leads to virtuous circles (more data => better AI => better product => more data) that can benefit large established tech companies. Is this true though? Unprovability comes to machine learning Scenarios have been discovered in which it is impossible to prove whether or not a machine-learning algorithm could solve a particular problem. This finding might have implications for both established and future learning algorithms. Lessons Learned at Instagram Stories and Feed Machine Learning Instagram's recommender system serves over 1 billion users on a regular basis for feed and stories ranking as well as post recommendations and smart prefetching. Here are a few lessons learnt along the way of building this ML pipeline. Software tools & code Designing an audio adblocker for radio and podcasts Adblock Radio detects audio ads with machine-learning and Shazam-like techniques. The core engine is open source: use it in your radio product! You are welcome to join efforts to support more radios and podcasts. 
What Kagglers are using for Text Classification About TextCNN, Bidirectional RNNs and Attention Models. 10 Data Science tools I explored in 2018 Tensorflow Privacy Library for training machine learning models with privacy for training data. This makes use of differential privacy. Workplace Does my startup data team need a Data Engineer? The role of the data engineer in a startup data team is changing rapidly. Are you thinking about it the right way? Some thoughts How big data has created a big crisis in science
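The TensorFlow Privacy item above rests on one core mechanism: differentially private SGD clips each example's gradient to a maximum L2 norm, then adds Gaussian noise calibrated to that clipping norm before averaging. A minimal, library-free sketch of that step (function names and toy gradients are illustrative, not TensorFlow Privacy's actual API):

```python
import math
import random

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def clip_gradient(grad, clip_norm):
    """Scale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = l2_norm(grad)
    if norm > clip_norm:
        return [x * clip_norm / norm for x in grad]
    return list(grad)

def dp_average_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """Sum clipped per-example gradients, add Gaussian noise scaled to the
    clipping norm, then average -- the DP-SGD recipe in miniature."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    clipped = [clip_gradient(g, clip_norm) for g in per_example_grads]
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * clip_norm
    return [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]
```

With noise_multiplier set to 0 this reduces to ordinary clipped averaging; larger multipliers buy stronger privacy guarantees at the cost of noisier updates.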
  • Artificial Intelligence Weekly - Artificial Intelligence News #93 - Dec 27th 2018
    In the News Should we be worried about computerized Facial Recognition? The technology could revolutionize policing, medicine, even agriculture—but its applications can easily be weaponized. AlphaZero: Shedding new light on the grand games of chess, shogi and Go Introducing the full evaluation of AlphaZero on how it learns each game to become the strongest player in history for each, despite starting its training from random play, with no in-built domain knowledge but the basic rules of the game. Learning 10 Exciting Ideas of 2018 in NLP Unsupervised MT, pre-trained language models, common sense inference datasets, meta-learning, robust unsupervised learning, understanding representations, clever auxiliary tasks, inductive bias and more How AI Training Scales "We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized." Data Science vs Engineering: Tension Points Current state of collaboration around building and deploying models, tension points that potentially arise, as well as practical advice on how to address these tension points. Using object detection for complex image classification scenarios For situations where scenes don't contain one main object or a simple scene, object detection can be used to improve the performance of computer vision algorithms. Comes with examples from the retail industry. 
The limitations of deep learning Software tools & code Text as Data Learn how to collect and analyze social media data using topic models, text networks, and word2vec with this open source version of the Text as Data class from Duke's Data Science program. Wav2letter++, the fastest open source speech system, and flashlight Open-sourced by Facebook: a new fully convolutional approach to automatic speech recognition and wav2letter++, the fastest state-of-the-art end-to-end speech recognition system available. Hardware A Full Hardware Guide to Deep Learning
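The gradient noise scale behind the "How AI Training Scales" item above is, in its simplest form, the trace of the per-example gradient covariance divided by the squared norm of the mean gradient. A toy estimator in plain Python (a sketch of the statistic only; the actual estimator in the paper also corrects for batch-size effects):

```python
def simple_noise_scale(per_example_grads):
    """Estimate B_simple = tr(Sigma) / |G|^2: the summed per-coordinate
    variance of the gradients divided by the squared norm of the mean
    gradient. A large value suggests bigger batches would still help."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    mean = [sum(g[i] for g in per_example_grads) / n for i in range(dim)]
    # tr(Sigma): unbiased variance estimate summed over all coordinates
    trace = sum(
        sum((g[i] - mean[i]) ** 2 for g in per_example_grads) / (n - 1)
        for i in range(dim)
    )
    sq_norm = sum(m * m for m in mean)
    return trace / sq_norm
```

When this ratio is large relative to the batch size, the gradient estimate is noise-dominated and extra parallelism still pays off; when it is small, larger batches are mostly wasted.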
  • Artificial Intelligence Weekly - Artificial Intelligence News #92 - Dec 13th 2018
    In the News A radical new neural network design could overcome big challenges in AI Researchers borrowed equations from calculus to redesign the core machinery of deep learning so it can model continuous processes like changes in health. The friendship that made Google huge Coding together at the same computer, Jeff Dean and Sanjay Ghemawat changed the course of the company—and the Internet. Alibaba already has a voice assistant way better than Google’s It navigates interruptions and other tricky features of human conversation to field millions of requests a day. Also in the news... DeepMind has announced AlphaFold: an AI to predict the 3D structure of a protein based solely on its genetic sequence. More Waymo is introducing the Waymo One, a fully self-driving service in the Phoenix Metro area. More Learning Community-driven site listing recent and interesting papers to help you review the state-of-the-art in NLP, computer vision, game playing, program synthesis etc. AI Index 2018 Report Long and detailed report on A.I. research activity, industry activity and technical performance for 2018. Predicting the real-time availability of 200 million grocery items Details on Instacart's model to continuously monitor and predict the availability of 200 million grocery items. Software tools & code Deepdive into Facebook's open-source Reinforcement Learning platform "Facebook decided to open-source the platform that they created to solve end-to-end Reinforcement Learning problems at the scale they are working on. So of course I just had to try this 😉 Let’s go through this together on how they installed it and what you should do to get this working yourself." Machine Learning Basics - Gradient Boosting & XGBoost Like Random Forest, Gradient Boosting is another technique for performing supervised machine learning tasks, like classification and regression. 
Hardware Amazon’s homegrown chips threaten Silicon Valley giant Intel Amazon is upping its efforts to build its own CPUs, in particular what it calls the Inferentia, a chip specifically designed for ML inference. Workplace 3 common data science career transitions, and how to make them happen Some thoughts The deepest problem with deep learning Some reflections on an accidental Twitterstorm, the future of AI and deep learning, and what happens when you confuse a schoolbus with a snow plow.
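The "Machine Learning Basics - Gradient Boosting & XGBoost" item above can be made concrete in a few lines: for squared error, the negative gradient is just the residual, so each round fits a weak learner to the current residuals and adds a shrunken copy to the ensemble. A from-scratch sketch with 1-D decision stumps (illustrative only; XGBoost adds regularization, second-order gradients, and much more):

```python
def fit_stump(xs, residuals):
    """Find the threshold split of 1-D inputs that best fits the residuals
    in a least-squares sense; returns (threshold, left_value, right_value)."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    return best[1], best[2], best[3]

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    """Least-squares gradient boosting: each round fits a stump to the
    residuals (the negative gradient of squared error) and adds a damped
    copy of it to the ensemble."""
    base = sum(ys) / len(ys)
    stumps = []
    preds = [base] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lv, rv = fit_stump(xs, residuals)
        stumps.append((t, lv, rv))
        preds = [p + lr * (lv if x <= t else rv) for x, p in zip(xs, preds)]
    def predict(x):
        out = base
        for t, lv, rv in stumps:
            out += lr * (lv if x <= t else rv)
        return out
    return predict
```

For example, gradient_boost([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 1.0]) learns a step function; the learning rate lr trades rounds for robustness, exactly the shrinkage parameter XGBoost exposes as eta.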
  • Artificial Intelligence Weekly - Artificial Intelligence News #91 - Nov 29th 2018
    In the News
    - How cheap labor drives China’s AI ambitions: Cheap manual labor is one of the current requirements for large AI developments. China has a large, inexpensive labor force and is making it one of its key advantages in its bid to become the AI world leader by 2030.
    - Is the Chinese billionaire Jack Ma using AI to create dystopian cities? "News that tech giant Ma is a member of Communist party of China should set alarm bells ringing – and not just in China"
    - One of the fathers of AI is worried about its future: Yoshua Bengio wants to stop talk of an AI arms race and make the technology more accessible to the developing world.
    Also in the news...
    - The US Commerce Department proposes new export restrictions on AI. Tweet, Report
    - Fearful of gender bias, Google blocks gender-based pronouns from Smart Compose. More
    Learning
    - Easy-to-read summary of important AI research papers of 2018: Lack time to read research papers? Have a look at this great summary of some of the main ideas found in recent research papers, along with comments from the community.
    - Beating the state of the art in NLP with HMTL: Multi-task learning is a general method in which a single architecture is trained to learn several different tasks at the same time. Here's an example of such a model (HMTL) trained to beat the state of the art on several NLP tasks.
    - Why bigger isn’t always better with GANs and AI art: Looking at GANs less from a performance standpoint and more from an artistic one.
    - Automated testing in the modern data warehouse: "If data issues have lasting consequences, why are we less sophisticated at testing than our software-developing counterparts?"
    - Measuring what makes readers subscribe to The New York Times: How the NYT builds fast and reusable econometric models in-house.
    Software tools & code
    - BigGAN: A new state of the art in image synthesis.
    - Best deals in deep learning cloud providers: Comparing prices to train models on GPUs and TPUs with AWS, Google, Paperspace, and others.
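    The multi-task learning item above defines the method in one sentence: one architecture, several tasks, trained jointly. The usual mechanism is hard parameter sharing, where a shared layer feeds separate task-specific heads. A toy sketch of that idea (illustrative only; all names and the two synthetic tasks are made up here, and HMTL itself is a large hierarchical neural model, not this):

```python
# Hard parameter sharing on two toy regression tasks over a 1-D input:
# task A wants y = 2x, task B wants y = -x. A single shared weight s
# produces the representation h = s*x, and each task has its own head
# (a for task A, b for task B). Joint training forces s to serve both.

def train_multitask(xs=(-1.0, -0.5, 0.5, 1.0), lr=0.1, steps=2000):
    s, a, b = 1.0, 1.0, 1.0           # shared weight and two head weights
    for _ in range(steps):
        gs = ga = gb = 0.0
        for x in xs:
            h = s * x                  # shared representation
            ea = a * h - 2.0 * x       # task A error (target y = 2x)
            eb = b * h - (-1.0 * x)    # task B error (target y = -x)
            ga += 2 * ea * h           # d(squared loss)/da
            gb += 2 * eb * h           # d(squared loss)/db
            gs += 2 * (ea * a + eb * b) * x   # shared weight sums both tasks' gradients
        n = len(xs)
        s -= lr * gs / n
        a -= lr * ga / n
        b -= lr * gb / n
    return s, a, b
```

    After training, the products s*a and s*b approximate the two task targets (2 and -1), showing how one shared parameter can support multiple heads at once.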
    Hardware
    - Basic Graphics Processing Unit (GPU) design concepts: The graphics pipeline and vector processing, along with other graphics operations. Useful to have in mind when wading into GPU-powered ML training.
    Some thoughts
    - Portrait of Li Fei-Fei: Former professor of computer vision at Stanford University, now Chief Scientist for Google Cloud AI, on her "quest to make AI better for humanity." Read more »

Marketing Artificial Intelligence Institute

  • The AI-Powered Assistant Every Marketer Needs
    Meet Lucy, an AI-powered assistant created by Equals 3 that holds the answers to all the questions you have about your organization's knowledge. Lucy processes all the documents and data your company has. Then, you and your colleagues can ask her questions. She'll provide the right answer from your data, helping you make the best decisions possible. (She can now even search videos.) The result? Fortune 1000s and big agencies are able to actually use critical business data locked away in their organizations. We talked with Equals 3 managing partner Scott Litman (LinkedIn) and AI advisor Rahul Singhal (LinkedIn) to uncover all that Lucy can do. In a single sentence or statement, describe Equals 3. Lucy, created by Equals 3, works as an AI-powered knowledge management assistant that reads and learns every document and data asset that you feed her; she never leaves, never forgets, and becomes smarter every day. Read more »
  • What’s Wrong with Marketing Automation Software Today?
    Editor’s Note: This post originally appeared in the Answering AI editorial section of our newsletter. Subscribe to the newsletter to get exclusive insights and resources twice weekly (usually Tuesday/Thursday), as well as select promotions. Read more »
  • 7 Things Every Marketer Should Know About Artificial Intelligence
    Artificial intelligence (AI) may seem abstract, and even a bit overwhelming, but its potential to drive costs down and revenue up in your business is very real. Read more »
  • How Should Marketers Get Started with Artificial Intelligence?
    Editor’s Note: This post originally appeared in the Answering AI editorial section of our newsletter. Subscribe to the newsletter to get exclusive insights and resources twice weekly (usually Tuesday/Thursday), as well as select promotions. Read more »
  • How to Transform your Content Marketing with Artificial Intelligence
    Do you feel like your content marketing strategy has become stale, and you’re not seeing the results you’re used to? Or maybe despite regularly publishing content you thought was high quality and relevant, it’s not resonating with your audience, and you’re falling behind your competitors. Read more »
  • How Can We Make AI More Approachable for Marketers?
    Editor’s Note: This post originally appeared in the Answering AI editorial section of our newsletter. Subscribe to the newsletter to get exclusive insights and resources twice weekly (usually Tuesday/Thursday), as well as select promotions. Read more »
  • 18 Artificial Intelligence Courses to Take Online
    There are a ton of ways to get started learning about artificial intelligence thanks to massive open online courses (MOOCs). You may have heard of popular MOOC providers like Udemy, Coursera, and edX. These providers offer some courses on artificial intelligence that go much deeper on the subject than your average article or video. In fact, some are part-time courses of study that require several hours per week to complete. These courses are often taught by top AI researchers or experts, but cost far less than a typical university course. Read more »
  • AI in Advertising: What It Is, How to Use It and Companies to Demo
    In 2018, Lexus released what it called the first advertisement scripted by artificial intelligence. Read more »
  • This Marketing AI Solution Tells You Exactly How It Makes Recommendations and Predictions
    Artificial intelligence solutions in marketing and other industries often have a problem: It is sometimes hard, or impossible, to explain why AI systems make the recommendations or predictions they do. It makes sense, if you think about it. Machine learning algorithms might use hundreds, thousands, or millions of factors to arrive at a single recommendation or prediction. It can be difficult to untangle just how the machine got to the outcome it did, even if that outcome seems ideal. But this poses problems for businesses. What happens if your amazing AI tool makes what looks like a great recommendation, but that recommendation ends up being bad for business? How do you explain to your boss, CEO, or the board why that particular recommendation was made over another? It's a problem that the AI-powered platform simMachines tries to solve with its explainable AI models. The models are used by marketers to do everything from customer lifecycle modeling to lead prioritization to A/B testing and measurement. Just as important as their results, the models tell you the "why" behind every prediction and recommendation they make. We spoke with simMachines CMO Dave Irwin to learn more about the solution. Read more »
  • 8 TED Talks on AI Every Marketer Should Watch
    TED talks are a great place for AI insights and inspiration, as they feature people with a range of AI backgrounds and experiences. At the time of this writing, a search for “artificial intelligence” on the TED talks website turns up 311 results. In case you don’t have the time to watch them all, we compiled a list of our favorites. Read more »