Geeky Girls Knit ~ Episode 337 ~ In Which #BigEarringsAreRad

Geeky Girls Knit ~ a Mamma/Daughter video podcast for fans of all things knitting & geeky ~*~ Show notes and more on our website, GeekyGirlsKnit.com. Support the podcast and become a patron: https://www.patreon.com/geekygirlsknit


AI & Machine Learning

  • Topping the tower: the Obstacle Tower Challenge AI Contest with Unity and Google Cloud
    Ever since Marvin Minsky and several collaborators coined the term “artificial intelligence” in 1956, games have served as both a training ground and a benchmark for AI research. At the same time, in many cultures around the world, the ability to play certain games such as chess or Go has long been considered one of the hallmarks of human intelligence. And when computer science researchers started thinking about building systems that mimic human behavior, games emerged as a natural “playground” environment.

    Over the last decade, deep learning has driven a resurgence in AI research, and games have returned to the spotlight. Perhaps most significantly, starting in 2015, AlphaGo, an autonomous Go bot built by DeepMind (an Alphabet subsidiary), defeated a series of top players and emerged as the best player in the world at the traditional board game Go. Since then, the DeepMind team has built bots that challenge top competitors at a variety of other games, including StarCraft.

    The competition

    As games have become a prominent arena for AI, Google Cloud and Unity Technologies decided to collaborate on a game-focused AI competition: the Obstacle Tower Challenge. Competitors create advanced AI agents in a game environment. The agents they create are AI programs that take as input the image data of the simulation, including obstacles, walls, and the main character’s avatar. They then output the next action the character takes in order to solve a puzzle or advance to the next level. The Unity engine runs the logic and graphics for the environment, which operates very much like a video game (a minimal agent-loop sketch follows at the end of this item).

    Unity launched the first iteration of the Obstacle Tower Challenge in February, and the reception from the AI research community has been very positive. The competition has received more than 2,000 entries from several hundred teams around the world, including both established research institutions and collegiate student teams. The top batch of competitors, the 50 highest-scoring teams, will receive an award sponsored by Google Cloud and advance to the second round.

    Completing the first round was a significant milestone, since teams had to overcome a fairly difficult hurdle: advancing past several levels of increasing difficulty in the challenge. None of these levels were available to the researchers or their agents during training, so the agents had to learn complex behavior and generalize it to handle previously unseen situations.

    The contest’s second round features a set of additional levels. These new three-dimensional environments incorporate brand-new puzzles and graphical elements that force contestant research teams to develop more sophisticated machine learning models. New obstacles may stymie many of the agents that passed the levels in the first phase.

    How Google Cloud can help

    Developing complex game agents is a computationally demanding task, which is why we hope that the availability of Cloud credits will help participating teams. Google Cloud offers the same infrastructure that trained AlphaGo’s world-class machine learning models to any developer around the world. In particular, we recently announced the availability of Cloud TPU Pods; for more information, you can read this blog post.

    All of us at Google Cloud AI would like to congratulate the first batch of successful contestants of the Unity AI challenge, and we wish them the best of luck as they enter the second phase. We are excited to learn from the winning strategies. Read more »
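    To make the agent-environment loop described above concrete, here is a minimal random-agent sketch, assuming Unity's open-source obstacle_tower_env Gym wrapper and a locally downloaded environment binary; the binary path and the random policy are illustrative placeholders for a real entry:

    ```python
    # Minimal sketch of the observation-in, action-out loop an Obstacle Tower
    # agent must implement. Assumes Unity's open-source `obstacle_tower_env`
    # package and a local build of the ObstacleTower binary; a real entry
    # would replace the random action with a learned policy over the frame.
    from obstacle_tower_env import ObstacleTowerEnv

    env = ObstacleTowerEnv('./ObstacleTower/obstacletower', retro=True)
    obs = env.reset()                    # obs is the rendered game frame
    done, episode_reward = False, 0.0
    while not done:
        action = env.action_space.sample()          # stand-in for an agent
        obs, reward, done, info = env.step(action)  # advance the simulation
        episode_reward += reward
    env.close()
    print('episode reward:', episode_reward)
    ```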
  • Sunny spells: How SunPower puts solar on your roof with AI Platform
    Editor’s note: Today’s post comes from Nour Daouk, Product Manager at SunPower. She describes how SunPower uses AI Platform to provide users with useful models and proposals of solar panel layouts for their homes, with only a street address as user input.

    Have you ever wondered what solar panels would look like on your roof? At SunPower, we’re helping homeowners create solar designs from the comfort of their home. Specifically, we use deep learning and high-resolution imagery as inputs to models that design and visualize solar power systems on residential roofs. Read on to learn how and why we built this technology for our customers, called SunPower Instant Design.

    Homeowners typically spend a significant amount of time online researching solar panels and running calculations to understand their potential savings and the number of panels they need for their home. There are no quick answers, because every roof is different and every house requires a customized design. With SunPower Instant Design, homeowners can create their own designs in seconds, which improves their buying experience, reduces barriers to going solar, and ultimately increases solar adoption.

    Instant Design’s 3D model of a roof with obstructions in red (left), satellite image with panel layout (middle), and input satellite image (right)

    How we help

    Designing a solar power system for a home is a process that relies on factors unique to each home. First, we model the roof in three dimensions to account for obstructions such as chimneys and vents. Second, we lay out legally mandated access walkways and place solar panels on the roof segments. Finally, we model the angle and exposure of sunlight hitting the roof to calculate the system’s potential energy production. With Instant Design, we replicate this same process by leveraging tools including machine learning and optimization. Below, we’ll explain how we used deep neural networks to obtain accurate three-dimensional models of residential roofs.

    The data: guiding the design with both color and depth imagery

    It is probably possible to design a three-dimensional model of a roof with satellite imagery alone, but design accuracy improves greatly with the use of a height map. For Instant Design, we partnered with Google Project Sunroof for access to both satellite and digital surface model (DSM) data. We used our database of manually generated designs as a base for our labeled data, and projected those onto the RGB and depth channels for the training, validation, and test sets. We also generated augmentations—including rotation and translation—to reduce overfitting.

    Roof segmentation

    To reconstruct a roof, we model each roof segment with its corresponding pitch and azimuth in three dimensions. We began identifying roof segments by applying image processing and edge detection to both the satellite and depth data, but we quickly realized that semantic segmentation would yield much better results, as similar edges had been detected successfully with that method in the research literature.

    Image processing result (left), neural network-based result (middle), and input satellite image (right)

    After some experimentation, we chose to perform semantic segmentation, and selected a version of a U-net that works well with our type of imagery at high speeds. The U-net architecture was a solid starting point, with a few tweaks for better results. For instance, we added batch normalization to each convolutional layer for regularization, and selected the Wide Residual Network as our encoder for improved accuracy. We also created a domain-specific loss function to get the model to converge to meaningful outcomes (a sketch of this kind of block appears at the end of this item).

    U-net diagram (click for source)

    What gets in the way: chimneys, vents, pipes, and skylights

    To avoid mistakenly placing panels on obstructions such as chimneys, vents, pipes, skylights, and previously installed panels, our next step is to detect those obstructions as separate items on the roof. Our main challenge here was that we had to handle both the quantity and the size of the obstructions, and address any imbalance in class representation; indeed, there are more roof pixels than obstruction pixels in our images. Due to the difference in shape and scale of the chosen classes, we decided to use a separate model from the segmentation model to detect obstructions, although both models are similar in structure.

    Roof with detected obstructions outlined in red

    Speed and scale via Cloud AI Platform

    Once we had built a satisfactory proof of concept, we quickly realized that we would need to iterate on our model in order to deliver an experience that was ready for homeowners. We needed a development pipeline that could quickly bring modeling ideas from conception to deployment, so we chose AI Platform to help us scale. Our initial training setup was on our own servers, and the training process was slow: training a new model took a week. In contrast, on AI Platform, we were able to train and test a new model in a single day. Moreover, we took full advantage of the ability to train multiple models simultaneously to conduct a vast hyperparameter search. For prediction, we used NVIDIA V100 GPU-enabled virtual machines on GCP with nvidia-docker, which helped us achieve prediction times of around one second.

    Conclusion

    SunPower empowers homeowners to understand the amount of energy they can generate with solar, now with just a few clicks. Our team was able to start work on this exciting project due to advances in aerial imagery and machine learning. And AI Platform helped us focus on the core design problem, achieve our goals faster, and create designs quickly.

    We are changing how we offer solar power to homeowners by giving them immediate answers to their questions. While we have more work to do, we are optimistic that SunPower Instant Design will transform the solar industry when our first product featuring this technology launches this summer. To learn more about how SunPower is using the cloud, read this blog post from Google Cloud CEO Thomas Kurian. Read more »
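    As a hedged illustration of the tweaks described above (not SunPower's production model), here is a minimal Keras sketch of a U-net-style block with batch normalization after each convolution; the input shape, filter counts, and four-channel RGB+depth input are illustrative assumptions:

    ```python
    # Minimal sketch of a U-net-style encoder/decoder with batch
    # normalization after each convolution, as described above. Filter
    # counts, tile size, and channel layout are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_block(x, filters):
        for _ in range(2):
            x = layers.Conv2D(filters, 3, padding='same')(x)
            x = layers.BatchNormalization()(x)   # regularizes each conv layer
            x = layers.Activation('relu')(x)
        return x

    inputs = tf.keras.Input(shape=(256, 256, 4))   # RGB + depth (DSM) channels
    x = conv_block(inputs, 32)
    skip = x                                       # skip connection for the decoder
    x = layers.MaxPooling2D()(x)
    x = conv_block(x, 64)
    # Decoder: upsample, concatenate the skip, and predict a per-pixel roof mask.
    x = layers.UpSampling2D()(x)
    x = layers.Concatenate()([x, skip])
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    model.summary()
    ```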
  • No deep learning experience needed: build a text classification model with Google Cloud AutoML Natural Language
    Modern organizations process greater volumes of text than ever before. Although certain tasks like legal annotation must be performed by experienced professionals with years of domain expertise, other processes require simpler types of sorting, processing, and analysis, with which machine learning can often lend a helping hand.

    Categorizing text content is a common machine learning task—typically called “content classification”—and it has all kinds of applications, from analyzing sentiment in a review of a consumer product on a retail site, to routing customer service inquiries to the right support agent. AutoML Natural Language helps developers and data scientists build custom content classification models without coding. Google Cloud’s Natural Language API helps you classify input text into a set of predefined categories. If those categories work for you, the API is a great place to start, but if you need custom categories, then building a model with AutoML Natural Language is very likely your best option.

    In this blog post, we'll guide you through the entire process of using AutoML Natural Language. We'll use the 20 Newsgroups dataset, which consists of about 20,000 posts, roughly evenly divided across 20 different newsgroups, and is frequently used for content classification and clustering tasks. As you'll see, this can be a fun and tricky exercise, since the posts typically use casual language and don't always stay on topic. Also, some of the newsgroups we’ll use from the dataset overlap quite a bit; for example, two separate groups cover PC and Mac hardware.

    Preparing your data

    Let's start by downloading the data. I've included a link to a Jupyter notebook that will download the raw dataset and then transform it into the CSV format expected by AutoML Natural Language. AutoML Natural Language looks for the text itself (or a URL) in the first column, and the label in the second column (see the sketch at the end of this item). In our example, we're assigning one label to each sample, but AutoML Natural Language also supports multiple labels. To download the data, you can simply run the notebook in the hosted Google Colab environment, or you can find the source code on GitHub.

    Importing your data

    We are now ready to access the AutoML Natural Language UI. Let's start by creating a new dataset by clicking the New Dataset button. Choose a name like twenty_newsgroups and upload the CSV you created in the earlier step.

    Training your model

    It will take several minutes for the endpoint to import your training text. Once the import is complete, you'll see a list of the text items and each accompanying label. You can drill down into the text items for specific labels on the left side. After you’ve loaded your data successfully, you can move on to the next stage: training your model. It will take several hours to return the optimal model, and you’ll receive notification emails about the status of the training.

    Evaluating your model

    When the model training is complete, you'll see a dashboard that displays a number of metrics. AutoML Natural Language generates these metrics by comparing predictions against the actual labels in the test set. If these metrics are new to you, I'd recommend reading more about them in the Google Machine Learning Crash Course. In short, recall represents how well the model found instances of the correct label (minimizing false negatives), while precision represents how well it avoided labeling instances incorrectly (minimizing false positives).

    The precision and recall metrics in this example are based on a score threshold of 0.5. You can try adjusting this threshold to see how it impacts your metrics; there is a tradeoff between precision and recall. If the confidence required to apply a label rises from 0.5 to 0.9, for example, precision will go up, because your model will be less likely to mislabel a sample. On the other hand, recall will go down, because samples scoring between 0.5 and 0.9 that were previously labeled will no longer be labeled.

    Just below this, you’ll find a confusion matrix. This tool can help you evaluate the model’s accuracy more precisely at the label level. You'll not only see how often the model identified each label correctly, but also which labels it mistakenly assigned. You can drill down to see specific examples of false positives and negatives. This can prove to be very useful information, because it tells you whether you need to add more training data to help your model better differentiate between labels that it frequently failed to predict.

    Prediction

    Let's have some fun and try this on some example text. By moving to the Predict tab, you can paste or type some text and see how your newly trained model labels it. Let's start with an easy example: I'll take the first paragraph of a Google article about automotive trends, and paste it in. Woohoo! 100% accuracy. You can try some more examples yourself, entering text that might be a little tougher for the model to distinguish. You'll also see how to invoke a prediction using the API at the bottom. For more details, the documentation provides examples in Python, Java, and Node.js.

    Conclusion

    Once you’ve created a custom model that organizes content into categories, you can use AutoML Natural Language’s robust evaluation tools to assess your model's accuracy. These will help you refine your threshold and potentially add more data to shore up any weaknesses. Try it out for yourself! Read more »
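    For reference, here is a minimal sketch of the data-preparation step described above, using scikit-learn to fetch the 20 Newsgroups dataset and write the two-column CSV (text first, label second) that AutoML Natural Language expects; the output filename is an illustrative assumption:

    ```python
    # Minimal sketch: download 20 Newsgroups and write the two-column CSV
    # (text in the first column, label in the second) that AutoML Natural
    # Language expects. The output path is an illustrative placeholder.
    import csv
    from sklearn.datasets import fetch_20newsgroups

    posts = fetch_20newsgroups(subset='all',
                               remove=('headers', 'footers', 'quotes'))
    with open('twenty_newsgroups.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        for text, label_id in zip(posts.data, posts.target):
            text = ' '.join(text.split())   # flatten newlines: one row per post
            if text:                        # skip posts left empty after cleanup
                writer.writerow([text, posts.target_names[label_id]])
    ```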
  • Google’s scalable supercomputers for machine learning, Cloud TPU Pods, are now publicly available in beta
    To accelerate the largest-scale machine learning (ML) applications deployed today and enable rapid development of the ML applications of tomorrow, Google created custom silicon chips called Tensor Processing Units (TPUs). When assembled into multi-rack ML supercomputers called Cloud TPU Pods, these TPUs can complete ML workloads in minutes or hours that previously took days or weeks on other systems. Today, for the first time, Google Cloud TPU v2 Pods and Cloud TPU v3 Pods are publicly available in beta, to help ML researchers, engineers, and data scientists iterate faster and train more capable machine learning models.

    A full Cloud TPU v3 Pod

    Delivering business value

    Google Cloud is committed to providing a full spectrum of ML accelerators, including both Cloud GPUs and Cloud TPUs. Cloud TPUs offer highly competitive performance and cost, often training cutting-edge deep learning models faster while delivering significant savings. If your ML team is building complex models and training on large datasets, we recommend that you evaluate Cloud TPUs whenever you require:

    • Shorter time to insights—iterate faster while training large ML models
    • Higher accuracy—train more accurate models using larger datasets (millions of labeled examples; terabytes or petabytes of data)
    • Frequent model updates—retrain a model daily or weekly as new data comes in
    • Rapid prototyping—start quickly with our optimized, open-source reference models in image segmentation, object detection, language processing, and other major application domains

    While some custom silicon chips can only perform a single function, TPUs are fully programmable, which means that Cloud TPU Pods can accelerate a wide range of state-of-the-art ML workloads, including many of the most popular deep learning models. For example, a Cloud TPU v3 Pod can train ResNet-50 (image classification) from scratch on the ImageNet dataset in just two minutes, or BERT (NLP) in just 76 minutes.

    Cloud TPU customers see significant speed-ups in workloads spanning visual product search, financial modeling, energy production, and other areas. In a recent case study, Recursion Pharmaceuticals iteratively tests the viability of synthesized molecules to treat rare illnesses. What took over 24 hours to train on their on-premises cluster completed in only 15 minutes on a Cloud TPU Pod.

    What’s in a Cloud TPU Pod

    A single Cloud TPU Pod can include more than 1,000 individual TPU chips, connected by an ultra-fast, two-dimensional toroidal mesh network, as illustrated below. The TPU software stack uses this mesh network to enable many racks of machines to be programmed as a single, giant ML supercomputer, via a variety of flexible, high-level APIs.

    2D toroidal mesh network

    The latest-generation Cloud TPU v3 Pods are liquid-cooled for maximum performance, and each one delivers more than 100 petaFLOPs of computing power. In terms of raw mathematical operations per second, a Cloud TPU v3 Pod is comparable with a top-5 supercomputer worldwide (though it operates at lower numerical precision).

    It’s also possible to use smaller sections of Cloud TPU Pods, called “slices.” We often see ML teams develop their initial models on individual Cloud TPU devices (which are generally available) and then expand to progressively larger Cloud TPU Pod slices, via both data parallelism and model parallelism, to achieve greater training speed and model scale. You can learn more about the underlying architecture of TPUs in this blog post or this interactive website, and you can learn more about individual Cloud TPU devices and Cloud TPU Pod slices here.

    Getting started

    It’s easy and fun to try out a Cloud TPU in your browser right now via this interactive Colab, which enables you to apply a pre-trained Mask R-CNN image segmentation model to an image of your choice. You can learn more about image segmentation on Cloud TPUs in this recent blog post.

    Next, we recommend working through our Cloud TPU Quickstart (a minimal connection sketch appears at the end of this item) and then experimenting with one of the optimized, open-source Cloud TPU reference models listed below. We carefully optimized these models to save you time and effort, and they demonstrate a variety of Cloud TPU best practices. Benchmarking one of our official reference models on a public dataset on larger and larger Pod slices is a great way to get a sense of Cloud TPU performance at scale.

    • Image classification: ResNet (tutorial, code, blog post); AmoebaNet-D (tutorial, code); Inception (tutorial, code)
    • Mobile image classification: MnasNet (tutorial, code, blog post); MobileNet (code)
    • Object detection: RetinaNet (tutorial, code, blog post); TensorFlow Object Detection API (blog post, tutorial)
    • Image segmentation: Mask R-CNN (tutorial, code, blog post, interactive Colab); DeepLab (tutorial, code, blog post, interactive Colab)
    • Natural language processing: BERT (code, interactive Colab); Transformer (tutorial, Tensor2Tensor docs); Mesh TensorFlow (paper, code); QANet (code); Transformer-XL (code)
    • Speech recognition: ASR Transformer (tutorial); Lingvo (code)
    • Generative adversarial networks: Compare GAN library, including a reimplementation of BigGAN (blog post, paper, code); DCGAN (code)

    After you work with one of the above reference models on Cloud TPU, our performance guide, profiling tools guide, and troubleshooting guide can give you in-depth technical information to help you create and optimize machine learning models on your own, using high-level TensorFlow APIs. Once you’re ready to request a Cloud TPU Pod or Cloud TPU Pod slices to accelerate your own ML workloads, please contact a Google Cloud sales representative. Read more »
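    To give a flavor of the "single, giant ML supercomputer" programming model mentioned above, here is a minimal sketch of connecting TensorFlow to a Cloud TPU or Pod slice by name, assuming a TensorFlow 2.x runtime; the TPU name and the toy model are illustrative placeholders:

    ```python
    # Minimal sketch: connect to a Cloud TPU (or Pod slice) and create a TPU
    # distribution strategy. The TPU name is whatever you provisioned; the
    # same code scales from a single device to a large Pod slice.
    import tensorflow as tf

    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-slice')
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)
    print('TPU cores available:', strategy.num_replicas_in_sync)

    with strategy.scope():
        # Variables created here are replicated across all TPU cores, so
        # model.fit() below would parallelize over the whole slice.
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    ```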
  • Improving data quality for machine learning and analytics with Cloud Dataprep
    Editor’s note: Today’s post comes to us from Bertrand Cariou at Trifacta, and presents some steps you might take in Cloud Dataprep to clean your data for later use in your analytics or in training a machine learning model.

    Data quality is a critical component of any analytics and machine learning initiative, and unless you’re working with pristine, highly controlled data, you’ll likely face data quality issues. To illustrate the process of turning unknown, inconsistent data into trustworthy assets, we will use the example of a forecast analyst in the retail (consumer packaged goods) industry. Forecast analysts must be extremely accurate in planning the right quantities to order. Supplying too much product results in wasted resources, whereas supplying too little means they risk losing profit. On top of that, an empty shelf also risks consumers choosing a competitor’s product, which can have a harmful, long-term impact on the brand.

    To strike the right balance between appropriate product stocking levels and razor-thin margins, forecast analysts must continually refine their analysis and predictions, leveraging their own internal data as well as third-party data over which they have no control.

    Every business partner, including suppliers, distributors, warehouses, and retail stores, may provide data (e.g., inventory, forecasts, promotions, or past transactions) in various shapes and levels of quality. One company may use palettes instead of boxes as a unit of storage, or pounds versus kilograms; may have different category nomenclatures and namings; may use a different date format; or will most likely have product SKUs that are a combination of internal and other supplier IDs. Furthermore, some data may be missing or may have been incorrectly entered.

    Each of these data issues represents an important risk to reliable forecasting. Forecast analysts must absolutely clean, standardize, and gain trust in the data before they can report and model on it accurately. This post reviews key techniques for cleaning data with Cloud Dataprep and covers new features that may help improve your data quality with minimal effort.

    Basic concepts

    Cleaning data with Cloud Dataprep corresponds to a three-step iterative process:

    1. Assessing your data quality
    2. Resolving or remediating any issues uncovered
    3. Validating the cleaned data, at scale

    Cloud Dataprep constantly profiles the data you’re working on, from the moment you open the grid interface and start preparing data. With Dataprep’s real-time Active Profiling, you can see the impact of each data cleaning step on your data.

    The profile result is summarized at the column header with basic data points that call out key characteristics in your data, in the form of an interactive visual profile. By clicking one of these profile column header bars, Cloud Dataprep suggests some transformations to remediate mismatched or missing values. You can always try a transformation, preview its impact, and select or tweak it. At any point, you can revert to a specific previous step if you don’t like the result.

    With these basic concepts in mind, let’s cover Cloud Dataprep’s data quality capabilities.

    1. Assessing your data quality

    As soon as you open a dataset in the grid interface, you can access data quality signals that help you assess data issues and guide your work in cleaning the data.

    Rapid profiling

    You’ll likely scan over your column headers and identify the potential quality issues to understand which columns may need your attention. Mismatched values (red bar) based on the inferred data types, missing values (black), and uneven value distribution (bars) can help you quickly identify which columns need your attention. In this particular case, our forecast analyst knows she’ll have to drill down on the `material` field, which includes some mismatched and missing values. How should these data defects impact her forecast and replenishment models?

    Intermediary data profiling

    If you click on a column header, you’ll see some extra statistics in the right panel of Dataprep. This is particularly useful if you expect a specific format standard for a field and want to identify the values that don’t comply with the standard. In the example below, you can see that Cloud Dataprep discovered three different format patterns for the order_date field. You might have follow-up questions: can empty order dates be leveraged in the forecast? Can mismatched dates be corrected, and how?

    Advanced profiling

    If you click “Show more”, or click the column header menu and then “Column details” in the main grid, you’ll land on a comprehensive data profiling page with details about mismatched values, value distribution, and outliers. You can also navigate to the pattern tab to explore the data structure within a specific column.

    These three data profiling capabilities are dynamic by nature, in the sense that Cloud Dataprep re-profiles the data in real time at each step of a transformation, to always present you with the latest information. This helps you clean your data faster and more effectively. The value for the forecast analyst is that she can validate immediately as she goes through the process of cleaning and transforming the data, so that it fits the format she expects for her downstream modeling and reporting.

    2. Resolving data quality issues

    Dynamic profiling helps you assess the data quality at hand, and it is also the point of entry for cleaning the data. Graph profiles are interactive and offer transformation suggestions as soon as you interact with them. For example, clicking the missing-value space in the column header displays transformation suggestions such as deleting the values or setting them to a default.

    Resolving incorrect patterns

    You can efficiently resolve incorrect patterns in a column (such as the recurring date formatting issue in the order_date column) by accessing the pattern tab in the column details screen. Cloud Dataprep shows you the most frequent patterns. Once you select a target conversion format, Cloud Dataprep displays transformation suggestions in the right panel to convert all the data to fit the selected pattern. Watch the animation below, and try it for yourself.

    Highlighting data content

    Another interactive way to clean your data is to highlight some portion of a value in a cell. Cloud Dataprep will suggest a set of transformations based on your selection, and you can refine the selection by highlighting some additional content from another cell. Here is an example that extracts the month from the order date in order to calculate the volume per month.

    Format, replace, conditional functions, and more

    You can find most of the functions you’ll use to clean up data in the Column menu, in the format and replace sections, or in the conditional formulas in the icon bar, as shown below. These can be useful to convert all product or category names into uppercase, or to trim names that often carry quotes after import from a CSV or Excel file.

    Format functions

    Extract functions

    The extract functions can be particularly useful for extracting a subset of a value within a column. For example, you may want to extract from the product_id “Item: ACME_66979905111536979300 - PASTA RONI FETTUCINE ALFR” each individual component, by splitting it on the “ - ” value.

    Conditional functions

    Conditional functions are useful for tagging values that are out of scope. For example, you can write a formula that will tag records when a quantity is over 10,000, which wouldn’t be valid for the order sizes you typically encounter.

    If none of the visual suggestions gives you what you require for cleaning your data, you can always edit a suggestion or manually add a new step to a Dataprep recipe. Type what you want to do in the search box, and Cloud Dataprep will suggest some transformations you can then edit and apply to the dataset.

    Standardization

    Standardizing values is a way to group similar values into a single, consistent format. This problem is especially prevalent with free-form entries like product names, product categories, and company names. You can access the standardization feature from the Column menu. Additionally, Cloud Dataprep can group similar values together by string similarity or by pronunciation.

    Tip: You can mix and match standardization algorithms. Some values may be standardized using spelling, while others are more sensibly standardized based on international pronunciation standards.

    3. Validation at scale

    The last, critical step of a typical data quality workflow in Cloud Dataprep is to validate that not a single data quality issue remains in the dataset, at scale.

    Leveraging sampling to clean data

    Sometimes, the full volume of a dataset won’t fit into Cloud Dataprep via your browser tab (especially when leveraging BigQuery tables with hundreds of millions of records or more). In that case, Cloud Dataprep automatically samples the data from BigQuery to fit it in your local computer’s memory. That might lead you to a question: how can you ensure you’ve standardized all the data from one column (e.g., product name, category, or region), or that you have cleaned all the date formats in another?

    You can adjust your sampling settings by clicking the sampling icon at the top right and choosing the sampling technique that fits your requirements:

    • Select anomaly-based sampling to keep all the data that is mismatched or missing for one or multiple columns.
    • Select stratified sampling to retrieve every distinct value for a particular column (particularly useful for standardization).
    • Select filter-based sampling to retrieve all the data matching a particular formula (e.g., format does not match dd/mm/yyyy).

    Profiling the data at scale

    At this point, hopefully you’re happy and confident that your recipe will produce a clean dataset, but until you run it at scale across the whole dataset, you can’t ensure all your data is valid. To do so, click the ‘Run Job’ button and check that Profile Results is enabled. If the job results still show some red, this most likely means you need to adjust your data quality rules and try again.

    Scheduling

    To ensure that the data quality rules you create are applied on a recurring basis, schedule your recipes to run automatically. In the case of forecasting, data may change on a weekly basis, so users must run the job every week to validate that all the profile results stay green over time. If not, you can simply reopen and adapt the recipe to address the new data inconsistencies you discovered. In the flow view, select Schedule Flow to define the parameters for running the job on a recurring basis.

    Conclusion

    Our example here is retail-specific, but regardless of your area of expertise or industry, you may encounter similar data issues. Following this process and leveraging Cloud Dataprep, you can become more effective and faster at cleaning up your data for analytics or feature engineering. We hope that by using Cloud Dataprep, the toil of cleaning up your data and improving your data quality is, well, not so messy. If you’re ready to start, log in to Dataprep via the Google Cloud Console to start using this three-step data quality workflow on your data. Read more »
  • Empower your AI Platform-trained serverless endpoints with machine learning on Google Cloud Functions
    Editor’s note: Today’s post comes from Hannes Hapke at Caravel. Hannes describes how Cloud Functions can accelerate the process of hosting machine learning models in production for conversational AI, based on serverless infrastructure.

    At Caravel, we build conversational AI for digital retail clients — work that relies heavily on Google Cloud Functions. Our clients experience website demand fluctuations that vary by the day of the week or even by time of day. Because of the constant change in customer requests, Google Cloud Platform’s serverless endpoints help us handle fluctuating demand for our service. Unfortunately, serverless functions are limited in available memory and CPU cycles, which makes them an odd place to deploy machine learning models. However, Cloud Functions offer a tremendous ease in deploying API endpoints, so we decided to integrate machine learning models without deploying them to the endpoints directly.

    If your organization is interested in using serverless functions to help address its business problems, but you are unsure how you can use your machine learning models with your serverless endpoints, read on. We’ll explain how our team used Google Cloud Platform to deploy machine learning models on serverless endpoints. We’ll focus on our preferred Python solution and outline some ways you can optimize your integration. If you would prefer to build out a Node.js implementation, check out “Simplifying ML Prediction with Google Cloud Functions.”

    Architecture overview

    Figure 1: System architecture diagram.

    First, let’s start with the architecture. As shown in Figure 1, this example consists of three major components: a static page accessible to the user, a serverless endpoint that handles all user requests, and a model instance running on AI Platform. While other articles suggest loading the machine learning model directly onto the serverless endpoint for online predictions, we found that approach has a few downsides:

    • Loading the model will increase your serverless function’s memory footprint, which can accrue unnecessary expenses.
    • The machine learning model has to be deployed with the serverless function code, meaning the model can’t be updated independently of a code deployment.

    For the sake of simplicity, we’re hosting the model for this example on an AI Platform serving instance, but we could also run our own TensorFlow Serving instance.

    Model setup

    Before we describe how you might run your inference workload from a serverless endpoint, let’s quickly set up the model instance on Cloud AI Platform.

    1. Upload the latest exported model to a Cloud Storage bucket. We exported our model from TensorFlow’s Keras API. Create a bucket for your models and upload the latest trained model into its own folder.

    2. Head over to AI Platform from the Console and register a new model.

    3. After registering the model, set up a new model version, probably your V1. To start the setup steps, click on ‘Create version.’ Note: under Model URI, link to the Cloud Storage bucket where you saved the exported model. You can choose between different ML frameworks; in our case, our model is based on TensorFlow 1.13.1. For our demo, we disable model autoscaling. Once the creation of the instance is completed and the model is ready to serve, you’ll see a green icon next to the model’s version name.

    Inferring a prediction from a serverless endpoint

    Inferring a prediction with Python is fairly straightforward. You need to generate a payload that you would like to submit to the model endpoint, and then you submit it to that endpoint. We’ll cover the generation of the payload in the following sections, but for now, let’s focus on inferring an arbitrary payload.

    Google provides a Python library, google-api-python-client, that allows you to access its products through a generic API interface. You can install it with pip (see the sketch below). Once installed, you need to “discover” your desired service. In our case, the service name is ml. However, you aren’t limited to just the prediction functionality; depending on your permissions (more on that later), you can access various API services of AI Platform. You’ll then want to execute the API request you created; if you don’t encounter any errors, the response should contain the model’s prediction.

    Permissions

    Cloud Functions on Google Cloud Platform execute all requests as a default service-account user. By default, your account has Editor permissions for the entire project, and you should be able to execute online predictions. At the time of this blog post’s publication, you can’t control permissions per serverless function, but if you want to try out the functionality yourself, sign up for the Alpha Tester Program.

    Generating a request payload

    Before submitting our inference request, you need to generate your payload with the input data for the model. At Caravel, we trained a deep learning model to classify the sentiment of sentences. We developed our model on Keras and TensorFlow 1.13.1, and because we wanted to limit the amount of preprocessing required on the client side, we decided to implement our preprocessing steps with TensorFlow (TF) Transform. Using TF Transform has multiple advantages:

    • Preprocessing can occur server-side. Because the preprocessing runs on the server side, you can update the preprocessing functionality without affecting the clients. If this weren’t the case, you could imagine a situation like the following: if you perform the preprocessing in a mobile client, you would have to update all clients whenever you implement changes, or provide new endpoints for every change (not scalable).
    • The preprocessing steps are consistent between the training, validation, and serving stages. Changes to the preprocessing steps will force you to re-train the model, which avoids misalignment between these steps and already-trained models.
    • You can transform the dataset nicely and train and validate your datasets efficiently, but at the time of writing, you still need to convert your Keras model to a TensorFlow Estimator in order to properly integrate TF Transform with Keras.

    With TensorFlow Transform, you can submit raw data strings as inputs to the model. The preprocessing graph, which runs in conjunction with the model graph, converts your string characters first into character indices and then into embedding vectors.

    Connecting the preprocessing graph in TF Transform with our TensorFlow model

    Our AI Platform instance and any TensorFlow Serving instance both expect a payload dictionary that includes the key instances, which contains a list of input dictionaries, one per inference. You can submit multiple input dictionaries in a single request; the model server can infer the predictions all in a single request through the amazing batching feature of TensorFlow Serving.
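    The original code samples were not preserved here, so below is a minimal sketch of the pieces described above, assuming the google-api-python-client library (`pip install google-api-python-client`): discovering the ml service, building an instances payload, and submitting it for online prediction. The project name, model name, and the 'sentence' input key are illustrative assumptions:

    ```python
    # Minimal sketch of inferring a prediction from AI Platform Serving, as
    # described above. Project, model, and the input key are placeholders.
    import googleapiclient.discovery  # pip install google-api-python-client

    def _connect_service():
        # "Discover" the AI Platform service; the service name is `ml`,
        # version "v1" at the time of writing.
        return googleapiclient.discovery.build('ml', 'v1')

    def _generate_payload(sentences):
        # TF Transform runs server-side, so raw strings are valid inputs.
        return {'instances': [{'sentence': s} for s in sentences]}

    service = _connect_service()
    response = service.projects().predict(
        name='projects/my-project/models/sentiment_classifier',
        body=_generate_payload(['This movie was great!'])
    ).execute()
    print(response['predictions'])
    ```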
    Thus, the payload for your sentence classification demo should look like the instances structure in the sketch above. We moved the generation step into its own helper function to allow for potential manipulation of the payload—when we want to lower-case or tokenize the sentences, for example. Here, however, we have not yet included such manipulations. _connect_service provides us access to the AI Platform service with the service name ml; at the time of writing this post, the current version was “v1”. We have encapsulated the service discovery into its own function to be able to add more parameters, like account credentials, if needed. Once you generate a payload in the correct data structure and have access to the GCP service, you can infer predictions from the AI Platform instance, as shown above.

    Obtaining model meta-information from the AI Platform training instance

    Something amazing happens when the Cloud Function setup interacts with the AI Platform instance: the client can infer predictions without any knowledge of the model. You don’t need to specify the model version during the inference, because the AI Platform Serving instance handles that for you. However, it’s generally very useful to know which version was used for the prediction. At Caravel, we track our models’ performance extensively, and our team prioritizes knowing when each model was used and deployed, and considers this to be essential information.

    Obtaining the model meta-information from the AI Platform instance is simple, because the Serving API has its own endpoint for requesting the model information. This helps a lot when you perform a large number of requests and only need to obtain the meta-information once. A little helper function (sketched at the end of this item) obtains model information for any given model in a project. You’ll need to call two different endpoints, depending on whether you want to obtain the information for a specific model version or just for the default model; you can specify this in the AI Platform Command Console.

    Conclusion

    Serverless functions have proven very useful to our team, thanks to their scalability and ease of deployment. The Caravel team wanted to demonstrate that both concepts can work together easily, and to share our best practices, as machine learning becomes an essential component of a growing number of today’s leading applications.

    In this blog post, we introduced the setup of a machine learning model on AI Platform and how to infer model predictions from a Python 3.7 Cloud Function. We also reviewed how you might structure your prediction payloads, as well as how you can request model metadata from the model server. By splitting your application between Cloud Functions and AI Platform, you can deploy your legacy applications in an efficient and cost-effective manner.

    If you’re interested in ways to reduce network traffic between your serverless endpoints, we recommend our follow-up post on how to generate model request payloads with the ProtoBuf serialization format. To see this example live, check out our demo endpoint here, and if you want to start with some source code to build your own, you can find it in the ML on GCP GitHub repository.

    Acknowledgements: Gonzalo Gasca Meza, Developer Programs Engineer, contributed to this post. Read more »
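    And here is a minimal sketch of the meta-information helper described above, again using the discovery client; it calls the versions endpoint when a version is given and the models endpoint otherwise (project and model names are illustrative):

    ```python
    # Minimal sketch of the helper described above: fetch model metadata
    # from AI Platform, for a specific version if one is given, otherwise
    # for the model itself (which includes its default version).
    import googleapiclient.discovery

    def get_model_meta(project, model, version=None):
        service = googleapiclient.discovery.build('ml', 'v1')
        if version:
            name = 'projects/{}/models/{}/versions/{}'.format(project, model, version)
            return service.projects().models().versions().get(name=name).execute()
        name = 'projects/{}/models/{}'.format(project, model)
        return service.projects().models().get(name=name).execute()

    meta = get_model_meta('my-project', 'sentiment_classifier')
    # The response carries fields such as the default version, its
    # deployment URI, runtime version, and creation time.
    print(meta)
    ```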
  • Efficiently scale ML and other compute workloads on NVIDIA’s T4 GPU, now generally available
    NVIDIA’s T4 GPU, now available in regions around the world, accelerates a variety of cloud workloads, including high performance computing (HPC), machine learning training and inference, data analytics, and graphics. In January of this year, we announced the availability of the NVIDIA T4 GPU in beta, to help customers run inference workloads faster and at lower cost. Earlier this month at Google Next ‘19, we announced the general availability of the NVIDIA T4 in eight regions, making Google Cloud the first major provider to offer it globally.

    A focus on speed and cost-efficiency

    Each T4 GPU has 16 GB of GPU memory onboard, offers a range of precision (or data type) support (FP32, FP16, INT8, and INT4), and includes NVIDIA Tensor Cores for faster training and RTX hardware acceleration for faster ray tracing. Customers can create custom VM configurations that best meet their needs, with up to four T4 GPUs, 96 vCPUs, 624 GB of host memory, and optionally up to 3 TB of in-server local SSD.

    At time of publication, prices for T4 instances are as low as $0.29 per hour per GPU on preemptible VM instances. On-demand instances start at $0.95 per hour per GPU, with up to a 30% discount with sustained use discounts.

    Tensor Cores for both training and inference

    NVIDIA’s Turing architecture brings the second generation of Tensor Cores to the T4 GPU. Debuting in the NVIDIA V100 (also available on Google Cloud Platform), Tensor Cores support mixed precision to accelerate the matrix multiplication operations that are so prevalent in ML workloads. If your training workload doesn’t fully utilize the more powerful V100, the T4 offers the acceleration benefits of Tensor Cores at a lower price. This is great for large training workloads, especially as you scale up more resources to train faster, or to train larger models.

    Tensor Cores also accelerate inference—the predictions generated by ML models—for low latency or high throughput. When Tensor Cores are enabled with mixed precision, T4 GPUs on GCP can accelerate inference on ResNet-50 more than 10x with TensorRT, compared to running only in FP32. Considering its global availability and Google’s high-speed network, the NVIDIA T4 on GCP can effectively serve global services that require fast execution at an efficient price point. For example, Snap Inc. uses the NVIDIA T4 to create more effective algorithms for its global user base, while keeping costs low.

    “Snap’s monetization algorithms have the single biggest impact to our advertisers and shareholders. NVIDIA T4-powered GPUs for inference on GCP will enable us to increase advertising efficacy while at the same time lower costs when compared to a CPU-only implementation.” —Nima Khajehnouri, Sr. Director, Monetization, Snap Inc.

    The GCP ML infrastructure combines the best of Google and NVIDIA across the globe

    You can get up and running quickly, training ML models and serving inference workloads on NVIDIA T4 GPUs, by using our Deep Learning VM images. These include all the software you’ll need: drivers, CUDA-X AI libraries, and popular AI frameworks like TensorFlow and PyTorch. We handle software updates, compatibility, and performance optimizations, so you don’t have to. Just create a new Compute Engine instance, select your image, click Start, and a few minutes later, you can access your T4-enabled instance. You can also start with our AI Platform, an end-to-end development environment that helps ML developers and data scientists build, share, and run machine learning applications anywhere. Once you’re ready, you can use Automatic Mixed Precision to speed up your workload via Tensor Cores with only a few lines of code (see the sketch at the end of this item).

    Performance at scale

    NVIDIA T4 GPUs offer value for batch compute, HPC, and rendering workloads, delivering dramatic performance and efficiency that maximizes the utility of at-scale deployments. A Princeton University neuroscience researcher had this to say about the T4’s unique price and performance:

    “We are excited to partner with Google Cloud on a landmark achievement for neuroscience: reconstructing the connectome of a cubic millimeter of neocortex. It’s thrilling to wield thousands of T4 GPUs powered by Kubernetes Engine. These computational resources are allowing us to trace 5 km of neuronal wiring, and identify a billion synapses inside the tiny volume.” —Sebastian Seung, Princeton University

    Quadro Virtual Workstations on GCP

    T4 GPUs are also a great option for running virtual workstations for engineers and creative professionals. With NVIDIA Quadro Virtual Workstations from the GCP Marketplace, users can run applications built on the NVIDIA RTX platform to experience the next generation of computer graphics, including real-time ray tracing and AI-enhanced graphics, video, and image processing, from anywhere.

    “Access to NVIDIA Quadro Virtual Workstation on the Google Cloud Platform will empower many of our customers to deploy and start using Autodesk software quickly, from anywhere. For certain workflows, customers leveraging NVIDIA T4 and RTX technology will see a big difference when it comes to rendering scenes and creating realistic 3D models and simulations. We’re excited to continue to collaborate with NVIDIA and Google to bring increased efficiency and speed to artist workflows." —Eric Bourque, Senior Software Development Manager, Autodesk

    Get started today

    Check out our GPU page to learn more about how the wide selection of GPUs available on GCP can meet your needs. You can learn about customer use cases and the latest updates to GPUs on GCP in our Google Cloud Next ‘19 talk, GPU Infrastructure on GCP for ML and HPC Workloads. Once you’re ready to dive in, try running a few TensorFlow inference workloads by reading our blog or our documentation and tutorials. Read more »
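    For the Automatic Mixed Precision step mentioned above, here is a minimal TensorFlow sketch using the graph-rewrite API that was documented around the time of this post; the optimizer and learning rate are illustrative, and the environment-variable alternative noted in the comments applies to NVIDIA's TensorFlow containers:

    ```python
    # Minimal sketch: enable Automatic Mixed Precision (AMP) so eligible
    # float32 ops run in float16 on the T4's Tensor Cores. Shown with the
    # TF 1.x-style graph rewrite; the optimizer choice is illustrative.
    import tensorflow.compat.v1 as tf

    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
    optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)
    # ...build the graph and call optimizer.minimize(loss) as usual; AMP
    # inserts the float16 casts and loss scaling automatically.
    # (NVIDIA's TF containers also honor TF_ENABLE_AUTO_MIXED_PRECISION=1.)
    ```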
  • Train and deploy state-of-the-art mobile image classification models via Cloud TPU
    As organizations use machine learning (ML) more frequently in mobile and embedded devices, training and deploying small, fast, and accurate machine learning models becomes increasingly important. To help accelerate this process, we’ve published open-source Cloud TPU models to enable you and your data science team to train state-of-the-art mobile image classification models faster and at a lower cost.

    For many IoT-focused businesses, it’s also essential to optimize both latency and accuracy, especially on low-power, resource-constrained devices. By leveraging a novel, platform-aware neural architecture search framework (MnasNet), we identified a model architecture that can outperform the previous state-of-the-art MobileNetV1 and MobileNetV2 models that were carefully built by hand. You can find a comparison between MnasNet and MobileNetV2 below: the new MnasNet model runs inference nearly 1.8x faster (or with 55% less latency) than the corresponding MobileNetV2 model while maintaining the same ImageNet top-1 classification accuracy.

    How to train MnasNet on Cloud TPU

    We specifically designed and optimized MnasNet to train as fast as we could make it on Cloud TPUs. The MnasNet model training source code is now available in the TensorFlow TPU GitHub repository. Using this code, you can benefit from both low training cost and fast inference speed when you train MnasNet on Cloud TPUs and export the trained model for deployment.

    If you have not yet experimented with training models on Cloud TPUs, you might want to begin by following the Quickstart guide. Once you are up and running with Cloud TPUs, you can begin training an MnasNet model. The model processes training data in TFRecord format, which can be created from input image collections via TensorFlow’s Apache Beam pipeline tool. You can find more details on how to use Cloud TPUs to train MnasNet in our tutorial.

    To help you further tune your MnasNet model, we have published additional notes about our implementation, along with a variety of suggested tuning parameters to accommodate different classification latency requirements.

    How you can deploy via SavedModel or TensorFlow Lite

    You can easily deploy models trained on Cloud TPUs to a variety of different platforms and devices. We have published pre-trained SavedModel files (mnasnet-a1 and mnasnet-b1) from ImageNet training runs to help you get started: you can use this MnasNet Colab to experiment with these pre-trained models interactively.

    You can easily deploy your newly trained model by exporting it to TensorFlow Lite, converting an exported SavedModel into a *.tflite file (see the sketch below). Next, you can optionally apply post-training quantization, a common technique that reduces model size while also providing up to 3x lower latency. These improvements are a result of smaller word sizes that enable faster computation and more efficient memory usage: post-training quantization converts 32-bit floating point numbers into more efficient 8-bit integers (also shown in the sketch below).

    The open-source implementation provided in the Cloud TPU repository implements SavedModel export, TensorFlow Lite export, and TensorFlow Lite’s post-training quantization by default.
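    The original code listings were not preserved, so here is a minimal sketch of the export and quantization steps described above, assuming a TF 1.14+/2.x TFLiteConverter; the paths are illustrative placeholders:

    ```python
    # Minimal sketch: convert an exported SavedModel to a .tflite file,
    # then apply post-training quantization. Paths are placeholders.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model('./mnasnet_savedmodel')
    tflite_model = converter.convert()
    open('mnasnet.tflite', 'wb').write(tflite_model)

    # Optional post-training quantization: weights are stored as 8-bit
    # integers, shrinking the model and reducing latency (up to ~3x).
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    quantized_model = converter.convert()
    open('mnasnet_quantized.tflite', 'wb').write(quantized_model)
    ```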
    The code also includes a default serving input function that decodes and classifies JPEG images; if your application requires custom input preprocessing, you should consider modifying this example to perform your own input preprocessing (for serving, or for on-device deployment via TensorFlow Lite).

    With this new open-source MnasNet implementation for Cloud TPU, it is easier and faster to train a state-of-the-art image classification model with full control and deploy it on mobile and embedded devices. Check out our tutorial and Colab to get started.

    Acknowledgements

    Many thanks to the Googlers who contributed to this post, including Zak Stone, Xiaodan Song, David Shevitz, Barrett Williams, Russell Power, Adam Kerin, and Quoc Le. Read more »
  • AI in Depth: Serving a PyTorch text classifier on AI Platform Serving using custom online prediction
    Earlier this week, we explained in detail how you might build and serve a text classifier in TensorFlow. In today’s blog post, we’ll explain how to implement the same model using PyTorch, another machine learning framework, and deploy it to AI Platform Serving for online prediction. We will reuse the preprocessing implemented in Keras in the previous blog post. The code for this example can be found in this Notebook.

    AI Platform ML Engine is a serverless, NoOps product that lets you train and serve machine learning models at scale. These models can then be served as REST APIs for online prediction. AI Platform Serving automatically scales to adjust to any throughput, and provides secure authentication to its REST endpoints.

    To help maintain affinity of preprocessing between training and serving, AI Platform Serving now enables users to customize the prediction routine that gets called when sending prediction requests to a model deployed on AI Platform Serving. This feature allows you to upload a Custom Model Prediction class, along with your exported model, to apply custom logic before or after invoking the model for prediction.

    In other words, we can now leverage AI Platform Serving to execute arbitrary Python code, breaking the previous coupling with TensorFlow. This change enables you to pick the best framework for the job, or even combine multiple frameworks into a single application. For example, we can use Keras APIs for their easy-to-use text preprocessing methods, and combine them with PyTorch for the actual machine learning model. This combination of frameworks is precisely what we’ll discuss in this blog post.

    For more details on text classification, the Hacker News dataset used in the example, and the text preprocessing logic, refer to the Serving a Text Classifier with Preprocessing using AI Platform Serving blog post.

    Building a PyTorch text classification model

    You can begin by implementing your TorchTextClassifier model class in the torch_model.py module. We implement the same text classification model architecture described in this post, which consists of an Embedding layer and a Dropout layer, followed by two Conv1d and pooling layers, then a Dense layer with Softmax activation at the end (a stand-in version appears in the sketch at the end of this item).

    Loading and preprocessing data

    The following steps prepare both the training and evaluation data. Note that you use both fit() and transform() with the training data, while you only use transform() with the evaluation data, to make use of the tokenizer generated from the training data. The resulting train_texts_vectorized and eval_texts_vectorized objects will be used to train and evaluate our text classification model, respectively.

    The implementation of the TextPreprocessor class, which uses Keras APIs, is described in the Serving a Text Classifier with Preprocessing using AI Platform Serving blog post. Now you need to save the processor object—which includes the tokenizer generated from the training data—to be used when serving the model for prediction, by dumping the object to a new processor_state.pkl file.

    Training and saving the PyTorch model

    To train your PyTorch model (see the sketch at the end of this item), you first create an object of the TorchTextClassifier, according to your parameters. Second, you implement a training loop in which, at each iteration, you obtain predictions from your model (y_pred) for the current training batch, compute the loss using cross_entropy, and backpropagate using loss.backward() and optimizer.step(). After NUM_EPOCH epochs, the trained model is saved to the torch_saved_model.pt file.

    Implementing the Custom Prediction class

    In order to apply a custom prediction routine, which includes both preprocessing and postprocessing, you need to wrap this logic in a Custom Model Prediction class. This class, along with the trained model and its corresponding preprocessing object, will be used to deploy the AI Platform Serving microservice. The Custom Model Prediction class (CustomModelPrediction) for our text classification example is implemented in the model_prediction.py module.

    Deploying to AI Platform Serving

    Uploading the artifacts to Cloud Storage

    First, you’ll want to upload your artifacts to Cloud Storage:

    • Your saved (trained) model file: torch_saved_model.pt (see Training and saving the PyTorch model above).
    • Your pickled preprocessing object (which contains the state needed for data transformation prior to prediction): processor_state.pkl. As described in the previous, Keras-based post, the processor_state.pkl object includes the tokenizer generated from the training data.

    Second, you need to upload a Python package including all the classes you’ll need for prediction (preprocessing, model classes, and post-processing, if any). In this example, you need to create a `pip`-installable tar file that includes torch_model.py, model_prediction.py, and preprocess.py, by writing a setup.py file. The setup.py file includes a list of the PyPI packages you need to `pip install` and use for prediction in the REQUIRED_PACKAGES variable; because we are deploying a model implemented in PyTorch, we need to include ‘torch’ in REQUIRED_PACKAGES.

    Now, you can create the package by running the setup script. This will create a `.tar.gz` package under the /dist directory. The name of the package will be `$name-$version.tar.gz`, where `$name` and `$version` are the ones specified in setup.py. Once you have successfully created the package, you can upload it to Cloud Storage.

    Deploying the model to AI Platform Serving

    Let’s define the model name, the model version, and the AI Platform Serving runtime (which corresponds to a TensorFlow version) required for deploying the model. First, you create a model in AI Platform Serving by running a gcloud command. Second, you create a model version using another gcloud command, in which you specify the location of the model and preprocessing object (--origin), the location of the package(s) including the scripts needed for your prediction (--package-uris), and a pointer to your Custom Model Prediction class (--prediction-class). This should take one to two minutes.

    After deploying the model to AI Platform Serving, you can invoke the model for prediction using the code described in the previous Keras-based blog post. Note that the client of our REST API does not need to know whether the service was implemented in TensorFlow or in PyTorch. In either case, the client should send the same request, and receive a response of the same form.

    Conclusion

    Although AI Platform initially provided only support for TensorFlow, it is now evolving into a platform that supports multiple frameworks.
Once you have successfully created the package, you can upload it to Cloud Storage.

Deploying the model to AI Platform Serving

Let's define the model name, the model version, and the AI Platform Serving runtime (which corresponds to a TensorFlow version) required for deploying the model. First, you create a model in AI Platform Serving by running a gcloud command. Second, you create a model version using another gcloud command, in which you specify the location of the model and preprocessing object (--origin), the location of the package(s) including the scripts needed for your prediction (--package-uris), and a pointer to your Custom Model Prediction class (--prediction-class). This should take one to two minutes.

After deploying the model to AI Platform Serving, you can invoke the model for prediction using the code described in the previous, Keras-based blog post. Note that the client of our REST API does not need to know whether the service was implemented in TensorFlow or in PyTorch. In either case, the client sends the same request and receives a response of the same form.

Conclusion

Although AI Platform initially supported only TensorFlow, it is now evolving into a platform that supports multiple frameworks. You can deploy models built with TensorFlow, PyTorch, or any other Python-based ML framework, since AI Platform Serving supports custom prediction Python code, available in beta. This post demonstrates that you can flexibly deploy a PyTorch text classifier that reuses text preprocessing logic implemented using Keras. Feel free to reach out to @GCPcloud if there are features or other frameworks you'd like to train or deploy on AI Platform Serving.

Next steps

- To learn more about AI Platform Serving custom online prediction, read this blog post.
- To learn more about machine learning on GCP, take this course.
- To try out the code, run this Notebook. Read more »
  • American Cancer Society uses Google Cloud machine learning to power cancer research
Among the most promising and important applications of machine learning is finding better ways to diagnose and treat life-threatening conditions, including diseases such as cancer that cut far too many lives short. In the United States, cancer is the second most common cause of death and accounts for nearly one in four deaths. Prevention and early detection are critical to improving survival, but there remains much that medical professionals do not understand about lifestyle factors, diagnosis, and treatment of specific subtypes of cancer.

The American Cancer Society is using Google Cloud to reinvent the way data is analyzed so they can save more lives. For the past few decades, ACS has conducted the Cancer Prevention Study-II (CPS-II) Nutrition cohort, a prospective study of more than 184,000 American men and women, to explore how factors such as height, weight, demographic characteristics, and personal and family history can affect cancer etiology and prognosis.

Mia M. Gaudet, PhD, Scientific Director of Epidemiology Research at ACS, was able to use an end-to-end ML pipeline built on Google Cloud to perform deep analysis of breast cancer tissue samples. Breast cancer is the most commonly diagnosed type of cancer among women and the second leading cause of cancer death.

After obtaining medical records and surgical tissue samples for 1,700 CPS-II study participants who were diagnosed with breast cancer at hundreds of hospitals throughout the U.S., Dr. Gaudet studied high-resolution images of the tumor tissue in an effort to determine what lifestyle, medical, and genetic factors are related to molecular subtypes of breast cancer, and whether different features in the breast cancer tissue translate to a better prognosis.

She faced a few technical challenges in analyzing the 1,700 images of breast tumor tissue:

- They were captured in a high-resolution, uncompressed, proprietary format (up to 10 GB each). Image conversion would be exceedingly costly and time consuming.
- Even if the images were converted to a usable format, it would take a team of highly trained pathologists up to three years to analyze all 1,700, and at significant cost.
- Analysis would be subject to human fatigue and bias, and some patterns might not be detectable by humans at all.

How Slalom used Cloud ML Engine to help Dr. Gaudet complete her research

To overcome these challenges, Dr. Gaudet and ACS teamed up with Slalom, a Google Cloud premier partner, to facilitate deep learning at scale. Quality of preprocessing standardization was critical, and the images needed to be translated consistently, with colors normalized. The interpretation of colors across images was standardized through the reduction of color variance, and every image was broken into evenly sized tiles to distribute the workload and optimize the data structure required to train the models.

Slalom used GCP to build an end-to-end machine learning pipeline, including preprocessing, feature engineering, and clustering:

- Cloud Machine Learning Engine (Cloud ML Engine) enabled preprocessing, model training, and batch prediction.
- Images were stored using Cloud Storage.
- Compute Engine orchestrated image conversion and initiated Cloud ML Engine training and prediction jobs in the correct sequence.

Using Keras with a TensorFlow backend for prototyping, Slalom created an autoencoder model. It then used distributed training on Cloud ML Engine to convert the images into feature vectors that represent patterns in the images as sequences of numbers. The features were then clustered with TensorFlow, once again using Cloud ML Engine. The result was a set of cluster assignments, one for each tile in an image, which the American Cancer Society plans to use in follow-up analyses. The sketch after this paragraph illustrates the autoencoder-plus-clustering pattern.
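To illustrate the shape of this pipeline, here is a compact, toy version. This is not Slalom's actual model: the tile size, layer sizes, and training settings are assumptions, and for brevity the clustering step uses scikit-learn's KMeans rather than the TensorFlow-based clustering described above.

```python
# Illustrative only: a toy autoencoder that compresses image tiles into
# feature vectors, which are then clustered. All sizes are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from tensorflow.keras import layers, models

TILE = 64  # assumed tile edge length in pixels

# Encoder: tile -> low-dimensional feature vector.
inputs = layers.Input(shape=(TILE, TILE, 3))
x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inputs)
x = layers.Conv2D(8, 3, strides=2, padding='same', activation='relu')(x)
features = layers.Dense(32, activation='relu')(layers.Flatten()(x))

# Decoder: reconstruct the tile, forcing the features to capture its patterns.
x = layers.Dense((TILE // 4) * (TILE // 4) * 8, activation='relu')(features)
x = layers.Reshape((TILE // 4, TILE // 4, 8))(x)
x = layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(x)
outputs = layers.Conv2DTranspose(3, 3, strides=2, padding='same',
                                 activation='sigmoid')(x)

autoencoder = models.Model(inputs, outputs)
encoder = models.Model(inputs, features)  # shares the trained encoder layers
autoencoder.compile(optimizer='adam', loss='mse')

tiles = np.random.rand(128, TILE, TILE, 3).astype('float32')  # stand-in tiles
autoencoder.fit(tiles, tiles, epochs=1, batch_size=32, verbose=0)

# One feature vector per tile, then one cluster assignment per tile.
vectors = encoder.predict(tiles, verbose=0)
assignments = KMeans(n_clusters=4, n_init=10).fit_predict(vectors)
```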
With this approach, analysis was completed in only three months, twelve times faster than projected, and with a higher degree of accuracy and consistency. The analysis found interesting results: it isolated potentially significant patterns in the cancer tissue that might help inform risk factors and prognosis in the future.

"By leveraging Cloud ML Engine to analyze cancer images, we're gaining more understanding of the complexity of breast tumor tissue and how known risk factors lead to certain patterns," said Gaudet.

ACS now has established processes and a cloud infrastructure that will be reusable on similar projects to come. We're enormously proud that our technology is helping medical professionals who are working tirelessly to prevent cancer deaths and improve outcomes. For more information on Cloud ML Engine, visit our website. Read more »
  • What’s in an image: fast, accurate image segmentation with Cloud TPUs
Google designed Cloud TPUs from the ground up to accelerate cutting-edge machine learning (ML) applications, from image recognition, to language modeling, to reinforcement learning. And now, we've made it even easier for you to use Cloud TPUs for image segmentation (the process of identifying and labeling regions of an image based on the objects or textures they contain) by releasing high-performance TPU implementations of two state-of-the-art segmentation models, Mask R-CNN and DeepLab v3+, as open source code. Below, you can find performance and cost metrics for both models that can help you choose the right model and TPU configuration for your business or product needs.

A brief introduction to image segmentation

Image segmentation is the process of labeling regions in an image, often down to the pixel level. There are two common types of image segmentation:

- Instance segmentation: this process gives each individual instance of one or multiple object classes a distinct label. In a family photo containing several people, this type of model would automatically highlight each person with a different color.
- Semantic segmentation: this process labels each pixel of an image according to the class of object or texture it represents. For example, pixels in an image of a city street scene might be labeled as "pavement," "sidewalk," "building," "pedestrian," or "vehicle."

The short sketch after this list makes the distinction between the two output types concrete.
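As a toy illustration of the difference (the arrays here are random stand-ins, not real model outputs), semantic segmentation yields one label per pixel, while instance segmentation yields one mask per detected object:

```python
# Toy illustration with random stand-ins, not real model outputs.
import numpy as np

H, W, NUM_CLASSES = 4, 4, 3

# Semantic segmentation: per-pixel class scores -> one label per pixel.
pixel_scores = np.random.rand(H, W, NUM_CLASSES)
label_map = pixel_scores.argmax(axis=-1)        # shape (H, W); e.g. 0 = "pavement"

# Instance segmentation: one binary mask per detected object, so two people
# in a photo yield two separate masks even though they share a class.
instance_masks = np.random.rand(2, H, W) > 0.5  # one (H, W) mask per instance
instance_classes = np.array([1, 1])             # e.g. both instances are "person"
```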
Autonomous driving, geospatial image processing, and medical imaging, among other applications, typically require both of these types of segmentation. And image segmentation is even an exciting new enabler for certain photo and video editing processes, including bokeh and background removal!

High performance, high accuracy, and low cost

When you choose to work with image segmentation models, you'll want to consider a number of factors: your accuracy target, the total training time to reach this accuracy, the cost of each training run, and more. To jump-start your analysis, we have trained Mask R-CNN and DeepLab v3+ on standard image segmentation datasets and collected many of these metrics in the tables below.

Instance segmentation using Mask R-CNN

Figure 1: Mask R-CNN training performance and accuracy, measured on the COCO dataset

Semantic segmentation using DeepLab v3+

Figure 2: DeepLab v3+ training performance and accuracy, measured on the PASCAL VOC 2012 dataset

As you can see above, Cloud TPUs can help you train state-of-the-art image segmentation models with ease, and you'll often reach usable accuracy very quickly. At the time we wrote this blog post, the first two Mask R-CNN training runs and both of the DeepLab v3+ runs in the tables above cost less than $50 using the on-demand Cloud TPU devices that are now generally available.

By providing these open source image segmentation models and optimizing them for a range of Cloud TPU configurations, we aim to enable ML researchers, ML engineers, app developers, students, and many others to train their own models quickly and affordably to meet a wide range of real-world image segmentation needs.

A closer look at Mask R-CNN and DeepLab v3+

In order to achieve the image segmentation performance described above, you'll need a combination of extremely fast hardware and well-optimized software. In the following sections, you can find more details on each model's implementation.

Mask R-CNN

Mask R-CNN is a two-stage instance segmentation model that can be used to localize multiple objects in an image down to the pixel level. The first stage of the model extracts features (distinctive patterns) from an input image to generate region proposals that are likely to contain objects of interest. The second stage refines and filters those region proposals, predicts the class of every high-confidence object, and generates a pixel-level mask for each object.

Figure 3: An image from Wikipedia with an overlay of Mask R-CNN instance segmentation results.

In the Mask R-CNN table above, we explored various trade-offs between training time and accuracy. The accuracy you wish to achieve as you train Mask R-CNN will vary by application: for some, training speed might be your top priority, whereas for others, you'll prioritize training to the highest possible accuracy, even if more training time and associated costs are needed to reach that accuracy threshold.

The training time your model will require depends on both the number of training epochs and your chosen TPU hardware configuration. When training for 12 epochs, Mask R-CNN training on the COCO dataset typically surpasses an object detection "box accuracy" of 37 mAP ("mean Average Precision"). While this accuracy threshold may be considered usable for many applications, we also report training results using 24 and 48 epochs across various Cloud TPU configurations to help you evaluate the current accuracy-speed trade-off and choose an option that works best for your application. All the numbers in the tables above were collected using TensorFlow version 1.13. While we expect your results to be similar to ours, your results may vary.

Here are some high-level conclusions from our Mask R-CNN training trials:

- If budget is your top priority, a single Cloud TPU v2 device (v2-8) should serve you well. With a Cloud TPU v2, our Mask R-CNN implementation trains overnight to an accuracy point of more than 37 mAP for less than $50. With a preemptible Cloud TPU device, that cost can drop to less than $20.
- Alternatively, if you choose a Cloud TPU v3 device (v3-8), you should benefit from a speedup of up to 1.7x over a Cloud TPU v2 device, without any code changes.
- Cloud TPU Pods enable even faster training at larger scale. Using just 1/16th of a Cloud TPU v3 Pod, Mask R-CNN trains to the highest accuracy tier in the table in under two hours.

DeepLab v3+

Google's DeepLab v3+, a fast and accurate semantic segmentation model, makes it easy to label regions in images. For example, a photo editing application might use DeepLab v3+ to automatically select all of the pixels of sky above the mountains in a landscape photograph.

Last year, we announced the initial open source release of DeepLab v3+, which as of this writing is still the most recent version of DeepLab. The DeepLab v3+ implementation featured above includes optimizations that target Cloud TPU.

Figure 4: Semantic segmentation results using DeepLab v3+ [image from the DeepLab v3 paper]

We trained DeepLab v3+ on the PASCAL VOC 2012 dataset using TensorFlow version 1.13 on both Cloud TPU v2 and Cloud TPU v3 hardware. Using a single Cloud TPU v2 device (v2-8), DeepLab v3+ training completes in about 8 hours and costs less than $40 (less than $15 using preemptible Cloud TPUs).
Cloud TPU v3 offers twice the memory (128 GB) and more than twice the peak compute (420 teraflops), enabling a speedup of about 1.7x without any code changes.

Getting started: in a sandbox, or in your own project

It's easy to start experimenting with both of the models above by using a free Cloud TPU in Colab right in your browser:

- Mask R-CNN Colab
- DeepLab v3+ Colab

You can also get started with these image segmentation models in your own Google Cloud projects by following these tutorials:

- Mask R-CNN tutorial (source code here)
- DeepLab v3+ tutorial (source code here)

If you're new to Cloud TPUs, you can get familiar with the platform by following our quickstart guide, and you can also request access to Cloud TPU v2 Pods, available in alpha today. For more guidance on determining whether you should use an individual Cloud TPU or an entire Cloud TPU Pod, check out our comparison documentation here.

Acknowledgements

Many thanks to the Googlers who contributed to this post, including Zak Stone, Pengchong Jin, Shawn Wang, Chiachen Chou, David Shevitz, Barrett Williams, Liang-Chieh Chen, Yukun Zhu, Yeqing Li, Wes Wahlin, Pete Voss, Sharon Maher, Tom Nguyen, Xiaodan Song, Adam Kerin, and Ruoxin Sang. Read more »
  • AI in depth: Creating preprocessing-model serving affinity with custom online prediction on AI Platform Serving
AI Platform Serving now lets you deploy your trained machine learning (ML) model with custom online prediction Python code, in beta. In this blog post, we show how custom online prediction code helps maintain affinity between your preprocessing logic and your model, which is crucial to avoid training-serving skew. As an example, we build a Keras text classifier and deploy it for online serving on AI Platform, along with its text preprocessing components. The code for this example can be found in this Notebook.

Background

The hard work of building an ML model pays off only when you deploy the model and use it in production, whether you integrate it into your pre-existing systems or incorporate it into a novel application. If your model has multiple possible consumers, you might want to deploy it as an independent, coherent microservice that is invoked via a REST API and can automatically scale to meet demand. Although AI Platform may be better known for its training abilities, it can also serve TensorFlow, Keras, scikit-learn, and XGBoost models with REST endpoints for online prediction.

While training a model, it's common to transform the input data into a format that improves model performance. But when performing predictions, the model expects the input data to already exist in that transformed form. For example, the model might expect a normalized numerical feature, a TF-IDF encoding of terms in text, or a constructed feature based on a complex, custom transformation. However, the callers of your model will send "raw", untransformed data, and the caller doesn't (or shouldn't) need to know which transformations are required. This means the model microservice will be responsible for applying the required transformations to the data before invoking the model for prediction.

The affinity between the preprocessing routines and the model (i.e., having both of them coupled in the same service) is crucial to avoid training-serving skew, since you'll want to ensure that these routines are applied to any data sent to the model, with no assumptions about how the callers prepare the data. Moreover, the model-preprocessing affinity helps to decouple the model from the caller: if a new model version requires new transformations, the preprocessing routines can change independently of the caller, which keeps sending data in its raw format.

Besides preprocessing, your deployed model's microservice might also perform other operations, including postprocessing of the prediction produced by the model, or even more complex prediction routines that combine the predictions of multiple models.

To help maintain affinity of preprocessing between training and serving, AI Platform Serving now lets you customize the prediction routine that gets called when sending prediction requests to a model deployed on AI Platform Serving. This feature allows you to upload a custom model prediction class, along with your exported model, to apply custom logic before or after invoking the model for prediction.

Customizing prediction routines can be useful in the following scenarios:

- Applying (state-dependent) preprocessing logic to transform the incoming data points before invoking the model for prediction.
- Applying (state-dependent) postprocessing logic to the model prediction before sending the response to the caller. For example, you might want to convert the class probabilities produced by the model to a class label.
- Integrating rule-based and heuristics-based prediction with model-based prediction.
- Applying a custom transform used in fitting a scikit-learn pipeline.
- Performing complex prediction routines based on multiple models, such as aggregating predictions from an ensemble of estimators, or calling one model based on the output of another in a hierarchical fashion.

The above tasks can be accomplished with custom online prediction, using the standard frameworks supported by AI Platform Serving, as well as with any model developed in your favorite Python-based framework, including PyTorch. All you need to do is include the dependency libraries in the setup.py of your custom model package (as discussed below). Note that without this feature, you would need to implement the preprocessing, postprocessing, or any custom prediction logic in a "wrapper" service, using, for example, App Engine. This App Engine service would also be responsible for calling the AI Platform Serving models, but this approach adds complexity to the prediction system, as well as latency to the prediction time.

Next, we'll demonstrate how we built a microservice that can handle both preprocessing and postprocessing scenarios using AI Platform custom online prediction, with text classification as the example. We chose to implement the text preprocessing logic and build the classifier using Keras, but thanks to AI Platform custom online prediction, you could implement the preprocessing using any other libraries (like NLTK or scikit-learn) and build the model using any other Python-based ML framework (like TensorFlow or PyTorch). You can find the code for this example in this Notebook.

A text classification example

Text classification algorithms are at the heart of a variety of software systems that process text data at scale. The objective is to classify (categorize) text into a set of predefined classes, based on the text's content. This text can be a tweet, a web page, a blog post, user feedback, or an email: in the context of text-oriented ML models, a single text entry (like a tweet) is usually referred to as a "document."

Common use cases of text classification include:

- Spam filtering: classifying an email as spam or not.
- Sentiment analysis: identifying the polarity of a given text, such as tweets or product and service reviews.
- Document categorization: identifying the topic of a given document (for example, politics, sports, or finance).
- Ticket routing: identifying the department to which a ticket should be dispatched.

You can design your text classification model in two different ways, and choosing one over the other will influence how you'll need to prepare your data before training the model:

- N-gram models: in this option, the model treats a document as a "bag of words," or more precisely a "bag of terms," where a term can be one word (unigram), two words (bigram), or n words (n-gram). The ordering of the words in the document is not relevant. The feature vector representing a document encodes whether a term occurs in the document (binary encoding), how many times the term occurs in the document (count encoding), or, more commonly, a Term Frequency Inverse Document Frequency (TF-IDF) weight. Gradient-boosted trees and Support Vector Machines are typical techniques used with n-gram models.
- Sequence models: with this option, the text is treated as a sequence of words or terms, and the model uses the word-ordering information to make the prediction. Types of sequence models include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variations.

In our example, we utilize the sequence model approach. The short sketch after this paragraph contrasts the two input representations.
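As a quick, illustrative contrast (not code from the post), the Keras Tokenizer can produce both representations from the same titles: a bag-of-terms TF-IDF matrix for an n-gram model, and padded integer sequences for a sequence model.

```python
# Illustrative only: contrasting n-gram (TF-IDF) and sequence inputs.
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

titles = ['show hn a tiny compiler', 'new york subway fares rise']

tokenizer = Tokenizer(num_words=1000)
tokenizer.fit_on_texts(titles)

# N-gram style: one fixed-size vector per title, word order discarded.
tfidf_matrix = tokenizer.texts_to_matrix(titles, mode='tfidf')

# Sequence style: word order preserved as a padded list of token indices.
sequences = pad_sequences(tokenizer.texts_to_sequences(titles), maxlen=10)

print(tfidf_matrix.shape)  # (2, 1000)
print(sequences)           # two rows of 10 integer word indices
```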
Hacker News is one of many public datasets available in BigQuery. This dataset includes titles of articles from several data sources. For the following tutorial, we extracted the titles that belong to either GitHub, The New York Times, or TechCrunch, and saved them as CSV files in a publicly shared Cloud Storage bucket at the following location: gs://cloud-training-demos/blogs/CMLE_custom_prediction

Here are some useful statistics about this dataset:

- Total number of records: 96,203
- Minimum, maximum, and average number of words per title: 1, 52, and 8.7
- Number of records from GitHub, The New York Times, and TechCrunch: 36,525, 28,787, and 30,891
- Training and evaluation split: 75% and 25%

The objective of the tutorial is to build a text classification model, using Keras, that identifies the source of an article given its title, and to deploy the model to AI Platform Serving using custom online prediction, so that it can perform text preprocessing and prediction postprocessing.

Preprocessing text

Sequence tokenization with Keras

In this example, we perform the following preprocessing steps:

- Tokenization: divide the documents into words. This step determines the "vocabulary" of the dataset (the set of unique tokens present in the data). In this example, you make use of the 20,000 most frequent words and discard the others from the vocabulary. This value is set through the VOCAB_SIZE parameter.
- Vectorization: define a good numerical measure to characterize these documents. A given embedding's representation of the tokens (words) will be helpful when you're ready to train your sequence model; however, those embeddings are created as part of the model, rather than as a preprocessing step. Thus, what you need here is to simply convert each token to a numerical indicator: each article's title is represented as a sequence of integers, each an indicator of a token in the vocabulary that occurred in the title.
- Length fixing: after vectorization, you have a set of variable-length sequences. In this step, the sequences are converted to a single fixed length, 50, configured through the MAX_SEQUENCE_LENGTH parameter. Sequences with more than 50 tokens are right-trimmed, while sequences with fewer than 50 tokens are left-padded with zeros.

Both the tokenization and vectorization steps are considered stateful transformations. In other words, you extract the vocabulary from the training data (after tokenization, keeping the top frequent words) and create a word-to-indicator lookup, based on that vocabulary, for vectorization. This lookup will be used to vectorize new titles for prediction, so after creating it, you need to save it in order to (re)use it when serving the model.

The TextPreprocessor class in the preprocess.py module performs this text preprocessing with two methods:

- fit(): applied to the training data to generate the lookup (tokenizer). The tokenizer is stored as an attribute of the object.
- transform(): applies the tokenizer to any text data to generate fixed-length sequences of word indicators.

A minimal sketch of this class follows.
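Here is a minimal sketch of what preprocess.py might contain; the method names and behavior follow the description above, while the implementation details are assumptions:

```python
# Sketch of the TextPreprocessor described above; details are assumptions.
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

VOCAB_SIZE = 20000          # keep the 20,000 most frequent words
MAX_SEQUENCE_LENGTH = 50    # fixed sequence length

class TextPreprocessor(object):
    def __init__(self, vocab_size=VOCAB_SIZE,
                 max_sequence_length=MAX_SEQUENCE_LENGTH):
        self._vocab_size = vocab_size
        self._max_sequence_length = max_sequence_length
        self._tokenizer = None

    def fit(self, texts):
        # Stateful step: learn the word-to-indicator lookup from training titles.
        tokenizer = Tokenizer(num_words=self._vocab_size)
        tokenizer.fit_on_texts(texts)
        self._tokenizer = tokenizer

    def transform(self, texts):
        # Vectorize, then left-pad short titles with zeros and trim long
        # titles on the right, as described above.
        sequences = self._tokenizer.texts_to_sequences(texts)
        return pad_sequences(sequences, maxlen=self._max_sequence_length,
                             padding='pre', truncating='post')
```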
Preparing training and evaluation data

The following code prepares the training and evaluation data (that is, it converts each raw text title to a NumPy array of 50 numeric indicators). Note that you use both fit() and transform() with the training data, but only transform() with the evaluation data, so that evaluation reuses the tokenizer generated from the training data. The outputs, train_texts_vectorized and eval_texts_vectorized, will be used to train and evaluate the text classification model, respectively.

Next, save the processor object (which includes the tokenizer generated from the training data) to be used when serving the model for prediction. The following code dumps the object to the processor_state.pkl file.

Training a Keras model

The create_model method builds the model architecture: a Sequential Keras model with an Embedding layer and a Dropout layer, followed by two Conv1d and pooling layers, then a Dense layer with Softmax activation at the end. The model is compiled with the sparse_categorical_crossentropy loss and the acc (accuracy) evaluation metric.

The next code snippet in the Notebook creates the model by calling the create_model method with the required parameters, trains the model on the training data, and evaluates the trained model's quality using the evaluation data. Lastly, the trained model is saved to the keras_saved_model.h5 file.

Implementing a custom model prediction class

In order to apply a custom prediction routine that includes preprocessing and postprocessing, you need to wrap this logic in a Custom Model Prediction class. This class, along with the trained model and the saved preprocessing object, will be used to deploy the AI Platform Serving microservice. The Custom Model Prediction class (CustomModelPrediction) for our text classification example is implemented in the model_prediction.py module. Note the following points in its implementation:

- from_path is a classmethod, responsible for loading both the model and the preprocessing object from their saved files, and instantiating a new CustomModelPrediction object with the loaded model and preprocessor object (both stored as attributes of the object).
- predict is the method invoked when you call the "predict" API of the deployed AI Platform Serving model. It receives the instances (a list of titles) for which predictions are needed, prepares the text data for prediction by applying the transform() method of the "stateful" self._processor object, calls self._model.predict() to produce the predicted class probabilities given the prepared text, and postprocesses the output by calling the _postprocess method.
- _postprocess is the method that receives the class probabilities produced by the model, picks the label index with the highest probability, and converts this label index to a human-readable label: github, nytimes, or techcrunch.

A minimal sketch of this class follows.
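Here is a minimal sketch of model_prediction.py, following the description above; the label ordering and loading details are assumptions:

```python
# Sketch of the CustomModelPrediction class described above; the label
# ordering and loading details are assumptions.
import os
import pickle

import numpy as np
from tensorflow.keras.models import load_model

CLASSES = ['github', 'nytimes', 'techcrunch']  # assumed label order

class CustomModelPrediction(object):
    def __init__(self, model, processor):
        self._model = model          # trained Keras classifier
        self._processor = processor  # fitted, stateful TextPreprocessor

    def _postprocess(self, probabilities):
        # Pick the most probable class and map it to a human-readable label.
        return [CLASSES[int(np.argmax(probs))] for probs in probabilities]

    def predict(self, instances, **kwargs):
        # instances: list of raw titles sent by the caller.
        preprocessed = self._processor.transform(instances)
        probabilities = self._model.predict(preprocessed)
        return self._postprocess(probabilities)

    @classmethod
    def from_path(cls, model_dir):
        # Load both the model and the preprocessing object from their files.
        model = load_model(os.path.join(model_dir, 'keras_saved_model.h5'))
        with open(os.path.join(model_dir, 'processor_state.pkl'), 'rb') as f:
            processor = pickle.load(f)
        return cls(model, processor)
```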
Deploying to AI Platform Serving

Figure 1 shows an overview of how to deploy the model, along with the artifacts required for a custom prediction routine, to AI Platform Serving.

Uploading the artifacts to Cloud Storage

The first thing you want to do is upload your artifacts to Cloud Storage:

- Your saved (trained) model file: keras_saved_model.h5 (see the "Training a Keras model" section).
- Your pickled (serialized) preprocessing object (which contains the state needed for data transformation prior to prediction): processor_state.pkl (see the "Preprocessing text" section). Remember, this object includes the tokenizer generated from the training data.

Second, upload a Python package including all the classes you need for prediction (e.g., preprocessing, model classes, and postprocessing). In this example, you need to create a pip-installable tar with model_prediction.py and preprocess.py. First, create a setup.py file, then generate the package by running the sdist command; this creates a .tar.gz package under a new dist/ directory in your working directory. The name of the package will be $name-$version.tar.gz, where $name and $version are the ones specified in setup.py. Once you have successfully created the package, you can upload it to Cloud Storage.

Deploying the model to AI Platform Serving

Let's define the model name, the model version, and the AI Platform Serving runtime (which corresponds to a TensorFlow version) required to deploy the model. First, create a model in AI Platform Serving by running a gcloud command. Second, create a model version using another gcloud command, in which you specify the location of the model and preprocessing object (--origin), the location of the package(s) including the scripts needed for your prediction (--package-uris), and a pointer to your Custom Model Prediction class (--prediction-class). This should take one to two minutes.

Calling the deployed model for online predictions

After deploying the model to AI Platform Serving, you can invoke it for prediction using the client code shown in the Notebook. Given the titles defined in the request object, the predicted source of each title from the deployed model would be as follows: [techcrunch, techcrunch, techcrunch, nytimes, nytimes, nytimes, github, github, techcrunch]. Note that the last one was misclassified by the model.

Conclusion

In this tutorial, we built and trained a text classification model using Keras to predict the source media of a given article. The model required text preprocessing operations for preparing the training data, and for preparing the incoming requests to the model deployed for online predictions. We then showed you how to deploy the model to AI Platform Serving with custom online prediction code, in order to apply preprocessing to the incoming prediction requests and postprocessing to the prediction outputs. Enabling a custom online prediction routine in AI Platform Serving allows for affinity between the preprocessing logic, the model, and the postprocessing logic required to handle a prediction request end-to-end. This helps to avoid training-serving skew, and simplifies deploying ML models for online prediction.

Thanks for following along. To find out more about these features, you can read the documentation. If you're curious to try out some other machine learning tasks on GCP, take this specialization on Coursera. If you want to try out these examples for yourself in a local environment, run this Notebook. Send a tweet to @GCPcloud if there's anything we can change or add to make text analysis even easier on Google Cloud.

Acknowledgements

We would like to thank Lak Lakshmanan, Technical Lead, Machine Learning and Big Data in Google Cloud, for reviewing and improving the blog post. Read more »
  • All 29 AI announcements from Google Next ‘19: the smartest laundry list
With AI helping to solve so many business challenges, we think it's important to give you a comprehensive wrap-up of all 29 announcements we made this year involving AI and machine learning at Google Next '19. We hope you can put some or many of these developments to use to help improve your business's processes. Here's what happened:

Cloud AI solutions

1. Document Understanding AI
If your business requires any manual paperwork processing, you can use Document Understanding AI to classify documents, extract crucial information from scanned images, and apply industry-specific, custom analysis to automate your processing needs.

2. Contact Center AI
After announcing Contact Center AI at Google Next '18, today we're making it available in beta, and announcing integrations provided by Cisco, Five9, Genesys, Mitel, Twilio, and Vonage.

3. Recommendations AI
If yours is a retail-oriented business, Recommendations AI helps you deliver highly personalized product recommendations to your customers at scale. A fully managed service, Recommendations AI puts all of your data to work to deliver high-quality, relevant product recommendations.

4. Visual Product Search
If you're looking to deliver relevant products to your customers, Visual Product Search helps you match customer-generated images with images from your product catalog. These results reduce purchasing friction for your customers by prompting them with products based on their interests.

Cloud AutoML

5. AutoML Natural Language custom entity extraction and sentiment analysis (beta)*
These additions to AutoML Natural Language let you identify and isolate custom fields from input text, and also train and serve domain-specific sentiment analysis models on your unstructured data, including customer feedback.

6. AutoML Tables (beta)
If you're looking for a way to train models without coding, AutoML Tables lets you turn your structured data into predictive insights. You can ingest your data for modeling from BigQuery, Cloud Storage, and other sources.

7. AutoML Vision object detection (beta)
Going beyond its prior image classification abilities, AutoML Vision now helps you detect multiple objects in images, providing bounding boxes to identify object locations.

8. AutoML Vision Edge (beta)
If your business needs to run classifier models on edge devices, AutoML Vision Edge helps you deploy fast, high-accuracy models at the edge and trigger real-time actions based on local data. AutoML Vision Edge supports a variety of edge devices where low latency is critical, including Edge TPUs for fast inference.

9. AutoML Video (beta)
For those who need custom video classification models with custom-label capabilities beyond the Video Intelligence API, AutoML Video now lets you upload your own video footage and custom tags, in order to train models specific to your business needs for tagging and retrieving video with custom attributes.

BigQuery ML

10. BigQuery Insights: BigQuery ML core (GA)
After releasing BigQuery ML in beta at Google Next '18, BigQuery ML is now generally available, and you can even call new model types with SQL code.

11. BigQuery: k-means clustering ML (beta)
K-means clustering helps you establish groupings of data points based on axes or attributes that you specify, and now you can establish groupings for your data via convergence, straight from Standard SQL in BigQuery.
12. BigQuery: Import TensorFlow Models (alpha)
A much-requested feature: you can now import your TensorFlow models and call them straight from BigQuery to create classifier and predictive models right from BigQuery.

13. BigQuery: TensorFlow DNN classifier
Deep neural networks (DNNs) can help you classify your data on a large number of features or signals. You can train and deploy a DNN model of your choosing straight from BigQuery's Standard SQL interface.

14. BigQuery: TensorFlow DNN regressor
If a regression fits your data better than a classifier, you can design a regression in TensorFlow and then call it to analyze your data in BigQuery.

Data science platform

15. AI Platform: notebooks, data labeling, SDKs, and console interface (beta)
AI Platform, available in beta, provides a single location from which to select models, or train your own models and set them up to serve in production, whether you're a data scientist or a software engineer. This includes a development environment, Jupyter notebooks, pre-built algorithms, custom containers, custom user code support for prediction, and 4-core support for prediction.

16. AI Platform Notebooks (beta)
If you're eager to test out models and hyperparameter configurations in an interactive and shared environment, you can deploy JupyterLab iPython notebooks on a semi-managed service.

17. Cloud AI Data Labeling Service (beta)
Cloud AI now provides you with a paid service to request human labelers to classify your uploaded business data for use in training models, or automated tools that let you efficiently label your data at scale.

18. Hybrid SDK (alpha)
This is the underlying technology that helps users move their ML code from their on-premises cluster running on Kubeflow to GCP with almost no code changes.

19. AI Platform Online Prediction: User Code Support (beta)
AI Platform's online prediction functionality now supports user-supplied custom code, which helps you preprocess your data in the way of your choosing, both at training time and at serving time.

20. AI Hub (beta)
AI Hub, available in beta, provides a single location from which your team can test out and share APIs, Google-provided models, third-party models, learning content, and data science notebooks as you experiment with and iterate on your machine learning models.

21. Kubeflow 0.5
Kubeflow helps you orchestrate your machine learning training pipelines across on-premises and cloud-based resources. As a cloud-native platform that integrates Kubernetes with TensorFlow, it lets you containerize your training and serving infrastructure.

Pre-trained machine learning API updates

22. Cloud Vision API bundled enhancements (beta)
The Vision API can now operate on batches of images through batch prediction, and document text detection now supports online annotation of PDFs, as well as files that contain a mix of scanned (raster) and rendered text.

23. Cloud Natural Language API bundled enhancements (beta)
Cloud Natural Language now includes support for Russian and Japanese, as well as built-in entity extraction for receipts and invoices.

24. Cloud Translation API v3 (beta)*
Our third version of the Cloud Translation API launched with new features to advance enterprise translation needs, with the first 500k characters free. Model selection enables you to choose the best model to fit your needs, and batch translation with Google Cloud Storage supports multiple files and larger volumes of content in a single request.
Glossary gives customers control of the terminology that matters most, with glossary files that make it easier to integrate brand-specific terms into your translation workflow.

25. Video Intelligence API bundled enhancements (beta)
The Video Intelligence API lets content creators search for tagged aspects of their video footage. The API now supports optical character recognition (generally available), object tracking (also generally available), and a new streaming video annotation capability (in beta).

Compute infrastructure

26. Cloud TPU v3 (GA)
Our third-generation, liquid-cooled Tensor Processing Units provide some of the fastest training times when used at scale. These Compute Engine resources are now generally available to help you train your machine learning models faster.

27. NVIDIA Tesla T4 GPU for Compute Engine (GA)
NVIDIA's Tensor Core-enabled GPU, the Tesla T4, is now generally available on Compute Engine. This GPU is primarily designed for runtime inference, but also enables lower-cost ML training, and visualization with new ray-tracing accelerations. These GPUs are now available in eight regions.

Dialogflow for the enterprise

28. Sentiment Analysis (GA) for Dialogflow Enterprise Edition*
Sentiment analysis is now seamlessly integrated and generally available in Dialogflow Enterprise Edition, which lets you model chat-oriented conversations and responses to assist you as you build interactive chatbots.

29. Text-to-Speech (GA) for Dialogflow Enterprise Edition*
Text-to-Speech is now also integrated and generally available in Dialogflow Enterprise Edition, letting your chatbots trigger synthesized speech for more natural user interaction.

Products denoted with an asterisk (*) are newly HIPAA compliant.

Wow, that was a lot! As you can see, we're constantly updating our APIs to better support developers, and we're also launching new solutions to meet an ever-growing breadth of business and industry needs. These changes can help you, especially if you're looking to build on existing reference architectures rather than reinvent how you integrate AI into your business from the ground up. Please also check out our recorded sessions, in case there was anything at Google Next '19 that you missed. Read more »
  • Unlocking the power of AI with solutions designed for every enterprise
Many enterprises see the value in applying AI and machine learning to their business challenges, but not all have the necessary resources to do it. Where should your organization begin if you don't already have a team of data scientists, or if your team is fully committed to other tasks? Businesses need a quick and easy way to bring AI to their organizations.

From the beginning, our goal has been to make AI accessible to as many businesses as possible. For example, last year we introduced Cloud AutoML to help businesses with limited ML expertise start building their own high-quality custom models. We also introduced BigQuery ML, which put the power of predictive analytics in reach of millions of users, even those without a data science background. And we've seen some amazing growth in demand for these services.

Today, we're excited to announce a number of new solutions that provide an easy way to use AI to address common business challenges, such as analyzing documents, forecasting inventory and demand, or managing multiple customer service touchpoints such as chatbots, phone, and email. Here's what's new:

- Document Understanding AI (beta)
- Contact Center AI (beta)
- Google Cloud for Retail:
  - Vision Product Search (GA)
  - Recommendations AI (beta)
  - AutoML Tables (beta)

Unlock insights from documents with Document Understanding AI, now in beta

Most companies have billions of documents, and moving that information into digital or cloud-native solutions where it can be easily accessed and analyzed can involve many hours of manual entry. These businesses need a way to automate this work as well as archive documents from multiple content sources into one cloud-based system.

Today we're announcing Document Understanding AI, in beta, offering a scalable, serverless platform to automatically classify, extract, and enrich data within your scanned or digital documents. By turning your documents into structured data, Document Understanding AI can help automate document processing workflows. This means you can take advantage of the facts, insights, relationships, and knowledge hidden in your unstructured documents and start making data-driven business decisions faster and more accurately. For instance, customers that use custom document classification have achieved up to 96% accuracy. Document Understanding AI easily integrates with technology stacks from partners and third parties; Iron Mountain, Box, DocuSign, Egnyte, Taulia, UiPath, and Accenture are already using it today.

"As the world's leading information management provider, Iron Mountain scans over 627 million pages every year as part of our digital transformation solutions. Google Cloud's Document Understanding AI helps us identify form fields, text passages, tables, and graphs, as well as customer-specific keyword matching, for customized workloads," says Jim O'Dorisio, Senior Vice President for Emerging Commercial Solutions, Iron Mountain. "Document Understanding AI provides a foundation to help us deliver a far more valuable set of services to our customers, assisting them in automated data understanding, enabling compliance and business value, and delivering peace of mind."

Improve customer care with Contact Center AI, now in beta

Last year we introduced our first AI solution, Contact Center AI, to help businesses build modern, intuitive customer care experiences with the help of AI.
Since then, Google Cloud customers have chosen to run substantial customer service workloads on Contact Center AI implementations built by partners like Cisco, Five9, Genesys, Mitel, Twilio, and Vonage.

Today, we're announcing that Contact Center AI is now in beta. Contact Center AI builds on Dialogflow Enterprise Edition and provides key capabilities for your contact center (Virtual Agent, Agent Assist, and Topic Modeler), which are also available today in beta. The updates to our voice models, for example, make it easier for customers to have conversations with virtual agents. We've also made improvements to Agent Assist to quickly surface useful content for live agents as they assist customers.

We're also thrilled to welcome new partners to the Contact Center AI program, including 8x8, Avaya, Salesforce, and Accenture. Together, we'll integrate these partner services with Google's world-class speech recognition, speech synthesis, natural language understanding, and agent assistance to improve the contact center experience.

Chris McGugan, Avaya's Senior Vice President, Solutions and Technology, explains: "We continue to expand our AI-enabled solutions as well as our cloud offerings for customers ranging from small-medium business to the largest global enterprises, and our ongoing collaboration with Google Cloud is providing additional capabilities to augment the innovation. By bringing these innovations to market for Avaya customers and partners, we enable them to make every customer interaction more meaningful and insightful, and more productive for their businesses. Avaya is also very encouraged by the enthusiastic response we have received from customers, analysts, and industry partners alike."

Helping more retailers take advantage of AI

Whether they need to predict demand or provide automated product recommendations, retailers often have business challenges that can benefit greatly from AI. Google Cloud for Retail enables retailers to quickly take advantage of AI for retail-specific use cases.

Vision Product Search, now generally available, makes it possible for retailers to build visual search functionality into their mobile apps, allowing customers to photograph an item and get a list of similar products from the retailer's catalog. Recommendations AI, in beta, helps retailers provide personalized 1:1 recommendations to drive customer engagement and growth. It has generated up to 40 percent increases in recommendation-driven revenue and up to 5 percent increases in total revenue per session. Lastly, AutoML Tables, in beta, makes it possible for retailers to automatically build and deploy state-of-the-art machine learning models on structured data, reducing the total time required for modeling from weeks to days. This means they can easily leverage their enterprise data to predict outcomes that can help maximize their revenue, optimize their product portfolios, and better understand their customers. To learn more, read our retail solutions blog post.

Continuing to bring AI to everyone

Today's announcements build on our goal of making AI accessible to every business, wherever it may be in its AI journey. As applied machine learning serves more industries, our goal is to provide more packaged solutions as well as the best-in-class AI tools you need to deploy and customize solutions to suit your business or industry. To learn more about the full breadth of machine learning on Google Cloud, visit our website. Read more »
  • Expanding Google Cloud AI to make it easier for developers to build and deploy AI
Every year, more and more businesses look to AI to help them solve complex business challenges. Whether they're using AI to anticipate demand, predict when equipment will need routine maintenance, or deliver better customer experiences, they all have one thing in common: they need a workforce that can help them do it.

Our goal has always been to make AI simpler, faster, and more useful for businesses. This means easy-to-use AI solutions that make it simple for enterprises to adopt them. But it also means making it simpler for developers, data scientists, and data engineers to build and deploy machine learning models.

Today we're announcing a number of new ways we're doing exactly that, from introducing an integrated platform of AI services that helps you build AI capabilities and then run them in the cloud or on premises, to expanding our AutoML offerings to make it easier for businesses to build and deploy their own custom ML models. Here's a selection of what's new:

- AI Platform (beta)
- AutoML updates, including:
  - AutoML Tables (beta)
  - AutoML Video Intelligence (beta)
  - AutoML Vision
    - AutoML Vision Edge (beta)
    - Object detection (beta)
  - AutoML Natural Language
    - Custom entity extraction (beta)
    - Custom sentiment analysis (beta)

Introducing AI Platform: build AI applications, then run them in the cloud or on premises

When approaching AI projects, businesses grapple with a variety of problems, from unstructured data to siloed teams to complex deployments. They need a place that brings all these things together in a way that makes ML easier and more collaborative.

Today, we're announcing AI Platform in beta, a comprehensive, end-to-end development platform that helps teams prepare, build, run, and manage ML projects via the same shared interface. Whether you're a developer, data scientist, or data engineer, you can collaborate on model sharing, training, and scaling workloads from the same dashboard within Cloud Console.

With AI Platform, you can ingest streaming or batch data, and use a built-in labeling service to easily label training data (such as images, videos, audio, and text) by applying classification, object detection, entity extraction, and other processes. You can import your data directly into AutoML, or use Cloud Machine Learning Engine, now part of AI Platform, to train and serve your own custom-built ML models on GCP. AI Platform complements AI Hub, so developers can discover ML pipelines, notebooks, and other instructional content. And because AI Platform supports Kubeflow, Google's open-source ML platform, you can build portable ML pipelines that you can then run on premises or in the cloud with almost no code changes. Learn more about AI Platform on our website.

Making AI more accessible with updates to Cloud AutoML

When we first introduced Cloud AutoML, our goal was to help developers with limited ML expertise train high-quality custom machine learning models and deploy them in their business. Today, we're excited to announce new and enhanced AutoML solutions that will further our mission of making it easy, fast, and useful for all developers and enterprises to use AI.

AutoML Tables: easily create ML models from datasets with no coding necessary

Enterprises are generating more structured data than ever, and tools that help them easily turn all that data into actionable predictive insights can be a huge help. AutoML Tables, now available in beta, lets you build and deploy state-of-the-art machine learning models on structured tabular datasets with zero code.
With just a few clicks, you can ingest data from BigQuery and other GCP storage services into AutoML Tables and build and deploy ML models in days rather than weeks. The codeless interface guides you through the full end-to-end machine learning lifecycle, making it easy for anyone on your team (data scientist, analyst, or developer) to build models and reliably incorporate them into broader applications. For an even deeper look at AutoML Tables, read our data analytics blog post.

Extending AutoML Vision to the edge

Optimizing machine learning models to run on edge devices, like connected sensors or cameras, can be challenging because these devices often grapple with latency and unreliable connectivity. Last year, we announced AutoML Vision to make it easier for developers to create custom ML models for image recognition. Today we're announcing AutoML Vision Edge to simplify training and deployment of high-accuracy, low-latency custom ML models for (on-premises or remote) edge devices. AutoML Vision Edge supports a variety of devices and can take advantage of Edge TPUs for faster inference. For example, LG CNS is using AutoML Vision Edge to create manufacturing intelligence solutions that detect defects in everything from LCD screens to optical films to automotive fabrics on the assembly line.

Enabling powerful content discovery and engaging experiences with AutoML Video

Analyzing volumes of video footage to identify specific moments, prepare special cuts, or better classify visual data can be a difficult and time-consuming process. Today, we're announcing AutoML Video, in beta, so that developers can easily create custom models that automatically classify video content with labels they define. Companies that deal with mountains of diverse video data can instantly discover content according to their own taxonomy. This means media and entertainment businesses can simplify tasks like automatically removing commercials or creating highlight reels, and other industries can apply it to their own specific video analysis needs, for example, better understanding traffic patterns or overseeing manufacturing processes.

In addition to these three entirely new AutoML solutions, we are continuing to improve the core functionality of AutoML Vision and AutoML Natural Language. AutoML Vision object detection (beta) can identify the position of objects within an image, and in context with one another, for example, a pedestrian walking in a crosswalk. AutoML Natural Language custom entity extraction (beta) helps you automatically identify entities, such as medical terms or contractual clauses, within documents and label them based on company-specific keywords and phrases. And AutoML Natural Language custom sentiment analysis (beta) helps you apply machine learning to better understand the overall opinion, feeling, or attitude expressed in a block of text, tuned to your organization's own domain-specific sentiment scores.

Continuing to make machine learning faster with the latest accelerators

We continue to invest in the infrastructure that makes machine learning possible for you. Our Cloud TPUs, custom-built to quickly train ML models, let you iterate at scale to achieve higher classification accuracy at a lower cost.
Our third-generation, liquid-cooled TPUs are now generally available, and all Cloud TPUs are also generally available in Google Kubernetes Engine (GKE), a new and flexible way to run your containerized ML workloads that gives you the flexibility to switch between on-premises and cloud-based training. GCP is also the first cloud provider to offer the new NVIDIA Tesla T4, now generally available across eight regions.

A fully featured, user-centric ecosystem for machine learning

As part of today's announcements, we're also working with numerous partners, including Accenture, Atos, Cisco, Gigster, Intel, NVIDIA, Pluto 7, SpringML, and UiPath, to build Kubeflow pipelines and to grow and extend AI Hub. It takes a robust partner ecosystem to build a successful platform, and we're grateful to all of our partners who enable our customers to train and serve machine learning pipelines on the infrastructure of their choosing.

To learn more about our AI solutions for businesses and industries, read this blog post. And to learn more about AI on Google Cloud, visit our website. Read more »
  • 9 mustn’t-miss machine learning sessions at Next ‘19
From predicting appliance usage from raw power readings to medical imaging, machine learning has made a profound impact on many industries. Our AI and machine learning sessions are amongst our most popular each year at Next, and this year we're offering more than 30, on topics ranging from building a better customer service chatbot to automated visual inspection for manufacturing. If you're joining us at Next, here are nine AI and machine learning sessions you won't want to miss.

1. Automating Visual Inspections in Energy and Manufacturing with AI
In this session, you can learn from two global companies that are aggressively shaping practical business solutions using machine vision. AES is a global power company that strives to build a future that runs on greener energy. To serve this mission, they are rigorously scaling the use of drones in their wind farm operations with Google's AutoML Vision to automatically identify defects and improve the speed and reliability of inspections. Our second presenter joins us from LG CNS, a global subsidiary of LG Corporation and Korea's largest IT service provider. LG's Smart Factory initiative is building an autonomous factory to maximize productivity, quality, cost, and delivery. By using AutoML Vision on edge devices, they are detecting defects in various products during the manufacturing process with their visual inspection solution.

2. Building Game AI for Better User Experiences
Learn how DeNA, a mobile game studio, is integrating AI into its next-generation mobile games. This session will focus on how DeNA built its popular mobile game Gyakuten Othellonia on Google Cloud Platform (GCP) and how they've integrated AI-based assistance. DeNA will share how they designed, trained, and optimized models, and then explain how they built a scalable and robust backend system with Cloud ML Engine.

3. Cloud AI: Use Case Driven Technology (Spotlight)
More than ever, today's enterprises are relying on AI to reach their customers more effectively, deliver the experiences they expect, increase efficiency, and drive growth across their organizations. Join Andrew Moore and Rajen Sheth in a session with three of Google Cloud's leading AI innovators, Unilever, Blackrock, and FOX Sports Australia, as they discuss how GCP and Cloud AI services, like the Vision API, Video Intelligence API, and Cloud Natural Language, have made their products more intelligent, and how they can do the same for yours.

4. Fast and Lean Data Science With TPUs
Google's Tensor Processing Units (TPUs) are revolutionizing the way data scientists work. Week-long training times are a thing of the past, and you can now train many models in minutes, right in a notebook. Agility and fast iterations are bringing neural networks into regular software development cycles, and many developers are ramping up on machine learning. Machine learning expert Martin Görner will introduce TPUs, then dive deep into their microarchitecture secrets. He will also show you how to use them in your day-to-day projects to iterate faster. In fact, Martin will not just demo but actually train most of the models presented in this session on stage, in real time, on TPUs.

5. Serverless and Open-Source Machine Learning at Sling Media
This session covers Sling's incremental adoption of Google Cloud's serverless machine learning platforms, which enable data scientists and engineers to build business-relevant models quickly.
    Sling will explain how they use deep learning techniques to better predict customer churn, develop a traditional pipeline to serve the model, and enhance the pipeline to be both serverless and scalable. Sling will share best practices and lessons learned deploying Beam, tf.transform, and TensorFlow on Cloud Dataflow and Cloud ML Engine.

    6. Understanding the Earth: ML With Kubeflow Pipelines
    Petabytes of satellite imagery contain valuable indicators of scientific and economic activity around the globe. To turn its geospatial data into conclusions, Descartes Labs has built a data processing and modeling platform whose components all run on Google Cloud. Descartes leverages tools including Kubeflow Pipelines as part of their model-building process to enable efficient experimentation, orchestrate complicated workflows, maximize repeatability and reuse, and deploy at scale. This session will explain how you can implement machine learning workflows in Kubeflow Pipelines, and cover some successes and challenges of using these tools in practice.

    7. Virtual Assistants: Demystify and Deploy
    In this session, you’ll learn how Discover built a customer service solution around Dialogflow. Discover’s data science team will explain how to execute on your customer service strategy, and how you can best configure your agent’s Dialogflow "model" before you deploy it to production.

    8. Reinventing Retail with AI
    Today’s retailers must have a deep understanding of each of their customers to earn and maintain their loyalty. In this session, Nordstrom and Disney explain how they’ve used AI to create engaging and highly personalized customer experiences. In addition, Google partner Pitney Bowes will discuss how they’re predicting credit card fraud for luxury retail brands. This session will also cover new Google products for the retail industry, and how they fit into a broader data-driven strategy for retailers.

    9. GPU Infrastructure on GCP for ML and HPC Workloads
    ML researchers want a GPU infrastructure they can get started with quickly, run consistently in production, and dynamically scale as needed. Learn about GCP’s various GPU offerings and the features often used with ML. From there, we will discuss a real-world customer story of how they manage their GPU compute infrastructure on GCP. We’ll cover the new NVIDIA Tesla T4 and V100 GPUs, the Deep Learning VM Image for quickly getting started, preemptible GPUs for low cost, GPU integration with Kubernetes Engine (GKE), and more.

    If you’re looking for something that’s not on our list, check out the full schedule. And don’t forget to register for the sessions you plan to attend—seats are limited. Read more »
  • TensorFlow 2.0 and Cloud AI make it easy to train, deploy, and maintain scalable machine learning models
    Since it was open-sourced in 2015, TensorFlow has matured into an entire end-to-end ML ecosystem that includes a variety of tools, libraries, and deployment options to help users go from research to production easily. This month at the 2019 TensorFlow Dev Summit we announced TensorFlow 2.0, which makes machine learning models easier to use and deploy.

    TensorFlow started out as a machine learning framework and has grown into a comprehensive platform that gives researchers and developers access to both intuitive higher-level APIs and low-level operations. In TensorFlow 2.0, eager execution is enabled by default, with tight Keras integration. You can easily ingest datasets via tf.data pipelines, and you can monitor your training in TensorBoard directly from Colab and Jupyter notebooks. The TensorFlow team will continue to improve the TensorFlow 2.0 alpha, with a general release candidate coming later in Q2 2019.

    Making ML easier to use
    The TensorFlow team’s focus on developer productivity and ease of use doesn’t stop at iPython notebooks and Colab: it extends to making API components integrate far more intuitively with tf.keras (now the standard high-level API), and to TensorFlow Datasets, which lets users import common preprocessed datasets with only one line of code. Data ingestion pipelines can be orchestrated with tf.data, pushed into production with TensorFlow Extended (TFX), and scaled to multiple nodes and hardware architectures with minimal code changes using distribution strategies.

    The TensorFlow engineering team has created an upgrade tool and several migration guides to support users who wish to migrate their models from TensorFlow 1.x to 2.0. TensorFlow is also hosting a weekly community testing stand-up where users can ask questions about TensorFlow 2.0 and migration support. If you’re interested, you can find more information on the TensorFlow website.

    Upgrading a model with the tf_upgrade_v2 tool.

    Experiment and iterate
    Both researchers and enterprise data science teams must continuously iterate on model architectures, with a focus on rapid prototyping and speed to a first solution. With eager execution a focus in TensorFlow 2.0, researchers can use intuitive Python control flows, optimize their eager code with tf.function, and save time with improved error messaging. Creating and experimenting with models using TensorFlow has never been easier.

    Faster training is essential for model deployments, retraining, and experimentation. In the past year, the TensorFlow team has worked diligently to improve training performance on a variety of platforms, including the second-generation Cloud TPU (by a factor of 1.6x) and the NVIDIA V100 GPU (by a factor of more than 2x). For inference, we saw speedups of over 3x with Intel’s MKL library, which supports CPU-based Compute Engine instances.

    Through add-on extensions, TensorFlow expands to help you build advanced models. For example, TensorFlow Federated lets you train models both in the cloud and on remote (IoT or embedded) devices in a collaborative fashion; oftentimes, your remote devices have data to train on that your centralized training system does not. We also recently announced the TensorFlow Privacy extension, which helps you strip personally identifiable information (PII) from your training data.
    Finally, TensorFlow Probability extends TensorFlow’s abilities to more traditional statistical use cases, which you can use in conjunction with other functionality like estimators.

    Deploy your ML model in a variety of environments and languages
    A core strength of TensorFlow has always been the ability to deploy models into production. In TensorFlow 2.0, the TensorFlow team is making it even easier. TFX Pipelines give you the ability to coordinate how you serve your trained models for inference at runtime, whether on a single instance or across an entire cluster. Meanwhile, for more resource-constrained systems, like mobile or IoT devices and embedded hardware, you can easily quantize your models to run with TensorFlow Lite. Airbnb, Shazam, and the BBC are all using TensorFlow Lite to enhance their mobile experiences, and to validate as well as classify user-uploaded content.

    Exploring and analyzing data with TensorFlow Data Validation.

    JavaScript is one of the world’s most popular programming languages, and TensorFlow.js helps make ML available to millions of JavaScript developers. The TensorFlow team announced TensorFlow.js version 1.0. This means you can not only train and run models in the browser, but also run TensorFlow as part of server-side hosted JavaScript apps, including on App Engine. TensorFlow.js now has better performance than ever, and its community has grown substantially: in the year since its initial launch, community members have downloaded TensorFlow.js over 300,000 times, and its repository now incorporates code from over 100 contributors.

    How to get started
    If you’re eager to get started with TensorFlow 2.0 alpha on Google Cloud, start up a Deep Learning VM and try out some of the tutorials. TensorFlow 2.0 is available through Colab via pip install if you’re just looking to run a notebook anywhere, but perhaps more importantly, you can also run a Jupyter instance on Google Cloud using a Cloud Dataproc cluster, or launch notebooks directly from Cloud ML Engine, all from within your GCP project.

    Using TensorFlow 2.0 with a Deep Learning VM and GCP Notebook Instances.

    Along with announcing the alpha release of TensorFlow 2.0, we also announced new community and education partnerships. In collaboration with O’Reilly Media, we’re hosting TensorFlow World, a week-long conference dedicated to fostering and bringing together the open source community and all things TensorFlow. The call for proposals is open for attendees to submit papers and projects to be highlighted at the event. Finally, we announced two new courses to help beginners and learners new to ML and TensorFlow. The first is deeplearning.ai’s Course 1 - Introduction to TensorFlow for AI, ML and DL, part of the TensorFlow: from Basics to Mastery series. The second is Udacity’s Intro to TensorFlow for Deep Learning.

    If you’re using TensorFlow 2.0 on Google Cloud, we want to hear about it! Make sure to join our Testing special interest group, submit your project abstracts to TensorFlow World, and share your projects in our #PoweredByTF Challenge on DevPost. To quickly get up to speed on TensorFlow, be sure to check out our free courses on Udacity and DeepLearning.ai. Read more »
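    For readers who want to see what the eager-plus-tf.function workflow described above looks like in practice, here is a minimal sketch (our own illustration, not code from the announcement; the tiny model and random data are placeholders):

        import tensorflow as tf  # TensorFlow 2.0: eager execution is on by default

        # tf.keras is the standard high-level API in 2.0.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
            tf.keras.layers.Dense(1),
        ])
        loss_fn = tf.keras.losses.MeanSquaredError()
        optimizer = tf.keras.optimizers.Adam()

        # Eager code runs imperatively; wrapping the step in tf.function
        # compiles it into a graph for speed without changing the code style.
        @tf.function
        def train_step(x, y):
            with tf.GradientTape() as tape:
                loss = loss_fn(y, model(x))
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            return loss

        # A tf.data pipeline feeds the step; random tensors stand in for real data.
        ds = tf.data.Dataset.from_tensor_slices(
            (tf.random.normal([256, 10]), tf.random.normal([256, 1]))).batch(32)
        for x, y in ds:
            loss = train_step(x, y)

    The same step runs eagerly if the @tf.function decorator is removed, which is handy for debugging before you optimize.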
  • NVIDIA’s RAPIDS joins our set of Deep Learning VM images for faster data science
    If you’re a data scientist, researcher, engineer, or developer, you may be familiar with Google Cloud’s set of Deep Learning Virtual Machine (VM) images, which enable one-click setup of machine learning-focused development environments. But some data scientists still use a combination of pandas, Dask, scikit-learn, and Spark on traditional CPU-based instances. If you’d like to speed up your end-to-end pipeline through scale, Google Cloud’s Deep Learning VMs now include an experimental image with RAPIDS, NVIDIA’s open source, Python-based, GPU-accelerated data processing and machine learning libraries, which are a key part of NVIDIA’s larger collection of CUDA-X AI accelerated software. CUDA-X AI is the collection of NVIDIA’s GPU acceleration libraries for deep learning, machine learning, and data analysis.

    The Deep Learning VM images comprise a set of Debian 9-based Compute Engine virtual machine disk images optimized for data science and machine learning tasks. All images include common machine learning (typically deep learning) frameworks and tools installed from first boot, and can be used out of the box on instances with GPUs to accelerate your data processing tasks. In this blog post you’ll learn to use a Deep Learning VM that includes the GPU-accelerated RAPIDS libraries.

    RAPIDS is an open-source suite of data processing and machine learning libraries, developed by NVIDIA, that enables GPU acceleration for data science workflows. RAPIDS relies on NVIDIA’s CUDA language, allowing users to leverage GPU processing and high-bandwidth GPU memory through user-friendly Python interfaces. It includes cuDF, a DataFrame API based on Apache Arrow data structures that will be familiar to users of pandas, and cuML, a growing library of GPU-accelerated ML algorithms that will be familiar to users of scikit-learn. Together, these libraries provide an accelerated solution for ML practitioners, requiring only minimal code changes and no new tools to learn. RAPIDS is available as a conda or pip package, in a Docker image, and as source code.

    Using the RAPIDS Google Cloud Deep Learning VM image automatically initializes a Compute Engine instance with all the pre-installed packages required to run RAPIDS. No extra steps required!

    Creating a new RAPIDS virtual machine instance
    Compute Engine offers predefined machine types that you can use when you create an instance. Each predefined machine type includes a preset number of vCPUs and amount of memory, and bills you at a fixed rate, as described on the pricing page. If predefined machine types do not meet your needs, you can create an instance with a custom virtualized hardware configuration: specifically, a custom number of vCPUs and amount of memory, effectively using a custom machine type. In this case, we’ll create a custom Deep Learning VM image with 48 vCPUs, extended memory of 384 GB, 4 NVIDIA Tesla T4 GPUs, and RAPIDS support.

    Notes:
    You can create this instance in any available zone that supports T4 GPUs.
    The option install-nvidia-driver=True installs the NVIDIA GPU driver automatically.
    The option proxy-mode=project_editors makes the VM visible in the Notebook Instances section.
    To define extended memory, use 1024*X where X is the number of GB required for RAM.

    Using RAPIDS
    To put RAPIDS through its paces on Google Cloud Platform (GCP), we focused on a common HPC workload: a parallel sum reduction test.
    This test can operate on very large problems (the default size is 2 TB) using distributed memory and parallel task processing. Several applications in high performance computing (HPC) require the computation of parallel sum reductions. Some examples include:
    Solving linear recurrences
    Evaluation of polynomials
    Random number generation
    Sequence alignment
    N-body simulation

    It turns out that parallel sum reduction is useful for the data science community at large. To manage the deluge of big data, a parallel programming model called "MapReduce" is used for processing data on distributed clusters. The "Map" portion of this model supports sorting: for example, sorting products into queues. Once the model maps the data, it then summarizes the output with the "Reduce" algorithm: for example, counting the number of products in each queue. The summation operation is the most compute-heavy step, and given the scale of data that the model is processing, these sum operations must be carried out on parallel distributed clusters in order to complete in a reasonable amount of time.

    But certain reduction sum operations contain dependencies that inhibit parallelization. To illustrate such a dependency, suppose we want to add a series of numbers as shown in Figure 1. In Figure 1, on the left, we must first add 7 + 6 to obtain 13, before we can add 13 + 14 to obtain 27, and so on in a sequential fashion. These dependencies inhibit parallelization. However, since addition is associative, the summation can be expressed as a tree (Figure 2, on the right). The benefit of this tree representation is that the dependency chain is shallow, and since the root node summarizes its leaves, the calculation can be split into independent tasks.

    Speaking of tasks, this brings us to the Python package Dask, a popular distributed computing framework. With Dask, data scientists and researchers can use Python to express their problems as tasks. Dask then distributes these tasks across processing elements within a single system, or across a cluster of systems. The RAPIDS team recently integrated GPU support into a package called dask-cuda. When you import both dask-cuda and another package called CuPy, which allows data to be allocated on GPUs using familiar numpy constructs, you can really explore the full breadth of models you can build with your data set. To illustrate, Figures 3 and 4 show side-by-side comparisons of the same test run. On the left, 48 cores of a single system are used to process 2 terabytes (TB) of randomly initialized data using 48 Dask workers. On the right, 4 Dask workers process the same 2 TB of data, but dask-cuda is used to automatically associate those workers with 4 Tesla T4 GPUs installed in the same system.

    Running RAPIDS
    To test parallel sum reduction, perform the following steps:
    1. SSH into the instance. See Connecting to Instances for more details.
    2. Download the code required from this repository and upload it to your Deep Learning Virtual Machine Compute Engine instance. Two files are of particular importance as you profile performance:
    run.sh, a helper bash shell script
    sum.py, the summation Python script
    You can find the sample code to run these tests, based on the blog post GPU Dask Arrays, below.
    3. Run the tests. First, run the test on the instance’s CPU complex, in this case specifying 48 vCPUs (indicated by the -c flag). Then run the test using 4 (indicated by the -g flag) NVIDIA Tesla T4 GPUs.

    Figure 3.c: CPU-based solution.
    Figure 4.d: GPU-based solution.

    Here are some initial conclusions we derived from these tests:
    Processing 2 TB of data on GPUs is much faster (an ~12x speed-up for this test).
    Using Dask’s dashboard, you can visualize the performance of the reduction sum as it is executing.
    CPU cores are fully occupied during processing on CPUs, but the GPUs are not fully utilized.
    You can also run this test in a distributed environment.

    In this example, we allocate Python arrays using the double data type by default. Since this code allocates an array of (500K x 500K) elements, it represents 2 TB (500K × 500K × 8 bytes per word). Dask initializes these array elements randomly, via a normal Gaussian distribution, using the dask.array package.

    Running RAPIDS on a distributed cluster
    You can also run RAPIDS in a distributed environment using multiple Compute Engine instances. You can use the same code to run RAPIDS in a distributed way with minimal modification and still decrease the processing time. If you want to explore RAPIDS in a distributed environment, please follow the complete guide here.

    Conclusion
    As you can see from the above example, the RAPIDS VM image can dramatically speed up your ML workflows. Running RAPIDS with Dask lets you seamlessly integrate your data science environment with Python and its myriad libraries and wheels, HPC schedulers such as SLURM, PBS, SGE, and LSF, and open-source infrastructure orchestration projects such as Kubernetes and YARN. Dask also helps you develop your model once, and adaptably run it on either a single system or scaled out across a cluster. You can then dynamically adjust your resource usage based on computational demands. Lastly, Dask helps you ensure that you’re maximizing uptime, through the fault tolerance capabilities intrinsic to failover-capable cluster computing. It’s also easy to deploy on Google’s Compute Engine distributed environment. If you’re eager to learn more, check out the RAPIDS project and open-source community website, or review the RAPIDS VM Image documentation.

    Acknowledgements: Ty McKercher, NVIDIA, Principal Solution Architect; Vartika Singh, NVIDIA, Solution Architect; Gonzalo Gasca Meza, Google, Developer Programs Engineer; Viacheslav Kovalevskyi, Google, Software Engineer. Read more »
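    To make the CPU-versus-GPU comparison above concrete, here is a condensed sketch of the kind of Dask array reduction the sum.py script exercises, modeled on the GPU Dask Arrays blog post referenced above (array sizes are scaled down here, and dask, distributed, dask-cuda, and cupy are assumed to be installed):

        import cupy
        import dask.array as da
        from dask.distributed import Client
        from dask_cuda import LocalCUDACluster

        # One Dask worker per local GPU (4 Tesla T4s in the instance above).
        client = Client(LocalCUDACluster())

        # CPU version: chunks are NumPy arrays; the sum is evaluated as a
        # shallow tree reduction, the associativity trick from Figure 2.
        x_cpu = da.random.normal(10, 1, size=(100_000, 100_000),
                                 chunks=(10_000, 10_000))
        print(x_cpu.sum().compute())

        # GPU version: identical code, but each chunk is backed by a CuPy
        # array so the partial sums run on the Tesla T4s.
        rs = da.random.RandomState(RandomState=cupy.random.RandomState)
        x_gpu = rs.normal(10, 1, size=(100_000, 100_000),
                          chunks=(10_000, 10_000))
        print(x_gpu.sum().compute())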
  • Cloud AI helps you train and serve TensorFlow TFX pipelines seamlessly and at scale
    Last week, at the TensorFlow Dev Summit, the TensorFlow team released new and updated components that integrate into the open source TFX platform (TensorFlow eXtended). TFX components are a subset of the tools used inside Google to power hundreds of teams’ wide-ranging machine learning applications. They address critical challenges to the successful deployment of machine learning (ML) applications in production, such as:
    The prevention of training-versus-serving skew
    Input data validation and quality checks
    Visualization of model performance on multiple slices of data

    A TFX pipeline is a sequence of components that implements an ML pipeline specifically designed for scalable, high-performance machine learning tasks. TFX pipelines support modeling, training, serving/inference, and managing deployments to online, native mobile, and even JavaScript targets. In this post, we’ll explain how Google Cloud customers can use the TFX platform for their own ML applications, and deploy them at scale.

    Cloud Dataflow as a serverless autoscaling execution engine for (Apache Beam-based) TFX components
    The TensorFlow team authored TFX components using Apache Beam for distributed processing. You can run Beam natively on Google Cloud with Cloud Dataflow, a seamless autoscaling runtime that gives you access to large amounts of compute capability on demand. Beam can also run in many other execution environments, including Apache Flink, both on-premises and in multi-cloud mode. When you run Beam pipelines on Cloud Dataflow—the execution environment they were designed for—you can access advanced optimization features such as Dataflow Shuffle, which groups and joins datasets larger than 200 terabytes. The same team that designed and built MapReduce and Google Flume also created third-generation data runtime innovations like dynamic work rebalancing, batch and streaming unification, and runner-agnostic abstractions that exist today in Apache Beam.

    Kubeflow Pipelines makes it easy to author, deploy, and manage TFX workflows
    Kubeflow Pipelines, part of the popular Kubeflow open source project, helps you author, deploy, and manage TFX workflows on Google Cloud. You can easily deploy Kubeflow on Google Kubernetes Engine (GKE) via the 1-click deploy process. It automatically configures and runs essential backend services, such as the orchestration service for workflows, and optionally the metadata backend that tracks information relevant to workflow runs and the corresponding artifacts that are consumed and produced. GKE provides essential enterprise capabilities for access control and security, as well as tooling for monitoring and metering.

    Thus, Google Cloud makes it easy for you to execute TFX workflows at considerable scale using:
    Distributed model training and scalable model serving on Cloud ML Engine
    TFX component execution at scale on Cloud Dataflow
    Workflow and metadata orchestration and management with Kubeflow Pipelines on GKE

    Figure 1: TFX workflow running in Kubeflow Pipelines.

    The Kubeflow Pipelines UI shown in the diagram above makes it easy to visualize and track all executions. For deeper analysis of the metadata about component runs and artifacts, you can host a Jupyter notebook in the Kubeflow cluster and query the metadata backend directly; refer to this sample notebook for more details. At Google Cloud, we work to empower our customers with the same set of tools and technologies that we use internally across many Google businesses to build sophisticated ML workflows.
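    Since the TFX components above are authored in Beam, a rough illustrative sketch of what a Beam pipeline pointed at Cloud Dataflow looks like may help (this is a generic counting example of ours, not an actual TFX component; the project ID and bucket names are placeholders):

        import apache_beam as beam
        from apache_beam.options.pipeline_options import PipelineOptions

        # DataflowRunner hands the pipeline to Cloud Dataflow's autoscaling
        # service; swapping in DirectRunner would execute it locally instead.
        options = PipelineOptions(
            runner="DataflowRunner",
            project="my-gcp-project",            # placeholder project ID
            temp_location="gs://my-bucket/tmp",  # placeholder staging bucket
            region="us-central1",
        )

        with beam.Pipeline(options=options) as p:
            (p
             | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
             | "KeyByFirstField" >> beam.Map(lambda line: (line.split(",")[0], 1))
             | "CountPerKey" >> beam.CombinePerKey(sum)
             | "Format" >> beam.MapTuple(lambda k, n: f"{k},{n}")
             | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts"))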
    To learn more about using TFX, please check out the TFX user guide, or learn how to integrate TFX pipelines into your existing Apache Beam workflows in this video.

    Acknowledgments: Sam McVeety, Clemens Mewald, and Ajay Gopinathan also contributed to this post. Read more »
  • New study: The state of AI in the enterprise
    Editor’s note: Today we hear from one of our Premier partners, Deloitte. Deloitte’s recent report, The State of AI in the Enterprise, 2nd Edition, examines how businesses are thinking about—and deploying—AI services.

    From consumer products to financial services, AI is transforming the global business landscape. In 2017, we began our relationship with Google Cloud to help our joint customers deploy and scale AI applications for their businesses. These customers frequently tell us they’re seeing steady returns on their investments in AI, and as a result, they’re interested in more ways to increase those investments. We regularly conduct research on the broader market trends for AI, and in November 2018 we released our second annual "State of AI in the Enterprise" study. It showed that industry trends at large reflect what we hear from our customers: the business community remains bullish on AI’s impact. In this blog post, we’ll examine some of the key takeaways from our survey of 1,100 IT and line-of-business executives and discuss how these findings are relevant to our customers.

    Enterprises are doubling down on AI—and seeing financial benefits
    More than 95 percent of respondents believe that AI will transform both their businesses and their industries. A majority of survey respondents have already made large-scale investments in AI, with 37 percent saying they have committed $5 million or more to AI-specific initiatives. Nearly two-thirds of respondents (63 percent) feel AI has completely upended the marketplace and that they need to make large-scale investments to catch up with rivals—or even to open a narrow lead. A surprising 82 percent of our respondents told us they’ve already gained a financial return from their AI investments. But that return is not equal across industries. Technology, media, and telecom companies, along with professional services firms, have made the biggest investments and realized the highest returns. In contrast, the public sector and financial services, with lower investments, lag behind. With 88 percent of surveyed companies planning to increase AI spending in the coming year, there’s a significant opportunity to increase both revenue and cost savings across all industries. However, as with past transformative technologies, selecting the right AI use cases will be key to realizing near- and long-term benefits.

    Enterprises are using a broad range of AI technologies, increasingly in the cloud
    Our findings show that enterprises are employing a wide variety of AI technologies. More than half of respondents say their businesses are using statistical machine learning (63 percent), robotic process automation (59 percent), or natural language processing and generation (53 percent). Just under half (49 percent) are still using expert or rule-based systems, and 34 percent are using deep learning. When asked how they were accessing these AI capabilities, 59 percent said they relied on enterprise software with AI capabilities (much of which is available in the cloud) and 49 percent said "AI as a service" (again, presumably in the cloud). Forty-six percent, a surprisingly high number, said they were relying on automated machine learning—a set of capabilities that is only available in the cloud. It’s clear, then, that the cloud is already having a major effect on AI use in these large enterprises. These trends suggest that public cloud providers can become the primary way businesses access AI services.
    As a result, we believe this could lower the cost of cloud services and enhance their capabilities at the same time. In fact, our research shows that AI technology companies are investing more R&D dollars into enhancing cloud-native versions of AI systems. If this trend continues, it seems likely that enterprises seeking best-of-breed AI solutions will increasingly need to access them from cloud providers.

    There are still challenges to overcome
    Given the enthusiasm surrounding AI technologies, it is not surprising that organizations also need to supplement their investments in talent. Although 31 percent of respondents listed "lack of AI skills" as a top-three concern—below such issues as implementation, integration, and data—HR teams need to look beyond technology skills to understand their organization’s pain points and end goals. Companies should try to secure teams that bring a mix of business and technology experience to help fully realize their AI projects’ potential. Our respondents also had concerns about AI-related risks. A little more than half are worried about cybersecurity issues around AI (51 percent), and others are concerned about "making the wrong strategic decisions based on AI recommendations" (43 percent). Companies have also begun to recognize ethical risks from AI, the most common being "using AI to manipulate information and create falsehoods" (43 percent).

    In conclusion
    Despite some challenges, our study suggests that enterprises are enthusiastic about AI, have already seen value from their investments, and are committed to expanding those investments. Looking forward, we expect to see substantial growth in AI and its cloud-based implementations, and that businesses will increasingly turn to public cloud providers as their primary method of accessing them.

    Deloitte was proud to be named Google Cloud’s Global Services Partner of the Year for 2017, in part due to our joint investments in AI. To learn more about how we can help you accelerate your organization’s AI journey, contact USGoogleCloudAlliance@deloitte.com. As used in this document, "Deloitte" means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting. Read more »

ScienceDaily – Artificial Intelligence News

  • With a hop, a skip and a jump, high-flying robot leaps through obstacles with ease
    First unveiled in 2016, Salto the jumping robot stands a little less than a foot tall, but can vault over three times its height in a single bound. Now researchers have equipped the robot with a slew of new skills, giving it the ability to bounce in place like a pogo stick and jump through obstacle courses like an agility dog. Salto can even take short jaunts outside, powered by a radio controller. Read more »
  • New framework improves performance of deep neural networks
    Researchers have developed a new framework for building deep neural networks via grammar-guided network generators. In experimental testing, the new networks -- called AOGNets -- have outperformed existing state-of-the-art frameworks, including the widely used ResNet and DenseNet systems, in visual recognition tasks. Read more »
  • 'Spider-like senses' could help autonomous machines see better
    Researchers are building 'spidey senses' into the shells of autonomous cars and drones so that they can detect and avoid objects better. Read more »
  • Dog-like robot made by students jumps, flips and trots
    Students developed a dog-like robot that can navigate tough terrain -- and they want you to make one too. Read more »
  • Artificial intelligence becomes life-long learner with new framework
    Scientists have developed a new framework for deep neural networks that allows artificial intelligence systems to better learn new tasks while forgetting less of what they have learned regarding previous tasks. Read more »
  • Toy transformers and real-life whales inspire biohybrid robot
    Researchers create a remote-controlled soft robot that can transform itself to conduct targeted drug delivery against cancer cells. Read more »
  • Helping robots remember: Hyperdimensional computing theory could change the way AI works
    A new article introduces a new way of combining perception and motor commands using the so-called hyperdimensional computing theory, which could fundamentally alter and improve the basic artificial intelligence (AI) task of sensorimotor representation -- how agents like robots translate what they sense into what they do. Read more »
  • New AI sees like a human, filling in the blanks
    Computer scientists have taught an artificial intelligence agent how to do something that usually only humans can do -- take a few quick glimpses around and infer its whole environment, a skill necessary for the development of effective search-and-rescue robots that one day can improve the effectiveness of dangerous missions. Read more »
  • Robot therapists need rules
    Interactions with artificial intelligence (AI) will become an increasingly common aspect of our lives. A team has now completed the first study of how 'embodied AI' can help treat mental illness. Their conclusion: Important ethical questions of this technology remain unanswered. There is urgent need for action on the part of governments, professional associations and researchers. Read more »
  • Speech recognition technology is not a solution for poor readers
    Could artificial intelligence be a solution for people who cannot read well (functional illiterates) or those who cannot read at all (complete illiterates)? According to psycholinguists, speech technology should never replace learning how to read. Researchers argue that literacy leads to a better understanding of speech because good readers are good at predicting words. Read more »
  • Tech-savvy people more likely to trust digital doctors
    Would you trust a robot to diagnose your cancer? According to new research, people with high confidence in machine performance and also in their own technological capabilities are more likely to accept and use digital healthcare services and providers. Read more »
  • Inspired by a soft body of a leech: A wall-climbing robot
    Scientists have successfully developed a leech-shaped robot, 'LEeCH,' which can climb vertical walls. LEeCH is capable of elongating and bending its body without any constraints, just like a leech. Thanks to its flexible body structure and its suction cups, the robot has successfully climbed a vertical wall and even reached the other side of the wall. Read more »
  • Use of robots and artificial intelligence to understand the deep sea
    Artificial intelligence (AI) could help scientists shed new light on the variety of species living on the ocean floor, according to new research. Read more »
  • Hummingbird robot uses AI to soon go where drones can't
    Researchers have engineered flying robots that behave like hummingbirds, trained by machine learning algorithms based on various techniques the bird uses naturally every day. Read more »
  • Step towards light-based, brain-like computing chip
    Scientists have succeeded in developing a piece of hardware which could pave the way for creating computers resembling the human brain. They produced a chip containing a network of artificial neurons that works with light and can imitate neurons and their synapses. This network is able to 'learn' information and use this as a basis for computing. The approach could be used later in many different fields for evaluating patterns in large quantities of data. Read more »
  • New artificial synapse is fast, efficient and durable
    A battery-like device could act as an artificial synapse within computing systems intended to imitate the brain's efficiency and ability to learn. Read more »
  • Putting vision models to the test
    Neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain's visual cortex. The results suggest that the current versions of these models are similar enough to the brain to allow them to actually control brain states in animals. Read more »
  • Half a face enough for recognition technology
    Facial recognition technology works even when only half a face is visible, researchers have found. Read more »
  • An army of micro-robots can wipe out dental plaque
    A swarm of micro-robots, directed by magnets, can break apart and remove dental biofilm, or plaque, from a tooth. The innovation arose from a cross-disciplinary partnership among dentists, biologists, and engineers. Read more »
  • Magnets can help AI get closer to the efficiency of the human brain
    Researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects. Read more »
  • A first in medical robotics: Autonomous navigation inside the body
    Bioengineers report the first demonstration of a robot able to navigate autonomously inside the body. In an animal model of cardiac valve repair, the team programmed a robotic catheter to find its way along the walls of a beating, blood-filled heart to a leaky valve -- without a surgeon's guidance. Read more »
  • Synthetic speech generated from brain recordings
    A state-of-the-art brain-machine interface created by neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract -- an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis or neurological damage. Read more »
  • New way to 'see' objects accelerates the future of self-driving cars
    Researchers have discovered a simple, cost-effective, and accurate new method for equipping self-driving cars with the tools needed to perceive 3D objects in their path. Read more »
  • Snake-inspired robot slithers even better than predecessor
    Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new and improved snake-inspired soft robot that is faster and more precise than its predecessor. Read more »
  • Adding human touch to unchatty chatbots may lead to bigger letdown
    Sorry, Siri, but just giving a chatbot a human name or adding humanlike features to its avatar might not be enough to win over a user if the device fails to maintain a conversational back-and-forth with that person, according to researchers. In fact, those humanlike features might create a backlash against less responsive humanlike chatbots. Read more »
  • Using the physics of airflows to locate gaseous leaks more quickly in complex scenarios
    Engineers are developing a smart robotic system for sniffing out pollution hotspots and sources of toxic leaks. Their approach enables a robot to incorporate calculations made on the fly to account for the complex airflows of confined spaces rather than simply 'following its nose.' Read more »
  • Can science writing be automated?
    A team of researchers has developed a neural network, a form of artificial intelligence, that can read scientific papers and render a plain-English summary in a sentence or two. Read more »
  • Why language technology can't handle Game of Thrones (yet)
    Researchers have performed a thorough evaluation of four different name recognition tools on 40 popular novels, including A Game of Thrones. Their analyses highlight the types of names and texts that are particularly challenging for these tools to identify, as well as solutions for mitigating this. Read more »
  • Giving robots a better feel for object manipulation
    A new learning system improves robots' abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch -- and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi. Read more »
  • Harnessing microorganisms for smart microsystems
    A research team has developed a method to construct a biohybrid system that incorporates Vorticella microorganisms. The method allows movable structures to be formed in a microchannel and harnessed to Vorticella. The biohybrid system demonstrates the conversion of motion from linear motion to rotation. These fundamental technologies help researchers to create wearable smart microsystems by using autonomous microorganisms. Read more »
  • Scientists build a machine to see all possible futures
    Researchers have implemented a prototype quantum device that can generate and analyze a quantum superposition of possible futures. Using a novel quantum algorithm, the possible outcomes of a decision process are encoded as a superposition of different photon locations. Using interferometry, the team show that it is possible to conduct a search through the set of possible futures without looking at each future individually. Read more »
  • AI agent offers rationales using everyday language to explain its actions
    Researchers have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. Read more »
  • Engineers tap DNA to create 'lifelike' machines
    Tapping into the unique nature of DNA, engineers have created simple machines constructed of biomaterials with properties of living things. Read more »
  • Meet Blue, the low-cost, human-friendly robot designed for AI
    Researchers have created a new low-cost, human-friendly robot named Blue, designed to use recent advances in artificial intelligence and deep reinforcement learning to master intricate human tasks, all while remaining affordable and safe enough that every AI researcher could have one. The team hopes Blue will accelerate the development of robotics for the home. Read more »
  • The cost of computation
    There's been a rapid resurgence of interest in understanding the energy cost of computing. Recent advances in this 'thermodynamics of computation' are now summarized. Read more »
  • Robots created with 3D printers could be caring for those in golden years
    Researchers have developed a new design method to create soft robots that may help in caregiving for elderly family members. Read more »
  • Laying the ground for robotic strategies in environmental protection
    Roboticists have developed a robot named 'Romu' that can autonomously drive interlocking steel sheet piles into soil. The structures that it builds could function as retaining walls or check dams for erosion control, and, according to computer simulations, the robot could be deployed in swarms to help protect threatened areas that are flooded or extremely arid more effectively. Read more »
  • Teaching computers to intelligently design 'billions' of possible materials
    Researchers are applying one of the first uses of deep learning -- the technology computers use to intelligently perform tasks such as recognizing language and driving autonomous vehicles -- to the field of materials science. Read more »
  • Artificial intelligence enables recognizing and assessing a violinist's bow movements
    In playing music, gestures are extremely important, in part because they are directly related to the sound and the expressiveness of the musicians. Today, technology exists that captures movement and is capable of detecting gestural details very precisely. Read more »
  • New, more realistic simulator will improve self-driving vehicle safety before road testing
    Scientists have developed data-driven simulation technology that combines photos, videos, real-world trajectory, and behavioral data into a scalable, realistic autonomous driving simulator. Read more »
  • A rubber computer eliminates the last hard components from soft robots
    A new rubber computer combines the feel of a human hand with the thought process of an electronic computer, replacing the last hard components in soft robots. Now, soft robots can travel where metals and electronics cannot -- like high-radiation disaster areas, outer space, and deep underwater -- and turn invisible to the naked eye or even sonar detection. Read more »
  • Researchers get humans to think like computers
    Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences, and even school buses. People aren't supposed to be able to see how those images trip up computers, but in a new study, researchers show most people actually can. Read more »
  • Brain-inspired AI inspires insights about the brain (and vice versa)
    Researchers have described the results of experiments that used artificial neural networks to predict with greater accuracy than ever before how different areas in the brain respond to specific words. The work employed a type of recurrent neural network called long short-term memory (LSTM) that includes in its calculations the relationships of each word to what came before to better preserve context. Read more »
  • Robotic 'gray goo'
    Researchers have demonstrated for the first time a way to make a robot composed of many loosely coupled components, or 'particles.' Unlike swarm or modular robots, each component is simple, and has no individual address or identity. In their system, which the researchers call a 'particle robot,' each particle can perform only uniform volumetric oscillations (slightly expanding and contracting), but cannot move independently. Read more »
  • Google research shows how AI can make ophthalmologists more effective
    As artificial intelligence continues to evolve, diagnosing disease faster and potentially with greater accuracy than physicians, some have suggested that technology may soon replace tasks that physicians currently perform. But a new study shows that physicians and algorithms working together are more effective than either alone. Read more »
  • The robots that dementia caregivers want: Robots for joy, robots for sorrow
    A team of scientists spent six months co-designing robots with informal caregivers for people with dementia, such as family members. They found that caregivers wanted the robots to fulfill two major roles: support positive moments shared by caregivers and their loved ones; and lessen caregivers' emotional stress by taking on difficult tasks, such as answering repeated questions and restricting unhealthy food. Read more »
  • Water-resistant electronic skin with self-healing abilities created
    Inspired by jellyfish, researchers have created an electronic skin that is transparent, stretchable, touch-sensitive, and repairs itself in both wet and dry conditions. The novel material has wide-ranging uses, from water-resistant touch screens to soft robots aimed at mimicking biological tissues. Read more »
  • Seeing through a robot's eyes helps those with profound motor impairments
    An interface system that uses augmented reality technology could help individuals with profound motor impairments operate a humanoid robot to feed themselves and perform routine personal care tasks such as scratching an itch and applying skin lotion. The web-based interface displays a 'robot's eye view' of surroundings to help users interact with the world through the machine. Read more »
  • Can artificial intelligence solve the mysteries of quantum physics?
    A new study has demonstrated mathematically that algorithms based on deep neural networks can be applied to better understand the world of quantum physics, as well. Read more »
  • How intelligent is artificial intelligence?
    Scientists are putting AI systems to the test. Researchers have developed a method that provides a glimpse into the diverse 'intelligence' spectrum observed in current AI systems, analyzing these systems with a novel technique that allows automated analysis and quantification. Read more »
  • Faster robots demoralize co-workers
    New research finds that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort -- and they tend to dislike the robots. Read more »
  • A robotic leg, born without prior knowledge, learns to walk
    Researchers believe they have become the first to create an AI-controlled robotic limb, driven by animal-like tendons, that can even be tripped up and then recover within the time of the next footfall -- a task the robot was never explicitly programmed to do. Read more »
  • How to train your robot (to feed you dinner)
    Researchers have developed a robotic system that can feed people who need someone to help them eat. Read more »
  • Ultra-low power chips help make small robots more capable
    An ultra-low power hybrid chip inspired by the brain could help give palm-sized robots the ability to collaborate and learn from their experiences. Combined with new generations of low-power motors and sensors, the new application-specific integrated circuit (ASIC) -- which operates on milliwatts of power -- could help intelligent swarm robots operate for hours instead of minutes. Read more »
  • Robots can detect breast cancer as well as radiologists
    A new article suggests that artificial intelligence systems may be able to perform as accurately as radiologists in the evaluation of digital mammography in breast cancer screening. Read more »
  • Neurodegenerative diseases identified using artificial intelligence
    Researchers have developed an artificial intelligence platform to detect a range of neurodegenerative diseases in human brain tissue samples, including Alzheimer's disease and chronic traumatic encephalopathy. Read more »
  • Mini cheetah is the first four-legged robot to do a backflip
    New mini cheetah robot is springy and light on its feet, with a range of motion that rivals a champion gymnast. The four-legged powerpack can bend and swing its legs wide, enabling it to walk either right-side up or upside down. The robot can also trot over uneven terrain about twice as fast as an average person's walking speed. Read more »
  • Spiking tool improves artificially intelligent devices
    The aptly named software package Whetstone enables neural computer networks to process information up to 100 times more efficiently than current standards, making possible an increased use of artificial intelligence in mobile phones, self-driving cars, and image interpretation. Read more »
  • Robots track moving objects with unprecedented precision
    A novel system uses RFID tags to help robots home in on moving objects with unprecedented speed and accuracy. The system could enable greater collaboration and precision by robots working on packaging and assembly, and by swarms of drones carrying out search-and-rescue missions. Read more »
  • Artificial intelligence to boost Earth system science
    A new study shows that artificial intelligence can substantially improve our understanding of the climate and the Earth system. Read more »

AI Trends – AI News and Events

  • The Rules Governing AI Are Being Shaped by Tech Firms – Here’s How
    IN EARLY APRIL, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.” One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry.

    Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI. Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.

    When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. “We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.”

    The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit. Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.”

    Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and will retain a right to a royalty-free license to any intellectual property developed. Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says.
Read the source article in Wired. Read more »
  • System Load Balancing for AI Systems: The Case Of AI Autonomous Cars
    By Lance Eliot, the AI Trends Insider

    I recall an occasion when my children had decided to cook a meal in our kitchen and went whole hog into the matter (so to speak). I’m not much of a cook and tend to enjoy eating a meal more so than the labor involved in preparing a meal. In this case, it was exciting to see the joy of the kids as they went about putting together a rather amazing dinner. Perhaps partially due to watching the various chef competitions on TV and cable, and due to their own solo cooking efforts, when they joined together it was a miraculous sight to see them bustling about in the kitchen in a relatively professional manner. I mainly aided by asking questions and serving as a taste tester. From their perspective, I was more of an interloper than someone actually helping to progress the meal-making process.

    One aspect that caught my attention was the use of our stove top. The stove top has four burner positions. For everyday cooking, I believe that four heating positions is sufficient. I could see that with the extravagant dinner being put together, the fact that there were only four available was a constraint. Indeed, seemingly a quite difficult constraint. During the cooking process, there were quite a number of pots and pans containing food that needed to be heated up. I’d wager that at one point there were at least a dozen such pots and pans containing food and requiring some amount of heating.

    Towards the start of the cooking, it was somewhat manageable because they were only using three of the available heating spots. By using just three, they could then allocate one spot, the fourth one, as an “extra” for round-robin needs. They used this fourth spot for quick warm-ups, while the other three spots were for doing a thorough cooking job that required a substantive amount of dedicated cooking time. Pots and pans were sliding on and off that fourth spot like a hockey puck on ice. The other three spots had large pots that were each gradually coming to a bubbling, high-heat condition. When one of the three pots had cooked well enough, the enterprising cooks took it off the burner almost immediately and placed it onto a countertop waiting area they had established for super-heated pots and pans that could simmer for a bit. The moment that one pot came off of any of the three spots, another one was instantly put into its place. Around and around this went, in a dizzying manner, as they contended with only having four available heating spots. They kept one spot in reserve and used it for quick-paced warm-ups, and had opted to use the other three for deep-heated cooking. As they neared the end of the cooking process, they began to use nearly all of the spots for quick-paced warm-up needs, apparently because they had by then done the needed cooking and no longer needed to devote any of the pots to a prolonged period on a heating spot.

    As a computer scientist at heart, I was delighted to see them performing a delicate dance of load balancing.

    System Load Balancing Is Unheralded But Crucial
    You’ve probably had situations involving multiple processors or maybe multiple web sites wherein you had to load balance across them. In the case of web sites, it’s not uncommon for popular web sites to be replicated at multiple geographic sites around the world, allowing for speedier responses to users from that part of the world.
It also can help when one part of the world starts to bombard one of your sites and you need to flatten out the load, else that particular site might choke on the volume. In the cooking situation, the kids realized that having just four stove top burner positions was insufficient for the amount of cooking the dinner truly required. If they had opted to place pots of food onto the burners serially, one at a time, some parts of the meal would have been cooked much earlier than others. In the end, when trying to serve the meal, the result would have been a nightmare: some food cooked earlier and now cold, other parts superhot and needing to wait before being eaten. If the meal had involved much less preparation, such as only three items to be cooked, they would have readily been able to use the stove top without any of the shenanigans of floating pots and pans around. They could have just put on the three pots and waited until the food was cooked. But since they had more cooking needs than available heating spots, they had to devise a means of using the constrained resources in a manner that would still allow the cooking to proceed properly. This is what load balancing is all about. There are situations wherein there is a limited supply of resources, and the number of requests to utilize those resources might exceed the supply. The load balancer is a means or technique or algorithm or automation that tries to balance out the load. Another valuable aspect of a load balancer is that it can try to even out the workload, which might help in various other ways. Suppose one of the burners was known to get a bit cantankerous when on high heat for a long time. One approach for a load balancer might be to keep that resource from peaking by purposely shifting to some other resource for a while. We can also consider the aspect of resiliency. One of the resources might unexpectedly go bad or otherwise become unusable. Suppose one of the burners broke down during the cooking process. A load balancer would try to ascertain that the resource is no longer functioning, and then see whether it might be possible to shift the request or consumption over to another resource instead. Load Balancing Difficulties And Challenges Being a load balancer can be a tricky task. Suppose the kids had decided to keep one of the stove top burners in reserve and not use it unless absolutely necessary. In that case, they might have opted to use the three other burners by allocating two for deep heating and one for warming up. All the while, the fourth burner would remain unused, held in reserve. Is that a good idea? It depends. I'd bet that cooking with just the three burners would have stretched out the time required to cook the dinner. I can imagine that someone waiting to eat might become disturbed upon seeing a fourth burner that could be used for cooking yet was not, the implication being that the hungry person had to wait longer to eat. This person might go ballistic that a resource sat unused the entire time. What a waste of a resource, it would seem to that person.
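As a rough sketch of the two balancing policies at play in the kitchen story, here is some illustrative Python; the function names, task labels, and cost figures are all invented for this example rather than drawn from any particular system or library.

import itertools

def round_robin(tasks, resources):
    # Rotate tasks across resources in a fixed order, like the fourth
    # burner cycling pots through quick warm-ups.
    cycle = itertools.cycle(resources)
    return [(next(cycle), task) for task in tasks]

def least_loaded(tasks, resources):
    # Hand each task to whichever resource has the least accumulated
    # work, like freeing up a burner the moment a pot is done.
    load = {r: 0 for r in resources}
    plan = []
    for task, cost in tasks:
        target = min(load, key=load.get)
        plan.append((target, task))
        load[target] += cost
    return plan

burners = ["burner-1", "burner-2", "burner-3", "burner-4"]
print(round_robin(["warm rolls", "melt butter", "reheat sauce"], burners))
print(least_loaded([("stew", 40), ("rice", 20), ("soup", 30), ("gravy", 10)], burners))

Real load balancers layer health checks, weights, and failover on top of policies like these, but the core bookkeeping is about this simple.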
Imagine further if at the start of the cooking process we were to agree that there should be an idle back-up for each of the stove burners being used. In other words, since we only have four, we might say that two of the burners will be active and the other two are the respective back-ups. Let's number the burners 1, 2, 3, and 4. We might decide that burner 1 will be active and its back-up is burner 2, and burner 3 will be active and its back-up is burner 4. While the cooking is taking place, we won't place anything onto burners 2 and 4 unless primary burner 1 or burner 3 goes out. We might decide to keep the back-up burners entirely turned off, in which case they would be starting from a cold condition if we suddenly needed to switch over to one of them. We might instead agree to put the two back-ups on a low-heat setting, without actually heating anything per se, ready to rapidly go to high heat if needed in their failover mode. I had just now said that burner 2 would be the back-up for primary burner 1. Suppose I adhered to that assignment and would not budge. If burner 3 suddenly went out and I switched to burner 4 as the back-up, but then somehow burner 4 went out too, should I go ahead and use burner 2 at that juncture? If I was insistent that burner 2 would only and always be a back-up exclusively for burner 1, presumably I would want the load balancer to refuse to now use burner 2, even though burners 3 and 4 are kaput. Maybe that's a good idea, maybe not. These are the kinds of considerations that go into establishing an appropriate load balancer. You need to decide what the rules are for the load balancer. Different circumstances will dictate different aspects of how you want the load balancer to do its thing. Furthermore, you might not just set up the load balancer entirely in advance, acting statically during the balancing, but instead have it figure out what action to take dynamically, in real time. When using load balancing for resiliency or redundancy purposes, there is a standard nomenclature of referring to the number of resources as N, and then appending a plus sign along with an integer value that ranges from 0 to some number M. If I say that my system is set up as N+0, I'm saying that there are zero or no redundancy devices. If I say it is N+1, then that implies there is one and only one such redundancy device. And so on.
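A toy Python sketch of the failover bookkeeping just described, under the assumption of dedicated back-ups (burner 2 backing up only burner 1, and burner 4 only burner 3); the class and its policy flag are invented purely for illustration.

class FailoverPool:
    def __init__(self, pairs, dedicated=True):
        self.pairs = dict(pairs)    # maps each primary to its back-up
        self.dedicated = dedicated  # strict pairing vs. borrowing allowed
        self.failed = set()

    def fail(self, resource):
        self.failed.add(resource)

    def serve(self, primary):
        # Return the resource that should handle this primary's work.
        if primary not in self.failed:
            return primary
        backup = self.pairs.get(primary)
        if backup is not None and backup not in self.failed:
            return backup
        if self.dedicated:
            return None  # strict rule: never borrow another pair's back-up
        # Relaxed rule: borrow any surviving back-up.
        for candidate in self.pairs.values():
            if candidate not in self.failed:
                return candidate
        return None

pool = FailoverPool({1: 2, 3: 4}, dedicated=True)
pool.fail(3)
pool.fail(4)
print(pool.serve(3))  # None: burner 2 stays reserved strictly for burner 1

Flipping dedicated to False encodes the more forgiving rule, which is exactly the sort of policy choice the article says must be settled in advance or computed dynamically.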
You might be thinking that I should always have a plentiful set of redundancy devices, since that would seem the safest bet. But there's a cost associated with the redundancy. Why was my stove top limited to just four burners? Because I wasn't willing to shell out the bigger bucks for the model that had eight. I had assumed that for my cooking needs, the four-burner stove was sufficient, indeed ample. For computer systems, the same kind of consideration comes into play. How many devices do I need, and how much redundancy, all considered in light of the costs involved? This can be a significant decision, in that later on it can be harder and even costlier to adjust. In the case of my stove top, the kitchen was built so that the four-burner stove top fits just right. If I were now to decide that I want the eight-burner version, it's not a simple plug-and-play swap; instead, they would need to knock out my kitchen counters, likely some of the flooring, and so on. The choice I made at the start has somewhat locked me in, though of course if I want the kids cooking more of the time, it might be worth the dough to expand the kitchen accordingly. In computing, you can consider load balancing for just about anything. It might be the CPUs that underlie your system. It could be the GPUs. It could be the servers. You can load balance on an actual hardware basis, and you can also do load balancing on a virtualized system. The target resource is often referred to as an endpoint, or perhaps a replica, or a device, or some other such wording. Those in computing who don't explicitly consider the matter of load balancing are either unaware of its significance or unsure of what it can achieve. Many AI software developers figure that it's really a hardware issue or maybe an operating system issue, and thus don't put much of their own attention toward the topic. Instead, they hope or assume that the OS specialists or hardware experts have done whatever is required to figure out any needed load balancing. Similar to my example about my four-burner stove, the problem with this kind of thinking is that if later on the AI application is not running at a suitable performance level and all of a sudden you want to do something about load balancing, the horse is already out of the barn. Just like my notion of possibly replacing the four-burner stove with an eight-burner one, it can take a lot of effort and cost to retrofit for load balancing. AI Autonomous Cars And Load Balancing The On-Board Systems What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One key aspect of an AI system for a self-driving car is its ability to perform responsively in real time. On-board the self-driving car, you have numerous processors intended to run the AI software. This can also include various GPUs and other specialized devices. Per my overall framework of AI self-driving cars, here are some of the key driving tasks involved: sensor data collection and interpretation, sensor fusion, virtual world model updating, AI action planning, and car controls command issuance. For my framework, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/ For my article about real-time performance aspects, see: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/ For aspects about AI developers, see my article: https://aitrends.com/ai-insider/developer-burnout-and-ai-self-driving-cars/ For the dangers of Groupthink, see my article: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/ You've got software that needs to run in real time and direct the activities of a car. The car will at times be in motion. There will be circumstances wherein the AI is relatively at ease and there's not much happening, and there will be situations whereby the AI is having to work at a rip-roaring pace. Imagine going down a freeway at 75 miles per hour, with lots of other nearby traffic, foul weather, potholes in the road, debris on the roadway, and so on.
A lot of things, all happening at once. The AI holds in its automation the key to whether the self-driving car safely navigates and avoids getting into a car accident. This is not just a real-time system; it is a real-time system that can spell life or death. Human occupants in the AI self-driving car can get harmed if the AI can't operate in time to make the proper decision. Pedestrians can get harmed. Other cars can get hit, and thus the human occupants of those cars can get harmed. All in all, this is quite serious business. To achieve this, the on-board hardware generally has lots of computing power and lots of redundancy. Is it enough? That's the zillion-dollar question. Similar to my choice of a four-burner stove, when the automotive engineers for the auto maker or tech firm decide to outfit the self-driving car with whatever number and type of processors and other such devices, they are making some hard choices about what the performance capability of that self-driving car will be. If the AI cannot run fast enough to make sound choices, it's a bad situation all around. Imagine too that you are fielding your self-driving car. It seems to be running fine in the roadway trials underway. You give the green light to ramp up production. These self-driving cars start to roll off the assembly line and the public at large is buying them. Suppose after this has taken place for a while, you begin to get reports that there are times the AI seemed not to perform in time. Maybe it even froze up. Not good. Some self-driving car pundits say that it's easy to solve this: via OTA (Over-The-Air) updates, you just beam down into the self-driving cars a patch for whatever issue or flaw there was in the AI software. I've mentioned many times that the use of OTA is handy, important, and significant, but it is not a cure-all. Let's suppose that the AI software has no bugs or errors in this case. Instead, it's that the AI running via the on-board processors is exhausting the computing power at certain times. Maybe this only happens once in a blue moon, but when your life and the lives of others depend on it, even once in a blue moon is too much of a problem. It could be that the computing power is just insufficient. What do you do then? Yes, you can try to optimize the AI and get it to somehow not consume so much computing power. This, though, is harder than it seems. If you opt to toss more hardware at the problem, sure, that's possible, but now this means that all of those AI self-driving cars you sold will need to come back into the auto shop and get added hardware. Costly. Logistically arduous. A mess. For my article about the freezing robot problem and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/ For my article about bugs and errors in AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/ For my article about automobile recalls and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/auto-recalls/ For product liability claims against AI self-driving cars, see my article: https://aitrends.com/ai-insider/product-liability-self-driving-cars-looming-cloud-ahead/
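To make the real-time stakes concrete, here is a minimal Python sketch of a per-cycle deadline check over the driving tasks listed above; the stage functions are stand-ins and the 100-millisecond budget is an assumed figure for illustration, not a number from any production system.

import time

CYCLE_BUDGET_S = 0.100  # assumed end-to-end budget per control cycle

def run_cycle(stages):
    start = time.monotonic()
    for stage in stages:
        stage()
        if time.monotonic() - start > CYCLE_BUDGET_S:
            # Overrun: degrade gracefully rather than freezing up.
            return "DEGRADED: hold last-known-safe command, raise an alert"
    return "OK: commands issued within budget"

# Stand-ins for the real workloads of each driving task.
def sense(): time.sleep(0.01)
def fuse(): time.sleep(0.01)
def update_model(): time.sleep(0.01)
def plan(): time.sleep(0.01)
def command(): time.sleep(0.01)

print(run_cycle([sense, fuse, update_model, plan, command]))

The point of the sketch is the shape of the problem: every stage shares one budget, so exhausting compute in any one of them starves the rest.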
Dangers Of Silos Among Autonomous Car Components Some auto makers and tech firms find themselves confronting the classic silo mentality of the software side and the hardware side of their development groups. The software side developing the AI is not so concerned about the details of the hardware and just expects that its AI will run in proper time. The hardware side puts in place as much computing power as can suitably be provided, depending on cost considerations, physical space considerations, etc. If there is little or no load balancing that comes into play, in terms of making sure that both the software and hardware teams come together on how to load balance, it's a recipe for disaster. Some might say that all they need to know is how much raw speed is needed, whether it is MIPS (millions of instructions per second), FLOPS (floating point operations per second), TPUs (tensor processing units), or other such metrics. This, though, doesn't fully answer the performance question. The AI software side often doesn't really know what kind of performance resources it will need per se. You can try to simulate the AI software to gauge how much performance it will require. You can create benchmarks. There are all sorts of "lab" kinds of ways to gauge usage. Once you've got AI self-driving cars in the field for trials, you should also be pulling stats about performance. Indeed, it's quite important that there be on-board monitoring to see how the AI and the hardware are performing. For my article about simulations and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/ For my article about benchmarks and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/ For my article about AI self-driving cars involved in accidents, see: https://aitrends.com/selfdrivingcars/accidents-happen-self-driving-cars/ With proper load balancing on-board the self-driving car, the load balancer is trying to keep the AI from getting starved; it is trying to ensure that the AI runs undisrupted by whatever might be happening at the hardware level. The load balancer monitors the devices involved. When saturation approaches, this can potentially be handled via static or dynamic balancing, and thus the load balancer needs to come into play. If an on-board device goes sour, the load balancer hopefully has a means to deal with the loss. Whether it's redundancy or shifting over to have another device do double-duty, you've got to have a load balancer on-board to deal with those moments. And it must do so in real time, while the self-driving car is possibly in motion, on a crowded freeway, etc. Fail-Safe Aspects To Keep In Mind Believe it or not, I've had some AI developers say to me that it is ridiculous to think that any of the on-board hardware devices are going to just up and quit. They cannot fathom any reason for this to occur. I point out that the on-board devices are all prone to the same kinds of hardware failures as any piece of hardware. There's nothing magical about being included in a self-driving car. There will be "bad" devices that will go out much sooner than their life expectancy. There will be devices that go out due to some kind of in-car issue that arises, maybe overheating, or maybe a human occupant manages to bust one up. There are bound to be recalls on some of that hardware.
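Returning to the monitoring idea mentioned above, here is a minimal Python sketch of what on-board saturation monitoring might look like: track a smoothed utilization estimate per device and route work away from anything nearing saturation. The device names, threshold, and smoothing factor are all invented for illustration.

from collections import defaultdict

SATURATION = 0.85  # assumed utilization threshold

class DeviceMonitor:
    def __init__(self):
        self.util = defaultdict(float)

    def report(self, device, utilization):
        # Exponentially weighted average smooths out momentary spikes.
        self.util[device] = 0.8 * self.util[device] + 0.2 * utilization

    def pick_target(self, candidates):
        # Prefer the least-utilized healthy device; None if all saturated.
        healthy = [d for d in candidates if self.util[d] < SATURATION]
        return min(healthy, key=lambda d: self.util[d]) if healthy else None

mon = DeviceMonitor()
for u in (0.9, 0.95, 0.97):
    mon.report("gpu-0", u)
mon.report("gpu-1", 0.4)
print(mon.pick_target(["gpu-0", "gpu-1"]))  # gpu-1, the lighter-loaded device

A real on-board balancer would also track outright failures and fold in the failover rules sketched earlier, but a monitoring loop of this kind is the foundation.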
Also, I've seen some of those AI developers deluded by the fact that during the initial trials of self-driving cars, the auto maker or tech firm is pampering the AI self-driving car. After each journey, or maybe at the end of the day, the tech team involved in the trials tests to make sure that all of the hardware is still in pristine shape. They swap out equipment as needed. They act like a race car team, continually tuning and making sure that everything on-board is in top shape. There's a nearly unlimited budget of sorts during these trials, in that the view is to do whatever it takes to keep the AI self-driving car running. This is not what's going to happen once these cars reach the real world. When those self-driving cars are being used by the average Joe or Samantha, there will not be a trained team of self-driving car specialists at the ready to tweak and replace whatever might need to be replaced. The equipment will age. It will suffer normal wear and tear. It will even be taxed beyond normal wear and tear, since it is anticipated that AI self-driving cars will be running perhaps 24×7, nearly non-stop. For my article about non-stop AI self-driving cars, see: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/ For repairs of AI self-driving cars, see my article: https://aitrends.com/ai-insider/towing-and-ai-self-driving-cars/ Conclusion For those auto makers and tech firms that are giving short shrift right now to the importance of load balancing, I hope that this might be a wake-up call. It's not going to do anyone any good, neither the public nor the makers of AI self-driving cars, if it turns out that the AI is unable to get the performance it needs out of the on-board devices. A load balancer is not a silver bullet, but it at least provides the kind of added layer of protection that you'd expect for any solidly devised real-time system. Presumably, there aren't any auto makers or tech firms that opted to go with the four-burner stove when an eight-burner stove was needed. Copyright 2019 Dr. Lance Eliot This content is originally posted on AI Trends. Read more »
  • $15M Global Learning XPRIZE Names Two Grand Prize Winners
    XPRIZE, the global leader in designing and operating incentive competitions to solve humanity's grand challenges, announced two grand prize winners in the $15M Global Learning XPRIZE. The tie between Kitkit School, from South Korea and the United States, and onebillion, from Kenya and the United Kingdom, was revealed at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, where supporters and benefactors, including Elon Musk, celebrated all five finalist teams for their efforts. Launched in 2014, the Global Learning XPRIZE challenged innovators around the globe to develop scalable solutions that enable children to teach themselves basic reading, writing and arithmetic within 15 months. After being selected as finalists, five teams received $1M each and went on to field test their education technology solutions in Swahili, reaching nearly 3,000 children across 170 villages in Tanzania. To help ensure anyone, anywhere can iterate, improve upon, and deploy the learning solutions in their own community, all five finalists' software is open source. All five learning programs are currently available in both Swahili and English on GitHub, including instructions on how to localize into other languages. The competition offered a $10 million grand prize to the team whose solution enabled the greatest proficiency gains in reading, writing and arithmetic in the field test. After reviewing the field test data, an independent panel of judges found indiscernible results between the top two performers and determined that two grand prize winners would split the prize purse, receiving $5M each: Kitkit School (Berkeley, United States and Seoul, South Korea) developed a learning program with a game-based core and flexible learning architecture aimed at helping children independently learn, irrespective of their knowledge, skill, and environment. onebillion (London, United Kingdom and Nairobi, Kenya) merged numeracy content with new literacy material to offer directed learning and creative activities alongside continuous monitoring to respond to different children's needs. Currently, more than 250 million children around the world cannot read or write, and according to data from the UNESCO Institute for Statistics, about one in every five children is out of school – a figure that has barely changed over the past five years. Compounding the issue, there is a massive shortage of teachers at the primary and secondary levels, with research showing that the world must recruit 68.8 million teachers to provide every child with primary and secondary education by 2030. Before the Global Learning XPRIZE field test, 74% of the participating children were reported as never having attended school, 80% were reported as never having been read to at home, and over 90% could not read a single word in Swahili. After 15 months of learning on Pixel C tablets donated by Google and preloaded with one of the five finalists' learning software packages, that number was cut in half. Additionally, in math skills, all five software solutions were equally effective for girls and boys. Collectively over the course of the competition, the five finalist teams invested approximately $200M in research, development, and testing for their software, a total that rises to nearly $300M when including all 198 registered teams.
“Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of XPRIZE. “Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.” The grand prize winners and the following finalist teams were chosen from a field of 198 teams from 40 countries: CCI (New York, United States) developed structured and sequential instructional programs, in addition to a platform seeking to enable non-coders to develop engaging learning content in any language or subject area. Chimple (Bangalore, India) created a learning platform aimed at enabling children to learn reading, writing and mathematics on a tablet through more than 60 explorative games and 70 different stories. RoboTutor (Pittsburgh, United States) leveraged Carnegie Mellon's research in reading and math tutors, speech recognition and synthesis, machine learning, educational data mining, cognitive psychology, and human-computer interaction. See the source release at XPRIZE.org. Read more »
  • Microsoft and Sony Become Partners Around Gaming and AI
    Microsoft and Sony announced an unusual partnership on May 16, allowing the two rivals to partner on cloud-based gaming services. “The two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services,” Microsoft said in a statement. Sony’s existing game and content-streaming services will also be powered by Microsoft Azure in the future. Microsoft says “these efforts will also include building better development platforms for the content creator community,” which sounds like both Sony and Microsoft are planning to partner on future services aimed at creators and the gaming community. Both companies say they will “share additional information when available,” but the partnership means Microsoft and Sony will collaborate on cloud gaming. That’s a pretty big deal, and it’s a big loss for Microsoft’s main cloud rival, Amazon. It also means Google, a new gaming rival to Microsoft and Sony, will miss out on hosting Sony’s cloud services. Google unveiled its Stadia game streaming service earlier this year, and the company will use YouTube to push it to the masses. Stadia is a threat to both Microsoft and Sony, and it looks like the companies are teaming up so Sony has some underlying infrastructure assistance to fight back. Stadia will stream games from the cloud to the Chrome browser, Chromecast, and Pixel devices. Sony already has a cloud gaming service, but Microsoft is promising trials of its own xCloud gaming service later this year. Microsoft’s gaming boss, Phil Spencer, has also promised the company will “go big” for E3 [Electronic Entertainment Expo]. As part of the partnership, Sony will use Microsoft’s AI platform in its consumer products. Read the source article in The Verge. Read more »
  • Arkansas Government Moving Aggressively to Shore Up Cybersecurity
    Arkansas will soon launch an ambitious initiative that will use AI to bolster the state's cybersecurity stance, while developing a scalable defense model that others can use in the future. Senate Bill 632, recently signed into law by Gov. Asa Hutchinson, authorizes the state's Economic Development Commission (AEDC) to create a Cyber Initiative. This initiative will be responsible for working to mitigate the cyber-risks to Arkansas; increasing education relative to threats and defense; providing the public and private sectors with threat assessments and other intelligence; and fostering growth and development around tech including AI, IT and defense. The initiative will also create a "cyber alliance" made up of partnerships with a variety of institutions like "universities, colleges, government agencies and the private business sector," all of which will work in a unified fashion toward realizing the initiative's priorities. The bill also gives the program a potentially extensive financing framework, establishing a special fund that will consist of all money appropriated by the General Assembly, as well as "gifts, contributions, grants, or bequests received from federal, private, or other sources," according to the text of the legislation. That money will go toward a wide variety of activities conducted through its myriad partnerships — including research, training officials at public and private institutions in defense best practices, and business and academic opportunities. The initiative will also have a considerable privacy component, as it will be exempt from the Freedom of Information Act (FOIA) when a request is deemed a "security risk," according to the bill text. Much of the initiative's work will be centered around finding more effective methods to ferret out bad actors and identifying where and what those actors are looking to target within the state, said retired Col. Rob Ator, who serves as the director of Military Affairs for the AEDC. Arkansas, Ator said, is an attractive target to potential hackers because — as the bill notes — it is "home to national and global private sector companies that are considerable targets in the financial services, food and supply chain and electric grid sectors." "For the first time in our nation's history, the outward-facing defense for our critical infrastructure is no longer the folks in uniform and it's no longer the government — it's our private industry," Ator said, adding that, as potential targets for cyberattacks, companies are now responsible for their own defense like never before. Read the source article in Government Technology. Read more »
  • S.F. Passes Facial-Recognition Ban; Capitalizing on AI’s Opportunities
    San Francisco passes facial-recognition surveillance ban. San Francisco on May 14 became the first U.S. city to pass a ban on the use of facial recognition by local agencies, reported WSJ's Asa Fitch. The move comes amid a broad push to regulate the technology, which critics contend perpetuates police bias and provides excessive surveillance powers, although San Francisco's own police force doesn't use it. San Francisco isn't alone: similar bans have been proposed in Oakland, Calif., and Somerville, Mass. Facial recognition proponents cite the technology's benefits. Law-enforcement groups say bans are an overreaction, adding that the technology can assist in catching criminals and locating missing people when used with police investigative techniques. Dozens of police forces around the country use the technology to analyze mug shots and driver's-license photos to identify suspects. Opponents are troubled by facial recognition's possible flaws. Researchers at the Massachusetts Institute of Technology found that facial-recognition tools created by Amazon.com Inc. and others had significantly higher error rates when identifying darker-skinned people and women, according to the WSJ. Amazon disputed the findings. Critics say the system's flaws raise concerns when the technology is used in decisions that affect people's liberty. What's needed to capitalize on AI's opportunities. AI has the potential to alter economic growth, commerce and trade. But for AI to develop, there need to be new regulations for AI ethics and data access, as well as a revisiting of existing regulations and laws around privacy and intellectual property, according to a report from the Brookings Institution. There also needs to be an international AI development agenda to avoid having a variety of unnecessary regulations that impede the technology's adoption. The Brookings Institution offers a number of suggestions for maximizing AI's benefits, among them:
· Strengthen AI diffusion within and across countries. A lack of AI diffusion is producing a widening gap between companies in business sectors. "Policies are needed to increase the rates and depth of technology diffusion across the economy," according to the report.
· Develop education and skills. AI will require science, technology, engineering and math education, and training. Those skills need to be developed globally.
· Establish sound cybersecurity. Governments need to develop national cybersecurity strategies. However, international cybersecurity standards could create unneeded barriers to trade in AI, the report says.
· Protect the privacy of personal data. Strong rules are needed, but, at the same time, those rules shouldn't put unnecessary restrictions on cross-border data flows.
· Set a domestic agenda. A "consistent, transparent, and standardized framework" is needed for sharing data sets across government agencies, researchers and the private sector, the report states.
· Develop a balanced intellectual-property framework. AI technology needs a supportive intellectual-property rule base that includes fair-use exceptions to copyright, which would allow duplicating data for training purposes.
· Develop AI ethical principles. In order to develop trust in artificial intelligence, AI processes and outcomes need to be ethical, according to the report.
Read the source article in the Wall Street Journal. Read more »
  • Human-To-Machine Depersonalization Considerations and AI Systems: The Case of Autonomous Cars
    By Lance Eliot, the AI Trends Insider Is automation, and in particular AI, leading us toward a service society that depersonalizes us? In a prior column, I described how AI can provide deep personalization via Machine Learning. Readers responded by asking about concerns expressed in the mass media that advanced automation might depersonalize us as humans, and what I had to say about those qualms. So, thanks to your feedback, I'm covering the depersonalization topic herein. For my prior column on deep personalization via Machine Learning, see: https://www.aitrends.com/selfdrivingcars/ai-deep-personalization-the-case-of-ai-self-driving-cars/ Some pundits say yes, arguing that the human touch in providing services is becoming scarcer and scarcer, and that eventually we'll all be served by AI systems that treat us humans as though we are non-human. More and more, we'll see humans lose their jobs to AI and be replaced by automation that is less costly, and notably less caring, eschewing us as individuals and abandoning personalized service. Those uncaring and heartless AI systems will simply stamp out whatever service is being sought and there won't be any soul in it, there won't be any spark of humanity; it will be push-button automata only. In my view, those pundits are seeing the glass as only half empty. They seem either not to notice or not to want to observe that the glass is also half full. Let me share with you some examples of what I mean. Before I do so, please be aware that the word "depersonalization" can have a multitude of meanings. In the clinical or psychology field, it refers to feeling detached or disconnected from your body and would be considered a type of mental disorder. I'm not using the word in that manner herein. Instead, the theme I'm invoking involves the humanization or dehumanization around us, floating the idea of being personalized to human needs or being depersonalized to them. That's a societal frame rather than a purely psychological portrayal. With that said, let's use an example to get at my depersonalization and personalization theme. Banking ATM As An Example Of Alleged Depersonalization Banking is an area ripe with prior exhortations of how bad things would become once ATMs took over, since there would no longer be in-branch banking with human tellers (that's what was predicted). We would all be slaves to ATMs. You were going to stand in front of the ATM and yell out "where have all the humans gone?" as you fought with the banking system to perform a desired transaction. Recently, I went to my local bank branch to make a deposit. I was in a hurry and opted to squeeze in this errand on my way to a business meeting. The deposit was somewhat sizable, so I opted to perform the transaction with a human teller rather than feed my deposit into the "impersonal" ATM. Upon my coming up to the teller window, the teller gave a big smile and welcomed me to the bank. How was my day going, the teller asked. The teller proceeded to mention that it had been a busy morning at the branch. Looking outside the branch window, the teller remarked that it looked like rain was on its way. I wanted to make the deposit and get going, yet could see that the chitchat, though friendly and warm, would keep dribbling along and wasn't seemingly in pursuit of my desired transaction. I presented my check to be deposited and was asked to first run my banking card through the pad at the teller window. I did so.
The teller looked at my info on the screen and noted that I had not talked with one of their bankers for quite a while, offering to bring over a bank manager to say hello. I declined the gracious offer. The teller then noted that one of my CDs was coming due in a month and suggested that I might want to consider various renewal options. Not just now, I demurred. The teller finally proceeded with the deposit and then, just as I was stepping away to head out, called me back to mention that they were having a special this week on opening new accounts. Would I be interested in doing so? As you can imagine, in my haste to get going, I quickly said no thanks and tried to make my way to the door. It turns out that the teller had already signaled the bank manager, and I was met at the door by the pleasant manager, who thanked me for coming in and handed me a business card in case I had any follow-up needs. Let's unpack or dissect my in-branch experience. On the one hand, you could say that I was favorably drenched in high-touch service. The teller engaged me in dialogue and tried to create a human-to-human connection. Rather than solely focusing on my transaction, I was offered a bevy of other options and possibilities. My banking history at the bank was used to identify opportunities for me to enhance my banking efforts there. All in all, this would seem to be the picture-perfect example of human personalized service. Having done lots of systems work in the banking industry, I know how hard it can be to get a branch to provide personalized and friendly service. One bank that I helped fix had a lousy reputation when I was first brought in, known for having branches that were terribly run. Whenever you went into a branch, it was like going to a gulag. There were long lines, the tellers were ogres, and you felt as though you were a mere cog in a gigantic wheel of their banking procedures, often turning the simplest banking acts into a torturous affair. Thus, my recent experience of making my deposit at my local branch should be a shining example of what a properly run bank branch is all about. If I had to choose between the somewhat over-friendly experience and a branch that was like descending into Dante's inferno, I would certainly choose the overly friendly case. Nonetheless, I'd like to explore the enriched banking experience more deeply. I was in a hurry. The friendly dialogue and attempts to upsell me were getting in the way of a quick in-and-out banking transaction. In theory, the teller should have judged that I was in a hurry (I assure you that I offered numerous overt signals as such) and toned down the ultra-service effort. It is perhaps hard to fault the teller; one might point at whatever pressures there are to do the banking jingle, perhaps drummed in as part of the bank's training efforts and likely baked into performance metrics and bonuses. In any case, I walked out of the branch grumbling and vowed that in the future I would use the ATM. Unfair, you say? Maybe. Am I being a whiner by "complaining" about too much personalized service? Maybe. But I shouldn't have to choose between rampant personalized service and utterly depersonalized gulag service. I should be able to choose whichever suits my service needs at the time of consuming the service. About a week later, I had to make another deposit and this time used the drive-thru ATM.
After I entered my PIN, the screen popped up asking whether I was there to make a deposit, offering a one-click option to immediately shift into deposit-taking mode. I used the one-click and slipped my check into the ATM, which scanned it and asked me to confirm the amount, which I did. The ATM then indicated that I usually don't get a printed receipt and that, unless I wanted one this time, it wasn't going to print one. I was satisfied that the deposit seemed to have been made, so I put my car into gear and drove on. The entire transaction time must have been around 30 seconds at most, many times faster than when I had made a deposit via the teller. I did not have to endure any chitchat about the weather. I was not bombarded with efforts to upsell me. In and out, the effort was completed readily, without fanfare. Notice that the ATM had predicted that I was there to make a deposit. That was handy. Based on my last several transactions with the bank, the banking system had analyzed my pattern and logically deduced that I was most likely there to make a deposit. And I was offered a one-click option to proceed with making my deposit, which showcased that not only was my behavior anticipated, but the ATM also tailored its actions to let my transaction proceed smoothly. Would you say that my ATM experience was one of a personalized or a depersonalized nature? Deciding On Whether There Is Depersonalization Or Personalization We always tend to assume that whenever something is "depersonalized" it must be bad. The word has a connotation that suggests something untoward. Nobody wants to be depersonalized. In the case of the ATM, I wasn't asked about the weather and there wasn't a smiling human that spoke with me. I interacted solely with the automation. If that's the case, ergo I must be getting "depersonalized" service, one would assume. Yet my ATM experience was actually personalized. The system had anticipated that I wanted to make a deposit. This was followed up by making the act of depositing easy. Once I had made the deposit, the ATM did not just spit out a receipt, which often is what happens (and I frequently see deposit receipts lying on the ground near the ATM, presumably left by hurried humans). The ATM knew via my prior history that I tended not to get a receipt, and therefore the default was that it would not produce one in this instance. Given the other kinds of more sophisticated patterns in my banking behavior that could be found by using AI capabilities, I thought that this ATM experience was illustrative of how even simple automation can provide a personalized service experience. Imagine what more could be done if we added some hefty Machine Learning or Deep Learning into this.
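As a rough illustration of the kind of pattern analysis the ATM seemed to be doing, here is a toy Python sketch that predicts the most likely next transaction from recent history and offers it as a one-click default; the history, labels, and weighting are invented for the example, not drawn from any actual banking system.

from collections import Counter

def predict_next(history, recent_weight=2):
    # Majority vote over past transactions, with extra weight on the
    # most recent few so the prediction adapts to changing habits.
    counts = Counter(history)
    for tx in history[-3:]:
        counts[tx] += recent_weight
    return counts.most_common(1)[0][0]

history = ["deposit", "withdrawal", "deposit", "deposit", "balance", "deposit"]
print(predict_next(history))  # "deposit" -> offer one-click deposit mode

A production system would obviously use far richer features and a trained model, but even this simple frequency count captures the flavor of personalization the column describes.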
I've used the case of the banking effort to help illuminate the notion of what constitutes personalization versus depersonalization. Many seem to assume that if you remove the human service provider, you are axiomatically creating a depersonalized service. I don't agree. Take a look at Figure 1. As shown, the performance of a service act consists of the service provider and the receiver of the service, the customer. Generally, when considering depersonalized service, we think about the service provider as being perfunctory, dry, lacking in emotion, unfeeling, aloof, and otherwise without an expression of caring for the customer. We also then think about the receiver of the service, the customer, and their presumable reaction of becoming upset at the lack of empathy for their plight as they try to obtain or consume the service. I argue that the service provider can provide a personalized or a depersonalized service, either one, even if it is a human providing the service. The mere act of having a human provide a service does not make it personalized. I'm sure you've encountered humans that treated you as though you were inconsequential, as though you were on an assembly line, offering very little if any personalization, likely bordering on or perhaps fully enmeshed in depersonalization. A month ago, I ventured to the Department of Motor Vehicles (DMV) office and was amazed at how depersonalized they were able to make things. Each of the human workers in the DMV office had that look as though they would prefer to be anyplace but working in the DMV. The people flowing into the DMV were admittedly rancorous and often difficult to contend with. I'm sure these DMV workers had their fill each day of people that were grotesquely impolite and cantankerous. In any case, there were signs telling you to stand here, move there, wait for this, wait for that. Similar to getting through an airport screening, this was a giant mechanism to move the masses through the process. I'm sure it was as numbing for the DMV workers as it was for those of us there to get a driver's license transaction undertaken. Let's all agree then that you can have a human that provides a personalized or a depersonalized service, contingent on a variety of factors, such as the processes involved, the incentives for the human service provider, and the like. I'd like to next assert that automation can also provide either a personalized service or a depersonalized service. Those are both viable possibilities. Depends On How The Automation Is Devised It all depends upon how the automation has been established. In my view, if you add AI to providing a service, and do it well, you are going to have a solid chance of making that service personalized. This won't happen by chance alone. In fact, by chance alone, you are probably going to have AI service that seems depersonalized. We might at first assume that the automation is going to provide a depersonalized service; likewise, we might at first assume that a human will provide a personalized service. That's our usual bias. Either of those assumptions can be upended. Furthermore, it can be tricky to ascertain what personalized versus depersonalized service consists of. In my example about the bank branch and the teller, everything about the setup would seem to suggest a high-touch personalized service. I'm sure the bank spent a lot of money to try and arrive at the high-touch level of service. Yet, in my case, in the instance of wanting to hurriedly do a transaction, the high-touch personalized service actually defeated itself. That's a problem with having personalized service that is ironically inflexible. It is ironic in that the assumption is that personalized means you will be incessantly presented with seeming personalization. Instead, the personalization should be based on the human receiving the service and what makes most sense for them.
Had the teller picked up on the fact that I was in a hurry, it would have been relatively easy to switch into a mode of aiding my transaction and getting me out of the bank, doing so without undermining the overarching notion of personalization. For those of you that are AI developers, I hope that you keep in mind these facets of depersonalization and personalization. Namely, via AI, you have a chance at making a service-providing system more responsive and able to provide personalization, yet if you don't seek that possibility, the odds are that your AI system will appear to be the furtherance of depersonalization. Humans interacting with your AI system are more likely to be predisposed to the belief that your AI will be depersonalizing. In that sense, you have a larger hurdle to jump over. In the case of a human providing a service, by and large we all tend to assume that it is likely to be imbued with personalization, though for circumstances like the DMV and airport screening we've all gotten used to the idea that you are unlikely to get personalized service in those situations (when it happens, we are often surprised and make special note of it). You also need to take into account that there is personalization of an inflexible nature, which can then undermine the personalization being delivered. As indicated via the bank branch example, if you do have AI that seems to provide personalization, don't go overboard and force whatever monolithic personalization you came up with onto all cases of providing the service. Truly personalized service should be personalized to the needs of the customer at hand. AI Self-Driving Cars As An Example What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are numerous ways in which the AI can come across as either personalized or depersonalized, and it is important for auto makers and tech firms to realize this and devise their AI systems accordingly. Allow me to elaborate. I'd like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, and nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car. For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I've repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/ For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/ For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/ For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/ Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion. Here are the usual steps involved in the AI driving task: sensor data collection and interpretation, sensor fusion, virtual world model updating, AI action planning, and car controls command issuance. Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight. Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point, since it means that the AI of self-driving cars needs to contend not just with other AI self-driving cars, but also with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That's not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/ See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/ For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/ Returning to the topic of depersonalization and personalization, let's consider how AI self-driving cars can get involved in, and perhaps mired in, these facets. Bike Riders And AI Self-Driving Cars I was speaking at a recent conference on AI self-driving cars and during the Q&A there was an interesting question or point made by an audience member. The audience member stood up and derided human drivers who often cut off bike riders. She indicated that to get to the conference, she had ridden her bike, which she also rides when going to work (this event was in Silicon Valley, where biking to work is relatively popular).
While riding to the convention, she had narrowly escaped being hit at an intersection when a car took a right turn with seemingly little regard for her presence as she rode in the bike lane. You might assume that the car driver was not aware that she was in the bike lane and therefore mistakenly cut her off. If that was the case, her point could be that an AI self-driving car would presumably not make that same kind of human error. The sensors would hopefully detect a bike rider, and the AI action planner would then appropriately attempt to avoid cutting off the bike rider. It seemed, though, that she believed the human driver did see her. The act of cutting her off was actually deliberate. The driver was apparently of a mind that the car had higher priority than the bike rider, and thus could dictate what was going to happen, namely cutting off the bike rider so the car could proceed to make the right turn. I'm sure we've all had situations of a car driver that wanted to demand the right-of-way and figured that a multi-ton car has more heft to decide the matter than does a fragile human on a bicycle. What would an AI self-driving car do? Right now, assuming that the AI sensors detected the bike rider, and assuming that the virtual world model was updated with the path of the bike rider, and assuming that the AI action planner portion of the system was able to anticipate a potential collision, presumably the AI would opt to brake and allow the bike rider to proceed. We must also consider the traffic situation at the time, since we don't know what else might have been happening. Suppose a car was on the tail of the AI self-driving car, and there was a risk that if it abruptly halted to let the bike rider proceed, the car behind might smack into its rear. In that case, perhaps the risk of being hit from behind might lead the AI to determine that the risk of cutting off the bike rider is less overall, and therefore proceed to cut off the bike rider. I mention this nuance about the AI self-driving car and its choice of what to do because of the oft-made assumption that an AI self-driving car is always going to do "the right thing" in its driving decisions. In essence, people often tell me about driving situations in which they assume an AI system would "not make the same mistake" that a human made, and yet this assumption is often made in a vacuum. Without knowing the context of the driving situation, how can we really say what the "right thing" to do was? For my article about the human foibles of driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/ For the use of probabilities and uncertainty in AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/probabilistic-reasoning-ai-self-driving-cars/ For the safety aspects of AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/ For my article about defensive driving for AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
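To see the shape of the brake-versus-proceed trade-off described above in code, here is a deliberately simplified expected-risk comparison in Python; every probability and severity figure is invented purely for illustration and bears no relation to how any real planner weighs such choices.

def expected_risk(prob_collision, severity):
    # Expected risk as probability times severity, the simplest
    # possible weighting of an uncertain bad outcome.
    return prob_collision * severity

brake = expected_risk(prob_collision=0.10, severity=2)    # low-speed rear-end hit
proceed = expected_risk(prob_collision=0.30, severity=9)  # striking the cyclist

action = "brake for the cyclist" if brake < proceed else "proceed with the turn"
print(f"brake risk={brake:.2f}, proceed risk={proceed:.2f} -> {action}")

With these made-up numbers, braking wins easily; raise the tailgater's speed (pushing up both figures on the brake side) and the comparison can flip, which is exactly the nuance being described.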
In any case, you might argue that the question brought up by the audience member is related to personalization and depersonalization. If the human driver was considering the human bike rider in a depersonalized way, they might have made the cut-off decision without any sense of humanity being involved. Here's what might have been taking place. That's not a human on that bicycle; it is instead a thing on an object that is moving into my path and getting in my way of making that right turn, the driver might have been thinking. Furthermore, the driver might have been contemplating this: I am a human and my needs are important, and I need to make that right turn to proceed along smoothly and not be slowed down. The human driver objectifies the bike rider. The bike is an impediment. The human on the bike is meshed into the object. Now, we don't know that's what the human driver was contemplating, but it is somewhat likely. It is easy when driving a car to fall into the mental trap that you are essentially in a video game. Around you are these various objects that are there to get in your way. Using your video gaming skills, you navigate in and around those objects. If this seems farfetched, you might consider the emergence of road rage. People driving a car will at times become emboldened while in the driver's seat. They are in command of a vehicle that can determine the life or death of others. This can inflate their sense of self-importance. They can become irked by other drivers and by pedestrians and decide to take this out on those around them. For my article about road rage, see: https://www.aitrends.com/selfdrivingcars/road-rage-and-ai-self-driving-cars/ As I've said many times in my speeches and in my writings, it is a miracle that we don't have more road rage than we already have. It is estimated that in the United States alone we drive a combined 3.22 trillion miles each year. Consider those 250 million cars in the United States and their drivers, and how unhinged some of them might be, or how unhinged they might become as a result of being slighted, or perceiving they were slighted, while driving, and it really is a miracle that we don't have more untoward driving acts. Encountering Humans In A Myriad Of Ways Back to the bike rider that got cut off, there is a possibility that the human driver depersonalized the bike rider. This once again illustrates that humans are not necessarily going to provide or undertake personalized acts in what they do. An AI self-driving car might or might not be undertaking a more personalized approach, depending upon how the AI has been designed, developed, and fielded. Take a look at Figure 2. As shown, an AI self-driving car is going to encounter humans in a variety of ways. There will be human passengers inside the AI self-driving car. There will be pedestrians outside of the AI self-driving car that it comes across. There will be human drivers in other cars, which the AI self-driving car will encounter while driving on the roadways. There will be human bike riders, along with other humans on motorcycles, scooters, and so on. If you buy into the notion that the AI is by necessity a depersonalizing mechanism, meaning that in comparison to human drivers the AI driver will act toward humans in a depersonalized manner, this seems to spell possible disaster for humans. Are all of these humans that might be encountered going to be treated as mere objects and not as humans?
The counter-argument is that the AI can be imbued with a form of personalization that would give the AI driver an edge over the at-times depersonalizing human driver. The AI system might have a calculus that assesses the value of the bike as based on the human riding it. Unlike the human driver of earlier mention, presumably the AI is going to take into account that a human is riding the bike. In the case of interacting with human passengers, there is a possibility of having the AI make use of sophisticated Natural Language Processing (NLP) and socio-behavioral conversational computing. In some ways, this could be done such that the personalization of interaction is on par with a typical human driver, perhaps even better. Have you been in a cab or taxi in which the human driver lacked conversational ability and was unable to respond when you asked where's a good place to eat in town? Or, at the opposite extreme, have you been in a ridesharing car in which the human driver was trying to be overly responsive by chattering the entire time, along with quizzing you about who you are, where you work, and what you do? That's akin to my bank teller example earlier.
Goldilocks Approach Is Best
AI developers ought to be aiming for the Goldilocks version of interaction with human passengers. Not too little conversation, and not too much. On some occasions, the human passenger will just want to say where they wish to go and not want any further discussion. In other cases, the human passenger might be seeking a vigorous dialogue. One size does not fit all. For socio-behavioral computing, see my article: https://www.aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/ For the use of ESP2, see my article: https://www.aitrends.com/selfdrivingcars/extra-scenery-perception-esp2-self-driving-cars-beyond-norm/ For how the AI might interact during family trips, see: https://www.aitrends.com/selfdrivingcars/family-road-trip-and-ai-self-driving-cars/ For the use of NLP for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/ In terms of interacting with humans that are outside of the AI self-driving car, matters are decidedly trickier. Just the other day, I drove up to a four-way stop. There was another car already stopped, sitting on the other side of the intersection, and apparently waiting. I wasn't sure why the other driver wasn't moving forward. They had the right-of-way. Were they worried that I wasn't going to come to a stop? Maybe they feared that I was going to run through the stop sign and so they were waiting to make sure I came to… Read more »
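On the Goldilocks point above, one way to avoid both the silent cab driver and the chatterbox is to let the passenger's own behavior set the system's verbosity. The sketch below is a hypothetical illustration; the thresholds and category names are assumptions, not any vendor's NLP stack.

    # Hedged sketch: a "Goldilocks" dialogue policy that mirrors the passenger.
    # Thresholds and category names are illustrative assumptions only.

    class DialoguePolicy:
        def __init__(self):
            self.passenger_word_count = 0
            self.turns = 0

        def observe(self, utterance: str) -> None:
            """Track how talkative the passenger has been so far."""
            self.passenger_word_count += len(utterance.split())
            self.turns += 1

        def verbosity(self) -> str:
            avg = self.passenger_word_count / max(self.turns, 1)
            if avg < 4:
                return "minimal"   # just confirm the destination
            if avg < 12:
                return "moderate"  # answer questions, volunteer little
            return "chatty"        # offer suggestions and small talk

    policy = DialoguePolicy()
    policy.observe("Airport, please.")
    print(policy.verbosity())  # -> "minimal": skip the small talk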
  • How AI in the Energy Sector Can Help Solve the Climate Crisis
    Artificial intelligence conjures fears of job loss and privacy concerns — not to mention sci-fi dystopias. But machine learning can also help us save energy and make renewables better. Artificial intelligence (AI) is infiltrating every corner of our lives. Video streaming services use it to learn our tastes and suggest what we might like to watch next. AIs have beaten the world's best players in complex board games like chess and Go. Some scientists even believe AI could one day achieve superhuman intelligence resulting in apocalyptic scenarios reminiscent of films like "The Matrix." As if to dispel such fears, the UN AI For Good Global Summit in Geneva later this month highlights AI applications to address the pressing problems of our time, including climate change. Most countries aren't cutting emissions nearly fast enough. AI could help speed things up. In particular, a field called machine learning can process colossal amounts of data to make energy systems more efficient. To fulfil the Paris Agreement, we will have to virtually eliminate fossil-fueled energy from all sectors of the economy. This will mean networking decentralized, fluctuating renewable power generation with consumers that automatically adjust to minimize waste and balance the entire system. Hendrik Zimmermann, a researcher into digitalization and sustainability at environmental NGO Germanwatch, says efficiently managing data on this scale is only possible with AI. "To be able to design this system, we need digital technologies and a lot of data that have to be quickly collected and analyzed," Zimmermann told DW. "AI or machine learning algorithms can help us manage this complexity and get to zero emissions." Cutting energy consumption But digitalization comes with a host of problems, too — not least the huge amount of energy all this data processing itself consumes. Sims Witherspoon is a program manager at DeepMind, the British AI firm owned by Google's parent company Alphabet that developed the Go-playing bot. She told DW that data centers — the huge "server farms" around the world that store users' data — now consume 3% of global energy. That is why DeepMind decided to use its "general purpose learning algorithms" to reduce the energy needed to cool Google data centers by up to 40 percent. Read the source article in DW. Read more »
  • AI System Tries to Match Accuracy of 101 Radiologists on Mammograms
    A commercial artificial intelligence (AI) system matched the accuracy of over 28,000 interpretations of breast cancer screening mammograms by 101 radiologists. Although the most accurate mammographers outperformed the AI system, it achieved a higher performance than the majority of radiologists. With the addition of deep-learning convolutional neural networks, new AI systems for breast cancer screening improve upon the computer-aided detection (CAD) systems that radiologists have used since the 1990s. The AI system evaluated in this study — conducted by radiologists and medical physicists at Radboud University Medical Centre — has a feature classifier and image analysis algorithms to detect soft-tissue lesions and calcifications, and generates a "cancer suspicion" ranking of 1 to 10. The researchers examined unrelated datasets of images from nine previous clinical studies. The images were acquired from women living in seven countries using four different vendors' digital mammography systems. Every dataset included diagnostic images, radiologists' scores of each exam, and the actual patient diagnosis. The 2,652 cases, of which 653 were malignant, incorporated a total of 28,296 individual single-reading interpretations by 101 radiologists participating in previous multi-reader, multi-case observer studies. The readers included 53 radiologists from the United States, who represented an equal mix of breast imagers and general radiologists, plus 48 European radiologists who were all breast specialists. Principal investigator Ioannis Sechopoulos and colleagues reported that the performance of the AI tool (ScreenPoint Medical's Transpara) was statistically non-inferior to that of the radiologists, with an AUC (area under the ROC curve) of 0.840, compared with 0.814 for the radiologists. The AI system had a higher AUC than 62 of the radiologists and higher sensitivity than 55 of the radiologists. The performance of the AI system was, however, consistently lower than that of the best-performing radiologists in all datasets. The authors suggested that this may be because the radiologists had more information available for assessment, such as prior mammograms, for the majority of cases. However, the team did not have access to the experience levels of the 101 radiologists, and therefore could not determine whether the radiologists who outperformed the AI system also were the most experienced. The researchers suggest that there may be several ways that an AI system designed to detect breast cancer could be used. One possibility is its use as an independent first or second reader in regions with a shortage of radiologists to interpret screening mammograms. It also could be employed in the same manner as CAD systems, as a clinical decision support tool to aid an interpreting radiologist. Sechopoulos also thinks that AI will be useful for identifying normal mammograms that do not need to be read by a screening radiologist. "With the right developments, it could also be used to identify cases that can be read by only one radiologist to confirm that recalling the patient is necessary," he tells Physics World. "These strategies could give radiologists more time to focus on more complex cases, and eventually could be part of the solution needed to implement digital tomosynthesis in screening programs. This is important because tomosynthesis takes considerably longer to read than mammography." Read the source article in physicsworld. Read more »
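As a back-of-the-envelope illustration of the AUC comparison reported above, the snippet below computes the ROC AUC for a hypothetical algorithm and a hypothetical reader on the same labeled cases; the labels and suspicion scores are invented and have no connection to the study's data.

    # Hedged illustration: comparing reader vs. algorithm AUC on shared cases.
    # The labels and scores below are invented for demonstration only.
    from sklearn.metrics import roc_auc_score

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]          # 1 = biopsy-proven malignancy
    ai_scores = [2, 4, 9, 7, 3, 8, 5, 6]       # 1-10 "cancer suspicion" ranking
    reader_scores = [1, 5, 8, 6, 4, 7, 6, 9]   # one radiologist's scores

    print(f"AI AUC:     {roc_auc_score(y_true, ai_scores):.3f}")
    print(f"Reader AUC: {roc_auc_score(y_true, reader_scores):.3f}")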
  • AI at Mastercard: Here Are Current Projects and Services
    Banks and financial institutions are particularly opaque when it comes to how they implement and leverage AI for their business. Mastercard is a key example of this because they use most of their AI applications internally and have only recently begun to make their technology more transparent to the greater financial industry. Since initially adopting AI and machine learning in 2016, Mastercard has continually expanded its AI capabilities, acquiring Brighterion in 2017. In this article, we give an overview of three AI initiatives from Mastercard. We detail the use cases for each one and highlight other possibilities in those areas. Mastercard offers the following AI services:
- Credit card fraud detection
- AI consulting services
- Biometric authentication for purchases and account access
Before we start our overview, we discuss the most important insights from our research into Mastercard's AI projects:
AI at Mastercard – Key Insights
It is clear that Mastercard's most refined application of artificial intelligence is in its fraud detection solutions. Using predictive analytics technology to detect and score transactions on how likely they are to be fraudulent allows for continued learning of new fraud techniques. In addition, their solution allows for the detection of abnormal shopping behavior based on a customer's spending history. Our research led us to Mastercard's artificial intelligence consulting service, an AI development crash course called AI Express. The service is for businesses with little experience in AI that are looking to get acquainted with machine learning quickly. AI Express is intended to allow businesses to get a closer look at the machine learning algorithms Mastercard uses for various services and to learn from that work so that they might develop their own machine learning models. It is also likely that Mastercard provides AI consulting with the data science and machine learning talent at the company, many of whom were hired when it acquired AI consulting firm Brighterion. Mastercard claims that through AI Express it can help companies create tailor-made machine learning models for their specific business problems, but the exact degree of specificity is unclear. It is also likely that data science expertise is required to make sense of Mastercard's machine learning, and so business leaders should not expect to look "under the hood" at Mastercard's AI and easily understand how to implement something similar at their own business. Businesses will likely need to have their own in-house data scientists work with Mastercard's in order to build a model for use in business. Mastercard's AI-based biometric authentication software, Mastercard Identity Check, seems capable of facial recognition and analyzing live video. The software enables two-factor authentication that resembles taking a selfie. First, it verifies the face of the account owner, then prompts the user to blink. The software then detects the blink and uses it to confirm the authenticated face is alive. We begin our overview of Mastercard's AI initiatives with its predictive analytics approach to detecting and preventing fraud:
Credit Card Fraud Detection
Mastercard's most prominent use of artificial intelligence is its fraud detection solution called Decision Intelligence. The software uses predictive analytics to analyze customer and transaction data and score how likely each transaction is to be fraudulent.
These scores could help Mastercard both decline fraudulent transactions in real time and prevent false declines of legitimate transactions. “We are solving a major consumer pain point of being falsely declined when trying to make a purchase,” said Ajay Bhalla, President of Cyber and Intelligence Solutions at Mastercard, regarding the first implementation of Decision Intelligence. It is clear that false declines and fraud prevention were among Mastercard’s chief concerns while developing the model behind Decision Intelligence. Read the source article at emerj. Read more »
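Mastercard has not published the internals of Decision Intelligence, but transaction scoring of the kind described above is commonly built as a supervised classifier that outputs a fraud probability, with a decline threshold tuned to balance caught fraud against false declines. A generic, hedged sketch (the features, data, and threshold are invented):

    # Generic sketch of transaction risk scoring; not Mastercard's actual
    # model. Features, labels, and the threshold are illustrative only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    # Toy features: [amount_usd, hour_of_day, miles_from_home]
    X = rng.random((1000, 3)) * [5000, 24, 3000]
    y = (X[:, 0] > 4000) & (X[:, 2] > 2000)   # toy rule standing in for labels

    model = GradientBoostingClassifier().fit(X, y)

    txn = [[4500, 3, 2500]]                   # large amount, 3 a.m., far away
    score = model.predict_proba(txn)[0, 1]    # fraud probability in [0, 1]

    # A high decline threshold trades missed fraud for fewer false declines.
    print("decline" if score > 0.9 else "approve", f"(score={score:.2f})")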

Artificial Intelligence Technology and the Law

  • Civil Litigation Discovery Approaches in the Era of Advanced Artificial Intelligence Technologies
    For nearly as long as computers have existed, litigators have used software-generated machine output to buttress their cases, and courts have had to manage a host of machine-related evidentiary issues, including deciding whether a machine's output, or testimony based on the output, could fairly be admitted as evidence and to what extent. Today, as litigants begin contesting cases involving aspects of so-called intelligent machines–hardware/software systems endowed with machine learning algorithms and other artificial intelligence-based models–their lawyers and the judges overseeing their cases may need to rely on highly-nuanced discovery strategies aimed at gaining insight into the nature of those algorithms, the underlying source code's parameters and limitations, and the various possible alternative outputs the AI model could have produced given a set of training data and inputs. A well-implemented strategy will lead to understanding how a disputed AI system worked and how it may have contributed to a plaintiff's alleged harm, which is necessary if either party wishes to present an accurate and compelling story to a judge or jury. Parties in civil litigation may obtain discovery regarding any non-privileged matter that is relevant to any party's claim or defense and that is proportional to the needs of the case, unless limited by a court, taking into consideration the following factors expressed in Federal Rules of Civil Procedure (FRCP) Rule 26(b):
- The importance of the issues at stake in the action
- The amount in controversy
- The parties' relative access to relevant information
- The parties' resources
- The importance of the discovery in resolving the issues, and
- Whether the burden or expense of the proposed discovery outweighs its likely benefit.
Evidence is relevant to a party's claim or defense if it tends "to make the existence of any fact that is of consequence to the determination of the action more or less probable than it would be without the evidence." See Fed. R. Evid. 401. Even if the information sought in discovery is relevant and proportional, discovery is not permitted where no need is shown. Standard Inc. v. Pfizer Inc., 828 F.2d 734, 743 (Fed. Cir. 1987).
Initial disclosures
Early in a lawsuit, the federal rules require parties to make initial disclosures involving the "exchange of core information about [their] case." ADC Ltd. NM, Inc. v. Jamis Software Corp., slip op. No. 18-cv-862 (D. NM Nov. 5, 2018). This generally amounts to preliminary identifications of individuals likely to have discoverable information, types and locations of documents, and other information that a party in good faith believes may be relevant to a case, based on each party's claims, counterclaims, facts, and various demands for relief set forth in their pleadings. See FRCP 26(a)(1). A party failing to comply with initial disclosure rules "is not allowed to use" the information or person that was not disclosed "on a motion, at a hearing, or at a trial, unless the failure was substantially justified or is harmless." Baker Hughes Inc. v. S&S Chemical, LLC, 836 F. 3d 554 (6th Cir. 2016) (citing FRCP 37(c)(1)).
In a lawsuit involving an AI technology, individuals likely to have discoverable information about the AI system may include:
- Data scientists
- Software engineers
- Stack engineers/systems architects
- Hired consultants (even if they were employed by a third party)
A company's data scientists may need to be identified if they were involved in selecting and processing data sets, and if they trained, validated, and tested the algorithms at issue in the lawsuit. Data scientists may also need to be identified if they were involved in developing the final deployed AI model. Software engineers, depending on their involvement, may also need to be disclosed if they were involved in writing the machine learning algorithm code, especially if they can explain how parameters and hyperparameters were selected and which measures of accuracy were used. Stack engineers and systems architects may need to be identified if they have discoverable information about how the hardware and software features of the contested AI systems were put together. Of course, task and project managers and other higher-level scientists and engineers may also need to be identified. Some local court rules require initial or early disclosures beyond what is required under Rule 26. See Drone Technologies, Inc. v. Parrot SA, 838 F. 3d 1283, 1295 (Fed. Cir. 2016) (citing US District Court for the Western District of Pennsylvania local rule LPR3.1, requiring, in patent cases, initial disclosure of "source code and other documentation… sufficient to show the operation of any aspects or elements of each accused apparatus, product, device, process, method or other instrumentality identified in the claims pled of the party asserting patent infringement…") (emphasis added). Thus, depending on the nature of the relevant AI technology at issue in a lawsuit, and the jurisdiction in which the lawsuit is pending, a party's initial disclosure burden may involve identifying the location of relevant source code (and who controls it), or the party could be required to make source code and other technical documents available for inspection early in a case (more on source code reviews below). Where the system is cloud-based and operable on a machine learning as a service (MLaaS) platform, a party may need to identify the platform service to which its API requests are piped.
Written discovery requests
Aside from the question of damages, in lawsuits involving an AI technology, knowing how the AI system made a decision or took an action may be highly relevant to a party's case, and thus the party seeking to learn more may want to identify information about at least the following topics, which may be obtained through targeted discovery requests, assuming, as required by FRCP 26(b), the requesting party can justify a need for the information:
- Data sets considered and used (raw and processed)
- Software, including versions earlier and later than the contested version
- The software development process
- Sensors for collecting real-time observational data for use by the AI model
- Source code
- Specifications
- Schematics
- Flow charts
- Formulas
- Drawings
- Other documentation
A party may seek that information by serving requests for production of documents and interrogatories.
In the case of document requests, if permitted by the court, a party may wish to request source code to understand the underlying algorithms used in an AI system, along with the data sets used to train the algorithms (if the parties' dispute turns on a question of a characteristic of the data, e.g., did the data reflect biases? Is it out of date? Is it of poor quality due to labeling errors? etc.). A party may wish to review software versions and software development documents to understand whether best practices were followed. AI model development often involves trial and error, and thus documentation regarding the various inputs used, the algorithm architectures selected and de-selected, and the hyperparameters chosen for the various algorithms–anything related to product development–may be relevant and should be requested. In a lawsuit involving an AI system that uses sensor data (e.g., cameras providing image data to a facial recognition system), a party may want to obtain documentation about the chosen sensor to understand its performance capabilities and limitations. With regard to interrogatories, a party may ask an opposing party to explain its basis for assertions, made in its pleadings or contentions regarding a challenged AI system, such as:
- The basis underlying a contention about the foreseeability by a person (either the system's developer or its end user) of an AI system's errors
- The basis for the facts regarding the transparency of the system from the developer's and/or a user's perspective
- The reasonableness of an assertion that a person could foresee the errors made by the AI system
- The basis underlying a contention that a particular relevant technical standard is applicable to the AI system
- The nature and extent of the contested AI system's testing conducted prior to deployment
- The basis for alleged disparate impacts from an automated decision system
- The identities and involvement of those who made final algorithmic decisions leading to a disparate impact, and how, and how much, they relied on machine-based algorithmic outputs
- The modeled feature space used in developing an AI model and its relationship to the primary decision variables at issue (e.g., job promotion qualifications, eligibility for housing assistance)
- Who makes up the relevant scientific community for the relevant technology, and what are the relevant industry standards to apply to the disputed AI system and its output
Source code reviews
Judges and/or juries are often asked to measure a party's actions against a standard, which may be defined by one or more objective factors. In the case of an AI system, judging whether a standard has been met may involve assessing the nature of the underlying algorithm. Without that knowledge, a party with the burden of proof may only be able to offer evidence of the system's inputs and results, but would have no information about what happened inside the AI system's black box. That may be sufficient when a party's case rests on a comparison of the system's result or impact with a relevant standard; but in some cases, understanding the inner workings of the system's algorithms, and how well they model the real world, could help buttress (or undermine) a party's case in chief and support (or mitigate) a party's potential damages. Thus a source code review may be a necessary component of discovery in some lawsuits.
For example, assume a technical standard for a machine learning-based algorithmic decision system requires a minimum accuracy (i.e., recall, precision, and/or f1 score), and the developer's documentation demonstrates that its model met that standard. An inspection of the source code, however, might reveal that the "test size" parameter was set too low (compared to what is customary), meaning most of the available data was used to train the model and too little was held out to reliably detect overfitting (and maybe the developer forgot to cross-validate). A source code review might reveal those potential problems. A source code review might also reveal which features were used to create the model and how many features were used compared to the number of data observations, both of which might reveal that the developer overlooked an important feature or used a feature that caused the model to reflect an implicit bias in the data.
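To make the "test size" concern above concrete, here is a hedged sketch of the kind of red flag an inspecting expert might look for, together with the cross-validation the developer arguably should have run; the data and the 0.05 split are invented, and customary splits vary by domain.

    # Hedged sketch: a red flag a source code review might surface.
    # Data and the 0.05 split are invented; conventions vary by domain.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Suspicious: only 5% of the data is held out, so the reported accuracy
    # rests on about 25 examples and overfitting may go undetected.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.05,
                                              random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print("tiny-holdout accuracy:", model.score(X_te, y_te))

    # Cross-validation gives a far less fragile estimate of performance.
    print("5-fold CV accuracy:",
          cross_val_score(LogisticRegression(), X, y, cv=5).mean())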
Because of source code's proprietary and trade secret nature, parties requested to produce their code may resist inspection over concerns about the code getting out into the wild. The burden falls to the requestor to establish a need and that procedures will safeguard the source code. Cochran Consulting, Inc. v. Uwatec USA, Inc., 102 F.3d 1224, 1231 (Fed. Cir. 1996) (vacating a discovery order pursuant to FRCP 26(b) requiring the production of computer-programming code because the party seeking discovery had not shown that the code was necessary to the case); People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018) (concluding that the "black box" nature of software is not itself sufficient to warrant its production); FRCP 26(c)(1)(G) (a court may impose a protective order for trade secrets specifying how they are revealed). Assuming a need for source code has been demonstrated, the parties will need to negotiate the terms of a protective order defining what constitutes source code and how source code reviews are to be conducted. See Vidillion, Inc. v. Pixalate, Inc., slip. op. No. 2:18-cv-07270 (C.D. Cal. Mar. 22, 2019) (describing terms and conditions for disclosure and review of source code, including production at a secure facility, use of non-networked standalone computers, exclusion of recording media/recording devices by inspectors during review, and handling of source code as exhibits during depositions). In terms of definitions, it is not unusual to define source code broadly, relying on principles of trade secret law, to include things that the producing party believes in good faith are not generally known to others and have significant competitive value such that unrestricted disclosure to others would harm the producing party, and which the producing party would not normally reveal to third parties except in confidence or has undertaken with others to maintain in confidence. Such things may include:
- Computer instructions (reflected in, e.g., .ipynb or .py files)
- Data structures
- Data schema
- Data definitions (that can be sharable or expressed in a form suitable for input to an assembler, compiler, translator, or other data processing module)
- Graphical and design elements (e.g., SQL, HTML, XML, XSL, and SGML files)
In terms of procedure, source code inspections are typically conducted at the producing party's law firm or possibly at the developer's facility, where the inspection can be monitored to ensure compliance with the court's protective order. The inspectors will typically comprise a lawyer for the requesting party along with a testifying expert who should be familiar with multiple programming languages and developer tools. Keeping in mind that the inspection machine will not have access to any network, and that no recordable media or recording devices will be allowed in the space where the inspection machine is located, the individuals performing the review will need to ensure they've requested that all the resources needed to facilitate inspection testing be installed locally, including applications to create virtual servers to simulate remote API calls, if that is an element of the lawsuit. Thus, the reviewers might request in advance that the inspection machine be loaded with:
- The above-listed files
- Relevant data sets
- A development environment such as a Jupyter notebook or similar application to facilitate opening Python or other source code files and data sets.
In some cases, it may be reasonable to request a GPU-based machine to create a run-time environment for instances of the AI model, to explore how the code operates and how the model handles inputs and makes decisions/takes actions. Depending on the nature of the disputed AI system, the relevant source code may be embedded on hardware devices (e.g., sensors) that the parties do not have access to. For example, in a case involving the cameras and/or lidar sensors installed on an autonomous vehicle or as part of a facial recognition system, the party seeking to review source code may need to obtain third-party discovery via a subpoena duces tecum, as discussed below.
Subpoenas (Third-Party Discovery)
If source code is relevant to a lawsuit, and neither party has access to it, one or both of them may turn to a third-party software developer/authorized seller for production of the code, and seek discovery from that entity through a subpoena duces tecum. It is not unusual for third parties to resist production on the basis that doing so would be unduly burdensome, but just as often they will resist production on the basis that their software is protected by trade secrets and/or is proprietary and that disclosing it to others would put their business interests at risk. Thus, the party seeking access to the source code in a contested AI lawsuit should be prepared for discovery motions in the district where the third-party software developer/authorized seller is being asked to comply with a subpoena. A court "may find that a subpoena presents an undue burden when the subpoena is facially overbroad." Wiwa, 392 F.3d at 818. Courts have found that a subpoena for documents from a non-party is facially overbroad where the subpoena's document requests "seek all documents concerning the parties to [the underlying] action, regardless of whether those documents relate to that action and regardless of date"; "[t]he requests are not particularized"; and "[t]he period covered by the requests is unlimited." In re O'Hare, Misc. A. No. H-11-0539, 2012 WL 1377891 at *2 (S.D. Tex. Apr. 19, 2012).
Additionally, FRCP 45(d)(3)(B) provides that, "[t]o protect a person subject to or affected by a subpoena, the court for the district where compliance is required may, on motion, quash or modify the subpoena if it requires: (i) disclosing a trade secret or other confidential research, development, or commercial information." But "the court may, instead of quashing or modifying a subpoena, order appearance or production under specified conditions if the serving party: (i) shows a substantial need for the testimony or material that cannot be otherwise met without undue hardship; and (ii) ensures that the subpoenaed person will be reasonably compensated." FRCP 45(d)(3)(C). Thus, in the case of a lawsuit involving an AI system in which one or more of the parties can demonstrate a substantial need to understand how the system made a decision or took a particular action, a narrowly-tailored subpoena duces tecum may be used to gain access to the third party's source code. To assuage the producing party's proprietary/trade secret concerns, the third party may seek a court-issued protective order outlining terms covering the source code inspection.
Depositions
Armed with the AI-specific written discovery responses, document production, and an understanding of an AI system's source code, counsel should be prepared to ask questions of an opponent's witnesses, which in turn can help fill gaps in a party's understanding of the facts relevant to its case. FRCP 30 governs depositions by oral examination of a party, party witness, or third party to a matter. In a technical deposition of a fact or party witness, such as a data scientist, machine learning engineer, software engineer, or stack developer, investigating the algorithm behind an AI model will help answer questions about how and why a particular system caused a particular result that is material to the litigation. Thus, the deposition taker would want to inquire about some of the following issues:
- Which algorithms were considered? Were they separately tested? How were they tested?
- Why was the final algorithm chosen?
- Did an independent third party review the algorithm and model output?
With regard to the data set used to create the AI model, the deposition taker will want to explore the following:
- What data sets were used for training, validation, and testing of the algorithm?
- How were testing and validation conducted, and were alternatives considered?
- What sort of exploratory data analysis was performed on the data set (or sets) to assess usability, quality, and implicit bias?
- Was the data adequate for the domain that the developer was trying to model, and could other data have been used?
With regard to the final model, the deposition taker may want to explore the following issues:
- How old is the model? If it models a time series (e.g., a model based on historical data that tends to increase over time), has the underlying distribution shifted enough that the model is now outdated? If newer data were not considered, why not?
- How accurate is the model, and how is accuracy measured?
Finally, if written discovery revealed that an independent third party reviewed the model before it was deployed, the deposition taker may want to explore the details about the scope of that testing and its results. If sensors are used as the source for new observational data fed to an AI model, the deposition taker may want to learn why those sensors were chosen, how they operate, what their limitations are, and what alternative sensors could have been used instead.
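Several of the deposition questions above (how accuracy is measured, whether the underlying distribution has shifted) map onto standard checks. A hedged sketch of both, using invented data:

    # Hedged sketch: two checks behind common deposition questions.
    # All data below is invented for illustration.
    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.metrics import precision_recall_fscore_support

    # "How is accuracy measured?": precision, recall, and f1 on a test set.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                  average="binary")
    print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")

    # "Has the underlying distribution shifted?": a two-sample KS test
    # comparing a feature at training time against recent production data.
    rng = np.random.default_rng(1)
    train_feature = rng.normal(0.0, 1.0, 1000)   # feature at training time
    recent_feature = rng.normal(0.6, 1.0, 1000)  # same feature, drifted up
    result = ks_2samp(train_feature, recent_feature)
    print(f"KS p-value: {result.pvalue:.4f}")    # tiny value suggests staleness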
In an expert deposition, the goal shifts to probing the expert's assumptions, inputs, applications, outputs, and conclusions for weaknesses. If an expert prepared an adversarial or counterfactual model to dispute the contested AI system or an opposing expert's model, a litigator should keep in mind the factors in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) and FRE 702 when deposing the expert. For example, the following issues may need to be explored during the deposition:
- How was the adversarial or counterfactual model developed?
- Can the expert's model itself be challenged objectively for reliability?
- Were the model and technique used subject to peer review and/or publication?
- What was the model's known or potential rate of error when applied to facts relevant to the lawsuit?
- What technical standards apply to the model?
- Is the model based on techniques or theories that have been generally accepted in the scientific community?
Conclusion
This post explores a few approaches to fact and expert discovery that litigants may want to consider in a lawsuit where an AI technology is contested, though the approaches here are by no means exhaustive of the scope of discovery that one might need in a particular case. Read more »
  • Congress, States Introduce New Laws for Facial Recognition, Face Data – Part II
    In Part I, new proposed federal and state laws governing the collection, storage, and use of face (biometric) data in connection with facial recognition technology were described. If enacted, those new laws would join Illinois' Biometric Information Privacy Act (BIPA), California's Consumer Privacy Act (CCPA), and Texas' "biometric identifier" regulations in the governance of face-related data. It is reasonable for businesses to assume that other state laws and regulations will follow, and with them a shifting legal landscape creating uncertainty and potential legal risks. A thoughtful and proactive approach to managing the risks associated with the use of facial recognition technology will distinguish those businesses that avoid adverse economic and reputational impacts from those that could face lawsuits and unwanted media attention. Businesses with a proactive approach to risk management will of course already be aware of the proposed new laws that were described in Part I. S. 847 (the federal Commercial Facial Recognition Privacy Act of 2019) and H1654-2 (Washington State's legislative house bill) suggest what's to come, but biometric privacy laws like those in California, Texas, and Illinois have been around for a while. Companies that do business in Illinois, for instance, already know that BIPA regulates the collection of biometric data, including face scans, and has generated much litigation due to its private right of action provision. Maintaining awareness of the status of existing and proposed laws will be important for businesses that collect, store, and use face data. At the same time, however, federal governance of AI technologies under the Trump Administration is expected to favor a policy and standards governance approach over a more onerous command-and-control-type regulatory agency rulemaking approach (whose products the Trump administration often refers to as "barriers"). The takeaway for businesses is that the rulemaking provisions of S. 847 may look quite different if the legislation makes it out of committee and is reconciled with other federal bills, adding to the uncertain landscape. But even in the absence of regulations (or at least regulations with teeth) and the threat of private lawsuits (neither S. 847 nor H1654-2 provides a private right of action for violations), managing risk may require businesses that use facial recognition technology, or that directly or indirectly handle face data, or that otherwise use the results of a facial recognition technology, to at least minimally self-regulate. Those that don't, or those that follow controversial practices involving monetizing face data at the expense of trust, such as obscuring how they use consumer data, are more likely to see a backlash. In most cases, companies handling data already have privacy policies and terms of service (TOS) or end-user licensing agreements that address user data and privacy issues. Those documents could be frequently reviewed and updated to address face data and facial recognition technology concerns. Moreover, "camera in use" notices are not difficult to implement in the case of entities that deploy cameras for security, surveillance, or other reasons. Avoiding legalese and the use of vague or uncertain terms in those documents and notices could help reduce risks.
H1654-2 provides that a meaningful privacy notice should include: (a) the categories of personal data collected by the controller; (b) the purposes for which the categories of personal data are used and disclosed to third parties, if any; (c) the rights that consumers may exercise, if any; (d) the categories of personal data that the controller shares with third parties, if any; and (e) the categories of third parties, if any, with whom the controller shares personal data. In the case of camera notices, prominently displaying the notice is standard, but companies should also be mindful of the differences between S. 847 and H1654-2 concerning notice and implied consent: the former may require notice and separate consent, while the latter may provide that notice alone equates to implied consent under certain circumstances. Appropriate risk management also means that business entities supplying face data sets to others, for machine learning development, training, and testing purposes, understand the source of their data and the data's potential inherent biases. Those businesses will be able to articulate the same to users of the data (who may insist on certain assurances about the data's quality and utility if not provided on an as-is basis). Ignoring potential inherent biases in data sets is inconsistent with a proactive and comprehensive risk management strategy. Both S. 847 and H1654-2 refer to a human-in-the-loop review process in certain circumstances, such as in cases where a final decision based on the output of a facial recognition technology may result in a reasonably foreseeable and material physical or financial harm to an end user or could be unexpected or highly offensive to a reasonable person. Although "reasonably foreseeable," "harm," and "unexpected or highly offensive" are undefined, a thoughtful approach to managing risk and mitigating damages might consider ways to implement human reviews mindful of the federal and state consumer protection, privacy, and civil rights laws that could be implicated absent use of a human reviewer. The White House's AI technology use policy and S. 847 refer to the National Institute of Standards and Technology (NIST), which could play a large role in AI technology governance. Learning about NIST's current standards-setting approach and its AI model evaluation process could help companies seeking to do business with the federal government. Of course, independent third parties could also evaluate a business' AI models for bias, problematic data sets, model leakiness, and other potential problems that might lead to litigation. While not every situation may require such extra scrutiny, the ability to recognize and avoid risks might justify the added expense. As noted above, neither S. 847 nor SB 5376 includes a private right of action like BIPA's, but the new laws could allow state attorneys general to bring civil actions against violators. Businesses should consider the possibility of such legal actions, as well as the other potential risks from the use of facial recognition technology and face data collection, when assessing the risk factors that must be discussed in certain SEC filings. Above are just a few of the factors and approaches that businesses could consider as part of a risk management approach to the use of facial recognition technology in the face of a changing legal landscape. Read more »
  • Congress, States Introduce New Laws for Facial Recognition, Face Data – Part I
    Companies developing artificial intelligence-based products and services have been on the lookout for laws and regulations aimed at their technology. In the case of facial recognition, new federal and state laws seem closer than ever. Examples include Washington State's recent data privacy and facial recognition bill (SB 5376; recent action on March 6, 2019) and the federal Commercial Facial Recognition Privacy Act of 2019 (S. 847, introduced March 14, 2019). If enacted, these new laws would join others like Illinois' Biometric Information Privacy Act (BIPA) and California's Consumer Privacy Act (CCPA) in governing facial recognition systems and the collection, storage, and use of face data. But even if those new bills fail to become law, they underscore how the technology will be regulated in the US and suggest, as discussed below, the kinds of litigation risks organizations may confront in the future.
What is Face Data and Facial Recognition Technology?
Definitions of face data often involve information that can be associated with an identified or identifiable person. Face data may be supplied by a person (e.g., an uploaded image), purchased from a third party (i.e., a data broker), obtained from publicly-available data sets, or collected via audio-video equipment (e.g., using surveillance cameras). Facial recognition refers to extracting data from a camera's output signal (still image or video), locating faces in the image data (an object detection process typically done using machine learning algorithms), picking out unique features from the faces that can be used to tell them apart from other people (e.g., facial landmarks), and comparing those features against the faces of people already known to see if there is a match. Advances in the field of computer vision, including a machine learning technique called convolutional neural networks (ConvNets or CNNs), have turned what used to be a laborious manual process of identifying faces in image data into a highly accurate and automated process performed by machines in near real-time. Online face image sources such as Facebook, Flickr, Twitter, Instagram, YouTube, news media websites, and other websites, as well as face data images collected by government agencies from, among other sources, airport cameras, provide the data used to train and test CNNs.
Why are Lawmakers Addressing Facial Recognition?
Among the several AI technologies attracting lawmakers' attention, facial recognition seems to top the list due in part to its rapidly-expanding use, especially in law enforcement, and the civil and privacy rights implications associated with the collection and use of face data, often without consent, by both private and public organizations. From a privacy perspective, Microsoft's President Brad Smith, writing in 2018, expressed a common refrain by those concerned about facial recognition: unconsented surveillance. "Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like 'Minority Report,' 'Enemy of the State' and even '1984' – but now it's on the verge of becoming possible."
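The recognition pipeline described earlier in this article (locate faces, extract distinguishing features, compare against known faces) reduces to a few steps. The sketch below is a generic, hedged outline; the detector and the embedding model, which in a real system would be trained CNNs, are replaced here by trivial stubs so the example runs.

    # Hedged outline of the detect/extract/compare pipeline described above.
    # detect_faces() and embed_face() are stand-in stubs, not a real system.
    import numpy as np

    def detect_faces(image):
        """Stub for a real detector that returns cropped face regions."""
        return [image]  # pretend the whole image is one face

    def embed_face(face_crop):
        """Stub for a CNN that maps a face crop to a feature vector."""
        return np.asarray(face_crop, dtype=float).ravel()[:4]

    def match_faces(image, gallery, threshold=0.9):
        """Locate faces, extract features, and compare to known people."""
        matches = []
        for crop in detect_faces(image):             # 1. detection
            emb = embed_face(crop)                   # 2. feature extraction
            for name, known in gallery.items():      # 3. comparison
                sim = np.dot(emb, known) / (
                    np.linalg.norm(emb) * np.linalg.norm(known))
                if sim > threshold:                  # cosine similarity
                    matches.append(name)
        return matches

    gallery = {"alice": np.array([1.0, 0.0, 1.0, 0.5])}
    print(match_faces([[1.0, 0.0], [1.0, 0.5]], gallery))  # -> ['alice']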
Beyond surveillance, others have expressed concerns about the security of face data. Unlike non-biometric data, which is replaceable (think credit card numbers or passwords), face data represents intimate, unique, and irreplaceable characteristics of a person. Once hackers have maliciously exfiltrated a person's face data from a business' computer system, the person's privacy is threatened. From a civil rights perspective, known problems with bias in facial recognition systems have been documented. This issue became a headline in July 2018 when the American Civil Liberties Union (ACLU) reported that a widely-used, commercially-available facial recognition program "incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime." The report noted that the members of Congress who were "falsely matched with the mugshot database [] used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country." The report also found that the mismatches disproportionately involved members who are people of color, thus raising questions about the accuracy and quality of the tested facial recognition technique, as well as revealing its possible inherent bias. <https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28> Bias may arise if a face data set used to train a CNN contains predominantly white male face images, for example. A facial recognition system built on that training data may not perform well (or "generalize") when asked to classify (identify) faces that are not white and male. The bias issue has led many to call for the meaningful, ethics-based design of AI systems. But even beneficial uses of facial recognition technology and face data collection and use have been criticized, in part because people whose face data are being collected and used are typically not given an opportunity to give their consent (and in many cases do not even know their face data is being used). Thus, automatically identifying people in uploaded images (a process called image "tagging"), improving a person's experience at a public venue, providing access to a private computer system or a physical location, establishing an employee's working hours and their movements for safety purposes, and personalizing advertisements and newsfeeds displayed on a computer user's browser, while arguably beneficial uses of face data, are often conducted without a user's consent (or access/use is conditioned upon giving consent) and are thus criticized. As much as the many concerns about facial recognition may have piqued lawmakers' interest in regulating face data, legislation like the bills mentioned above is just as likely to arise because stakeholders and vocal opponents have called for more certainty in the legal landscape. Microsoft, for one, in 2018 called for regulating facial recognition technology. "The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself," Brad Smith wrote, his words clearly directed to Capitol Hill as well as state lawmakers in Olympia. "And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission."
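A hedged sketch of the kind of audit that can surface the subgroup disparity described in the ACLU report: compute the false match rate separately for each demographic group in a labeled test set. The records below are invented; real audits use large labeled sets.

    # Hedged sketch: per-group false match rates, as in a bias audit.
    # The records are invented: (group, predicted_match, actually_same_person)
    from collections import defaultdict

    results = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_a", True, True),  ("group_b", True, False),
        ("group_b", True, False), ("group_b", False, False),
    ]

    false_matches = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted_match, same_person in results:
        if not same_person:              # only true non-matches can be
            non_matches[group] += 1      # falsely matched
            if predicted_match:
                false_matches[group] += 1

    for group in sorted(non_matches):
        rate = false_matches[group] / non_matches[group]
        print(f"{group}: false match rate = {rate:.0%}")
    # A large gap between groups points toward skewed training data.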
Comparing the Washington, DC and Washington State Bills
[For a summary of Illinois' face data privacy law, click here.] If S. 847 becomes law, it would cover any person (other than the federal government, state and local governments, law enforcement agencies, a national security agency, or an intelligence agency) that collects, stores, or processes facial recognition data from still or video images, including any unique attribute or feature of the face of a person that is used by facial recognition technology for the purpose of assigning a unique, persistent identifier or for the unique personal identification of a specific individual. SB 5376, in contrast, would cover any natural or legal persons which, alone or jointly with others, determine the purposes and means of the processing of personal data by a processor, including personal data from a facial recognition technology. While the federal bill would not cover government agency use, SB 5376 would permit Washington state and local government agencies, including law enforcement agencies, to conduct ongoing surveillance of specified individuals in public spaces only if such use is in support of law enforcement activities and either (a) a court order has been obtained to permit the use of facial recognition services for that ongoing surveillance, or (b) there is an emergency involving imminent danger or risk of death or serious physical injury to a person. On the issue of consent, S. 847 would require a business that knowingly uses facial recognition technology to collect face data to obtain affirmative consent (opt-in consent) from a person. To the extent possible, if facial recognition technology is present, a business must provide to the person a concise notice that facial recognition technology is present and, if contextually appropriate, where the person can find more information about the business' use of facial recognition technology, along with documentation that includes general information explaining the capabilities and limitations of the technology in terms that the person is able to understand. SB 5376 would also require controllers to obtain consent from consumers prior to deploying facial recognition services in physical premises open to the public. The placement of a conspicuous notice in physical premises that clearly conveys that facial recognition services are being used would constitute a consumer's consent to the use of such facial recognition services when that consumer enters premises bearing such notice. Under S. 847, obtaining affirmative consent would be effective only if a business makes available to a person a notice that describes the specific practices of the business, in terms that persons are able to understand, regarding the collection, storage, and use of facial recognition data. These include reasonably foreseeable purposes, or examples, for which the business collects and shares information derived from facial recognition technology or uses facial recognition technology, as well as information about the practice of data retention and de-identification, and whether a person can review, correct, or delete information derived from facial recognition technology.
Under SB 5376, processors that provide facial recognition services would be required to provide documentation that includes general information explaining the capabilities and limitations of face recognition technology in terms that customers and consumers can understand. S. 847 would prohibit a business from knowingly using a facial recognition technology to discriminate against a person in violation of applicable federal or state law (presumably civil rights laws, consumer protection laws, and others), from repurposing facial recognition data for a purpose different from those presented to the person, and from sharing the facial recognition data with an unaffiliated third party without affirmative consent (separate from the opt-in affirmative consent noted above). SB 5376 would prohibit controllers from using the facial recognition services provided by processors to unlawfully discriminate under federal or state law against individual consumers or groups of consumers. S. 847 would require meaningful human review prior to making any final decision based on the output of facial recognition technology if the final decision may result in a reasonably foreseeable and material physical or financial harm to an end user or may be unexpected or highly offensive to a reasonable person. SB 5376 would require controllers that use facial recognition for profiling to employ meaningful human review prior to making final decisions based on such profiling where those final decisions produce legal effects concerning consumers or similarly significant effects concerning consumers. Decisions producing legal effects or similarly significant effects include, but are not limited to, denial of consequential services or support, such as financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, and health care services. S. 847 would require a regulated business that makes a facial recognition technology available as an online service to make available an application programming interface (API) to enable an independent testing company to conduct reasonable tests of the facial recognition technology for accuracy and bias. Similarly, SB 5376 would require providers of commercial facial recognition services that make their technology available as an online service for developers and customers to use in their own scenarios to make available an API or other technical capability, chosen by the provider, to enable third parties that are legitimately engaged in independent testing to conduct reasonable tests of those facial recognition services for accuracy and unfair bias. S. 847 would provide exceptions for certain facial recognition technology uses, including a product or service designed for personal file management or photo or video sorting or storage, if the facial recognition technology is not used for the unique personal identification of a specific individual, as well as uses involving the identification of public figures for journalistic media created for public interest. The law would also provide exceptions for the identification of public figures in copyrighted material for theatrical release, for use in an emergency involving imminent danger or risk of death or serious physical injury to an individual, and for certain security applications.
Even so, the noted exceptions would not permit businesses to conduct the mass scanning of faces in spaces where persons do not have a reasonable expectation that facial recognition technology is being used on them. SB 5376 would provide exceptions in the case of complying with federal, state, or local laws, rules, or regulations, or with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities. The law would also provide exemptions to cooperate with law enforcement agencies concerning conduct or activity that the controller or processor reasonably and in good faith believes may violate federal, state, or local law, or to investigate, exercise, or defend legal claims, or to prevent or detect identity theft, fraud, or other criminal activity, or to verify identities. Other exceptions or exemptions are also provided. Under S. 847, violations of the law would be treated as an unfair or deceptive act or practice under Section 18(a)(1)(B) of the Federal Trade Commission Act (15 USC 57a(a)(1)(B)). The FTC would enforce the new law and would have authority to assert its penalty powers pursuant to 15 USC 41 et seq. Moreover, state attorneys general, or any other officer of a state who is authorized by the state to do so, may, upon notice to the FTC, bring a civil action on behalf of state residents upon a belief that an interest of the residents has been or is being threatened or adversely affected by a practice of a business covered by the new law that violates one of the law's prohibitions. The FTC may intervene in such a civil action. SB 5376 would provide that the state's attorney general may bring an action in the name of the state, or as parens patriae on behalf of persons residing in the state, to enforce the law. S. 847 would also require the FTC to consult with the National Institute of Standards and Technology (NIST) and to promulgate regulations within 180 days after enactment describing basic data security, minimization, and retention standards; defining actions that are harmful and highly offensive; and expanding the list of exceptions noted above in cases where it is impossible for a business to obtain affirmative consent from, or provide notice to, persons. S. 847 would not preempt tougher state laws covering facial recognition technology and the collection and use of face data, or other state or federal privacy and security laws. In Part II of this post, the impact of facial recognition and face data regulation on businesses will be discussed. Read more »
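The independent-testing API provisions in both bills contemplate something like the following: a third party submitting a labeled benchmark to the provider's online service and tallying accuracy by group. The endpoint, payload shape, and benchmark below are assumptions for illustration, not any provider's actual interface.

    # Hedged sketch of the third-party API testing both bills contemplate.
    # The endpoint URL and response format are hypothetical assumptions.
    import requests

    BENCHMARK = [  # (image_path, true_identity, demographic_group); invented
        ("img/0001.jpg", "person_17", "group_a"),
        ("img/0002.jpg", "person_03", "group_b"),
    ]

    def test_service(api_url: str) -> dict:
        correct, total = {}, {}
        for path, truth, group in BENCHMARK:
            with open(path, "rb") as f:
                # Hypothetical API: POST an image, get {"identity": ...} back.
                resp = requests.post(api_url, files={"image": f}).json()
            total[group] = total.get(group, 0) + 1
            if resp.get("identity") == truth:
                correct[group] = correct.get(group, 0) + 1
        return {g: correct.get(g, 0) / total[g] for g in total}

    # print(test_service("https://example.com/v1/identify"))  # per-group accuracy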
  • Government Plans to Issue Technical Standards For Artificial Intelligence Technologies
On February 11, 2019, the White House published a plan for developing and protecting artificial intelligence technologies in the United States, citing economic and national security concerns among other reasons for the action. Coming two years after Beijing's 2017 announcement that China intends to be the global leader in AI by 2030, President Trump's Executive Order on Maintaining American Leadership in Artificial Intelligence lays out five principles for AI, including "development of appropriate technical standards and reduc[ing] barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today's industries." The Executive Order, which lays out a framework for an "American AI Initiative" (AAII), tasks the White House's National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence, established in 2018, with identifying federal government agencies to develop and implement the technical standards (so-called "implementing agencies").

Unpacking the AAII's technical standards principle suggests two things. First, federal governance of AI under the Trump Administration will favor a policy-and-standards governance approach over a more onerous command-and-control-type approach of agency rulemaking leading to regulations (which the Trump administration often refers to as "barriers"). Second, no technical standards will be adopted at the federal level that stand in the way of the development or use of AI technologies if they impede economic and national security goals.

So what sort of technical standards might the Select Committee on AI and the implementing agencies come up with? And how might those standards impact government agencies, government contractors, and even private businesses from a legal perspective? The AAII is short on answers to those questions, and we won't know more until at least August 2019, when the Secretary of Commerce, through the Director of the National Institute of Standards and Technology (NIST), is required by the AAII to issue a plan "for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies." Even so, it is instructive to review some relevant technical standards and related legal issues in anticipation of what might lie ahead for the United States AI industry.

A survey of technical standards used across a spectrum of industries shows that they can take many forms, but they can often be classified as either prescriptive or performance-based. Pre-determined prescriptive metrics may specify requirements for things like accuracy, quality, output, materials, composition, and consumption. In the AI space, a prescriptive standard could involve a benchmark for classification accuracy (loss or error) using a standardized data set (i.e., how well the system works), or a numerical upper limit on power consumption, latency, weight, and size. Prescriptive standards can be one-size-fits-all, or they can vary. Performance-based standards describe practices (minimum, best, commercially reasonable, etc.) focusing on results to be achieved. In many situations, the performance-based approach provides more flexibility compared to using prescriptive standards.
In the context of AI, a performance-based standard could require a computer vision system to detect all objects in a specified field of view, and tag and track them for a period of time. How the developer achieves that result matters less under a performance-based standard (a sketch contrasting the two approaches appears below).

Technical standards may also specify requirements for the completion of risk assessments to numerically compare an AI system's expected benefits and impacts to various alternatives. Compliance with technical standards may be judged by advisory committees who follow established procedures for independent and open review. Procedures may be established for enforcing technical standards when non-compliance is observed. Depending on the circumstances, technical standards may be published for the public to see, or they may be maintained in confidence (e.g., in the case of national security). Technical standards are often reviewed on an ongoing or periodic basis to assess the need for revisions reflecting changes in previous assumptions (important when rapid technological improvements or shifts in priorities occur).

Under the direction of the AAII, the White House's Select Committee and the various designated implementing agencies could develop new technical standards for AI technologies, but they could also adopt (and possibly modify) standards published by others. The International Organization for Standardization (ISO), American National Standards Institute (ANSI), National Institute of Standards and Technology (NIST), and the Institute of Electrical and Electronics Engineers (IEEE) are among the private and public organizations that have developed or are developing AI standards or guidance. Individual state legislatures, academic institutions, and tech companies have also published guidance, principles, and areas of concern that could be applicable to the development of technical and non-technical standards for AI technologies. By way of example, the ISO's technical standard for "big data" architecture includes use cases for deep learning applications and large-scale unstructured data collection. The Partnership on AI, a private non-profit organization whose board consists of representatives from IBM, Google, Microsoft, Apple, Facebook, Amazon, and others, has developed what it considers "best practices" for AI technologies.

Under the AAII, the role of technical standards, in addition to helping build an AI industry, will be to "minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies." It is hard to imagine a purely technical standard addressing trust and confidence, though a non-technical standards-setting process could address those issues by, for example, introducing measures related to fairness, accountability, and transparency. Consider the example of delivering AI-based healthcare services at Veterans Administration facilities, where trust and confidence could be reflected in non-technical standards that provide for the publication of clear, understandable explanations about how an AI system works and how it made a decision that affected a patient's care. Addressing trust and confidence could also be reflected in requirements for open auditing of AI systems. The IEEE's "Ethically Aligned Design" reference considers these and related issues.
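Returning to the prescriptive/performance distinction, here is a minimal sketch of the difference in form; the numeric limits and the object-detection task are invented for illustration, not drawn from any proposed standard:

```python
# Illustrative only: the thresholds and the task below are assumptions chosen
# to show the difference in *form* between the two kinds of standards.

def meets_prescriptive_standard(accuracy: float, latency_ms: float) -> bool:
    # Prescriptive: fixed, pre-determined metrics the system must satisfy,
    # e.g., benchmark accuracy on a standardized data set plus a latency cap.
    return accuracy >= 0.95 and latency_ms <= 50.0

def meets_performance_standard(detected: set, objects_in_view: set) -> bool:
    # Performance-based: a result to be achieved ("detect all objects in the
    # field of view"); how the developer gets there is left unspecified.
    return objects_in_view <= detected  # subset test: nothing was missed
```

Auditing against the first function requires only the reported metrics; auditing against the second requires agreement on what counts as the field of view and as a detection, which is where much of the flexibility (and ambiguity) of performance-based standards lies.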
Another challenge in developing technical standards is to avoid incorporating patented technologies "essential" to the standards adopted by the government, or, if that is unavoidable, to develop rules for disclosure and licensing of essential patents. As the court in Apple v. Motorola explained, "[s]ome technological standards incorporate patented technology. If a patent claims technology selected by a standards-setting organization, the patent is called an 'essential patent.' Many standards-setting organizations have adopted rules related to the disclosure and licensing of essential patents. The policies often require or encourage members of the organization to identify patents that are essential to a proposed standard and to agree to license their essential patents on fair, reasonable and nondiscriminatory terms to anyone who requests a license. (These terms are often referred to by the acronyms FRAND or RAND.) Such rules help to insure that standards do not allow the owners of essential patents to abuse their market power to extort competitors or prevent them from entering the marketplace." See Apple, Inc. v. Motorola Mobility, Inc., 886 F. Supp. 2d 1061 (W.D. Wis. 2012). Given the proliferation of new AI-related US patents issued to tech companies in recent years, the likelihood that government technical standards will encroach on some of those patents seems high.

For government contractors, AI technical standards could be imposed through the government contracting process. A contracting agency could incorporate new AI technical standards by reference in government contracts, and those standards would flow through to individual task and work orders performed by contractors under those contracts. Thus, government contractors would need to review and understand the technical standards in the course of executing a written scope of work to ensure they are in compliance. Sponsoring agencies would likely be expected to review contractor deliverables to measure compliance with applicable AI technical standards. In cases of non-compliance, contracting officials and their sponsoring agency would be expected to deploy their enforcement authority to ensure problems are corrected, which could include monetary penalties assessed against contractors.

Although private businesses (i.e., those that are not government contractors) may not be directly affected by agency-specific technical standards developed under the AAII, customers of those private businesses could, absent other relevant or applicable technical standards, use the government's AI technical standards as a benchmark when evaluating a business's products and services. Moreover, even if federal AI-based technical standards do not directly apply to private businesses, Congress could legislatively mandate the development of similar or different technical and non-technical standards and other requirements applicable to a business's AI technologies sold and used in commerce.

The president's Executive Order on AI has turned an "if" into a "when" in the context of federal governance of AI technologies. If you are a stakeholder, now is a good time to put resources into closely monitoring developments in this area to prepare for possible impacts. Read more »
  • Washington State Seeks to Root Out Bias in Artificial Intelligence Systems
The harmful effects of biased algorithms have been widely reported. Indeed, some of the world's leading tech companies have been accused of producing applications, powered by artificial intelligence (AI) technologies, that were later discovered to exhibit certain racial, cultural, gender, and other biases. Some of the anecdotes are quite alarming, to say the least. And while not all AI applications have these problems, it only takes a few concrete examples before lawmakers begin to take notice.

In New York City, lawmakers began addressing algorithmic bias in 2017 with the introduction of legislation aimed at eliminating bias from algorithmic-based automated decision systems used by city agencies. That effort led to the establishment of a Task Force in 2018 under Mayor de Blasio's office to examine the issue in detail. A report from the Task Force is expected this year.

At the federal level, an increased focus by lawmakers on algorithmic bias issues began in 2018, as reported previously on this website (link) and elsewhere. Those efforts, by both House and Senate members, focused primarily on gathering information from federal agencies like the FTC and issuing reports highlighting the bias problem. Expect congressional hearings in the coming months.

Now, Washington State lawmakers are addressing bias concerns. In companion bills SB-5527 and HB-1655, introduced on January 23, 2019, lawmakers in Olympia drafted a rather comprehensive piece of legislation aimed at governing the use of automated decision systems by state agencies, including the use of automated decision-making in the triggering of automated weapon systems. As many in the AI community have discussed, eliminating algorithmic bias requires consideration of fairness, accountability, and transparency, issues the Washington bills appear to address. But the bills also have teeth, in the form of a private right of action allowing those harmed to sue.

Although the aspirational language of legislation often provides only a cursory glimpse at how stakeholders might be affected under a future law, especially where, as here, an agency head is tasked with producing implementing regulations, an examination of automated decision system legislation like Washington's is useful if only to understand how states and the federal government might choose to regulate aspects of AI technologies and their societal impacts.

Purpose and need for anti-bias algorithm legislation

According to the bills' sponsors, automated decision systems in Washington are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce. These systems, the lawmakers say, are often deployed without public knowledge and are unregulated. Their use raises concerns about due process, fairness, accountability, and transparency, as well as other civil rights and liberties. Moreover, reliance on automated decision systems without adequate transparency, oversight, or safeguards can undermine market predictability, harm consumers, and deny historically disadvantaged or vulnerable groups the full measure of their civil rights and liberties.
Definitions, Prohibited Actions, and Risk Assessments

The new Washington law would define an "automated decision system" as any algorithm, including one incorporating machine learning or other AI techniques, that uses data-based analytics to make or support government decisions, judgments, or conclusions. The law would distinguish between an "automated final decision system," which makes "final" decisions, judgments, or conclusions without human intervention, and an "automated support decision system," which provides information to inform the final decision, judgment, or conclusion of a human decision maker.

Under the new law, in using an automated decision system, an agency would be prohibited from discriminating against an individual, or treating an individual less favorably than another, in whole or in part, on the basis of one or more factors enumerated in RCW 49.60.010. An agency would be outright prohibited from developing, procuring, or using an automated final decision system to make a decision impacting the constitutional or legal rights, duties, or privileges of any Washington resident, or to deploy or trigger any weapon.

Both versions of the bill include lengthy provisions detailing algorithmic accountability reports that agencies would be required to produce and publish for public comment. Among other things, these reports must include clear information about the type or types of data inputs that a technology uses; how that data is generated, collected, and processed; and the type or types of data the systems are reasonably likely to generate, which could help reveal the degree of bias inherent in a system's black box model. The accountability reports also must identify and provide data showing benefits; describe where, when, and how the technology is to be deployed; and identify whether results will be shared with other agencies. An agency whose report is approved would then be required to follow the conditions set forth in the report.

Although an agency's choice to classify its automated decision system as one that makes "final" or "support" decisions may be given deference by courts, the designation is likely to be challenged if it is not justified. One reason a party might challenge a designation is to obtain an injunction, which may be available where an agency relies on a final decision made by an automated decision system, whereas an injunction may be more difficult to obtain for algorithmic decisions that merely support a human decision-maker. The distinction between the two designations may also be important during discovery, under a growing evidentiary theory of "machine testimony" that includes cross-examining machine witnesses by gaining access to source code and, in the case of machine learning models, to the data the developer used to train a machine's model. Supportive decision systems involving a human making a final decision may warrant a different approach to discovery.

Conditions impacting software makers

Under the proposed law, public agencies that use automated decision systems would be required to publicize the system's name, its vendor, and the software version, along with the decision it will be used to make or support. Notably, a vendor must make its software and the data used in the software "freely available" before, during, and after deployment for agency or independent third-party testing, auditing, or research to understand its impacts, including potential bias, inaccuracy, or disparate impacts.
The law would require any procurement contract for an automated decision system entered into by a public agency to include provisions requiring vendors to waive any legal claims that may impair the "freely available" requirement. For example, contracts with vendors could not contain nondisclosure impairment provisions, such as those related to assertions of trade secrets.

Accordingly, software companies that make automated decision systems will face the prospect of waiving proprietary and trade secret rights and opening up their algorithms and data to scrutiny by agencies, third parties, and researchers (presumably, under terms of confidentiality). If litigation were to ensue, it could be difficult for vendors to resist third-party discovery requests on the basis of trade secrets, especially if information about auditing of the system by the state agency and third-party testers or researchers is available through administrative information disclosure laws. A vendor who chooses to reveal the inner workings of a black box software application without safeguards should consider at least the financial, legal, and market risks associated with such disclosure.

Contesting automated decisions and private right of action

Under the proposed law, public agencies would be required to announce procedures explaining how an individual impacted by a decision made by an automated decision system can contest that decision. In particular, any decision made or informed by an automated decision system will be subject to administrative appeal, an immediate suspension if a legal right, duty, or privilege is impacted by the decision, and a potential reversal by a human decision-maker through an open due process procedure. The agency must also explain the basis for its decision to any impacted individual in terms "understandable" to laypersons, including, without limitation, by requiring the software vendor to create such an explanation. Thus, vendors may become material participants in administrative proceedings involving a contested decision made by their software.

In addition to administrative relief, the law would provide a private right of action for injured parties to sue public agencies in state court. In particular, any person who is injured by a material violation of the law, including denial of any government benefit on the basis of an automated decision system that does not meet the standards of the law, may seek injunctive relief, including restoration of the government benefit in question, declaratory relief, or a writ of mandate to enforce the law.

For litigators representing injured parties in such cases, dealing with evidentiary issues involving information produced by machines would likely follow Washington judicial precedent in areas of administrative law, contracts, tort, civil rights, the substantive law involving the agency's jurisdiction (e.g., housing, law enforcement, etc.), and even product liability. In cases involving AI-based automated decision systems, however, special attention may need to be given to the nuances of machine learning algorithms to prepare experts and take depositions. Although the aforementioned algorithmic accountability report could be useful evidence for both sides in an automated decision system lawsuit, merely understanding the result of an algorithmic decision may not be sufficient when assessing whether a public agency was thorough in its approach to vetting a system. Being able to describe how the automated decision system works will be important.
For agencies, understanding the nuances of the software products they procure will be important to establish that they met their duty to vet the software under the new law. For example, where AI machine learning models are involved, new data, or even previous data used in a different way (i.e., a different cross-validation scheme or a random splitting of data into new training and testing subsets), can generate models that produce slightly different outcomes, as the sketch below illustrates. While small, the difference could mean granting or denying agency services to constituents. Moreover, with new data and model updates comes the possibility of introducing or amplifying bias that was not previously observed. The Washington bills do not appear to include provisions imposing an ongoing duty on vendors to inform agencies when bias or other problems later appear in software updates (though it is possible the third-party auditors or researchers noted above might discover them). Thus, vendors might expect agencies to demand transparency as a condition set forth in acquisition agreements, including software support requirements and help with developing algorithmic accountability reports. Vendors might also expect to play a role in defending against claims by those alleging injury, should the law pass. And they could be asked to shoulder some of the liability, either through indemnification or other means of contractual risk-shifting, to the extent the bills add damages as a remedy.
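A minimal sketch of the data-splitting point, using scikit-learn and one of its bundled data sets (the particular data set and model are chosen here only for illustration): re-splitting the same data under different random seeds yields slightly different fitted models and slightly different held-out accuracy.

```python
# Each seed produces a different train/test split, a different fitted model,
# and a slightly different score -- the kind of small variation that could
# flip a marginal grant/deny decision in an agency setting.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
for seed in (0, 1, 2):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print(f"seed={seed}  held-out accuracy={model.score(X_test, y_test):.3f}")
```

Read more »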
  • What’s in a Name? A Chatbot Given a Human Name is Still Just an Algorithm
Due in part to the learned nature of artificial intelligence technologies, the spectrum of things that exhibit "intelligence" has, in debates over such things, expanded to include certain advanced AI systems. If a computer vision system can "learn" to recognize real objects and make decisions, the argument goes, its ability to do so can be compared to that of humans and thus should not be excluded from the intelligence debate. By extension, AI systems that exhibit intelligence traits should not be treated like mere goods and services, and thus laws applicable to such goods and services ought not to apply to them.

In some ways, the marketing of AI products and services using names commonly associated with humans, such as "Alexa," "Sophia," and "Siri," buttresses the argument that laws applicable to non-human things should not strictly apply to AI. For now, however, lawmakers and courts struggling with practical questions about regulating AI technologies can justifiably apply traditional goods and services laws to named AI systems just as they do to non-named systems. After all, a robot or chatbot doesn't become more humanlike, and less like a man-made product, merely because it has been anthropomorphized. Even so, when future technological breakthroughs suggest artificial general intelligence (AGI) is on the horizon, lawmakers and courts will face the challenge of amending laws to account for the differences between AGI systems and today's narrow AI and other "unintelligent" goods and services. For now, it is instructive to consider why the rise in the use of names for AI systems is not a good basis for triggering greater attention by lawmakers. Indeed, as suggested below, other characteristics of AI systems may be more useful in deciding when laws need to be amended. To begin, consider the recent case of a chatbot named "Erica."

The birth of a new bot

In 2016, machine learning developers at Bank of America created a "virtual financial assistant" application called "Erica" (derived from the bank's name America). After conducting a search of existing uses of the name Erica in other commercial endeavors, and finding none in connection with a chatbot like theirs, BoA sought federal trademark protection for the ERICA mark in October 2016. The US Patent and Trademark Office concurred with BoA's assessment of prior uses and registered the mark on July 31, 2018. Trademarks are issued in connection with actual uses of words, phrases, and logos in commerce, and in the case of BoA, the ERICA trademark was registered in connection with computer financial software, banking and financial services, and personal assistant software in banking and financial SaaS (software as a service). The Erica app is currently described as possessing the utility to answer customer questions and make banking easier. During its launch, BoA used the "she" pronoun when describing the app's AI and predictive analytics capabilities, ostensibly because Erica is a stereotypically female name, but also because of the apparently female-sounding voice the app outputs as part of its human-bot interface.

One of the existing uses of an Erica-like mark identified by BoA was an instance of "E.R.I.C.A," which appeared in October 2010 when Erik Underwood, a Colorado resident, filed a Georgia trademark registration application for "E.R.I.C.A. (Electronic Repetitious Informational Clone Application)." See Underwood v. Bank of Am., slip op., No. 18-cv-02329-PAB-MEH (D. Colo. Dec. 19, 2018).
On his application, Mr. Underwood described E.R.I.C.A. as "a multinational computer animated woman that has slanted blue eyes and full lips"; he also attached a graphic image of E.R.I.C.A. to his application. Mr. Underwood later filed a federal trademark application (in September 2018) for an ERICA mark (without the separating periods). At the time of his lawsuit, his only use of E.R.I.C.A. was on a searchable movie database website.

In May 2018, Mr. Underwood sent a cease-and-desist letter to BoA regarding BoA's use of Erica, and then filed a lawsuit in September 2018 against the bank alleging several causes of action, including "false association" under § 43(a) of the Lanham Act, 15 U.S.C. § 1125(a)(1)(A). Section 43(a) states, in relevant part, that any person who, on or in connection with any goods or services, uses in commerce a name or a false designation of origin which is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person, or as to the origin, sponsorship, or approval of his or her goods, services, or commercial activities by another person, shall be liable in a civil action by a person who believes that he or she is likely to be damaged by such act. In testimony, Mr. Underwood stated that the E.R.I.C.A. service mark was being used in connection with "verbally tell[ing] the news and current events through cell phone[s] and computer applications," and he described plans to apply artificial intelligence technology to E.R.I.C.A. Mr. Underwood requested that the court enter a preliminary injunction requiring BoA to cease using the Erica name.

Upon considering the relevant preliminary injunction factors and applicable law, the District Court denied Mr. Underwood's request for an injunction on several grounds, including the lack of relevant uses of E.R.I.C.A. in the same classes of goods and services in which BoA's Erica was being used.

Giving AI a persona may boost its economic value and market acceptance

Not surprisingly, the District Court's preliminary injunction analysis rested entirely on the perception and treatment of the Erica and E.R.I.C.A. systems as nothing more than services, something neither party disputed or challenged. Indeed, each party's case-in-chief depended on convincing the court that its application fit squarely within the definition of goods and services despite the human-sounding name attached to it. The court's analysis, then, illuminated one of the public policies underlying laws like the Lanham Act: the protection of the economic benefits associated with goods and services created by people and companies. The name Erica provides added economic value to each party's creation and is an intangible asset associated with their commercial activities.

The use of names has long been found to provide value to creators and owners, and not just in the realm of hardware and software. Fictional characters like "Harry Potter," which are protected under copyright and trademark laws, can be intellectual assets having tremendous economic value. Likewise, namesake names carried over to goods and services, like IBM's "Watson" (named after the company's first CEO, Thomas J. Watson), provide real economic benefits that might not have been achieved without a name, or even with a different name.
In the case of humanoid robots, like Hanson Robotics' "Sophia," which is endowed with aspects of AI technologies and was reportedly granted "citizenship" status in Saudi Arabia, certain perceived and real economic value is created by distinguishing the system from all other robots by using a real name (as compared to, for example, a simple numerical designation).

On the other end of the spectrum are names chosen for humans, the uses of which are generally unrestricted from a legal perspective. Thus, naming one's baby "Erica" or even "Harry Potter" shouldn't land a new parent in hot water. At the same time, those parents aren't able to stop others from using the same names for other children. Although famous people may be able to prevent others from using their names (and likenesses) for commercial purposes, the law only recognizes those situations when the economic value of the name or likeness is established (though demonstrating economic value is not always necessary under some state right of publicity laws). Some courts have gone so far as to liken the right to protect famous personas to a type of trademark in a person's name, because of the economic benefits attached to it, much the same way a company name, product name, or logo attached to a product or service can add value.

Futurists might ask whether a robot or chatbot that demonstrates a degree of intelligence and is endowed with unique human-like traits, including a unique persona (e.g., a name and face generated from a generative adversarial network) and the ability to recognize and respond to emotions (e.g., using facial coding algorithms in connection with a human-robot interface), thus making it sufficiently differentiable from all other robots and chatbots (at least superficially), should receive special treatment. So far, endowing AI technologies with a human form, gender, and/or a name has not motivated lawmakers and policymakers to pass new laws aimed at regulating AI technologies. Indeed, lawmakers and regulators have so far proposed, and in some cases passed, laws and regulations placing restrictions on AI technologies based primarily on their specific applications (uses) and results (impacts on society). For example, lawmakers are focusing on the bot-generated spread and amplification of disinformation on social media, law enforcement use of facial recognition, the collection and use of face scans by private businesses, the use of drones and highly automated vehicles in the wild, the production of "deepfake" videos, the harms caused by bias in algorithms, and others. This application- and results-focused approach, which acknowledges explicitly or implicitly certain normative standards or criteria for acceptable actions, is consistent with how lawmakers have treated other technologies in the past.

Thus, marketers, developers, and producers of AI systems who personify their chatbots and robots may sleep well knowing their efforts may add value to their creations and alter customer acceptance of and attitudes about their AI systems, but they are unlikely to cause lawmakers to suddenly consider regulating them. At some point, however, advanced AI systems will need to be characterized in some normative way if they are to be governed as a new class of things. The use of names, personal pronouns, personas, and metaphors associating bots with humans may frame bot technology in a way that ascribes particular values and norms to it (Jones 2017).
These might include characteristics such as utility, usefulness (including positive benefits to society), adaptability, enjoyment, sociability, companionship, and perceived or real "behavioral" control, which some argue are important in evaluating user acceptance of social robots. Perhaps these and other factors, in addition to some measure of intelligence, need to be considered when deciding whether an advanced AI bot or chatbot should be treated under the law as something other than a mere good or service. The subjective nature of those factors, however, would obviously make it challenging to create legally sound definitions of AI for governance purposes. Of course, laws don't have to be precise (and sometimes they are intentionally written without precision to provide flexibility in their application and interpretation), but a vague law won't help an AI developer or marketer know whether his or her actions and products are subject to an AI law. Identifying whether to treat bots as goods and services, or as something else deserving of a different set of regulations, like those applicable to humans, is likely to involve a suite of factors that permit classifying advanced AI on the spectrum somewhere between goods/services and humans.

Recommended reading

The Oxford Handbook of Law, Regulation, and Technology is one of my go-to references for timely insight about topics discussed on this website. In the case of this post, I drew inspiration from Chapter 25: Hacking Metaphors in the Anticipatory Governance of Emerging Technology: The Case of Regulating Robots, by Meg Leta Jones and Jason Millar. Read more »
  • The Role of Explainable Artificial Intelligence in Patent Law
Although the notion of "explainable artificial intelligence" (AI) has been suggested as a necessary component of governing AI technology, at least for the reason that transparency leads to trust and better management of AI systems in the wild, one area of US law already places a burden on AI developers and producers to explain how their AI technology works: patent law. Patent law's focus on how AI systems work was not borne from a Congressional mandate. Rather, the Supreme Court gets all the credit (or blame, as some might contend) for this legal development, which began with the Court's 2014 decision in Alice Corp. Pty Ltd. v. CLS Bank International. Alice established the legal framework for assessing whether an invention fits in one of patent law's patent-eligible categories (i.e., any "new and useful process, machine, manufacture, or composition of matter" or improvements thereof) or is a patent-ineligible concept (i.e., a law of nature, natural phenomenon, or abstract idea). Alice Corp. Pty Ltd. v. CLS Bank International, 134 S. Ct. 2347, 2354–55 (2014); 35 USC § 101.

To understand how the idea of "explaining AI" came to be following Alice, one must look at the very nature of AI technology. At their core, AI systems based on machine learning models generally transform input data into actionable output data, a process US courts and the Patent Office have historically found to be patent-ineligible. Consider a decision by the US Court of Appeals for the Federal Circuit, whose judges are selected for their technical acumen as much as for their understanding of the nuances of patent and other areas of law, that issued around the same time as Alice: "a process that employs mathematical algorithms to manipulate existing information to generate additional information is not patent eligible." Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014). While Alice did not specifically address AI or mandate anything resembling explainable AI, it nevertheless spawned a progeny of Federal Circuit, district court, and Patent Office decisions that did just that. Notably, those decisions arose not from notions that individuals impacted by AI algorithmic decisions ought to have the right to understand how those decisions were made or why certain AI actions were taken, but because explaining how an AI system works helps satisfy the quid pro quo that is fundamental to patent law: an inventor who discloses to the world the details of what she has invented is entitled to a limited legal monopoly on her creation (provided, of course, the invention is patentable).

The Rise of Algorithmic Scrutiny

Alice arrived not long after Congress passed patent reform legislation called the America Invents Act (AIA) of 2011, provisions of which came into effect in 2012 and 2013. In part, the AIA targeted a decade of what many consider abusive patent litigation brought against some of the largest tech companies in the world and thousands of mom-and-pop and small business owners who were sued for doing anything computer-related. This litigious period saw the term "patent troll" used more often to describe patent assertion companies that bought up dot-com-era patents covering the very basics of using the Internet and computerized business methods and then sued to collect royalties for alleged infringement.
Not surprisingly, some of the same big tech companies that pushed for the patent reform provisions now in the AIA to curb patent litigation in the field of computer technology also filed amicus curiae briefs in the Alice case to further weaken software patents. The Supreme Court's unanimous decision in Alice helped curtail troll-led litigation by formalizing a procedure, one that lower court judges could easily adopt, for excluding certain software-related inventions from the list of inventions that are patentable.

Under Alice, a patent claim (the language used by an inventor to describe what he or she claims to be the invention) falls outside § 101 when it is "directed to" one of the patent-ineligible concepts noted above. If so, Alice requires consideration of whether the particular elements of the claim, evaluated "both individually and 'as an ordered combination,'" add enough to "'transform the nature of the claim'" into one of the patent-eligible categories. Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (quoting Alice, 134 S. Ct. at 2355). While simple in theory, it took years of court and Patent Office decisions to explain how that 2-part test is to be employed, and only more recently how it applies to AI technologies. Today, the Patent Office and courts across the US routinely find that algorithms are abstract (even though algorithms, including certain mental processes embodied in algorithmic form performed by a computer, are by most measures useful processes). According to the Federal Circuit, algorithmic-based data collection, manipulation, and communication (functions most AI algorithms perform) are abstract.

Artificial Intelligence, Meet Alice

In a bit of ironic foreshadowing, the Supreme Court issued Alice in the same year that major advances in AI technologies were being announced, such as Google's deep neural network architecture that prevailed in the 2014 ImageNet challenge (ILSVRC) and Ian Goodfellow's generative adversarial network (GAN) model, both major contributions to the field of computer vision. Even as more breakthroughs were being announced, US courts and the Patent Office began issuing Alice decisions regarding AI technologies, explaining why it is crucial for inventors to describe how their AI inventions work in order to satisfy the second half of Alice's 2-part test.

In Purepredictive, Inc. v. H2O.AI, Inc., for example, the US District Court for the Northern District of California considered the claims of US Patent 8,880,446, which, according to the patent's owner, involves "AI driving machine learning ensembling." The district court characterized the patent as being directed to a software method that performs "predictive analytics" in three steps. Purepredictive, Inc. v. H2O.AI, Inc., slip op., No. 17-cv-03049-WHO (N.D. Cal. Aug. 29, 2017). In the method's first step, it receives data and generates "learned functions," or, for example, regressions from that data. Second, it evaluates the effectiveness of those learned functions at making accurate predictions based on the test data. Finally, it selects the most effective learned functions and creates a rule set for additional data input.
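As a rough illustration of the three steps the court described (a hedged sketch of the general ensembling idea, not the claimed method itself), consider the following, which uses scikit-learn and one of its bundled data sets:

```python
# Step 1: generate "learned functions" (here, assorted regression models).
# Step 2: evaluate each on held-out test data.
# Step 3: keep the most effective ones as the rule set for new inputs.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = [LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)]
fitted = [model.fit(X_train, y_train) for model in candidates]               # step 1
ranked = sorted(fitted, key=lambda m: m.score(X_test, y_test), reverse=True) # step 2
rule_set = ranked[:2]                                                        # step 3

def predict(X_new):
    """Average the predictions of the selected learned functions."""
    return sum(m.predict(X_new) for m in rule_set) / len(rule_set)
```

That a dozen lines of generic library calls can reproduce the method as the court characterized it helps explain why claims drafted at this level of generality fare poorly under Alice.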
The court found the claims invalid on the grounds that they "are directed to the abstract concept of the manipulation of mathematical functions and make use of computers only as tools, rather than provide a specific improvement on a computer-related technology." The claimed method, the district court said, is merely "directed to a mental process" performed by a computer, and to "the abstract concept of using mathematical algorithms to perform predictive analytics" by collecting and analyzing information. The court explained that the claims "are mathematical processes that not only could be performed by humans but also go to the general abstract concept of predictive analytics rather than any specific application."

In Ex Parte Lyren, the Patent Office's Appeals Board, a panel of three administrative patent judges, rejected a claim directed to customizing video on a computer as being abstract and thus not patent-eligible. In doing so, the board disagreed with the inventor, who argued that the claimed computer system, which generated and displayed a customized video by evaluating a user's intention to purchase a product and information in the user's profile, was an improvement in the technical field of generating videos. The claimed customized video, the Board found, could be any video modified in any way. That is, the rejected claims were not directed to the details of how the video was modified, but rather to the result of modifying the video. Citing precedent, the board reiterated that "[i]n applying the principles emerging from the developing body of law on abstract ideas under section 101, … claims that are 'so result-focused, so functional, as to effectively cover any solution to an identified problem' are frequently held ineligible under section 101." Ex Parte Lyren, No. 2016-008571 (PTAB, June 25, 2018) (citing Affinity Labs of Texas, LLC v. DirecTV, LLC, 838 F.3d 1253, 1265 (Fed. Cir. 2016) (quoting Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016)); see also Ex Parte Colcernian et al., No. 2018-002705 (PTAB, Oct. 1, 2018) (rejecting claims that use result-oriented language as not reciting the specificity necessary to show how the claimed computer processor's operations differ from prior human methods, and thus are not directed to a technological improvement but rather to an abstract idea).

Notably, the claims in Ex Parte Lyren were also initially rejected as failing to satisfy a different patentability test: the written description requirement. 35 USC § 112. In rejecting the claims as lacking a sufficient description of the invention, the Patent Office Examiner found that the algorithmic features of the inventor's claim were "all implemented inside a computer, and therefore all require artificial intelligence [(AI)] at some level," and thus require extensive implementation details that are the "subject of cutting-edge research, e.g.[,] natural language processing and autonomous software agents exhibiting intelligent behavior." The Examiner concluded that "one skilled in the art would not be persuaded that Applicant possessed the invention" because "it is not readily apparent how to make a device [to] analyze natural language." The Appeals Board disagreed and sided with the inventor, who argued that his description of the invention was comprehensive and went beyond just artificial intelligence implementations.
Thus, while the description of how the invention worked was sufficiently set forth, Lyren's claims focused too much on the results or application of the technology and were therefore found to be abstract.

In Ex Parte Homere, the Appeals Board affirmed an Examiner's rejection, as abstract, of claims directed to "a computer-implemented method" involving "establishing a communication session between a user of a computer-implemented marketplace and a computer-implemented conversational agent associated with the marketplace that is designed to simulate a conversation with the user to gather listing information." Ex Parte Homere, Appeal No. 2016-003447 (PTAB Mar. 29, 2018). In doing so, the Appeals Board noted that the inventor had not identified anything in the claim or in the written description suggesting that the computer-related elements of the claimed invention represent anything more than "routine and conventional" technologies. The most advanced technologies alluded to, the Board found, seemed to be embodiments in which "a program implementing a conversational agent may use other principles, including complex trained Artificial Intelligence (AI) algorithms." However, the claimed conversational agent was not so limited. Instead, the Board concluded that the claims were directed to merely using the recited computer-related elements to implement the underlying abstract idea, rather than being limited to any particular advances in those elements.

In Ex Parte Hamilton, a rejection of a claim directed to "a method of planning and paying for advertisements in a virtual universe (VU), comprising…determining, via the analysis module, a set of agents controlled by an Artificial Intelligence…," was affirmed as patent-ineligible. Ex Parte Hamilton et al., Appeal No. 2017-008577 (PTAB Nov. 20, 2018). The Appeals Board found that the "determining" step was insufficient to transform the abstract idea of planning and paying for advertisements into patent-eligible subject matter, because the step represented an insignificant data-gathering step and thus added nothing of practical significance to the underlying abstract idea.

In Ex Parte Pizzorno, the Appeals Board affirmed a rejection of a claim directed to "a computer implemented method useful for improving artificial intelligence technology" as abstract. Ex Parte Pizzorno, Appeal No. 2017-002355 (PTAB Sep. 21, 2018). In doing so, the Board determined that the claim was directed to the concept of using stored health care information for a user to generate personalized health care recommendations based on Bayesian probabilities, which the Board said involved "organizing human activities and an idea in itself, and is an abstract idea beyond the scope of § 101." Considering each of the claim elements in turn, the Board also found that the function performed by the computer system at each step of the process was purely conventional, in that each step did nothing more than require a generic computer to perform a generic computer function.

Finally, in Ex Parte McAfee, the Appeals Board affirmed a rejection of a claim on the basis that it was "directed to the abstract idea of receiving, analyzing, and transmitting data." Ex Parte McAfee, Appeal No. 2016-006896 (PTAB May 22, 2018).
At issue was a method that included "estimating, by the ad service circuitry, a probability of a desired user event from the received user information, and the estimate of the probability of the desired user event incorporating artificial intelligence configured to learn from historical browsing information in the received user information, the desired user event including at least one of a conversion or a click-through, and the artificial intelligence including regression modeling." In affirming the rejection, the Board found that the functions performed by the computer at each step of the claimed process were purely conventional and did not transform the abstract method into a patent-eligible one. In particular, the step of estimating the probability of the desired user event incorporating artificial intelligence was found to be merely "a recitation of factors to be somehow incorporated, which is aspirational rather than functional and does not narrow the manner of incorporation, so it may include no more than incorporating results from some artificial intelligence outside the scope of the recited steps."

The above and other Alice decisions have led to a few general legal axioms, such as: a claim for a new abstract idea is still an abstract idea; a claim for a beneficial abstract idea is still an abstract idea; abstract ideas do not become patent-eligible because they are new ideas, are not previously well known, and are not routine activity; and the "mere automation of manual processes using generic computers does not constitute a patentable improvement in computer technology." Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016); Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379-80 (Fed. Cir. 2015); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 715-16 (Fed. Cir. 2014); Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044, 1055 (Fed. Cir. 2017); see also SAP Am., Inc. v. Investpic, LLC, slip op. No. 2017-2081, 2018 WL 2207254, at *2, 4-5 (Fed. Cir. May 15, 2018) (finding financial software patent claims abstract because they were directed to "nothing but a series of mathematical calculations based on selected information and the presentation of the results of those calculations (in the plot of a probability distribution function)"); but see Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1241 (Fed. Cir. 2016) (noting that "[t]he Supreme Court has recognized that all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas[ ] but not all claims are directed to an abstract idea.").

The Focus on How, not the Results

Following Alice, patent claims directed to an AI technology must recite features of the algorithm-based system that represent how the algorithm improves a computer-related technology and is not previously well-understood, routine, and conventional. In PurePredictive, for example, the Northern California district court, which sees many software-related cases due to its proximity to the Bay Area and Silicon Valley, found that the claims of the machine learning ensemble invention were not directed to an invention that "provide[s] a specific improvement on a computer-related technology." See also Neochloris, Inc. v. Emerson Process Mgmt LLLP, 140 F. Supp. 3d 763, 773 (N.D. Ill.
2015) (explaining that patent claims including "an artificial neural network module" were invalid under § 101 because the neural network modules were described as no more than "a central processing unit – a basic computer's brain").

Satisfying Alice thus requires claims focused on a narrow application of how an AI algorithmic model works, rather than the broader, result-oriented question of what the model is used for. This is necessary where the idea behind the algorithm itself could be used to achieve many different results. For example, a claim directed to a mathematical process (even one said to be "computer-implemented"), that could be performed by humans (even if it takes a long time), and that is directed to a result achieved instead of a specific application, will seemingly be patent-ineligible under today's Alice legal framework.

To illustrate, consider an image classification system based on a convolutional neural network. Such a system may be patentable if the claimed system improves the field of computer vision technology. Claiming the invention in terms of how the elements of the computer are technically improved by its deep learning architecture and algorithm, rather than simply claiming a deep learning model using results-oriented language, may survive an Alice challenge, provided the claim does not merely cover an automated process that humans used to perform. Moreover, the model's multiple hidden layers, convolutions, recurrent connections, hyperparameters, and weights could also be claimed.

By way of another example, a claim reciting "a computer-implemented process using artificial intelligence to generate an image of a person" is likely abstract if it does not explain how the image is generated and merely claims a computerized process a human could perform. But a claim that describes a unique AI system and specifies how it generates the image, including the details of a generative adversarial network architecture, its various inputs provided by physical devices (not routine data collection), its connections, and its hyperparameters, has a better chance of passing muster (keeping in mind that this only addresses whether the claimed invention is eligible to be patented, not whether it is, in fact, patentable, which is an entirely different analysis requiring comparison of the claim to the prior art).

Uncertainty Remains

Although the issue of explaining how an AI system works in the context of patent law is still in flux, the number of US patents issued by the Patent Office mentioning "machine learning," or the broader term "artificial intelligence," has jumped in recent years. This year alone, US machine learning patents are up 27% compared to the same year-to-date period in 2017 (through the end of November), according to available Patent Office records. Even if machine learning is not the focus of many of them, the annual upward trend in patenting AI over the last several years appears unmistakable.

But with so many patents invoking AI concepts being issued, questions about their validity may arise. As the Federal Circuit has stated, "great uncertainty yet remains" when it comes to the test for deciding whether an invention like AI is patent-eligible under Alice, despite the large number of cases that have "attempted to provide practical guidance." Smart Systems Innovations, LLC v. Chicago Transit Authority, slip op. No. 2016-1233 (Fed. Cir. Oct. 18, 2017).
Calling the uncertainty "dangerous" for some of today's "most important inventions in computing," and specifically mentioning AI, the Federal Circuit expressed concern that the application of the Alice test may have gone too far, a concern mirrored in testimony by Andrei Iancu, Director of the Patent Office, before Congress in April 2018 (stating, in response to Judiciary Committee questions, that Alice and its progeny have introduced a degree of uncertainty into the area of subject matter eligibility, particularly as it relates to medical diagnostics and software-related inventions, and that Alice could be having a negative impact on innovation).

Absent legislative changes abolishing or altering Alice, a solution to the uncertainty problem, at least in the context of AI technologies, lies in clarifying the existing decisions issued by the Patent Office and the courts, including the decisions summarized above. While it can be challenging to explain why an AI algorithm made a particular decision or took a specific action (due to the black box nature of such algorithms once they are fully trained), it is generally not difficult to describe the structure of a deep learning or machine learning algorithm or how it works. Even so, it remains unclear whether, and to what extent, fully describing how one's AI technology works and including "how" features in patent claims will ever be sufficient to "add[] enough to transform the nature of an abstract algorithm into a patent-eligible [useful process]." If explaining how AI works is to have a meaningful future role in patent law, the courts or Congress will need to provide clarity. Read more »
  • California Appeals Court Denies Defendant Access to Algorithm That Contributed Evidence to His Conviction
One of the concerns expressed by those studying algorithmic decision-making is the apparent lack of transparency. Those impacted by adverse algorithmic decisions often seek transparency to better understand the basis for those decisions. In the case of software used in legal proceedings, parties who seek explanations about software face a number of obstacles, including those imposed by evidentiary rules, criminal or civil procedural rules, and software companies that resist discovery requests.

The closely followed issue of algorithmic transparency was recently considered by a California appellate court in People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018), in which the People sought relief from a discovery order requiring the production of software and source code used in the conviction of Florencio Jose Dominguez. Following a hearing and review of the record and amicus briefs in support of Dominguez filed by the American Civil Liberties Union, the American Civil Liberties Union of San Diego and Imperial Counties, the Innocence Project, Inc., the California Innocence Project, the Northern California Innocence Project at Santa Clara University School of Law, Loyola Law School's Project for the Innocent, and the Legal Aid Society of New York City, the appeals court granted the People's requested relief. In doing so, the court considered, but was not persuaded by, the defense team's "black box" and "machine testimony" arguments.

At issue on appeal was Dominguez's motion to compel production of a DNA testing program called STRmix used by local prosecutors in their analysis of forensic evidence (specifically, DNA found on the inside of gloves). STRmix is a "probabilistic genotyping" program that expresses a match between a suspect and DNA evidence in terms of the probability of a match compared to a coincidental match (a note on the usual form of this output appears at the end of this post). Probabilistic genotyping is said to reduce subjectivity in the analysis of DNA typing results. Dominguez's counsel moved the trial court for an order compelling the People to produce the STRmix software program and related updates as well as its source code, arguing that the defendant had a right to look inside the software's "black box." The trial court granted the motion, and the People sought writ relief from the appellate court.

On appeal, the appellate court noted that "computer software programs are written in specialized languages called source code" and "source code, which humans can read, is then translated into [a] language that computers can read." Cadence Design Systems, Inc. v. Avant! Corp., 29 Cal. 4th 215, 218 at fn. 3 (2002). The lab that used STRmix testified that it had no way to access the source code, which it licensed from an authorized seller of the software. Thus, the court considered whether the company that created the software should produce it. In concluding that the company was not obligated to produce the software and source code, the court, citing precedent, found that the company would have had no knowledge of the case but for the defendant's subpoena duces tecum, and that it did not act as part of the prosecutorial team such that it was obligated to turn over exculpatory evidence (assuming the software itself is exculpatory, which the court was reluctant to find).

With regard to the defense team's "black box" argument, the appellate court found nothing in the record to indicate that the STRmix software suffered from a problem, as the defense team argued, that might have affected its results.
The court called the defense’s allegation of a software problem speculative and concluded that the “black box” nature of STRmix was not by itself sufficient to warrant its production. Moreover, the court was unpersuaded by the defense team’s argument that the STRmix program essentially usurped the lab analyst’s role in providing the final statistical comparison, such that the software program, not the analyst using it, was effectively the source of the expert opinion rendered at trial. The lab, the defense argued, merely acted in a scrivener’s capacity for STRmix’s analysis, and since the machine was providing testimony, Dominguez should be able to examine the software to defend against the prosecution’s case against him.

The appellate court disagreed. While acknowledging the “creativity” of the defense team’s “machine testimony” argument (which relied heavily on Berkeley law professor Andrea Roth’s “Machine Testimony” article, 126 Yale L.J. 1972 (2017)), the panel noted testimony that STRmix did not act alone and that there were humans in the loop: “[t]here are still decisions that an analyst has to make on the front end in terms of determining the number of contributors to a particular sample and determin[ing] which peaks are from DNA or from potentially artifacts,” after which the program performs a “robust breakdown of the DNA samples” based at least in part on “parameters [the lab] set during validation.” Moreover, after STRmix renders “the diagnostics,” the lab “evaluate[s] … the genotype combinations … to see if that makes sense, given the data [it’s] looking at.” After the lab “determine[s] that all of the diagnostics indicate that the STRmix run has finished appropriately,” it can then “make comparisons to any person of interest or … database that [it’s] looking at.”

While the appellate court’s decision largely followed precedent and established procedure, it could have gone the other way and affirmed the trial judge’s decision granting the defendant’s motion to compel production of the STRmix software and source code, which would have given Dominguez better insight into the nature of the software’s algorithms, its parameters and limitations in view of validation studies, and the various possible outputs the model could have produced given a set of inputs. In particular, the court might have affirmed the trial judge’s decision to grant access to the STRmix software if the policy of imposing transparency on STRmix’s algorithmic decisions had been given more consideration from the perspective of the actual harm that might occur if software and source code are produced. Here, the source code owner’s objection to production was based in part on trade secret and other confidentiality concerns; however, procedures already exist to handle those concerns. Indeed, source code reviews happen all the time in the civil context, such as in patent infringement matters involving software technologies. While software makers are right to be concerned about the harm to their businesses if their code ends up in the wild, the real risk of this happening can be kept low if proper procedures, embodied in a suitable court-issued Protective Order, are followed by the lawyers on both sides of a matter, and if the court maintains oversight and demands status updates from the parties to ensure compliance and integrity in the review process.
Instead of following the trial court’s approach, however, the appellate court conditioned access to STRmix’s “black box” on a demonstration of specific errors in the program’s results, a showing that seems intractable: only by looking into the black box in the first place can a party understand whether problems exist that affect the result.

Interestingly, artificial intelligence had nothing to do with the outcome of the appellate court’s decision, yet the panel noted that “We do not underestimate the challenges facing the legal system as it confronts developments in the field of artificial intelligence.” The judges acknowledged that the notion of “machine testimony” in algorithmic decision-making matters is a subject on which viewpoints in the legal community widely diverge, a possible prelude to what lies ahead when artificial intelligence software cases make their way through the courts in criminal and non-criminal cases. To that point, the judges cautioned, “when faced with a novel method of scientific proof, we have required a preliminary showing of general acceptance of the new technique in the relevant scientific community before the scientific evidence may be admitted at trial.”

Lawyers in future artificial intelligence cases should consider how best to frame arguments concerning machine testimony in both civil and criminal contexts to improve their chances of overcoming evidentiary obstacles. They will need to effectively articulate the nature of artificial intelligence decision-making algorithms, as well as the relative roles of the data scientists and model developers who make decisions about model architecture, hyperparameters, data sets, model inputs, training and testing procedures, and the interpretation of results. Today’s artificial intelligence systems do not operate autonomously; there will always be humans associated with a model’s output or result, and those persons may need to provide expert testimony beyond the machine’s “testimony.” Even so, transparency will be important to understanding algorithmic decisions and to developing an evidentiary record in artificial intelligence cases. Read more »
  • Thanks to Bots, Transparency Emerges as Lawmakers’ Choice for Regulating Algorithmic Harm
    Digital conversational agents, like Amazon’s Alexa and Apple’s Siri, and communications agents, like those found on customer service website pages, seem to be everywhere. The remarkable increase in the use of these and other artificial intelligence-powered “bots” in everyday customer-facing devices like smartphones, websites, desktop speakers, and toys has been exceeded only by the bots operating in the background, which account for over half of the traffic visiting some websites. Recently reported harms caused by certain bots have caught the attention of state and federal lawmakers. This post briefly describes those bots and their uses, and suggests reasons why new legislative efforts aimed at reducing the harms caused by bad bots have so far been limited to arguably one of the least onerous tools in the lawmaker’s toolbox: transparency.

Bots Explained

Bots are software programmed to receive percepts from their environment, make decisions based on those percepts, and then take (preferably rational) action in their environment. Social media bots, for example, may use machine learning algorithms to classify and “understand” incoming content, which is subsequently posted and amplified via a social media account. Companies like Netflix use bots on social media platforms like Facebook and Twitter to automatically communicate information about their products and services.

While not all bots use machine learning and other artificial intelligence (AI) technologies, many do, such as digital conversational agents, web crawlers, and website content scrapers, the latter being programmed to “understand” content on websites using semantic natural language processing and image classifiers. Bots that use complex human behavioral data to identify and influence or manipulate people’s attitudes or behavior (such as clicking on advertisements) often use the latest AI tech.

One attribute many bots have in common is that their functionality resides in a black box. As a result, it can be challenging (if not impossible) for an observer to explain why a bot made a particular decision or took a specific action. While intuition can be used to infer what happens, secrets inside a black box often remain secret.

Depending on their uses and characteristics, bots are often categorized by type, such as “chatbot,” which generally describes an AI technology that engages with users by replicating natural language conversations, and “helper bot,” which is sometimes used when referring to a bot that performs useful or beneficial tasks. The term “messenger bot” may refer to a bot that communicates information, while “cyborg” is sometimes used when referring to a person who uses bot technology. Regardless of their name, complexity, or use of AI, one characteristic common to most bots is their use as agents to accomplish tasks for or on behalf of a real person. This anonymity of agent bots makes them attractive tools for malicious purposes.

Lawmakers React to Bad Bots

While the spread of beneficial bots has been impressive, bots with questionable purposes have also proliferated, such as those behind the disinformation campaigns seen during the 2016 presidential election. Disinformation bots, which operate social media accounts on behalf of a real person or organization, can post content to public-facing accounts. Used extensively in marketing, these bots can receive content, either automatically or from a principal behind the scenes, related to such things as brands, campaigns, politicians, and trending topics.
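To make the “percepts, decisions, actions” framing above concrete, here is a deliberately simplified Python sketch of a content-amplification bot of the kind just described (the function names, classifier logic, and accounts are all invented for illustration; no real platform API is implied):

    # Simplified percept -> decision -> action loop of an amplification bot.
    # All names are invented; no real platform API or ML model is implied.

    def on_message(content: str) -> bool:
        """Stand-in for an ML classifier deciding whether content fits a campaign."""
        return "election" in content.lower()

    def repost(account: str, content: str) -> None:
        """Stand-in for a platform posting call made on behalf of an account."""
        print(f"[{account}] reposting: {content}")

    def run_bot(accounts, incoming):
        for content in incoming:          # 1. receive percepts (incoming content)
            if on_message(content):       # 2. decide (classify the content)
                for account in accounts:  # 3. act (amplify across controlled accounts)
                    repost(account, content)

    run_bot(["@account_1", "@account_2"], ["Vote tomorrow!", "Unrelated cat photo"])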
When organizations create multiple accounts and use bots across those accounts to amplify each account’s content, the content can appear viral and attract attention, which may be problematic if the content is false, misleading, or biased. The success of social media bots in spreading disinformation is evident in the degree to which they have proliferated. Twitter recently produced data showing thousands of bot-run Twitter accounts (“Twitter bots”) were created before and during the 2016 US presidential campaign by foreign actors to amplify and spread disinformation about the campaign, candidates, and related hot-button campaign issues. Users who received content from one of these bots would have had no apparent reason to know that it came from a foreign actor.

Thus, it is easy to understand why lawmakers and stakeholders would want to target social media bots and those that use them. In view of a recent Pew Research Center poll finding that most Americans know about social media bots, and that those who have heard of them overwhelmingly (80%) believe such bots are used for malicious purposes, and with technologies for detecting fake content at its source, or the bias of a news source, standing at only about 65-70 percent accuracy, politicians have plenty of cover to go after bots and their owners.

Why Use Transparency to Address Bot Harms?

The range of options for regulating disinformation bots to prevent or reduce harm could include any number of traditional legislative approaches. These include imposing on individuals and organizations various criminal and civil liability standards related to the performance and uses of their technologies; establishing requirements for regular recordkeeping and reporting to authorities (which could lead to public summaries); setting thresholds for knowledge, awareness, or intent (or use of strict liability) applied to regulated activities; providing private rights of action to sue for harms caused by a regulated person’s actions, inactions, or omissions; imposing monetary remedies and incarceration for violations; and other often-seen command-and-control-style governance approaches. Transparency, another tool lawmakers could deploy, would require certain regulated persons and entities to provide information, publicly or privately, to an organization’s users or customers through mechanisms of notice, disclosure, and/or disclaimer (among other techniques).

Transparency is a long-used principle of democratic institutions that try to balance open and accountable government action, and the notion of free enterprise, with the public’s right to be informed. Examples of transparency may be found in the form of information labels on consumer products and services under consumer laws, disclosure of product endorsement interests under FTC rules, notice and disclosures in financial and real estate transactions under various related laws, employee benefits disclosures under labor and tax laws, public review disclosures in connection with laws related to government decision-making, property ownership public records disclosures under various tax and land ownership/use laws, various healthcare disclosures under state and federal health care laws, and laws covering many other areas of public life. Of particular relevance to the disinformation problem noted above, and one reason why transparency seems well-suited to social media bots, are current federal campaign finance laws that require those behind political ads to reveal themselves.
See 52 USC §30120 (Federal Campaign Finance Law; publication and distribution of statements and solicitations; disclaimer requirements).

A recent example of a transparency rule affecting certain bot use cases is California’s bot law (SB-1001; signed by Gov. Brown on September 28, 2018). The law, which goes into effect in July 2019, will, with certain exceptions, make it unlawful for any person (including corporations or government agencies) to use a bot to communicate or interact online with another person in California with the intent to mislead that person about the bot’s artificial identity, for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot will not be liable, however, if the person discloses, using clear, conspicuous, and reasonably designed notice, that it is a bot to the persons with whom the bot communicates or interacts. Similar federal legislation may follow, especially if legislation proposed this summer by Sen. Dianne Feinstein (D-CA) and legislative proposals by Sen. Warner and others gain traction in Congress.

So why would lawmakers choose transparency to regulate malicious bot technology use cases rather than an approach that is arguably more onerous? One possibility is that transparency is seen as minimally controversial, and therefore less likely to cause push-back from those with ties to special interests who might respond negatively to lawmakers advocating tougher measures. Or perhaps lawmakers are choosing a minimalist approach simply to demonstrate that they are taking action (versus the optics associated with doing nothing). Maybe transparency is a shot across the bow, warning industry leaders to police themselves and those who use their platforms by finding technological solutions to prevent the harms caused by bots, or else face a harsher regulatory spotlight. Whatever the reasons, even something as relatively easy to implement as transparency is not immune from controversy.

Transparency Concerns

The arguments against applying transparency to bots include loss of privacy, unfairness, unnecessary disclosure, and constitutional concerns, among others. Imposing transparency requirements can potentially infringe upon First Amendment protections if drafted with one-size-fits-all applicability. Even before California’s bot measure was signed into law, for example, critics warned of the potential chilling effect on protected speech if anonymity is lifted in the case of social media bots. Moreover, transparency may be seen as unfairly elevating the principles of openness and accountability over notions of secrecy and privacy. Owners of agent bots, for example, would prefer not to give up anonymity when doing so could expose them to attacks by those with opposing viewpoints and cause more harm than the law prevents. Both concerns could be addressed by imposing transparency in a narrow set of use cases and, as in California’s bot law, using “intent to mislead” and “knowingly deceiving” thresholds to tailor the law to specific instances of certain bad behaviors. Others might argue that transparency places too much of the burden on users to understand the information being disclosed to them and to take appropriate responsive actions.
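As a concrete, hypothetical illustration of the kind of compliance mechanism California’s bot law invites, an operator might bolt a disclosure step onto an existing chatbot. A minimal sketch, not legal advice; the names and the stubbed reply logic are invented:

    # Minimal sketch of an SB-1001-style disclosure wrapper (illustrative only).
    BOT_DISCLOSURE = "NOTICE: You are interacting with an automated bot, not a human."

    def generate_reply(message: str) -> str:
        """Stub standing in for the underlying conversational model."""
        return "Thanks for your message! How can I help?"

    def handle_message(message: str, session: dict) -> str:
        # Disclose once per session, clearly and conspicuously, before engaging.
        if not session.get("disclosed"):
            session["disclosed"] = True
            return BOT_DISCLOSURE + "\n" + generate_reply(message)
        return generate_reply(message)

    session = {}
    print(handle_message("Is this product on sale?", session))

Whether a notice like this meaningfully informs users, though, is precisely the burden question raised above.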
Just ask someone who has tried to read a financial transaction disclosure or a complex Federal Register rulemaking analysis whether the transparency, openness, and accountability actually made a substantive impact on their follow-up actions. Similarly, it is questionable whether a recipient of bot-generated content would investigate the ownership and propriety of every new posting before deciding whether to accept the content’s veracity, or whether a person engaging with an AI chatbot would forgo further engagement if informed of the artificial nature of the engagement.

Conclusion

The likelihood that federal transparency laws will be enacted to address the malicious use of social media bots seems low given the current political situation in the US. And with California’s bot disclosure requirement not becoming effective until mid-2019, only time will tell whether it will succeed as a legislative tool for addressing existing bot harms, or whether the delay will simply give malicious actors time to find alternative technologies to achieve their goals. Even so, transparency appears to be a leading governance approach, at least in the area of algorithmic harm, and could become a go-to approach for governing harms caused by other AI and non-AI algorithmic technologies, due to its relative simplicity and its ability to be narrowly tailored. Transparency might be a suitable approach to regulating certain actions by those who publish face images created using generative adversarial networks (GANs), those who create and distribute so-called “deep fake” videos, and those who provide humanistic digital communications agents, all of which involve highly realistic content and engagements in which a user could easily be fooled into believing the content or engagement involves a person and not an artificial intelligence. Read more »
  • AI’s Problems Attract More Congressional Attention
    As contentious political issues continue to distract Congress before the November midterm elections, federal legislative proposals aimed at governing artificial intelligence (AI) have largely stalled in the Senate and House. Since December 2017, nine AI-focused bills, such as the AI Reporting Act of 2018 (AIR Act) and the AI in Government Act of 2018, have been waiting for congressional committee attention. Even so, there has been a noticeable uptick in the number of individual federal lawmakers looking at AI’s problems, a sign that the pendulum may be swinging in the direction of regulating AI technologies.

Lawmakers taking a serious look at AI recently include Senators Mark Warner (D-VA) and Kamala Harris (D-CA), and Representatives Will Hurd (R-TX) and Robin Kelly (D-IL). Along with others in Congress, they are meeting with AI experts, issuing new policy proposals, publishing reports, and pressing federal officials for information about how government agencies are addressing AI problems, especially in hot-topic areas like AI model bias, privacy, and malicious uses of AI.

Sen. Warner, the Senate Intelligence Committee Vice Chairman, for example, is examining how AI technologies power disinformation. In a draft white paper first obtained by Axios, Warner’s “Potential Policy Proposals for Regulation of Social Media and Technology Firms” raises concerns about machine learning and data collection, mentioning “deep fake” disinformation tools as one example. Deep fakes are neural network models that can take images and video of people containing one type of content and superimpose them over different images and videos of other (or the same) people in a way that changes the original’s content and meaning. To the viewer, the altered images and videos look like the real thing, and many who view them may be fooled into accepting the false content’s message as truth.

Warner’s “suite of options” for regulating AI includes one that would require platforms to provide notice when users engage with AI-based digital conversational assistants (chatbots) or visit a website that publishes content provided by content-amplification algorithms like those used during the 2016 elections. Another Warner proposal would modify the Communications Decency Act’s safe harbor provisions, which currently protect social media platforms that publish offending third-party content, including the aforementioned deep fakes. This proposal would allow private rights of action against platforms that fail, after notice from victims, to take steps to prevent offending content from reappearing on their sites.

Another proposal would require certain platforms to make their customers’ activity data (sufficiently anonymized) available to public interest researchers as a way to generate insights from the data that could “inform actions by regulators and Congress.” An area of concern is the commercial use, by private tech companies, of their users’ behavior-based data (online habits) without proper research controls. The suggestion is that public interest researchers would evaluate a platform’s behavioral data in a way that is not driven by an underlying for-profit business model.

Warner’s privacy-centered proposals include granting the Federal Trade Commission rulemaking authority, adopting regulations like the GDPR recently implemented across the European Union, and setting mandatory standards for algorithmic transparency (auditability and fairness).
Repeating a theme in Warner’s white paper, Representatives Hurd and Kelly conclude that, even if AI technologies are immature, they have the potential to disrupt every sector of society in both anticipated and unanticipated ways. In their “Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy” report, the co-chairs of the House Oversight and Government Reform Committee make several observations and recommendations, including the need for political leadership from both Congress and the White House to achieve US global dominance in AI, the need for increased federal spending on AI research and development, means to address algorithmic accountability and transparency so as to remove bias in AI models, and examination of whether existing regulations can address public safety and consumer risks from AI. The challenges facing society, the lawmakers found, include the potential for job loss due to automation, privacy concerns, model bias, and malicious use of AI technologies.

Separately, in a September 13, 2018, letter to the Director of National Intelligence, Representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL) request a report on the spread of deep fakes (aka “hyper-realistic digital forgeries”), which they contend are allowing “malicious actors” to create depictions of individuals doing or saying things they never did, without those individuals’ consent or knowledge. They want the intelligence agency’s report to cover everything from how foreign governments could use the technology to harm US national interests, to what countermeasures could be deployed to detect and deter actors from disseminating deep fakes, to whether the agency needs additional legal authority to combat the problem.

In a September 17, 2018, letter to the Equal Employment Opportunity Commission, Senators Harris, Patty Murray (D-WA), and Elizabeth Warren (D-MA) ask the EEOC Director to address the potentially discriminatory impacts of facial analysis technologies in the enforcement of workplace anti-discrimination laws. As reported on this website and elsewhere, the machine learning models behind facial recognition may perform poorly if they have been trained on data that is unrepresentative of the data the model sees in the wild. For example, if the training data for a facial recognition model contains primarily white male faces, the model may perform well when it sees new white male faces but poorly when it sees faces that are not white or not male. The Senators want to know whether such technologies amplify bias with respect to race, gender, and disadvantaged and vulnerable groups, and they have tasked the EEOC with developing guidelines for employers concerning fair uses of facial analysis technologies in the workplace.

Also on September 17, 2018, Senators Harris, Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR) sent a similar letter to the Federal Trade Commission, expressing concern that bias in facial analysis technologies could be considered an unfair or deceptive practice under the Federal Trade Commission Act. Stating that “we cannot wait any longer to have a serious conversation about how we can create sound policy to address these concerns,” the Senators urge the FTC to commit to developing a set of best practices for the lawful, fair, and transparent use of facial analysis.
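The unrepresentative-training-data problem described above is typically surfaced through disaggregated evaluation, that is, reporting a model’s accuracy separately for each demographic group rather than as a single aggregate number. A minimal sketch of such an evaluation (the records and group labels below are invented):

    from collections import defaultdict

    def accuracy_by_group(records):
        """records: iterable of (group, predicted_label, true_label) tuples."""
        correct, total = defaultdict(int), defaultdict(int)
        for group, predicted, actual in records:
            total[group] += 1
            correct[group] += int(predicted == actual)
        return {group: correct[group] / total[group] for group in total}

    # Invented results for a model that performs well on only one group.
    records = [
        ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
        ("group_b", "match", "no_match"), ("group_b", "match", "match"),
    ]
    print(accuracy_by_group(records))  # {'group_a': 1.0, 'group_b': 0.5}

A large accuracy gap between groups is the kind of signal the Senators want regulators to look for.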
Senators Harris and Booker, joined by Representative Cedric Richmond (D-LA), also sent a letter on September 17, 2018, to FBI Director Christopher Wray asking for the status of the FBI’s response to a comprehensive 2016 Government Accountability Office (GAO) report detailing the FBI’s use of face recognition technology.

The increasing attention directed toward AI by individual federal lawmakers in 2018 may merely reflect the politics of the moment rather than signal a momentum shift toward substantive federal command-and-control-style regulations. But as more states, in the absence of federal rules, enact their own laws addressing AI technology use cases, federal action may eventually follow, especially if more reports of malicious uses of AI, like election disinformation, reach receptive ears in Congress. Read more »

Computational Intelligence

  • Soft Computing, Volume 23, Issue 12, June 2019
    1. A uniform solution to SAT problem by symport/antiport P systems with channel states and membrane division. Author(s): Suxia Jiang, Yanfeng Wang, Yansen Su. Pages: 3903-3911
    2. Structures of compactly generated lattices described by cut sets of L-valued sets. Author(s): Kai Zuo, Xue-ping Wang, Xiaohong Zhang. Pages: 3913-3920
    3. Method with horizontal fuzzy numbers for solving real fuzzy linear systems. Author(s): Marek Landowski. Pages: 3921-3933
    4. Operations and structures derived from non-associative MV-algebras. Author(s): Ivan Chajda, Radomir Halaš, Helmut Länger. Pages: 3935-3944
    5. Performing CTL model checking via DNA computing. Author(s): Weijun Zhu, Yingjie Han, Qinglei Zhou. Pages: 3945-3963
    6. On characterizations of a pair of covering-based approximation operators. Author(s): Yan-Lan Zhang, Chang-Qing Li, Jinjin Li. Pages: 3965-3972
    7. Semi-complement graph of lattice modules. Author(s): Narayan Phadatare, Vilas Kharat, Sachin Ballal. Pages: 3973-3978
    8. A topological duality for tense θ-valued Łukasiewicz–Moisil algebras. Author(s): Aldo V. Figallo, Inés Pascual, Gustavo Pelaitay. Pages: 3979-3997
    9. On an explicit representation of the Łukasiewicz sum as a quantum operation. Author(s): H. Freytes, F. Holik, G. M. Bosyk, G. Sergioli. Pages: 3999-4007
    10. Attribute-based encryption with adaptive policy. Author(s): Yiliang Han. Pages: 4009-4017
    11. On distributed user-centric memetic algorithms. Author(s): Antonio J. Fernández-Leiva, Álvaro Gutiérrez-Fuentes. Pages: 4019-4039
    12. An angle-based method for measuring the semantic similarity between visual and textual features. Author(s): Chenwei Tang, Jiancheng Lv, Yao Chen, Jixiang Guo. Pages: 4041-4050
    13. Fusion of progressive granular neural networks for pattern classification. Author(s): D. Arun Kumar, Saroj K. Meher, K. Padma Kumari. Pages: 4051-4064
    14. A stock model with jumps for Itô–Liu financial markets. Author(s): Frank Ranganai Matenda, Eriyoti Chikodza. Pages: 4065-4080
    15. A simple water cycle algorithm with percolation operator for clustering analysis. Author(s): Shilei Qiao, Yongquan Zhou, Yuxiang Zhou, Rui Wang. Pages: 4081-4095
    16. Bayesian genetic programming for edge detection. Author(s): Wenlong Fu, Mengjie Zhang, Mark Johnston. Pages: 4097-4112
    17. Accelerating differential evolution based on a subset-to-subset survivor selection operator. Author(s): Jinglei Guo, Zhijian Li, Shengxiang Yang. Pages: 4113-4130
    18. Adaptive cruise control via adaptive dynamic programming with experience replay. Author(s): Bin Wang, Dongbin Zhao, Jin Cheng. Pages: 4131-4144
    19. Supporting academic decision making at higher educational institutions using machine learning-based algorithms. Author(s): Yuri Nieto, Vicente García-Díaz, Carlos Montenegro, Rubén González Crespo. Pages: 4145-4153
    20. A biased random key genetic algorithm for the protein–ligand docking problem. Author(s): Pablo Felipe Leonhart, Eduardo Spieler, Rodrigo Ligabue-Braun. Pages: 4155-4176
    21. Context-sensitive and keyword density-based supervised machine learning techniques for malicious webpage detection. Author(s): Betul Altay, Tansel Dokeroglu, Ahmet Cosar. Pages: 4177-4191
    22. Sorting retail locations in a large urban city by using ELECTRE TRI-C and trapezoidal fuzzy numbers. Author(s): Javier Pereira, Elaine C. B. de Oliveira, Luiz F. A. M. Gomes. Pages: 4193-4206
    23. Automatic identification of characteristic points related to pathologies in electrocardiograms to design expert systems. Author(s): Jose Ignacio Peláez, Jose Antonio Gomez-Ruiz, Javier Fornari. Pages: 4207-4219
    24. A multi-objective approach to weather radar network architecture. Author(s): Redouane Boudjemaa, Diego Oliva. Pages: 4221-4238
    25. An optimization algorithm applied to the class integration and test order problem. Author(s): Yanmei Zhang, Shujuan Jiang, Xingya Wang, Ruoyu Chen, Miao Zhang. Pages: 4239-4253
    26. A meta-heuristic approach for RLE compression in a column store table. Author(s): Jane Jovanovski, Nino Arsov, Evgenija Stevanoska, Maja Siljanoska Simons. Pages: 4255-4276
    27. A two-product, multi-period nonstationary newsvendor problem with budget constraint. Author(s): Yong Zhang, Weiguo Zhang, Xingyu Yang, Weijun Xu. Pages: 4277-4287
    28. Belief-based chaotic algorithm for support vector data description. Author(s): Javad Hamidzadeh, Neda Namaei. Pages: 4289-4314
    29. Heuristic nonlinear regression strategy for detecting phishing websites. Author(s): Mehdi Babagoli, Mohammad Pourmahmood Aghababa, Vahid Solouk. Pages: 4315-4327
    30. C-means clustering and deep-neuro-fuzzy classification for road weight measurement in traffic management system. Author(s): Sakhawat Hosain Sumit, Shamim Akhter. Pages: 4329-4340
    31. Multi-objective differential evolution with dynamic hybrid constraint handling mechanism. Author(s): YueFeng Lin, Wei Du, Wenli Du. Pages: 4341-4355
    32. Multiply information coding and hiding using fuzzy vault. Author(s): Katarzyna Koptyra, Marek R. Ogiela. Pages: 4357-4366
    33. A new bi-objective fuzzy portfolio selection model and its solution through evolutionary algorithms. Author(s): Mohuya B. Kar, Samarjit Kar, Sini Guo, Xiang Li, Saibal Majumder. Pages: 4367-4381
    34. An elitism-based self-adaptive multi-population Jaya algorithm and its applications. Author(s): R. Venkata Rao, Ankit Saroj. Pages: 4383-4406
    35. An Electromagnetism-like mechanism algorithm for the router node placement in wireless mesh networks. Author(s): Lamri Sayad, Louiza Bouallouche-Medjkoune, Djamil Aissani. Pages: 4407-4419
    36. Particle swarm optimization with convergence speed controller for large-scale numerical optimization. Author(s): Han Huang, Liang Lv, Shujin Ye, Zhifeng Hao. Pages: 4421-4437
    37. Chatter prediction using merged wavelet denoising and ANFIS. Author(s): Shailendra Kumar, Bhagat Singh. Pages: 4439-4458
    38. An adaptive neuro-fuzzy inference system-based caching scheme for content-centric networking. Author(s): Nidhi Lal, Shishupal Kumar, Vijay Kumar Chaurasiya. Pages: 4459-4470
    39. A simplified implementation of hierarchical fuzzy systems. Author(s): Chia-Wen Chang, Chin-Wang Tao. Pages: 4471-4481
    40. Efficient and merged biogeography-based optimization algorithm for global optimization problems. Author(s): Xinming Zhang, Qiang Kang, Qiang Tu, Jinfeng Cheng, Xia Wang. Pages: 4483-4502
    41. Bipolar fuzzy concept learning using next neighbor and Euclidean distance. Author(s): Prem Kumar Singh. Pages: 4503-4520
    42. A new effective solution method for fully intuitionistic fuzzy transportation problem. Author(s): Ali Mahmoodirad, Tofigh Allahviranloo, Sadegh Niroomand. Pages: 4521-4530
    43. MS-ACO: a multi-stage ant colony optimization to refute complex software systems specified through graph transformation. Author(s): Vahid Rafe, Mahsa Darghayedi, Einollah Pira. Pages: 4531-4556
    44. AFCGD: an adaptive fuzzy classifier based on gradient descent. Author(s): Homeira Shahparast, Eghbal G. Mansoori, Mansoor Zolghadri Jahromi. Pages: 4557-4571
    45. Research on a novel minimum-risk model for uncertain orienteering problem based on uncertainty theory. Author(s): Jian Wang, Jiansheng Guo, Mingfa Zheng, Zheng MuRong, Zhengxin Li. Pages: 4573-4584
    Read more »
  • Soft Computing, Volume 23, Issue 11, June 2019
    1. Domination landscape in evolutionary algorithms and its applications. Author(s): Guo-Sheng Hao, Meng-Hiot Lim, Yew-Soon Ong, Han Huang, Gai-Ge Wang. Pages: 3563-3570
    2. Optimization based on nonlinear transformation in decision space. Author(s): Yangyang Li, Cheng Peng, Yang Wang, Licheng Jiao. Pages: 3571-3590
    3. Controller exploitation-exploration reinforcement learning architecture for computing near-optimal policies. Author(s): Erick Asiain, Julio B. Clempner, Alexander S. Poznyak. Pages: 3591-3604
    4. Fuzzy relation lexicographic programming for modelling P2P file sharing system. Author(s): Yu-Bin Zhong, Gang Xiao, Xiao-Peng Yang. Pages: 3605-3614
    5. On decidability and axiomatizability of some ordered structures. Author(s): Ziba Assadi, Saeed Salehi. Pages: 3615-3626
    6. Iterated population-based VND algorithms for single-machine scheduling with sequence-dependent setup times. Author(s): Chun-Lung Chen. Pages: 3627-3641
    7. Differential evolution algorithm with dichotomy-based parameter space compression. Author(s): Laizhong Cui, Genghui Li, Zexuan Zhu, Zhong Ming, Zhenkun Wen, Nan Lu. Pages: 3643-3660
    8. New crossover operators using dominance and co-dominance principles for faster convergence of genetic algorithms. Author(s): G. Pavai, T. V. Geetha. Pages: 3661-3686
    9. First hitting time of uncertain random renewal reward process and its application in insurance risk process. Author(s): Kai Yao. Pages: 3687-3696
    10. Soft-clustering-based local multiple kernel learning algorithm for classification. Author(s): Qingchao Wang, Guangyuan Fu, Hongqiao Wang, Linlin Li, Shuai Huang. Pages: 3697-3706
    11. How much and where to use manual guidance in the computational detection of contours for histopathological images? Author(s): Catalin Stoean, Ruxandra Stoean, Adrian Sandita, Cristian Mesina. Pages: 3707-3722
    12. A predictive strategy based on special points for evolutionary dynamic multi-objective optimization. Author(s): Qingya Li, Juan Zou, Shengxiang Yang, Jinhua Zheng, Gan Ruan. Pages: 3723-3739
    13. Knowledge aggregation in decision-making process with C-fuzzy random forest using OWA operators. Author(s): Łukasz Gadomer, Zenon A. Sosnowski. Pages: 3741-3755
    14. 3D flow simulation of straight groynes using hybrid DE-based artificial intelligence methods. Author(s): Akbar Safarzadeh, Amir Hossein Zaji, Hossein Bonakdari. Pages: 3757-3777
    15. Public audit for operation behavior logs with error locating in cloud storage. Author(s): Hui Tian, Zhaoyi Chen, Chin-Chen Chang, Yongfeng Huang, Tian Wang. Pages: 3779-3792
    16. Very large-scale data classification based on K-means clustering and multi-kernel SVM. Author(s): Tinglong Tang, Shengyong Chen, Meng Zhao, Wei Huang, Jake Luo. Pages: 3793-3801
    17. On fuzzy type-1 and type-2 stochastic ordinary and partial differential equations and numerical solution. Author(s): Abhirup Bandyopadhyay, Samarjit Kar. Pages: 3803-3821
    18. Multi-attribute group decision making based on power generalized Heronian mean operator under hesitant fuzzy linguistic environment. Author(s): Dawei Ju, Yanbing Ju, Aihua Wang. Pages: 3823-3842
    19. A novel fuzzy vault scheme based on fingerprint and finger vein feature fusion. Author(s): Lin You, Ting Wang. Pages: 3843-3851
    20. Multi-attribute decision making based on prioritized operators under probabilistic hesitant fuzzy environments. Author(s): Jian Li, Zhong-xing Wang. Pages: 3853-3868
    21. Some intuitionistic uncertain linguistic Bonferroni mean operators and their application to group decision making. Author(s): Peide Liu, Xiaohong Zhang. Pages: 3869-3886
    22. An approach for parameterized shadowed type-2 fuzzy membership functions applied in control applications. Author(s): Patricia Melin, Emanuel Ontiveros-Robles, Claudia I. Gonzalez. Pages: 3887-3901
    Read more »
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 30, Issue 5, May 2019
    1. Deep Neural Network Initialization With Decision Trees. Author(s): Kelli D. Humbird; J. Luc Peterson; Ryan G. Mcclarren. Pages: 1286-1295
    2. Neural Learning Control of Strict-Feedback Systems Using Disturbance Observer. Author(s): Bin Xu; Yingxin Shou; Jun Luo; Huayan Pu; Zhongke Shi. Pages: 1296-1307
    3. Off-Policy Interleaved Q-Learning: Optimal Control for Affine Nonlinear Discrete-Time Systems. Author(s): Jinna Li; Tianyou Chai; Frank L. Lewis; Zhengtao Ding; Yi Jiang. Pages: 1308-1320
    4. Feature Analysis of Marginalized Stacked Denoising Autoenconder for Unsupervised Domain Adaptation. Author(s): Pengfei Wei; Yiping Ke; Chi Keong Goh. Pages: 1321-1334
    5. On the Representational Power of Restricted Boltzmann Machines for Symmetric Functions and Boolean Functions. Author(s): Linyan Gu; Jianfeng Huang; Lihua Yang. Pages: 1335-1347
    6. Pool-Based Sequential Active Learning for Regression. Author(s): Dongrui Wu. Pages: 1348-1359
    7. Stochastic Conjugate Gradient Algorithm With Variance Reduction. Author(s): Xiao-Bo Jin; Xu-Yao Zhang; Kaizhu Huang; Guang-Gang Geng. Pages: 1360-1369
    8. Quantized Minimum Error Entropy Criterion. Author(s): Badong Chen; Lei Xing; Nanning Zheng; José C. Príncipe. Pages: 1370-1380
    9. Heterogeneous Domain Adaptation Through Progressive Alignment. Author(s): Jingjing Li; Ke Lu; Zi Huang; Lei Zhu; Heng Tao Shen. Pages: 1381-1391
    10. Generalization and Expressivity for Deep Nets. Author(s): Shao-Bo Lin. Pages: 1392-1406
    11. Temporal Attention-Augmented Bilinear Network for Financial Time-Series Data Analysis. Author(s): Dat Thanh Tran; Alexandros Iosifidis; Juho Kanniainen; Moncef Gabbouj. Pages: 1407-1418
    12. Face Sketch Synthesis by Multidomain Adversarial Learning. Author(s): Shengchuan Zhang; Rongrong Ji; Jie Hu; Xiaoqiang Lu; Xuelong Li. Pages: 1419-1428
    13. Deep Semantic-Preserving Ordinal Hashing for Cross-Modal Similarity Search. Author(s): Lu Jin; Kai Li; Zechao Li; Fu Xiao; Guo-Jun Qi; Jinhui Tang. Pages: 1429-1440
    14. Bag-Level Aggregation for Multiple-Instance Active Learning in Instance Classification Problems. Author(s): Marc-André Carbonneau; Eric Granger; Ghyslain Gagnon. Pages: 1441-1451
    15. Nonuniformly Sampled Data Processing Using LSTM Networks. Author(s): Safa Onur Sahin; Suleyman Serdar Kozat. Pages: 1452-1461
    16. Distributed Adaptive Tracking Synchronization for Coupled Reaction–Diffusion Neural Network. Author(s): Hao Zhang; Nikhil R. Pal; Yin Sheng; Zhigang Zeng. Pages: 1462-1475
    17. Novel Finite-Time Synchronization Criteria for Inertial Neural Networks With Time Delays via Integral Inequality Method. Author(s): Zhengqiu Zhang; Jinde Cao. Pages: 1476-1485
    18. Early Expression Detection via Online Multi-Instance Learning With Nonlinear Extension. Author(s): Liping Xie; Dacheng Tao; Haikun Wei. Pages: 1486-1496
    19. LTNN: A Layerwise Tensorized Compression of Multilayer Neural Network. Author(s): Hantao Huang; Hao Yu. Pages: 1497-1511
    20. Approximate Optimal Distributed Control of Nonlinear Interconnected Systems Using Event-Triggered Nonzero-Sum Games. Author(s): Vignesh Narayanan; Avimanyu Sahoo; Sarangapani Jagannathan; Koshy George. Pages: 1512-1522
    21. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem. Author(s): Syed Ali Asad Rizvi; Zongli Lin. Pages: 1523-1536
    22. Multistability of Delayed Hybrid Impulsive Neural Networks With Application to Associative Memories. Author(s): Bin Hu; Zhi-Hong Guan; Guanrong Chen; Frank L. Lewis. Pages: 1537-1551
    23. STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation. Author(s): Dong Nie; Li Wang; Yaozong Gao; Jun Lian; Dinggang Shen. Pages: 1552-1564
    24. A Deep Learning Approach to Competing Risks Representation in Peer-to-Peer Lending. Author(s): Fei Tan; Xiurui Hou; Jie Zhang; Zhi Wei; Zhenyu Yan. Pages: 1565-1574
    25. Dynamical Behavior of Nonautonomous Stochastic Reaction–Diffusion Neural-Network Models. Author(s): Tengda Wei; Ping Lin; Quanxin Zhu; Linshan Wang; Yangfan Wang. Pages: 1575-1580
    26. Hierarchical Feature Selection for Random Projection. Author(s): Qi Wang; Jia Wan; Feiping Nie; Bo Liu; Chenggang Yan; Xuelong Li. Pages: 1581-1586
    27. Generalized Uncorrelated Regression with Adaptive Graph for Unsupervised Feature Selection. Author(s): Xuelong Li; Han Zhang; Rui Zhang; Yun Liu; Feiping Nie. Pages: 1587-1595
    28. Output-Feedback Adaptive Neural Controller for Uncertain Pure-Feedback Nonlinear Systems Using a High-Order Sliding Mode Observer. Author(s): Jang-Hyun Park; Seong-Hwan Kim; Tae-Sik Park. Pages: 1596-1601
    29. Multiobjective Support Vector Machines: Handling Class Imbalance With Pareto Optimality. Author(s): Shounak Datta; Swagatam Das. Pages: 1602-1608
    Read more »
  • Soft Computing, Volume 23, Issue 10, May 2019
    1. α-filters and prime α-filter spaces in residuated lattices. Author(s): Yan Yan Dong, Xiao Long Xin. Pages: 3207-3216
    2. The characterizations of upper approximation operators based on coverings. Author(s): Pei Wang, Qingguo Li. Pages: 3217-3228
    3. Transformations between the center of gravity and the possibilistic mean for triangular and trapezoidal fuzzy numbers. Author(s): Pasi Luukka, Jan Stoklasa, Mikael Collan. Pages: 3229-3235
    4. The comparative study of covering rough sets and multi-granulation rough sets. Author(s): Qingzhao Kong, Weihua Xu. Pages: 3237-3251
    5. Regular and strongly regular relations on soft hyperrings. Author(s): S. Ostadhadi-Dehkordi, K. P. Shum. Pages: 3253-3260
    6. The lattice of subspaces of a vector space over a finite field. Author(s): Ivan Chajda, Helmut Länger. Pages: 3261-3267
    7. A novel artificial bee colony algorithm for inverse kinematics calculation of 7-DOF serial manipulators. Author(s): Li Zhang, Nanfeng Xiao. Pages: 3269-3277
    8. Uncertain multi-objective multi-item fixed charge solid transportation problem with budget constraint. Author(s): Saibal Majumder, Pradip Kundu, Samarjit Kar, Tandra Pal. Pages: 3279-3301
    9. Evolutionary multiobjective optimization with clustering-based self-adaptive mating restriction strategy. Author(s): Xin Li, Shenmin Song, Hu Zhang. Pages: 3303-3325
    10. A group decision process based on expert analysis and criteria coalition to measure municipalities’ financial distress. Author(s): Manuel A. Fernández, Elías Bendodo, José R. Sánchez, Francisco E. Cabrera. Pages: 3327-3345
    11. Semantic distance between vague concepts in a framework of modeling with words. Author(s): Weifeng Zhang, Hua Hu, Haiyang Hu, Jinglong Fang. Pages: 3347-3364
    12. Conception and implementation of a data-driven prognostics algorithm for safety–critical systems. Author(s): Hatem M. Elattar, Hamdy K. Elminir, A. M. Riad. Pages: 3365-3382
    13. Fuzzy reliability estimation using the new operations of transmission average on Rational-linear patchy fuzzy numbers. Author(s): F. Abbasi, T. Allahviranloo. Pages: 3383-3396
    14. iMOPSE: a library for bicriteria optimization in Multi-Skill Resource-Constrained Project Scheduling Problem. Author(s): Paweł B. Myszkowski, Maciej Laszczyk, Ivan Nikulin, Marek Skowroński. Pages: 3397-3410
    15. Improved secure fuzzy auditing protocol for cloud data storage. Author(s): Jindan Zhang, Baocang Wang, Debiao He, Xu An Wang. Pages: 3411-3422
    16. Adaptive differential evolution with multi-population-based mutation operators for constrained optimization. Author(s): Bin Xu, Lili Tao, Xu Chen, Wushan Cheng. Pages: 3423-3447
    17. Bio-inspired heuristics for layer thickness optimization in multilayer piezoelectric transducer for broadband structures. Author(s): Aneela Zameer, Mohsin Majeed, Sikander M. Mirza. Pages: 3449-3463
    18. A generic heuristic for multi-project scheduling problems with global and local resource constraints (RCMPSP). Author(s): Félix Villafáñez, David Poza, Adolfo López-Paredes, Javier Pajares. Pages: 3465-3479
    19. Supplier’s strategy: align with the dominant entrant retailer or the vulnerable incumbent retailer? Author(s): Ye Wang, Wansheng Tang, Ruiqing Zhao. Pages: 3481-3500
    20. Finding suitable membership functions for fuzzy temporal mining problems using fuzzy temporal bees method. Author(s): Mojtaba Asadollahpour Chamazi, Homayun Motameni. Pages: 3501-3518
    21. A fuzzy genetic approach for velocity estimation in wind-tunnel. Author(s): Hamed Vahdat-Nejad, Mojtaba Dehghan-Manshadi, Mahdi Kherad. Pages: 3519-3527
    22. Adaptive neuro-fuzzy inference system for evaluating dysarthric automatic speech recognition (ASR) systems: a case study on MVML-based ASR. Author(s): Adeleh Asemi, Siti Salwah Binti Salim, Seyed Reza Shahamiri, Asefeh Asemi. Pages: 3529-3544
    23. Sliding-window metaheuristic optimization-based forecast system for foreign exchange analysis. Author(s): Jui-Sheng Chou, Thi Thu Ha Truong. Pages: 3545-3561
    Read more »
  • Evolving Systems, Volume 10, Issue 1, March 2019
    Special Issue: Foundations and Applications of Evolving Intelligence at the 21 Century (Volume 2)
    1. Editorial. Author(s): Lazaros Iliadis, Ilias Maglogiannis. Pages: 1-2
    2. Heuristic and metaheuristic solutions of pickup and delivery problem for self-driving taxi routing. Author(s): Viacheslav Shalamov, Andrey Filchenkov, Anatoly Shalyto. Pages: 3-11
    3. Applying nature-inspired optimization algorithms for selecting important timestamps to reduce time series dimensionality. Author(s): Muhammad Marwan Muhammad Fuad. Pages: 13-28
    4. ILIOU machine learning preprocessing method for depression type prediction. Author(s): Theodoros Iliou, Georgia Konstantopoulou, Mandani Ntekouli. Pages: 29-39
    5. A machine learning approach for asperities’ location identification. Author(s): Konstantinos Arvanitakis, Ioannis Karydis, Katia L. Kermanidis. Pages: 41-50
    6. Hybrid local boosting utilizing unlabeled data in classification tasks. Author(s): Christos K. Aridas, Sotiris B. Kotsiantis, Michael N. Vrahatis. Pages: 51-61
    7. Multi-label active learning: key issues and a novel query strategy. Author(s): Everton Alvares Cherman, Yannis Papanikolaou, Grigorios Tsoumakas. Pages: 63-78
    Read more »
  • IEEE Transactions on Fuzzy Systems - Papers Published April 2019
    1. General 3-D Type-II Fuzzy Logic Systems in the Polar Frame: Concept and Practice. Author(s): H Zakeri, F M Nejad, A Fahimifar
    2. Survey of Fuzzy Min–Max Neural Network for Pattern Classification Variants and Applications. Author(s): O N Al Sayaydeh, M F Mohammed, C P Lim
    3. A Novel Finite-Time Adaptive Fuzzy Tracking Control Scheme for Nonstrict Feedback Systems. Author(s): Y Liu, X Liu, Y Jing, Z Zhang
    4. Dissipativity-Preserving Model Reduction for Takagi–Sugeno Fuzzy Systems. Author(s): Y Ren, Q Li, D-W Ding, X Xie
    5. Quality Control Process Based on Fuzzy Random Variables. Author(s): G Hesamian, M G Akbari, R Yaghoobpoor
    6. Multiobjective $H_{2}/H_{\infty}$ Control Design of the Nonlinear Mean-Field Stochastic Jump-Diffusion Systems via Fuzzy Approach. Author(s): C-F Wu, B-S Chen, W Zhang
    7. A Metahierarchical Rule Decision System to Design Robust Fuzzy Classifiers Based on Data Complexity. Author(s): J Cozar, A Fernandez, F Herrera, J A Gamez
    8. A Method of Measuring Uncertainty for Z-Number. Author(s): B Kang, Y Deng, K Hewage, R Sadiq
    9. A Weighted Least Squares Fuzzy Regression for Crisp Input-Fuzzy Output Data. Author(s): J Chachi
    10. An SOS-Based Sliding Mode Controller Design for a Class of Polynomial Fuzzy Systems. Author(s): H Zhang, Y Wang, J Zhang, Y Wang
    11. Fuzzy Pushdown Termination Games. Author(s): H Pan, F Song, Y Cao, J Qian
    12. Granular Fuzzy Modeling for Multidimensional Numeric Data: A Layered Approach Based on Hyperbox. Author(s): W Lu, D Shan, W Pedrycz, L Zhang, J Yang, X Liu
    13. Decentralized Dissipative Filtering for Delayed Nonlinear Interconnected Systems Based on T–S Fuzzy Model. Author(s): Y Liu, F Fang, J H Park
    14. Delayed Fuzzy Control of a 1-D Reaction-Diffusion Equation Using Sampled-in-Space Sensing and Actuation. Author(s): W Kang, D-W Ding
    15. Supervised Learning to Aggregate Data With the Sugeno Integral. Author(s): M Gagolewski, S James, G Beliakov
    16. Interval Observer Design for Discrete-Time Uncertain Takagi–Sugeno Fuzzy Systems. Author(s): J Li, Z Wang, Y Shen, Y Wang
    17. Characterization of Quadratic Aggregation Functions. Author(s): S Tasena
    Read more »
  • IEEE Transactions on Fuzzy Systems - Papers Published March 2019
    1. Intelligent Backstepping Control Using Recurrent Feature Selection Fuzzy Neural Network for Synchronous Reluctance Motor Position Servo Drive System. Author(s): F-J Lin, S-G Chen, C-W Hsu
    2. Generalized Dempster–Shafer Structures. Author(s): R R Yager
    3. Fuzzy Peak-to-Peak Filtering for Networked Nonlinear Systems With Multipath Data Packet Dropouts. Author(s): X-H Chang, Q Liu, Y-M Wang, J Xiong
    4. Distributed Fuzzy Adaptive Backstepping Optimal Control for Nonlinear Multimissile Guidance Systems With Input Saturation. Author(s): J Sun, C Liu
    5. Dynamic Output Feedback Predictive Control With One Free Control Move for the Takagi–Sugeno Model With Bounded Disturbance. Author(s): J Hu, B Ding
    6. Another View on Generalized Intuitionistic Fuzzy Soft Sets and Related Multiattribute Decision Making Methods. Author(s): F Feng, H Fujita, M I Ali, R R Yager, X Liu
    7. A Novel Three-Dimensional Fuzzy Modeling Method for Nonlinear Distributed Parameter Systems. Author(s): X-X Zhang, L-R Zhao, H-X Li, S-W Ma
    8. iPatch: A Many-Objective Type-2 Fuzzy Logic System for Field Workforce Optimization. Author(s): A Starkey, H Hagras, S Shakya, G Owusu
    9. Similarity Measures for Closed General Type-2 Fuzzy Sets: Overview, Comparisons, and a Geometric Approach. Author(s): D Wu, J M Mendel
    10. Intuitionistic Fuzzy Rough Set-Based Granular Structures and Attribute Subset Selection. Author(s): A Tan, W-Z Wu, Y Qian, J Liang, J Chen, J Li
    11. Novel Stabilization Criteria for T–S Fuzzy Systems With Affine Matched Membership Functions. Author(s): S Lee
    12. Wavelet-TSK-Type Fuzzy Cerebellar Model Neural Network for Uncertain Nonlinear Systems. Author(s): J Zhao, C-M Lin
    13. Sparse Representation-Based Intuitionistic Fuzzy Clustering Approach to Find the Group Intra-Relations and Group Leaders for Large-Scale Decision Making. Author(s): R-X Ding, X Wang, K Shang, B Liu, F Herrera
    14. Finite-Time Convergence Adaptive Fuzzy Control for Dual-Arm Robot With Unknown Kinematics and Dynamics. Author(s): C Yang, Y Jiang, J Na, Z Li, L Cheng, C-Y Su
    15. The Properties of Fuzzy Tensor and Its Application in Multiple Attribute Group Decision Making. Author(s): S Deng, J Liu, X Wang
    16. The Non-Smoothness Problem in Disturbance Observer Design: A Set-Invariance-Based Adaptive Fuzzy Control Method. Author(s): M Lv, S Baldi, Z Liu
    17. Monotonic Smooth Takagi–Sugeno Fuzzy Systems With Fuzzy Sets With Compact Support. Author(s): P Husek
    18. Ordered Weighted Averaging Aggregation on Convex Poset. Author(s): L Jin, R Mesiar, R Yager
    Read more »
  • IEEE Transactions on Fuzzy Systems - Papers Published February 2019
    1. Online Performance-Based Adaptive Fuzzy Dynamic Surface Control for Nonlinear Uncertain Systems Under Input Saturation. Author(s): P Wang, X Zhang, J Zhu
    2. A Consensus Model for Large-Scale Linguistic Group Decision Making With a Feedback Recommendation Based on Clustered Personalized Individual Semantics and Opposing Consensus Groups. Author(s): C-C Li, Y Dong, F Herrera
    3. A Goal-Programming-Based Heuristic Approach to Deriving Fuzzy Weights in Analytic Form from Triangular Fuzzy Preference Relations. Author(s): Z-J Wang
    4. Finite-Time Stabilization for Discontinuous Interconnected Delayed Systems via Interval Type-2 T–S Fuzzy Model Approach. Author(s): N Rong, Z Wang, H Zhang
    5. Efficient Robust Fuzzy Model Predictive Control of Discrete Nonlinear Time-Delay Systems via Razumikhin Approach. Author(s): L Teng, Y Wang, W Cai, H Li
    6. Omitting Types Theorem for Fuzzy Logics. Author(s): P Cintula, D Diaconescu
    7. A Novel Fuzzy Observer-Based Steering Control Approach for Path Tracking in Autonomous Vehicles. Author(s): C Zhang, J Hu, J Qiu, W Yang, H Sun, Q Chen
    8. Fuzzy Transfer Learning Using an Infinite Gaussian Mixture Model and Active Learning. Author(s): H Zuo, J Lu, G Zhang, F Liu
    9. Design of Hidden-Property-Based Variable Universe Fuzzy Control for Movement Disorders and Its Efficient Reconfigurable Implementation. Author(s): S Yang, B Deng, J Wang, C Liu, H Li, Q Lin, C Fietkie, K A Loparo
    10. Consensus Building With Individual Consistency Control in Group Decision Making. Author(s): C-C Li, R M Rodrigues, L Martinez, Y Dong, F Herrera
    11. General Type-2 Radial Basis Function Neural Network: A Data-Driven Fuzzy Model. Author(s): A Rubio-Solis, P Melin, U Martinez-Hernandez, G Panoutsos
    12. Fuzzy Rule-Based Domain Adaptation in Homogeneous and Heterogeneous Spaces. Author(s): H Zuo, J Lu, G Zhang, W Pedrycz
    13. From Fuzzy Sets to Interval-Valued and Atanassov Intuitionistic Fuzzy Sets: A Unified View of Different Axiomatic Measures. Author(s): I Couso, H Bustince
    14. Consistency Measures of Linguistic Preference Relations With Hedges. Author(s): H Wang, Z Xu, X-J Zeng, H Liao
    15. Noise Robust Multiobjective Evolutionary Clustering Image Segmentation Motivated by the Intuitionistic Fuzzy Information. Author(s): F Zhao, J Fan, H Liu, R Lan, C W Chen
    16. Enhanced Predictor-Based Control Synthesis for Discrete-Time TS Fuzzy Descriptor Systems With Time-Varying Input Delays. Author(s): A Gonzalez, T-M Guerra
    Read more »
  • Soft Computing, Volume 23, Issue 9, May 2019
    1. Special issue on: Optimization methods for decision making: advances and applications. Author(s): Patrizia Beraldi, Maurizio Boccia, Claudio Sterle. Pages: 2849-2852
    2. A configurational approach based on geographic information systems to support decision-making process in real estate domain. Author(s): Valerio Di Pinto, Antonio M. Rinaldi. Pages: 2853-2862
    3. A “pay-how-you-drive” car insurance approach through cluster analysis. Author(s): Maria Francesca Carfora, Fabio Martinelli, Francesco Mercaldo. Pages: 2863-2875
    4. A decision support system to improve performances of airport check-in services. Author(s): Giuseppe Bruno, Antonio Diglio, Andrea Genovese, Carmela Piccolo. Pages: 2877-2886
    5. Sparse analytic hierarchy process: an experimental analysis. Author(s): Gabriele Oliva, Roberto Setola, Antonio Scala, Paolo Dell’Olmo. Pages: 2887-2898
    6. Sustainability-based review of urban freight models. Author(s): Maria Elena Nenni, Antonio Sforza, Claudio Sterle. Pages: 2899-2909
    7. MIP-based heuristic approaches for the capacitated edge activation problem: the effect of non-compactness. Author(s): Sara Mattia. Pages: 2911-2921
    8. Scheduling ships movements within a canal harbor. Author(s): Paola Pellegrini, Giacomo di Tollo, Raffaele Pesenti. Pages: 2923-2936
    9. Checking weak optimality and strong boundedness in interval linear programming. Author(s): Elif Garajová, Milan Hladík. Pages: 2937-2945
    10. The Minimum Routing Cost Tree Problem. Author(s): Adriano Masone, Maria Elena Nenni, Antonio Sforza, Claudio Sterle. Pages: 2947-2957
    11. An effective heuristic for large-scale fault-tolerant k-median problem. Author(s): Igor Vasilyev, Anton V. Ushakov, Nadezhda Maltugueva, Antonio Sforza. Pages: 2959-2967
    12. Global optimization in machine learning: the design of a predictive analytics application. Author(s): Antonio Candelieri, Francesco Archetti. Pages: 2969-2977
    13. The risk-averse traveling repairman problem with profits. Author(s): P. Beraldi, M. E. Bruni, D. Laganà, R. Musmanno. Pages: 2979-2993
    14. Multi-objective stable matching and distributional constraints. Author(s): Mangesh Gharote, Nitin Phuke, Rahul Patil, Sachin Lodha. Pages: 2995-3011
    15. Computational study of separation algorithms for clique inequalities. Author(s): Francesca Marzi, Fabrizio Rossi, Stefano Smriglio. Pages: 3013-3027
    16. Implementation of an intelligent hybrid simulation systems for WMNs based on particle swarm optimization and simulated annealing: performance evaluation for different replacement methods. Author(s): Shinji Sakamoto, Kosuke Ozera, Admir Barolli, Makoto Ikeda. Pages: 3029-3035
    17. Weighted network graph for interpersonal communication with temporal regularity. Author(s): Ryoichi Shinkuma, Yuki Sugimoto, Yuichi Inagaki. Pages: 3037-3051
    18. Sparse multi-criteria optimization classifier for credit risk evaluation. Author(s): Zhiwang Zhang, Jing He, Guangxia Gao, Yingjie Tian. Pages: 3053-3066
    19. Enhanced quantum-based neural network learning and its application to signature verification. Author(s): Om Prakash Patel, Aruna Tiwari, Rishabh Chaudhary… Pages: 3067-3080
    20. Patch-based fuzzy clustering for image segmentation. Author(s): Xiaofeng Zhang, Qiang Guo, Yujuan Sun, Hui Liu, Gang Wang, Qingtang Su… Pages: 3081-3093
    21. Bee swarm optimization for solving the MAXSAT problem using prior knowledge. Author(s): Youcef Djenouri, Zineb Habbas, Djamel Djenouri, Philippe Fournier-Viger. Pages: 3095-3112
    22. Self-adaptive parameters in differential evolution based on fitness performance with a perturbation strategy. Author(s): Chen-Yang Cheng, Shu-Fen Li, Yu-Cheng Lin. Pages: 3113-3128
    23. A vegetable category recognition system: a comparison study for caffe and Chainer DNN frameworks. Author(s): Makoto Ikeda, Tetsuya Oda, Leonard Barolli. Pages: 3129-3136
    24. A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms. Author(s): Tinkle Chugh, Karthik Sindhya, Jussi Hakanen, Kaisa Miettinen. Pages: 3137-3166
    25. Novel soft fuzzy rough rings (ideals) of rings and their application in decision making. Author(s): Kuan Yun Zhu. Pages: 3167-3189
    26. A distributed hybrid index for processing continuous range queries over moving objects. Author(s): Ziqiang Yu, Fatos Xhafa, Yuehui Chen, Kun Ma. Pages: 3191-3205
    Read more »
  • Soft Computing, Volume 23, Issue 8, April 2019
    1. Soft computing approaches for next-generation sustainable systems (SCNGS). Author(s): Pasumpon Pandian, Xavier Fernando, Tomonobu Senjyu. Pages: 2483
    2. Saliency detection in stereoscopic images using adaptive Gaussian Kernel and Gabor filter. Author(s): Y. Rakesh, K. Sri Rama Krishna. Pages: 2485-2498
    3. Destination-aware context-based routing protocol with hybrid soft computing cluster algorithm for VANET. Author(s): K. Aravindhan, C. Suresh Gnana Dhas. Pages: 2499-2507
    4. Optimal resource allocation algorithm for OFDMA-based WiMAX network using stochastic fish swarm optimization. Author(s): P. S. Kumaresh, A. V. Ramprasad. Pages: 2509-2523
    5. Gender classification from face images by mixing the classifier outcome of prime, distinct descriptors. Author(s): A. Geetha, M. Sundaram, B. Vijayakumari. Pages: 2525-2535
    6. Soft computing and trust-based self-organized hierarchical energy balance routing protocol (TSHEB) in wireless sensor networks. Author(s): G. Asha, R. Santhosh. Pages: 2537-2543
    7. Global biotic cross-pollination algorithm enhanced with evolutionary strategies for color image segmentation. Author(s): S. N. Deepa, D. Rasi. Pages: 2545-2559
    8. Attribute-based hierarchical file encryption for efficient retrieval of files by DV index tree from cloud using crossover genetic algorithm. Author(s): R. Naresh, M. Sayeekumar, G. M. Karthick, P. Supraja. Pages: 2561-2574
    9. A fuzzy entropy technique for dimensionality reduction in recommender systems using deep learning. Author(s): B. Saravanan, V. Mohanraj, J. Senthilkumar. Pages: 2575-2583
    10. Medical big data analysis: preserving security and privacy with hybrid cloud technology. Author(s): E. Shanmugapriya, R. Kavitha. Pages: 2585-2596
    11. A hybrid soft computing: SGP clustering methodology for enhancing network lifetime in wireless multimedia sensor networks. Author(s): P. X. Britto, S. Selvan. Pages: 2597-2609
    12. Digital acquisition and character extraction from stone inscription images using modified fuzzy entropy-based adaptive thresholding. Author(s): K. Durga Devi, P. Uma Maheswari. Pages: 2611-2626
    13. A model predictive controller for improvement in power quality from a hybrid renewable energy system. Author(s): R. Muthukumar, P. Balamurugan. Pages: 2627-2635
    14. Hybrid data fusion model for restricted information using Dempster–Shafer and adaptive neuro-fuzzy inference (DSANFI) system. Author(s): E. Brumancia, S. Justin Samuel, L. Mary Gladence, Karunya Rathan. Pages: 2637-2644
    15. Noninvasive method of epileptic detection using DWT and generalized regression neural network. Author(s): S. Vijay Anand, R. Shantha Selvakumari. Pages: 2645-2653
    16. An adaptive neuro-fuzzy logic based jamming detection system in WSN. Author(s): K. P. Vijayakumar, K. Pradeep Mohan Kumar, K. Kottilingam, T. Karthick. Pages: 2655-2667
    17. Quality-based pattern C2 code score-level fusion in multimodal biometric authentication system using pattern net. Author(s): S. Ilankumaran, C. Deisy, R. Pandian. Pages: 2669-2682
    18. A novel Nth-order IIR filter-based graphic equalizer optimized through genetic algorithm for computing filter order. Author(s): Shajin Prince, K. R. Shankar Kumar. Pages: 2683-2691
    19. A novel vessel detection and classification algorithm using a deep learning neural network model with morphological processing (M-DLNN). Author(s): S. Iwin Thanakumar Joseph, J. Sasikala, D. Sujitha Juliet. Pages: 2693-2700
    20. A novel PTS: grey wolf optimizer-based PAPR reduction technique in OFDM scheme for high-speed wireless applications. Author(s): R. S. Suriavel Rao, P. Malathi. Pages: 2701-2712
    21. Constraints handling in combinatorial interaction testing using multi-objective crow search and fruitfly optimization. Author(s): P. Ramgouda, V. Chandraprakash. Pages: 2713-2726
    22. Hybrid intelligence system using fuzzy inference in cluster architecture for secured group communication. Author(s): M. Sayeekumar, G. M. Karthik, S. Puhazholi. Pages: 2727-2734
    23. Enhanced secure communication over inter-domain routing in heterogeneous wireless networks based on analysis of BGP anomalies using soft computing techniques. Author(s): N. Elamathi, S. Jayashri, R. Pitchai. Pages: 2735-2746
    24. Interblend fusing of genetic algorithm-based attribute selection for clustering heterogeneous data set. Author(s): J. Dhayanithi, J. Akilandeswari. Pages: 2747-2759
    25. FFcPsA: a fast finite conventional state using prefix pattern gene search algorithm for large sequence identification. Author(s): A. Surendar, M. Arun, A. Mahabub Basha. Pages: 2761-2771
    26. Performance analysis of soft computing techniques for the automatic classification of fruits dataset. Author(s): L. Rajasekar, D. Sharmila. Pages: 2773-2788
    27. A hybrid genetic artificial neural network (G-ANN) algorithm for optimization of energy component in a wireless mesh network toward green computing. Author(s): B. Prakash, S. Jayashri, T. S. Karthik. Pages: 2789-2798
    28. IoT complex communication architecture for smart cities based on soft computing models. Author(s): Daming Li, Zhiming Cai, Lianbing Deng, Xiang Yao. Pages: 2799-2812
    29. Video analytics-based intelligent surveillance system for smart buildings. Author(s): K. S. Gautam, Senthil Kumar Thangavel. Pages: 2813-2837
    30. An approach for automatic detection of fetal gestational age at the third trimester using kidney length and biparietal diameter. Author(s): S. Meenakshi, M. Suganthi, P. Suresh Kumar. Pages: 2839-2848
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 30, Issue 4, April 2019.
    1. Denoising Adversarial Autoencoders. Author(s): Antonia Creswell; Anil Anthony Bharath. Pages: 968-984
    2. A Novel Cluster Validity Index Based on Local Cores. Author(s): Dongdong Cheng; Qingsheng Zhu; Jinlong Huang; Quanwang Wu; Lijun Yang. Pages: 985-999
    3. Exponential Synchronization for Delayed Dynamical Networks via Intermittent Control: Dealing With Actuator Saturations. Author(s): Yonggang Chen; Zidong Wang; Bo Shen; Hongli Dong. Pages: 1000-1012
    4. Semantically Modeling of Object and Context for Categorization. Author(s): Chunjie Zhang; Jian Cheng; Qi Tian. Pages: 1013-1024
    5. Exponential Synchronizationlike Criterion for State-Dependent Impulsive Dynamical Networks. Author(s): Liangliang Li; Xin Wang; Chuandong Li; Yuming Feng. Pages: 1025-1033
    6. Variational Bayesian Learning of Generalized Dirichlet-Based Hidden Markov Models Applied to Unusual Events Detection. Author(s): Elise Epaillard; Nizar Bouguila. Pages: 1034-1047
    7. Self-Organizing Neuroevolution for Solving Carpool Service Problem With Dynamic Capacity to Alternate Matches. Author(s): Ming-Kai Jiau; Shih-Chia Huang. Pages: 1048-1060
    8. Online Robust Low-Rank Tensor Modeling for Streaming Data Analysis. Author(s): Ping Li; Jiashi Feng; Xiaojie Jin; Luming Zhang; Xianghua Xu; Shuicheng Yan. Pages: 1061-1075
    9. Adaptive Neural State-Feedback Tracking Control of Stochastic Nonlinear Switched Systems: An Average Dwell-Time Method. Author(s): Ben Niu; Ding Wang; Naif D. Alotaibi; Fuad E. Alsaadi. Pages: 1076-1087
    10. Active Learning From Imbalanced Data: A Solution of Online Weighted Extreme Learning Machine. Author(s): Hualong Yu; Xibei Yang; Shang Zheng; Changyin Sun. Pages: 1088-1103
    11. Perception Coordination Network: A Neuro Framework for Multimodal Concept Acquisition and Binding. Author(s): You-Lu Xing; Xiao-Feng Shi; Fu-Rao Shen; Jin-Xi Zhao; Jing-Xin Pan; Ah-Hwee Tan. Pages: 1104-1118
    12. Adaptive Learning Control for Nonlinear Systems With Randomly Varying Iteration Lengths. Author(s): Dong Shen; Jian-Xin Xu. Pages: 1119-1132
    13. Flexible Affinity Matrix Learning for Unsupervised and Semisupervised Classification. Author(s): Xiaozhao Fang; Na Han; Wai Keung Wong; Shaohua Teng; Jigang Wu; Shengli Xie; Xuelong Li. Pages: 1133-1149
    14. Fast Inference Predictive Coding: A Novel Model for Constructing Deep Neural Networks. Author(s): Zengjie Song; Jiangshe Zhang; Guang Shi; Junmin Liu. Pages: 1150-1165
    15. Nonlinear Dimensionality Reduction With Missing Data Using Parametric Multiple Imputations. Author(s): Cyril de Bodt; Dounia Mulders; Michel Verleysen; John Aldo Lee. Pages: 1166-1179
    16. Domain Adaption via Feature Selection on Explicit Feature Map. Author(s): Wan-Yu Deng; Amaury Lendasse; Yew-Soon Ong; Ivor Wai-Hung Tsang; Lin Chen; Qing-Hua Zheng. Pages: 1180-1190
    17. Universal Approximation Capability of Broad Learning System and Its Structural Variations. Author(s): C. L. Philip Chen; Zhulin Liu; Shuang Feng. Pages: 1191-1204
    18. Dualityfree Methods for Stochastic Composition Optimization. Author(s): Liu Liu; Ji Liu; Dacheng Tao. Pages: 1205-1217
    19. Event-Based Line Fitting and Segment Detection Using a Neuromorphic Visual Sensor. Author(s): David Reverter Valeiras; Xavier Clady; Sio-Hoi Ieng; Ryad Benosman. Pages: 1218-1230
    20. SEFRON: A New Spiking Neuron Model With Time-Varying Synaptic Efficacy Function for Pattern Classification. Author(s): Abeegithan Jeyasothy; Suresh Sundaram; Narasimhan Sundararajan. Pages: 1231-1240
    21. Bounded Neural Network Control for Target Tracking of Underactuated Autonomous Surface Vehicles in the Presence of Uncertain Target Dynamics. Author(s): Lu Liu; Dan Wang; Zhouhua Peng; C. L. Philip Chen; Tieshan Li. Pages: 1241-1249
    22. Category-Based Deep CCA for Fine-Grained Venue Discovery From Multimodal Data. Author(s): Yi Yu; Suhua Tang; Kiyoharu Aizawa; Akiko Aizawa. Pages: 1250-1258
    23. Markov Boundary-Based Outlier Mining. Author(s): Kui Yu; Huanhuan Chen. Pages: 1259-1264
    24. Spectral Embedded Adaptive Neighbors Clustering. Author(s): Qi Wang; Zequn Qin; Feiping Nie; Xuelong Li. Pages: 1265-1271
    25. Approximation of Ensemble Boundary Using Spectral Coefficients. Author(s): T. Windeatt; C. Zor; N. C. Camgoz. Pages: 1272-1277
    26. Developmental Resonance Network. Author(s): Gyeong-Moon Park; Jae-Woo Choi; Jong-Hwan Kim. Pages: 1278-1284
  • Soft Computing, Volume 23, Issue 7, April 2019
    1) Many-valued Logics for Reasoning: Essays in Honor of Lluís Godo on the Occasion of his 60th Birthday. Author(s): Didier Dubois, Francesc Esteva, Tommaso Flaminio, Carles Noguera. Pages: 2125-2127
    2) On linear varieties of MTL-algebras. Author(s): Stefano Aguzzoli, Matteo Bianchi. Pages: 2129-2146
    3) A distributed argumentation algorithm for mining consistent opinions in weighted Twitter discussions. Author(s): Teresa Alsinet, Josep Argelich, Ramón Béjar, Joel Cemeli. Pages: 2147-2166
    4) Paraconsistency and the need for infinite semantics. Author(s): Arnon Avron. Pages: 2167-2175
    5) Syntactic characterizations of classes of first-order structures in mathematical fuzzy logic. Author(s): Guillermo Badia, Vicent Costa, Pilar Dellunde, Carles Noguera. Pages: 2177-2186
    6) New complexity results for Łukasiewicz logic. Author(s): Miquel Bofill, Felip Manyà, Amanda Vidal, Mateu Villaret. Pages: 2187-2197
    7) Pseudomonadic BL-algebras: an algebraic approach to possibilistic BL-logic. Author(s): Manuela Busaniche, Penélope Cordero, Ricardo Oscar Rodriguez. Pages: 2199-2212
    8) Combining fragments of classical logic: When are interaction principles needed? Author(s): Carlos Caleiro, Sérgio Marcelino, João Marcos. Pages: 2213-2231
    9) Toward a general frame semantics for modal many-valued logics. Author(s): Petr Cintula, Paula Menchón, Carles Noguera. Pages: 2233-2241
    10) Swap structures semantics for Ivlev-like modal logics. Author(s): Marcelo E. Coniglio, Ana Claudia Golzio. Pages: 2243-2254
    11) Connecting fuzzy logic and argumentation frames via logical attack principles. Author(s): Esther Anna Corsi, Christian G. Fermüller. Pages: 2255-2270
    12) On an operation with regular elements. Author(s): Rodolfo C. Ertola-Biraben. Pages: 2271-2278
    13) Implicit definability of truth constants in Łukasiewicz logic. Author(s): Zuzana Haniková. Pages: 2279-2287
    14) Betting on continuous independent events. Author(s): Daniele Mundici. Pages: 2289-2295
    15) Compatibly involutive residuated lattices and the Nelson identity. Author(s): Matthew Spinks, Umberto Rivieccio, Thiago Nascimento. Pages: 2297-2320
    16) On network analysis using non-additive integrals: extending the game-theoretic network centrality. Author(s): Vicenç Torra, Yasuo Narukawa. Pages: 2321-2329
    17) Representations for logics and algebras related to revised drastic product t-norm. Author(s): Diego Valota. Pages: 2331-2342
    18) A fuzzy-based classification strategy (FBCS) based on brain–computer interface. Author(s): Ahmed I. Saleh, Sahar A. Shehata, Labeeb M. Labeeb. Pages: 2343-2367
    19) A soft-computing-based approach to artificial visual attention using human eye-fixation paradigm: toward a human-like skill in robot vision. Author(s): Kurosh Madani, Viachaslau Kachurka, Christophe Sabourin, Vladimir Golovko. Pages: 2369-2389
    20) An FPN-based classification method for speech intelligibility detection of children with speech impairments. Author(s): Fadhilah Rosdi, Siti Salwah Salim, Mumtaz Begum Mustafa. Pages: 2391-2408
    21) An effective improved differential evolution algorithm to solve constrained optimization problems. Author(s): Xiaobing Yu, Yiqun Lu, Xuming Wang, Xiang Luo, Mei Cai. Pages: 2409-2427
    22) Scalable scene understanding via saliency consensus. Author(s): Bharath Ramesh, Nicholas Lim Zhi Jian, Liang Chen, Cheng Xiang, Zhi Gao. Pages: 2429-2443
    23) A novel intelligent diagnosis method using optimal LS-SVM with improved PSO algorithm. Author(s): Wu Deng, Rui Yao, Huimin Zhao, Xinhua Yang, Guangyu Li. Pages: 2445-2462
    24) Comparison of a genetic programming approach with ANFIS for power amplifier behavioral modeling and FPGA implementation. Author(s): José Alejandro Galaviz-Aguilar, Patrick Roblin. Pages: 2463-2481
  • Complex & Intelligent Systems, Volume 5, Issue 1, March 2019
    1. Rare pattern mining: challenges and future perspectives. Author(s): Anindita Borah, Bhabesh Nath. Pages: 1-23
    2. LSHADE-SPA memetic framework for solving large-scale optimization problems. Author(s): Anas A. Hadi, Ali W. Mohamed, Kamal M. Jambi. Pages: 25-40
    3. Interval-valued Pythagorean fuzzy Einstein hybrid weighted averaging aggregation operator and their application to group decision making. Author(s): Khaista Rahman, Saleem Abdullah, Asad Ali, Fazli Amin. Pages: 41-52
    4. Evaluation of firms applying to Malcolm Baldrige National Quality Award: a modified fuzzy AHP method. Author(s): Serhat Aydın, Cengiz Kahraman. Pages: 53-63
    5. Intuitionistic trapezoidal fuzzy multi-numbers and its application to multi-criteria decision-making problems. Author(s): Vakkas Uluçay, Irfan Deli, Mehmet Şahin. Pages: 65-78
    6. Controlling disturbances of islanding in a gas power plant via fuzzy-based neural network approach with a focus on load-shedding system. Author(s): M. Moloudi, A. H. Mazinan. Pages: 79-89
  • Soft Computing, Volume 23, Issue 6, March 2019
    1. Editorial to image processing with soft computing techniques. Author(s): Irina Perfilieva, Javier Montero, Salvatore Sessa. Pages: 1777-1778
    2. Hyperspectral imaging using notions from type-2 fuzzy sets. Author(s): A. Lopez-Maestresalas, L. De Miguel, C. Lopez-Molina, S. Arazuri. Pages: 1779-1793
    3. Sensitivity analysis for image represented by fuzzy function. Author(s): Petr Hurtik, Nicolás Madrid, Martin Dyba. Pages: 1795-1807
    4. A new edge detection method based on global evaluation using fuzzy clustering. Author(s): Pablo A. Flores-Vidal, Pablo Olaso, Daniel Gómez, Carely Guada. Pages: 1809-1821
    5. An image segmentation technique using nonsubsampled contourlet transform and active contours. Author(s): Lingling Fang. Pages: 1823-1832
    6. Total variation with nonlocal FT-Laplacian for patch-based inpainting. Author(s): Irina Perfilieva, Pavel Vlašánek. Pages: 1833-1841
    7. Biometric recognition using finger and palm vein images. Author(s): S Bharathi, R Sudhakar. Pages: 1843-1855
    8. Design of meteorological pattern classification system based on FCM-based radial basis function neural networks using meteorological radar data. Author(s): Eun-Hu Kim, Jun-Hyun Ko, Sung-Kwun Oh, Kisung Seo. Pages: 1857-1872
    9. Zadeh max–min composition fuzzy rule for dominated pixel values in iris localization. Author(s): S. G. Gino Sophia, V. Ceronmani Sharmila. Pages: 1873-1889
    10. Fusion linear representation-based classification. Author(s): Zhonghua Liu, Guosen Xie, Lin Zhang, Jiexin Pu. Pages: 1891-1899
    11. A novel optimization algorithm for recommender system using modified fuzzy c-means clustering approach. Author(s): C. Selvi, E. Sivasankar. Pages: 1901-1916
    12. Energy-aware virtual machine allocation and selection in cloud data centers. Author(s): V. Dinesh Reddy, G. R. Gangadharan, G. Subrahmanya V. R. K. Rao. Pages: 1917-1932
    13. An extensive evaluation of search-based software testing: a review. Author(s): Manju Khari, Prabhat Kumar. Pages: 1933-1946
    14. A parallel hybrid optimization algorithm for some network design problems. Author(s): Ibrahima Diarrassouba, Mohamed Khalil Labidi, A. Ridha Mahjoub. Pages: 1947-1964
    15. Structure evolution-based design for low-pass IIR digital filters with the sharp transition band and the linear phase passband. Author(s): Lijia Chen, Mingguo Liu, Jing Wu, Jianfeng Yang, Zhen Dai. Pages: 1965-1984
    16. A new approach to construct similarity measure for intuitionistic fuzzy sets. Author(s): Yafei Song, Xiaodan Wang, Wen Quan, Wenlong Huang. Pages: 1985-1998
    17. Similarity measures of generalized trapezoidal fuzzy numbers for fault diagnosis. Author(s): Jianjun Xie, Wenyi Zeng, Junhong Li, Qian Yin. Pages: 1999-2014
    18. Discussing incomplete 2-tuple fuzzy linguistic preference relations in multi-granular linguistic MCGDM with unknown weight information. Author(s): Xue-yang Zhang, Hong-yu Zhang, Jian-qiang Wang. Pages: 2015-2032
    19. A hybrid biogeography-based optimization and fuzzy C-means algorithm for image segmentation. Author(s): Minxia Zhang, Weixuan Jiang, Xiaohan Zhou, Yu Xue, Shengyong Chen. Pages: 2033-2046
    20. Vibration control of a structure using sliding-mode hedge-algebras-based controller. Author(s): Duc-Trung Tran, Van-Binh Bui, Tung-Anh Le, Hai-Le Bui. Pages: 2047-2059
    21. Efficient obfuscation for CNF circuits and applications in cloud computing. Author(s): Huang Zhang, Fangguo Zhang, Rong Cheng, Haibo Tian. Pages: 2061-2072
    22. Modeling and comparison of the series systems with imperfect coverage for an unreliable server. Author(s): Ching-Chang Kuo, Jau-Chuan Ke. Pages: 2073-2082
    23. Gravitational search algorithm and K-means for simultaneous feature selection and data clustering: a multi-objective approach. Author(s): Jay Prakash, Pramod Kumar Singh. Pages: 2083-2100
    24. Towards efficient privacy-preserving encrypted image search in cloud computing. Author(s): Yuan Wang, Meixia Miao, Jian Shen, Jianfeng Wang. Pages: 2101-2112
    25. Complete image fusion method based on fuzzy transforms. Author(s): Ferdinando Di Martino, Salvatore Sessa. Pages: 2113-2123
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 30, Issue 3, March 2019
    1. NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps. Author(s): Alessandro Aimar; Hesham Mostafa; Enrico Calabrese; Antonio Rios-Navarro; Ricardo Tapiador-Morales; Iulia-Alexandra Lungu; Moritz B. Milde; Federico Corradi; Alejandro Linares-Barranco; Shih-Chii Liu; Tobi Delbruck. Pages: 644-656
    2. Robust Dimension Reduction for Clustering With Local Adaptive Learning. Author(s): Xiao-Dong Wang; Rung-Ching Chen; Zhi-Qiang Zeng; Chao-Qun Hong; Fei Yan. Pages: 657-669
    3. Learning of a Decision-Maker’s Preference Zone With an Evolutionary Approach. Author(s): Manish Aggarwal. Pages: 670-682
    4. Fine-Grained Image Classification Using Modified DCNNs Trained by Cascaded Softmax and Generalized Large-Margin Losses. Author(s): Weiwei Shi; Yihong Gong; Xiaoyu Tao; De Cheng; Nanning Zheng. Pages: 683-694
    5. Distributed Generalized Nash Equilibrium Seeking Algorithm Design for Aggregative Games Over Weight-Balanced Digraphs. Author(s): Zhenhua Deng; Xiaohong Nian. Pages: 695-706
    6. DBSDA: Lowering the Bound of Misclassification Rate for Sparse Linear Discriminant Analysis via Model Debiasing. Author(s): Haoyi Xiong; Wei Cheng; Jiang Bian; Wenqing Hu; Zeyi Sun; Zhishan Guo. Pages: 707-717
    7. Modulation Classification Based on Signal Constellation Diagrams and Deep Learning. Author(s): Shengliang Peng; Hanyu Jiang; Huaxia Wang; Hathal Alwageed; Yu Zhou; Marjan Mazrouei Sebdani; Yu-Dong Yao. Pages: 718-727
    8. ICFS Clustering With Multiple Representatives for Large Data. Author(s): Liang Zhao; Zhikui Chen; Yi Yang; Liang Zou; Z. Jane Wang. Pages: 728-738
    9. Exponential Stabilization of Fuzzy Memristive Neural Networks With Hybrid Unbounded Time-Varying Delays. Author(s): Yin Sheng; Frank L. Lewis; Zhigang Zeng. Pages: 739-750
    10. A Fused CP Factorization Method for Incomplete Tensors. Author(s): Yuankai Wu; Huachun Tan; Yong Li; Jian Zhang; Xiaoxuan Chen. Pages: 751-764
    11. Indefinite Kernel Logistic Regression With Concave-Inexact-Convex Procedure. Author(s): Fanghui Liu; Xiaolin Huang; Chen Gong; Jie Yang; Johan A. K. Suykens. Pages: 765-776
    12. Robot Learning System Based on Adaptive Neural Control and Dynamic Movement Primitives. Author(s): Chenguang Yang; Chuize Chen; Wei He; Rongxin Cui; Zhijun Li. Pages: 777-787
    13. Discriminative Feature Selection via Employing Smooth and Robust Hinge Loss. Author(s): Hanyang Peng; Cheng-Lin Liu. Pages: 788-802
    14. A Fast and Accurate Matrix Completion Method Based on QR Decomposition and L2,1 Norm Minimization. Author(s): Qing Liu; Franck Davoine; Jian Yang; Ying Cui; Zhong Jin; Fei Han. Pages: 803-817
    15. Local Restricted Convolutional Neural Network for Change Detection in Polarimetric SAR Images. Author(s): Fang Liu; Licheng Jiao; Xu Tang; Shuyuan Yang; Wenping Ma; Biao Hou. Pages: 818-833
    16. Cost-Effective Object Detection: Active Sample Mining With Switchable Selection Criteria. Author(s): Keze Wang; Liang Lin; Xiaopeng Yan; Ziliang Chen; Dongyu Zhang; Lei Zhang. Pages: 834-850
    17. Multiview Subspace Clustering via Tensorial t-Product Representation. Author(s): Ming Yin; Junbin Gao; Shengli Xie; Yi Guo. Pages: 851-864
    18. Exploring Self-Repair in a Coupled Spiking Astrocyte Neural Network. Author(s): Junxiu Liu; Liam J. Mcdaid; Jim Harkin; Shvan Karim; Anju P. Johnson; Alan G. Millard; James Hilder; David M. Halliday; Andy M. Tyrrell; Jon Timmis. Pages: 865-875
    19. Fast and Accurate Hierarchical Clustering Based on Growing Multilayer Topology Training. Author(s): Yiu-ming Cheung; Yiqun Zhang. Pages: 876-890
    20. General Square-Pattern Discretization Formulas via Second-Order Derivative Elimination for Zeroing Neural Network Illustrated by Future Optimization. Author(s): Jian Li; Yunong Zhang; Mingzhi Mao. Pages: 891-901
    21. Optimal Control of Propagating Fronts by Using Level Set Methods and Neural Approximations. Author(s): Angelo Alessandri; Patrizia Bagnerini; Mauro Gaggero. Pages: 902-912
    22. Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach. Author(s): Ramasamy Saravanakumar; Hyung Soo Kang; Choon Ki Ahn; Xiaojie Su; Hamid Reza Karimi. Pages: 913-922
    23. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures. Author(s): Mohammadreza Mohaghegh Neyshabouri; Kaan Gokcesu; Hakan Gokcesu; Huseyin Ozkan; Suleyman Serdar Kozat. Pages: 923-937
    24. Adaptive Optimal Output Regulation of Time-Delay Systems via Measurement Feedback. Author(s): Weinan Gao; Zhong-Ping Jiang. Pages: 938-945
    25. Unsupervised Knowledge Transfer Using Similarity Embeddings. Author(s): Nikolaos Passalis; Anastasios Tefas. Pages: 946-950
    26. Synchronization of Coupled Markovian Reaction–Diffusion Neural Networks With Proportional Delays Via Quantized Control. Author(s): Xinsong Yang; Qiang Song; Jinde Cao; Jianquan Lu. Pages: 951-958
    27. Stepsize Range and Optimal Value for Taylor–Zhang Discretization Formula Applied to Zeroing Neurodynamics Illustrated via Future Equality-Constrained Quadratic Programming. Author(s): Yunong Zhang; Huihui Gong; Min Yang; Jian Li; Xuyun Yang. Pages: 959-966
  • IEEE Transactions on Fuzzy Systems, Volume 27, Issue 1, January 2019
    1) A Nested Tensor Product Model Transformation. Author(s): Y Yu, Z Li, X Liu, K Hirota, X Chen, T Fernando, H H C Lu. Pages: 1-15
    2) High-order Intuitionistic Fuzzy Cognitive Map Based on Evidential Reasoning Theory. Author(s): Y Zhang, J Qin, P Shi, Y Kang. Pages: 16-30
    3) Joint Learning of Spectral Clustering Structure and Fuzzy Similarity Matrix of Data. Author(s): Z Bian, H Ishibuchi, S Wang. Pages: 31-44
    4) Fuzzy Optimal Energy Management for Fuel Cell and Supercapacitor Systems Using Neural Network Based Driving Pattern Recognition. Author(s): R Zhang, J Tao, H Zhou. Pages: 45-57
    5) Comparing the Performance Potentials of Interval and General Type-2 Rule-Based Fuzzy Systems in Terms of Sculpting the State Space. Author(s): J M Mendel. Pages: 58-71
    6) LDS-FCM: A Linear Dynamical System Based Fuzzy C-Means Method for Tactile Recognition. Author(s): C Liu, W Huang, F Sun, M Luo, C Tan. Pages: 72-83
    7) Improving Risk Evaluation in FMEA With Cloud Model and Hierarchical TOPSIS Method. Author(s): H-C Liu, L-E Wang, Z-W Li, Y-P Hu. Pages: 84-95
    8) Finite-Time Adaptive Fuzzy Output Feedback Dynamic Surface Control for MIMO Nonstrict Feedback Systems. Author(s): Y Li, K Li, S Tong. Pages: 96-110
    9) BPEC: Belief-Peaks Evidential Clustering. Author(s): Z-G Su, T Denoeux. Pages: 111-123
    10) Improving the Performance of Fuzzy Rule-Based Classification Systems Based on a Nonaveraging Generalization of CC-Integrals Named CF1F2-Integrals. Author(s): G Lucca, G P Dimuro, J Fernandez, H Bustince, B Bedregal, J A Sanz. Pages: 124-134
    11) The Negation of a Basic Probability Assignment. Author(s): L Yin, X Deng, Y Deng. Pages: 135-143
    12) Event Triggered Adaptive Fuzzy Consensus for Interconnected Switched Multiagent Systems. Author(s): S Zheng, P Shi, S Wang, Y Shi. Pages: 144-158
    13) Alternative Ranking-Based Clustering and Reliability Index-Based Consensus Reaching Process for Hesitant Fuzzy Large Scale Group Decision Making. Author(s): X Liu, Y Xu, R Montes, R-X Ding, F Herrera. Pages: 159-171
    14) Fuzzy Adaptive Finite-Time Control Design for Nontriangular Stochastic Nonlinear Systems. Author(s): S Sui, C L P Chen, S Tong. Pages: 172-184
    15) Deviation-Sparse Fuzzy C-Means With Neighbor Information Constraint. Author(s): Y Zhang, X Bai, R Fan, Z Wang. Pages: 185-199
    16) Sampled-Data Adaptive Output Feedback Fuzzy Stabilization for Switched Nonlinear Systems With Asynchronous Switching. Author(s): S Li, C K Ahn, Z Xiang. Pages: 200-205
  • Soft Computing, Volume 23, Issue 5, March 2019
    Special issue on Theory and Practice of Natural Computing: Fifth Edition
    1) Theory and practice of natural computing: fifth edition. Author(s): Carlos Martín-Vide, Miguel A. Vega-Rodríguez. Pages: 1421
    2) A multi-objective evolutionary approach to Pareto-optimal model trees. Author(s): Marcin Czajkowski, Marek Kretowski. Pages: 1423-1437
    3) Fuel-efficient truck platooning by a novel meta-heuristic inspired from ant colony optimisation. Author(s): Abtin Nourmohammadzadeh, Sven Hartmann. Pages: 1439-1452
    4) A heuristic survivable virtual network mapping algorithm. Author(s): Xiangwei Zheng, Jie Tian, Xiancui Xiao, Xinchun Cui, Xiaomei Yu. Pages: 1453-1463
    5) A semiring-like representation of lattice pseudoeffect algebras. Author(s): Ivan Chajda, Davide Fazio, Antonio Ledda. Pages: 1465-1475
    6) Twenty years of Soft Computing: a bibliometric overview. Author(s): José M. Merigó, Manuel J. Cobo, Sigifredo Laengle, Daniela Rivas. Pages: 1477-1497
    7) Monadic pseudo BCI-algebras and corresponding logics. Author(s): Xiaolong Xin, Yulong Fu, Yanyan Lai, Juntao Wang. Pages: 1499-1510
    8) Group decision making with compatibility measures of hesitant fuzzy linguistic preference relations. Author(s): Xunjie Gou, Zeshui Xu, Huchang Liao. Pages: 1511-1527
    9) Semi-multifractal optimization algorithm. Author(s): Ireneusz Gosciniak. Pages: 1529-1539
    10) Time series interval forecast using GM(1,1) and NGBM(1,1) models. Author(s): Ying-Yuan Chen, Hao-Tien Liu, Hsiow-Ling Hsieh. Pages: 1541-1555
    11) Tri-partition cost-sensitive active learning through kNN. Author(s): Fan Min, Fu-Lun Liu, Liu-Ying Wen, Zhi-Heng Zhang. Pages: 1557-1572
    12) New ranked set sampling schemes for range charts limits under bivariate skewed distributions. Author(s): Derya Karagöz, Nursel Koyuncu. Pages: 1573-1587
    13) Gist: general integrated summarization of text and reviews. Author(s): Justin Lovinger, Iren Valova, Chad Clough. Pages: 1589-1601
    14) Rough fuzzy bipolar soft sets and application in decision-making problems. Author(s): Nosheen Malik, Muhammad Shabir. Pages: 1603-1614
    15) Differential evolution with Gaussian mutation and dynamic parameter adjustment. Author(s): Gaoji Sun, Yanfei Lan, Ruiqing Zhao. Pages: 1615-1642
    16) Using Covariance Matrix Adaptation Evolution Strategies for solving different types of differential equations. Author(s): Jose M. Chaquet, Enrique J. Carmona. Pages: 1643-1666
    17) A direct solution approach based on constrained fuzzy arithmetic and metaheuristic for fuzzy transportation problems. Author(s): Adil Baykasoğlu, Kemal Subulan. Pages: 1667-1698
    18) An efficient hybrid algorithm based on Water Cycle and Moth-Flame Optimization algorithms for solving numerical and constrained engineering optimization problems. Author(s): Soheyl Khalilpourazari, Saman Khalilpourazary. Pages: 1699-1722
    19) An evolutionary approach to constrained path planning of an autonomous surface vehicle for maximizing the covered area of Ypacarai Lake. Author(s): Mario Arzamendia, Derlis Gregor, Daniel Gutierrez Reina. Pages: 1723-1734
    20) A multi-key SMC protocol and multi-key FHE based on some-are-errorless LWE. Author(s): Huiyong Wang, Yong Feng, Yong Ding, Shijie Tang. Pages: 1735-1744
    21) Smart PSO-based secured scheduling approaches for scientific workflows in cloud computing. Author(s): J. Angela Jennifa Sujana, T. Revathi, T. S. Siva Priya, K. Muneeswaran. Pages: 1745-1762
    22) ARD-PRED: an in silico tool for predicting age-related-disorder-associated proteins. Author(s): Kirti Bhadhadhara, Yasha Hasija. Pages: 1767-1776
  • Soft Computing, Volume 23, Issue 4, February 2019
    1) Direct limits of generalized pseudo-effect algebras with the Riesz decomposition properties. Author(s): Yanan Guo, Yongjian Xie. Pages: 1071-1078
    2) Geometric structure information based multi-objective function to increase fuzzy clustering performance with artificial and real-life data. Author(s): M. M. Gowthul Alam, S. Baulkani. Pages: 1079-1098
    3) A Gould-type integral of fuzzy functions II. Author(s): Alina Gavriluţ, Alina Iosif. Pages: 1099-1107
    4) Sorting of decision-making methods based on their outcomes using dominance-vector hesitant fuzzy-based distance. Author(s): Bahram Farhadinia, Enrique Herrera-Viedma. Pages: 1109-1121
    5) On characterization of fuzzy tree pushdown automata. Author(s): M. Ghorani. Pages: 1123-1131
    6) BIAM: a new bio-inspired analysis methodology for digital ecosystems based on a scale-free architecture. Author(s): Vincenzo Conti, Simone Sante Ruffo, Salvatore Vitabile, Leonard Barolli. Pages: 1133-1150
    7) Self-feedback differential evolution adapting to fitness landscape characteristics. Author(s): Wei Li, Shanni Li, Zhangxin Chen, Liang Zhong, Chengtian Ouyang. Pages: 1151-1163
    8) Mining stock category association on Tehran stock market. Author(s): Zahra Hoseyni Masum. Pages: 1165-1177
    9) A comparative study of optimization models in genetic programming-based rule extraction problems. Author(s): Marconi de Arruda Pereira, Eduardo Gontijo Carrano… Pages: 1179-1197
    10) Distributed task allocation in multi-agent environments using cellular learning automata. Author(s): Maryam Khani, Ali Ahmadi, Hajar Hajary. Pages: 1199-1218
    11) MOEA3D: a MOEA based on dominance and decomposition with probability distribution model. Author(s): Ziyu Hu, Jingming Yang, Huihui Cui, Lixin Wei, Rui Fan. Pages: 1219-1237
    12) CCODM: conditional co-occurrence degree matrix document representation method. Author(s): Wei Wei, Chonghui Guo, Jingfeng Chen, Lin Tang, Leilei Sun. Pages: 1239-1255
    13) Minimization of reliability indices and cost of power distribution systems in urban areas using an efficient hybrid meta-heuristic algorithm. Author(s): Avishek Banerjee, Samiran Chattopadhyay, Grigoras Gheorghe… Pages: 1257-1281
    14) A robust method to discover influential users in social networks. Author(s): Qian Ma, Jun Ma. Pages: 1283-1295
    15) Chance-constrained random fuzzy CCR model in presence of skew-normal distribution. Author(s): Behrokh Mehrasa, Mohammad Hassan Behzadi. Pages: 1297-1308
    16) A fuzzy AHP-based methodology for project prioritization and selection. Author(s): Amir Shaygan, Özlem Müge Testik. Pages: 1309-1319
    17) A multi-objective evolutionary fuzzy system to obtain a broad and accurate set of solutions in intrusion detection systems. Author(s): Salma Elhag, Alberto Fernández, Abdulrahman Altalhi, Saleh Alshomrani… Pages: 1321-1336
    18) Uncertain vertex coloring problem. Author(s): Lin Chen, Jin Peng, Dan A. Ralescu. Pages: 1337-1346
    19) Local, global and decentralized fuzzy-based computing paradigms for coordinated voltage control of grid-connected photovoltaic systems. Author(s): Alfredo Vaccaro, Hafsa Qamar, Haleema Qamar. Pages: 1347-1356
    20) Combining user preferences and expert opinions: a criteria synergy-based model for decision making on the Web. Author(s): Marcelo Karanik, Rubén Bernal, José Ignacio Peláez… Pages: 1357-1373
    21) A bi-objective fleet size and mix green inventory routing problem, model and solution method. Author(s): Mehdi Alinaghian, Mohsen Zamani. Pages: 1375-1391
    22) A genetic algorithm approach to the smart grid tariff design problem. Author(s): Will Rogers, Paula Carroll, James McDermott. Pages: 1393-1405
    23) Lyapunov–Krasovskii stable T2FNN controller for a class of nonlinear time-delay systems. Author(s): Sehraneh Ghaemi, Kamel Sabahi, Mohammad Ali Badamchizadeh. Pages: 1407-1419
  • Soft Computing, Volume 23, Issue 3, February 2019
    1) Butterfly optimization algorithm: a novel approach for global optimization. Author(s): Sankalap Arora, Satvir Singh. Pages: 715-734
    2) Congruences and ideals in generalized pseudoeffect algebras revisited. Author(s): S. Pulmannová. Pages: 735-745
    3) An efficient online/offline ID-based short signature procedure using extended chaotic maps. Author(s): Chandrashekhar Meshram, Chun-Ta Li, Sarita Gajbhiye Meshram. Pages: 747-753
    4) Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Author(s): Qiuhong Xiang, Bolin Liao, Lin Xiao, Long Lin, Shuai Li. Pages: 755-766
    5) Post-training discriminative pruning for RBMs. Author(s): Máximo Sánchez-Gutiérrez, Enrique M. Albornoz, Hugo L. Rufiner. Pages: 767-781
    6) Gravitational search algorithm with both attractive and repulsive forces. Author(s): Hamed Zandevakili, Esmat Rashedi, Ali Mahani. Pages: 783-825
    7) A statistic approach for power analysis of integrated GPU. Author(s): Qiong Wang, Ning Li, Li Shen, Zhiying Wang. Pages: 827-83
    8) Resolution of single-variable fuzzy polynomial equations and an upper bound on the number of solutions. Author(s): Hamed Farahani, Mahmoud Paripour, Saeid Abbasbandy. Pages: 837-845
    9) Rescheduling-based congestion management scheme using particle swarm optimization with distributed acceleration constants. Author(s): Naresh Kumar Yadav. Pages: 847-857
    10) Dual buffer rotation four-stage pipeline for CPU–GPU cooperative computing. Author(s): Tao Li, Qiankun Dong, Yifeng Wang, Xiaoli Gong, Yulu Yang. Pages: 859-869
    11) Physarum-energy optimization algorithm. Author(s): Xiang Feng, Yang Liu, Huiqun Yu, Fei Luo. Pages: 871-888
    12) An adaptive control study for the DC motor using meta-heuristic algorithms. Author(s): Alejandro Rodríguez-Molina, Miguel Gabriel Villarreal-Cervantes. Pages: 889-906
    13) Interval valued L-fuzzy prime ideals, triangular norms and partially ordered groups. Author(s): Babushri Srinivas Kedukodi, Syam Prasad Kuncham, B. Jagadeesha. Pages: 907-920
    14) An interpretable neuro-fuzzy approach to stock price forecasting. Author(s): Sharifa Rajab, Vinod Sharma. Pages: 921-936
    15) Collaborative multi-view K-means clustering. Author(s): Safa Bettoumi, Chiraz Jlassi, Najet Arous. Pages: 937-945
    16) New types of generalized Bosbach states on non-commutative residuated lattices. Author(s): Weibing Zuo. Pages: 947-959
    17) Bi-objective corridor allocation problem using a permutation-based genetic algorithm hybridized with a local search technique. Author(s): Zahnupriya Kalita, Dilip Datta, Gintaras Palubeckis. Pages: 961-986
    18) Derivative-based acceleration of general vector machine. Author(s): Binbin Yong, Fucun Li, Qingquan Lv, Jun Shen, Qingguo Zhou. Pages: 987-995
    19) An approach based on reliability-based possibility degree of interval for solving general interval bilevel linear programming problem. Author(s): Aihong Ren, Yuping Wang. Pages: 997-1006
    20) Emotion-based color transfer of images using adjustable color combinations. Author(s): Yuan-Yuan Su, Hung-Min Sun. Pages: 1007-1020
    21) Improved metaheuristic-based energy-efficient clustering protocol with optimal base station location in wireless sensor networks. Author(s): Palvinder Singh Mann, Satvir Singh. Pages: 1021-1037
    22) Evolving nearest neighbor time series forecasters. Author(s): Juan J. Flores, José R. Cedeño González, Rodrigo Lopez Farias. Pages: 1039-1048
    23) On separating axioms and similarity of soft topological spaces. Author(s): Małgorzata Terepeta. Pages: 1049-1057
    24) Optimal platform design with modularity strategy under fuzzy environment. Author(s): Qinyu Song, Yaodong Ni. Pages: 1059-1070
  • Complex & Intelligent Systems, Volume 4, Issue 4, December 2018
    1) A robust system maturity model for complex systems utilizing system readiness level and Petri nets. Author(s): Brent Thal, Bill Olson, Paul Blessner. Pages: 241-250
    2) Circuit design and simulation for the fractional-order chaotic behavior in a new dynamical system. Author(s): Z. Hammouch, T. Mekkaoui. Pages: 251-260
    3) Priority ranking for energy resources in Turkey and investment planning for renewable energy resources. Author(s): Mehmet Emin Baysal, Nazli Ceren Cetin. Pages: 261-269
    4) Towards online data-driven prognostics system. Author(s): Hatem M. Elattar, Hamdy K. Elminir, A. M. Riad. Pages: 271-282
    5) Model-based evolutionary algorithms: a short survey. Author(s): Ran Cheng, Cheng He, Yaochu Jin, Xin Yao. Pages: 283-292
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 30, Issue 1, January 2019
    1. Editorial: Booming of Neural Networks and Learning Systems. Pages: 2-10
    2. Deep CNN-Based Blind Image Quality Predictor. Author(s): Jongyoo Kim; Anh-Duc Nguyen; Sanghoon Lee. Pages: 11-24
    3. Neuro-Adaptive Control With Given Performance Specifications for Strict Feedback Systems Under Full-State Constraints. Author(s): Xiucai Huang; Yongduan Song; Junfeng Lai. Pages: 25-34
    4. Consensus Problems Over Cooperation-Competition Random Switching Networks With Noisy Channels. Author(s): Yonghong Wu; Bin Hu; Zhi-Hong Guan. Pages: 35-43
    5. Estimation of Graphlet Counts in Massive Networks. Author(s): Ryan A. Rossi; Rong Zhou; Nesreen K. Ahmed. Pages: 44-57
    6. Finite-Time Passivity-Based Stability Criteria for Delayed Discrete-Time Neural Networks via New Weighted Summation Inequalities. Author(s): Ramasamy Saravanakumar; Sreten B. Stojanovic; Damnjan D. Radosavljevic; Choon Ki Ahn; Hamid Reza Karimi. Pages: 58-71
    7. Multiple-Model Adaptive Estimation for 3-D and 4-D Signals: A Widely Linear Quaternion Approach. Author(s): Min Xiang; Bruno Scalzo Dees; Danilo P. Mandic. Pages: 72-84
    8. Optimal Synchronization Control of Multiagent Systems With Input Saturation via Off-Policy Reinforcement Learning. Author(s): Jiahu Qin; Man Li; Yang Shi; Qichao Ma; Wei Xing Zheng. Pages: 85-96
    9. Design and Adaptive Control for an Upper Limb Robotic Exoskeleton in Presence of Input Saturation. Author(s): Wei He; Zhijun Li; Yiting Dong; Ting Zhao. Pages: 97-108
    10. A Cost-Sensitive Deep Belief Network for Imbalanced Classification. Author(s): Chong Zhang; Kay Chen Tan; Haizhou Li; Geok Soon Hong. Pages: 109-122
    11. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons. Author(s): Malu Zhang; Hong Qu; Ammar Belatreche; Yi Chen; Zhang Yi. Pages: 123-137
    12. Enhanced Robot Speech Recognition Using Biomimetic Binaural Sound Source Localization. Author(s): Jorge Dávila-Chacón; Jindong Liu; Stefan Wermter. Pages: 138-150
    13. A Discrete-Time Projection Neural Network for Sparse Signal Reconstruction With Application to Face Recognition. Author(s): Bingrong Xu; Qingshan Liu; Tingwen Huang. Pages: 151-162
    14. Domain-Weighted Majority Voting for Crowdsourcing. Author(s): Dapeng Tao; Jun Cheng; Zhengtao Yu; Kun Yue; Lizhen Wang. Pages: 163-174
    15. Reconstructible Nonlinear Dimensionality Reduction via Joint Dictionary Learning. Author(s): Xian Wei; Hao Shen; Yuanxiang Li; Xuan Tang; Fengxiang Wang; Martin Kleinsteuber; Yi Lu Murphey. Pages: 175-189
    16. On the Duality Between Belief Networks and Feed-Forward Neural Networks. Author(s): Paul M. Baggenstoss. Pages: 190-200
    17. Exploiting Combination Effect for Unsupervised Feature Selection by L2,0 Norm. Author(s): Xingzhong Du; Feiping Nie; Weiqing Wang; Yi Yang; Xiaofang Zhou. Pages: 201-214
    18. Leader-Following Practical Cluster Synchronization for Networks of Generic Linear Systems: An Event-Based Approach. Author(s): Jiahu Qin; Weiming Fu; Yang Shi; Huijun Gao; Yu Kang. Pages: 215-224
    19. Semisupervised Learning Based on a Novel Iterative Optimization Model for Saliency Detection. Author(s): Shuwei Huo; Yuan Zhou; Wei Xiang; Sun-Yuan Kung. Pages: 225-241
    20. Augmented Real-Valued Time-Delay Neural Network for Compensation of Distortions and Impairments in Wireless Transmitters. Author(s): Dongming Wang; Mohsin Aziz; Mohamed Helaoui; Fadhel M. Ghannouchi. Pages: 242-254
    21. UCFTS: A Unilateral Coupling Finite-Time Synchronization Scheme for Complex Networks. Author(s): Min Han; Meng Zhang; Tie Qiu; Meiling Xu. Pages: 255-268
    22. A Semisupervised Classification Approach for Multidomain Networks With Domain Selection. Author(s): Chuan Chen; Jingxue Xin; Yong Wang; Luonan Chen; Michael K. Ng. Pages: 269-283
    23. Neurons With Paraboloid Decision Boundaries for Improved Neural Network Classification Performance. Author(s): Nikolaos Tsapanos; Anastasios Tefas; Nikolaos Nikolaidis; Ioannis Pitas. Pages: 284-294
    24. Adaptive Reinforcement Learning Control Based on Neural Approximation for Nonlinear Discrete-Time Systems With Unknown Nonaffine Dead-Zone Input. Author(s): Yan-Jun Liu; Shu Li; Shaocheng Tong; C. L. Philip Chen. Pages: 295-305
    25. Filippov Hindmarsh–Rose Neuronal Model With Threshold Policy Control. Author(s): Yi Yang; Xiaofeng Liao. Pages: 306-311
    26. Blind Denoising Autoencoder. Author(s): Angshul Majumdar. Pages: 312-317
    27. Variational Random Function Model for Network Modeling. Author(s): Zenglin Xu; Bin Liu; Shandian Zhe; Haoli Bai; Zihan Wang; Jennifer Neville. Pages: 318-324
  • IEEE Transactions on Neural Networks and Learning Systems: Volume 30, Issue 2, February 2019.
    1. fpgaConvNet: Mapping Regular and Irregular Convolutional Neural Networks on FPGAs. Author(s): Stylianos I. Venieris; Christos-Savvas Bouganis. Pages: 326-342
    2. A Novel Neural Networks Ensemble Approach for Modeling Electrochemical Cells. Author(s): Massimiliano Luzi; Maurizio Paschero; Antonello Rizzi; Enrico Maiorino; Fabio Massimo Frattale Mascioli. Pages: 343-354
    3. Exploring Correlations Among Tasks, Clusters, and Features for Multitask Clustering. Author(s): Wenming Cao; Si Wu; Zhiwen Yu; Hau-San Wong. Pages: 355-368
    4. Scaling Up Kernel SVM on Limited Resources: A Low-Rank Linearization Approach. Author(s): Liang Lan; Zhuang Wang; Shandian Zhe; Wei Cheng; Jun Wang; Kai Zhang. Pages: 369-378
    5. Optimized Neural Network Parameters Using Stochastic Fractal Technique to Compensate Kalman Filter for Power System-Tracking-State Estimation. Author(s): Hossam Mosbah; Mohamed E. El-Hawary. Pages: 379-388
    6. Online Identification of Nonlinear Stochastic Spatiotemporal System With Multiplicative Noise by Robust Optimal Control-Based Kernel Learning Method. Author(s): Hanwen Ning; Guangyan Qing; Tianhai Tian; Xingjian Jing. Pages: 389-404
    7. Semisupervised Learning With Parameter-Free Similarity of Label and Side Information. Author(s): Rui Zhang; Feiping Nie; Xuelong Li. Pages: 405-414
    8. H-infinity State Estimation for Discrete-Time Nonlinear Singularly Perturbed Complex Networks Under the Round-Robin Protocol. Author(s): Xiongbo Wan; Zidong Wang; Min Wu; Xiaohui Liu. Pages: 415-426
    9. Temporal Self-Organization: A Reaction–Diffusion Framework for Spatiotemporal Memories. Author(s): Prayag Gowgi; Shayan Srinivasa Garani. Pages: 427-448
    10. Variational Bayesian Learning for Dirichlet Process Mixture of Inverted Dirichlet Distributions in Non-Gaussian Image Feature Modeling. Author(s): Zhanyu Ma; Yuping Lai; W. Bastiaan Kleijn; Yi-Zhe Song; Liang Wang; Jun Guo. Pages: 449-463
    11. Hierarchical Decision and Control for Continuous Multitarget Problem: Policy Evaluation With Action Delay. Author(s): Jiangcheng Zhu; Jun Zhu; Zhepei Wang; Shan Guo; Chao Xu. Pages: 464-473
    12. Unified Low-Rank Matrix Estimate via Penalized Matrix Least Squares Approximation. Author(s): Xiangyu Chang; Yan Zhong; Yao Wang; Shaobo Lin. Pages: 474-485
    13. Online Active Learning Ensemble Framework for Drifted Data Streams. Author(s): Jicheng Shan; Hang Zhang; Weike Liu; Qingbao Liu. Pages: 486-498
    14. A New Approach to Stochastic Stability of Markovian Neural Networks With Generalized Transition Rates. Author(s): Ruimei Zhang; Deqiang Zeng; Ju H. Park; Yajuan Liu; Shouming Zhong. Pages: 499-510
    15. Optimization of Distributions Differences for Classification. Author(s): Mohammad Reza Bonyadi; Quang M. Tieng; David C. Reutens. Pages: 511-523
    16. Deep Convolutional Identifier for Dynamic Modeling and Adaptive Control of Unmanned Helicopter. Author(s): Yu Kang; Shaofeng Chen; Xuefeng Wang; Yang Cao. Pages: 524-538
    17. Neural-Response-Based Extreme Learning Machine for Image Classification. Author(s): Hongfeng Li; Hongkai Zhao; Hong Li. Pages: 539-552
    18. Deep Ensemble Machine for Video Classification. Author(s): Jiewan Zheng; Xianbin Cao; Baochang Zhang; Xiantong Zhen; Xiangbo Su. Pages: 553-565
    19. Multiple ψ-Type Stability of Cohen–Grossberg Neural Networks With Both Time-Varying Discrete Delays and Distributed Delays. Author(s): Fanghai Zhang; Zhigang Zeng. Pages: 566-579
    20. Neural Network Training With Levenberg–Marquardt and Adaptable Weight Compression. Author(s): James S. Smith; Bo Wu; Bogdan M. Wilamowski. Pages: 580-587
    21. Solving Partial Least Squares Regression via Manifold Optimization Approaches. Author(s): Haoran Chen; Yanfeng Sun; Junbin Gao; Yongli Hu; Baocai Yin. Pages: 588-600
    22. Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction. Author(s): Shangce Gao; Mengchu Zhou; Yirui Wang; Jiujun Cheng; Hanaki Yachi; Jiahai Wang. Pages: 601-614
    23. Multiclass Nonnegative Matrix Factorization for Comprehensive Feature Pattern Discovery. Author(s): Yifeng Li; Youlian Pan; Ziying Liu. Pages: 615-629
    24. Self-Paced Learning-Based Probability Subspace Projection for Hyperspectral Image Classification. Author(s): Shuyuan Yang; Zhixi Feng; Min Wang; Kai Zhang. Pages: 630-635
    25. Hierarchical Stability Conditions for a Class of Generalized Neural Networks With Multiple Discrete and Distributed Delays. Author(s): Lei Song; Sing Kiong Nguang; Dan Huang. Pages: 636-642
  • Full UK PhD Scholarships
    Full UK PhD scholarships in evolutionary computation / computational intelligence / data analytics / operations research / optimisation / simulation
    Thanks to an arisen opportunity, we at the Operational Research (OR) group, Liverpool John Moores University (United Kingdom) may be able to offer a small number of PhD scholarships (full or tuition-fees-only, depending on the quality of the candidate). There are two types of scholarships.
    The ones for UK/EU/settled students:
    - Deadline 3rd March; results to be known by end of March.
    - Provide full tuition fees for three years.
    - Provide living expenses plus a running-cost cover of about £16,500 each year (to be determined) for three years.
    - Students have to enrol in Sept-Oct 2019.
    - Brexit will not have any impact on these scholarships.
    The ones for international students:
    - Provide about £20,000 each year (to be determined). Students can use this amount to pay toward their tuition fees and living expenses.
    If the successful candidate joins one of the projects currently being run by the OR group, he/she may get additional scholarships depending on research performance and level of contribution. Regarding the research topic, any area in evolutionary computation, computational intelligence, data analytics or operations research would be acceptable. However, I would prefer a topic that relates to one of our existing projects, which are in the following areas:
    - OR techniques to study/mitigate the impact of climate change on transportation. For example, we have a project (with Merseyrail and Network Rail) on using data analytics and optimisation to anticipate and mitigate the impact of leaves falling on train tracks.
    - Evolutionary computation or meta-heuristics.
    - OR/data analytics applications in rail, in partnership with Merseyrail, Network Rail, and Rail Delivery Group.
    - OR applications in maritime, in partnership with UK, EU and overseas ports.
    - OR applications in sustainable transportation (e.g. bicycles, e-bikes, walking, buses, emission/congestion reduction), in partnership with local authorities and transport authorities (e.g. the ones in Liverpool and Manchester).
    - OR applications in logistics (e.g. bin packing, vehicle routing), in partnership with logistics companies, especially those in airports, ports, and manufacturing plants (especially those in Liverpool).
    - OR applications in manufacturing, in partnership with car manufacturers, e.g. Vauxhall and Jaguar Land Rover.
    Interested candidates please contact Dr. Trung Thanh Nguyen (T.T.Nguyen@ljmu.ac.uk) with your full CV and transcripts. It is important that interested candidates contact me ASAP to ensure that we can prepare your applications in the best way to maximise your chance before the deadline of 3rd March.
  • IEEE Transactions on Neural Networks and Learning Systems, Volume 29, Issue 9, September 2018
    1. Continuous Dropout. Author(s): Xu Shen; Xinmei Tian; Tongliang Liu; Fang Xu; Dacheng Tao. Pages: 3926-3937
    2. Deep Manifold Learning Combined With Convolutional Neural Networks for Action Recognition. Author(s): Xin Chen; Jian Weng; Wei Lu; Jiaming Xu; Jiasi Weng. Pages: 3938-3952
    3. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data. Author(s): Ava Bargi; Richard Yi Da Xu; Massimo Piccardi. Pages: 3953-3968
    4. Deep Learning of Constrained Autoencoders for Enhanced Understanding of Data. Author(s): Babajide O. Ayinde; Jacek M. Zurada. Pages: 3969-3979
    5. Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis. Author(s): Olga Isupova; Danil Kuzin; Lyudmila Mihaylova. Pages: 3980-3993
    6. Support Vector Data Descriptions and k-Means Clustering: One Class? Author(s): Nico Görnitz; Luiz Alberto Lima; Klaus-Robert Müller; Marius Kloft; Shinichi Nakajima. Pages: 3994-4006
    7. Data-Driven Robust M-LS-SVR-Based NARX Modeling for Estimation and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking. Author(s): Ping Zhou; Dongwei Guo; Hong Wang; Tianyou Chai. Pages: 4007-4021
    8. Detection of Sources in Non-Negative Blind Source Separation by Minimum Description Length Criterion. Author(s): Chia-Hsiang Lin; Chong-Yung Chi; Lulu Chen; David J. Miller; Yue Wang. Pages: 4022-4037
    9. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification. Author(s): Naveed Akhtar; Ajmal Mian. Pages: 4038-4050
    10. Heterogeneous Multitask Metric Learning Across Multiple Domains. Author(s): Yong Luo; Yonggang Wen; Dacheng Tao. Pages: 4051-4064
    11. Classification of Imbalanced Data by Oversampling in Kernel Space of Support Vector Machines. Author(s): Josey Mathew; Chee Khiang Pang; Ming Luo; Weng Hoe Leong. Pages: 4065-4076
    12. A Novel Error-Compensation Control for a Class of High-Order Nonlinear Systems With Input Delay. Author(s): Chao Shi; Zongcheng Liu; Xinmin Dong; Yong Chen. Pages: 4077-4087
    13. Dimensionality Reduction in Multiple Ordinal Regression. Author(s): Jiabei Zeng; Yang Liu; Biao Leng; Zhang Xiong; Yiu-ming Cheung. Pages: 4088-4101
    14. A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network. Author(s): Arash Gharehbaghi; Maria Lindén. Pages: 4102-4115
    15. Transductive Zero-Shot Learning With Adaptive Structural Embedding. Author(s): Yunlong Yu; Zhong Ji; Jichang Guo; Yanwei Pang. Pages: 4116-4127
    16. Bayesian Nonparametric Regression Modeling of Panel Data for Sequential Classification. Author(s): Sihan Xiong; Yiwei Fu; Asok Ray. Pages: 4128-4139
    17. Symmetric Predictive Estimator for Biologically Plausible Neural Learning. Author(s): David Xu; Andrew Clappison; Cameron Seth; Jeff Orchard. Pages: 4140-4151
    18. A Distance-Based Weighted Undersampling Scheme for Support Vector Machines and its Application to Imbalanced Classification. Author(s): Qi Kang; Lei Shi; MengChu Zhou; XueSong Wang; QiDi Wu; Zhi Wei. Pages: 4152-4165
    19. Learning With Coefficient-Based Regularized Regression on Markov Resampling. Author(s): Luoqing Li; Weifu Li; Bin Zou; Yulong Wang; Yuan Yan Tang; Hua Han. Pages: 4166-4176
    20. Sequential Labeling With Structural SVM Under Nondecomposable Losses. Author(s): Guopeng Zhang; Massimo Piccardi; Ehsan Zare Borzeshi. Pages: 4177-4188
    21. The Stability of Stochastic Coupled Systems With Time-Varying Coupling and General Topology Structure. Author(s): Yan Liu; Wenxue Li; Jiqiang Feng. Pages: 4189-4200
    22. Stability Analysis of Quaternion-Valued Neural Networks: Decomposition and Direct Approaches. Author(s): Yang Liu; Dandan Zhang; Jungang Lou; Jianquan Lu; Jinde Cao. Pages: 4201-4211
    23. On Wang k WTA With Input Noise, Output Node Stochastic, and Recurrent State Noise. Author(s): John Sum; Chi-Sing Leung; Kevin I.-J. Ho. Pages: 4212-4222
    24. Event-Driven Stereo Visual Tracking Algorithm to Solve Object Occlusion. Author(s): Luis A. Camuñas-Mesa; Teresa Serrano-Gotarredona; Sio-Hoi Ieng; Ryad Benosman; Bernabé Linares-Barranco. Pages: 4223-4237
    25. Stability Analysis of Neural Networks With Time-Varying Delay by Constructing Novel Lyapunov Functionals. Author(s): Tae H. Lee; Hieu M. Trinh; Ju H. Park. Pages: 4238-4247
    26. Design, Analysis, and Representation of Novel Five-Step DTZD Algorithm for Time-Varying Nonlinear Optimization. Author(s): Dongsheng Guo; Laicheng Yan; Zhuoyun Nie. Pages: 4248-4260
    27. Neural Observer and Adaptive Neural Control Design for a Class of Nonlinear Systems. Author(s): Bing Chen; Huaguang Zhang; Xiaoping Liu; Chong Lin. Pages: 4261-4271
    28. Shared Autoencoder Gaussian Process Latent Variable Model for Visual Classification. Author(s): Jinxing Li; Bob Zhang; David Zhang. Pages: 4272-4286
    29. Online Supervised Learning for Hardware-Based Multilayer Spiking Neural Networks Through the Modulation of Weight-Dependent Spike-Timing-Dependent Plasticity. Author(s): Nan Zheng; Pinaki Mazumder. Pages: 4287-4302
    30. Neural-Network-Based Adaptive Backstepping Control With Application to Spacecraft Attitude Regulation. Author(s): Xibin Cao; Peng Shi; Zhuoshi Li; Ming Liu. Pages: 4303-4313
    31. Recursive Adaptive Sparse Exponential Functional Link Neural Network for Nonlinear AEC in Impulsive Noise Environment. Author(s): Sheng Zhang; Wei Xing Zheng. Pages: 4314-4323
    32. Multilabel Prediction via Cross-View Search. Author(s): Xiaobo Shen; Weiwei Liu; Ivor W. Tsang; Quan-Sen Sun; Yew-Soon Ong. Pages: 4324-4338
    33. Large-Scale Metric Learning: A Voyage From Shallow to Deep. Author(s): Masoud Faraki; Mehrtash T. Harandi; Fatih Porikli. Pages: 4339-4346
    34. Distributed Event-Triggered Adaptive Control for Cooperative Output Regulation of Heterogeneous Multiagent Systems Under Switching Topology. Author(s): Ruohan Yang; Hao Zhang; Gang Feng; Huaicheng Yan. Pages: 4347-4358
    35. Event-Based Adaptive NN Tracking Control of Nonlinear Discrete-Time Systems. Author(s): Yuan-Xin Li; Guang-Hong Yang. Pages: 4359-4369
    36. Dynamic Analysis of Hybrid Impulsive Delayed Neural Networks With Uncertainties. Author(s): Bin Hu; Zhi-Hong Guan; Tong-Hui Qian; Guanrong Chen. Pages: 4370-4384
    37. Robust Zeroing Neural-Dynamics and Its Time-Varying Disturbances Suppression Model Applied to Mobile Robot Manipulators. Author(s): Dechao Chen; Yunong Zhang. Pages: 4385-4397
    38. Multiple-Instance Ordinal Regression. Author(s): Yanshan Xiao; Bo Liu; Zhifeng Hao. Pages: 4398-4413
    39. Neuroadaptive Control With Given Performance Specifications for MIMO Strict-Feedback Systems Under Nonsmooth Actuation and Output Constraints. Author(s): Yongduan Song; Shuyan Zhou. Pages: 4414-4425
    40. Lagrangean-Based Combinatorial Optimization for Large-Scale S3VMs. Author(s): Francesco Bagattini; Paola Cappanera; Fabio Schoen. Pages: 4426-4435
    41. Adaptive Fault-Tolerant Control for Nonlinear Systems With Multiple Sensor Faults and Unknown Control Directions. Author(s): Ding Zhai; Liwei An; Xiaojian Li; Qingling Zhang. Pages: 4436-4446
    42. Design of Distributed Observers in the Presence of Arbitrarily Large Communication Delays. Author(s): Kexin Liu; Jinhu Lü; Zongli Lin. Pages: 4447-4461
    43. A Solution Path Algorithm for General Parametric Quadratic Programming Problem. Author(s): Bin Gu; Victor S. Sheng. Pages: 4462-4472
    44. Online Density Estimation of Nonstationary Sources Using Exponential Family of Distributions. Author(s): Kaan Gokcesu; Suleyman S. Kozat. Pages: 4473-4478
    45. Image-Specific Classification With Local and Global Discriminations. Author(s): Chunjie Zhang; Jian Cheng; Changsheng Li; Qi Tian. Pages: 4479-4486
    46. Global Asymptotic Stability for Delayed Neural Networks Using an Integral Inequality Based on Nonorthogonal Polynomials. Author(s): Xian-Ming Zhang; Wen-Juan Lin; Qing-Long Han; Yong He; Min Wu. Pages: 4487-4493
    47. L1-Norm Distance Minimization-Based Fast Robust Twin Support Vector k-Plane Clustering. Author(s): Qiaolin Ye; Henghao Zhao; Zechao Li; Xubing Yang; Shangbing Gao; Tongming Yin; Ning Ye. Pages: 4494-4503
    48. Extensions to Online Feature Selection Using Bagging and Boosting. Author(s): Gregory Ditzler; Joseph LaBarck; James Ritchie; Gail Rosen; Robi Polikar. Pages: 4504-4509
    49. On Adaptive Boosting for System Identification. Author(s): Johan Bjurgert; Patricio E. Valenzuela; Cristian R. Rojas. Pages: 4510-4514
    50. Universal Approximation by Using the Correntropy Objective Function. Author(s): Mojtaba Nayyeri; Hadi Sadoghi Yazdi; Alaleh Maskooki; Modjtaba Rouhani. Pages: 4515-4521
    51. Stability Analysis of Optimal Adaptive Control Under Value Iteration Using a Stabilizing Initial Policy. Author(s): Ali Heydari. Pages: 4522-4527
    52. Object Categorization Using Class-Specific Representations. Author(s): Chunjie Zhang; Jian Cheng; Liang Li; Changsheng Li; Qi Tian. Pages: 4528-4534
    53. Improved Stability Analysis for Delayed Neural Networks. Author(s): Zhichen Li; Yan Bai; Congzhi Huang; Huaicheng Yan; Shicai Mu. Pages: 4535-4541
    54. Connectivity-Preserving Consensus Tracking of Uncertain Nonlinear Strict-Feedback Multiagent Systems: An Error Transformation Approach. Author(s): Sung Jin Yoo. Pages: 4542-4548

AI ML MarketPlace

  • Astra makes machine learning bet with Benevolent AI deal
    Today’s tie-up between AstraZeneca and Benevolent AI is another example of a big pharma jumping on the machine learning bandwagon, in this case to try to improve its drug discovery processes. Some industry watchers are cynical about the potential of artificial intelligence, and even Astra admits that the jury is still out. But the company maintains that this is an area it has to be involved in, otherwise it risks falling behind its competitors. “There’s a lot of hyperbole around machine learning and what it can do to transform drug discovery,” Mene Pangalos, head of Astra’s innovative medicines division, tells Vantage. “But I think it’s something worth us investing in, because if it is useful it could be transformational.” Still, it will be a long while before this is proven either way, he concedes: “The proof of the pudding will be if we can deliver proof-of-concept data or launch a medicine. And that’s obviously some years away.” New targets The aim of the deal with Benevolent is to give Astra new insights into the biology of diseases, initially chronic kidney disease (CKD) and idiopathic pulmonary fibrosis (IPF), so it can discover targets or drug candidates that it might not otherwise have found. Key to this effort will be Benevolent’s capabilities in natural language processing (NLP): the way machines analyse human language and pull out relevant information. In this case, the companies will scour publicly available sources, including scientific meetings, journals and patents, to look for links between the diseases of interest and specific genes or proteins (a toy illustration of this idea follows this article). This will generate an evolving dataset that is updated as new findings become available, making the resource more powerful over time, Mr Pangalos explains. Astra is already sitting on a lot of data in CKD and IPF, which is one reason it chose to look first at these disorders; another is that they are complex and poorly understood. But the company will expand into all disease areas if the machine learning approach proves its worth, Mr Pangalos says. Early hints that the technology is doing what it is supposed to could include the company finding a genetically linked target that it would not otherwise have looked for, he adds. No black box As for why Astra chose Benevolent above the many other AI specialists, Mr Pangalos cites the group’s “excellent” NLP skills, an area in which Astra does not have its own capabilities. Another important factor was that Benevolent was willing to work with Astra, rather than just asking the big pharma to trust a “black box” algorithm with no input into or understanding of how – or if – it worked. “There were companies that wanted us to just trust their capability, and for me that wasn’t acceptable because that won’t strengthen our own ability to do this. I didn’t want to become dependent on any one person or group – I wanted the expertise internally to judge whether this is good or not,” Mr Pangalos says, though he refuses to point the finger at any AI company in particular. However, one player, IBM Watson Health, could provide a cautionary tale on the dangers of putting too much faith in machine learning. The company has struggled with various issues, most recently reportedly stopping sales of Watson for Drug Discovery, used in preclinical drug development. The group’s director of global life sciences, Christina Busmalis, told Vantage last week that the company was still supporting the drug discovery product. 
When pressed on whether this meant that the company would not sell Watson for Drug Discovery to new customers, she replied: “We’re supporting our existing clients, and it’s a case-by-case basis of how we go to market.” She also admitted that the company was “investing more heavily into other areas, specifically clinical development”. Astra did meet Watson when it was looking for a machine learning partner, along with many other AI players, Mr Pangalos says. For now he will not give any financial details, but says the Benevolent deal represents “exceptionally good value for money” for Astra. Of course, this will ultimately depend on the collaboration producing results in the long term. “We still don’t know how useful this will be,” Mr Pangalos admits. “But we’ll never get there if we don’t work on it, so it’s important we’re trying it.” Source: Evaluate Read more »
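    To make the literature-mining idea above concrete, here is a toy sketch of its simplest form: counting sentence-level co-occurrences of disease terms and gene symbols. This is a hypothetical illustration only, not Benevolent's pipeline; the gene list, disease terms and example sentences are invented for the demo, and a real system would add entity normalisation, relation extraction and a knowledge graph on top.

```python
import re
from collections import Counter
from itertools import product

# Invented watch-lists for the demo; a real pipeline would use curated
# ontologies (e.g. MeSH for diseases, HGNC for gene symbols).
DISEASES = {"chronic kidney disease", "idiopathic pulmonary fibrosis"}
GENES = {"UMOD", "MUC5B", "TERT"}

# Stand-ins for sentences pulled from journals, abstracts and patents.
sentences = [
    "Variants in MUC5B are associated with idiopathic pulmonary fibrosis.",
    "UMOD expression correlates with chronic kidney disease progression.",
    "TERT mutations were not linked to chronic kidney disease in this cohort.",
]

pair_counts = Counter()
for sentence in sentences:
    lowered = sentence.lower()
    hit_diseases = [d for d in DISEASES if d in lowered]
    # Gene symbols are case-sensitive, so match them with word boundaries.
    hit_genes = [g for g in GENES if re.search(rf"\b{g}\b", sentence)]
    for disease, gene in product(hit_diseases, hit_genes):
        pair_counts[(disease, gene)] += 1

for (disease, gene), n in pair_counts.most_common():
    print(f"{gene} <-> {disease}: {n} co-mention(s)")
```

    Crude as this baseline is, it shows why an evolving corpus matters: every newly indexed abstract can strengthen or weaken a candidate gene-disease link, which is exactly the "updated as new findings become available" property described above.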
  • Eye on A.I.— Retail Has Big Hopes For A.I. But Shoppers May Have Other Ideas
    Walmart has opened a store in Levittown, N.Y. that is intended to showcase the power of artificial intelligence. The store, announced last week, is packed with video cameras, digital screens, and over 100 servers, making it appear more like a corporate data center than a discount retailer. All that machinery helps Walmart automatically track inventory so that it knows when toilet paper is running low or that milk needs restocking. The company’s goal is to create “a glimpse into the future of retail,” when computers rather than humans are expected to do a lot of retail’s grunt work. Walmart’s push into artificial intelligence highlights how retailers are increasingly adding data crunching to their brick-and-mortar stores. But it also sheds light on some of the potential pitfalls as consumers grow increasingly wary of technology amid an endless stream of privacy mishaps at companies like Facebook. Walmart isn’t alone, of course, in trying to reinvent itself in an industry that is facing a huge threat from tech-savvy Amazon. Grocery chain Kroger, for instance, said earlier this year that it had tapped Microsoft to help it build two “connected experience” stores in which shoppers would, among other things, get personalized deals—possibly on their phones as they walk inside or on screens mounted on the shelves. Mark Russinovich, the chief technology officer of Microsoft Azure, told Fortune in a recent interview that such futuristic stores would need to handle their data crunching nearby, rather than doing it far away in the cloud, to avoid lag time. Equipping these Internet-connected stores could be a lucrative business for companies like Microsoft that want to sell computing power to retailers. The vendors even have a name for this emerging market—edge computing. But there’s no guarantee that retailers will be saved by it, because consumers may balk at cameras tracking their movements as they walk up and down the aisles and at being bombarded with discount offers. In apparent anticipation of the blowback, Walmart has filled its new store with kiosks that tell shoppers more about the technology it has installed. For retailers to be successful, consumers must feel comfortable with how their data is being used and how they are tracked. Companies that pitch A.I. as the future shouldn’t just assume that customers will go along for the ride. EYE ON A.I. NEWS Facebook’s A.I. failure. Facebook said that its A.I. systems failed to detect a video by the New Zealand mosque shooter because the video was taken from a first-person point of view. But tech publication Motherboard fed some of the New Zealand shooting footage into Amazon’s image and video detection software and found that it was able to detect guns and weapons. White House A.I. National Plan Version 2.0. Later this year, the White House will release an updated version of its national A.I. plan, reported policy news site FedScoop. The newer version will contain more recommendations to government agencies about A.I. policies. Driver’s Ed needs an update. As more companies like Tesla debut autonomous-driving features, drivers should be trained to use them, said auto news site Jalopnik. The report details how airline pilot training evolved as the aviation industry introduced automation, and suggests that similar training is necessary for drivers so that they know how to safely use self-driving features. Slack’s not slacking on A.I. Workplace chat company Slack filed last week to list its shares directly on the New York Stock Exchange. 
    The filing repeatedly mentions the use of machine learning technology to improve how its chat app recommends relevant topics, people, and documents in workplace discussions. MACHINE LEARNING BABY STEPS Despite machine learning’s buzz in the business world, few companies are using the technology widely; most are still testing it, according to tech publication ZDNet. For those starting out, Warwick Business School associate professor Dr. Panos Constantinides recommends that companies try machine learning in non-critical areas of their operations, like chatbots that field customer inquiries. EYE ON A.I. HIRES The Lustgarten Foundation named Elizabeth Jaffee as its chief medical advisor and said that she will help develop a national pancreatic cancer database based on data from clinical and non-clinical trials. Jaffee is also the deputy director of the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University School of Medicine. Financial services company AllocateRite hired Michael Spece as the firm’s chief of artificial intelligence and data science. Spece was a data science immersion program fellow at e-commerce company Wayfair. The University of Chicago’s National Opinion Research Center picked Susan Paddock as its executive vice president and chief statistician. She was previously a senior statistician at the RAND Corporation think tank. EYE ON A.I. RESEARCH Ludwig-Maximilians University of Munich and the Schoen Clinic Schwabing in Munich were among the institutions that published a paper about using deep learning techniques to better screen for hard-to-detect body movements related to Parkinson’s disease. The researchers used wearable devices containing motion sensors to gather data from patients, and then used a variety of machine-learning and other statistical approaches to analyze the information. A.I. in the power grid. Researchers from the Global Energy Interconnection Research Institute North America published a paper about using reinforcement learning—in which computers learn from repetition—to automatically control voltage settings in a power grid. The researchers used a power grid simulator from the Pacific Northwest National Laboratory as a test bed for their Grid Mind A.I. system. Source: Fortune Read more »
  • Rise of AI: Should Humans Be Worried?
    AI — A Decades-Old Problem The bogeyman of the modern workplace is undoubtedly AI. It seems every day a new article pops up about how machines are going to steal jobs from humans – and soon. The worry around AI has only continued to grow as the technology advances, with more and more enterprises undergoing digital transformation and implementing AI in their everyday business processes. This isn’t a new problem: people have been concerned about machines taking their jobs for decades. Looking back at the Indian Freedom Movement, the concern over automation displacing agrarian jobs was a key influence in pushing the movement forward. Mahatma Gandhi elaborated, “I have a great concern about introducing machine industry. The machine produces too much too fast and brings with it a sort of economic system, which I cannot grasp. I do not want to accept something when I see its evil effects, which outweigh whatever good comes with it.” This view certainly remains pervasive today. However, while AI has evolved quite a bit from its roots, it will never be able to fully take over for humans in decision-making roles. Humans will always play a key role in the workplace. Ultimately, I predict humans will be living and working in an augmented world, where machines are useful supplements and tools in the future workplace rather than completely in charge. Evolution of AI AI has been around for decades and, much like humans, has continued to evolve over time. Human intelligence evolved as people started talking to one another about their ideas, developing better ideas over time through increased collaboration and communication. Machines are now experiencing the same phenomenon. With the introduction of the internet, machines were able to communicate with each other in a way that was previously impossible, allowing them to develop the ability to learn and perform basic tasks. However, there is an important distinction to make between the respective evolutions of machines and humans. Humans have evolved in three dimensions — thought, action, and emotion — and are capable of implementing all three in decision-making processes, while machines have evolved in only two dimensions, thought and action. Irrefutably, AI technology has grown greatly in these two realms: machines can now learn as they go along, process input, make decisions, and take actions much as humans do. The key difference is that with machines, the aspect of emotion is missing. The type of work that humans choose to pursue, the way that we spend our time, and the decisions we make aren’t always necessarily logical. In fact, they’re often ruled more by emotion than anything else. Without the trigger of emotion, an integral part of decision-making and productivity, humans can never be taken fully out of the equation in the workplace, despite the pervasive fear that AI’s capabilities will take over. Recently, it was revealed that Amazon’s experimental hiring AI was discriminatory against women, having taught itself that male candidates were preferable. Without any concrete way to guarantee that the AI wouldn’t continue to be biased moving forward, Amazon abandoned the project. This only shows that we’re still a long way off from ensuring AI is completely fair and accountable — and until we reach that point, humans will continue to be the driving force of the workplace. What Will the World with AI Really Look Like? 
    Interestingly, we can view humans themselves as a kind of AI, able to continually learn and to perform an endless variety of tasks. Actual AI, by contrast, is trained to perform very specific tasks from which it cannot stray. A driverless car powered by AI technology can’t perform the same tasks as an AI-powered spam email detector, as a machine can’t do anything other than what it was designed for. However, humans have the ability both to drive a car and to identify spam email. In this sense, humans can be considered a generalized AI, while machines are specialized AI for specific use cases. AI’s clear-cut applications will only supplement human ability to perform a variety of tasks, in the workplace and outside of it. Ultimately, the tasks and skills that humans and AI bring to the table are complementary, and will not detract from one another. In fact, it is expected that Assisted Intelligence will come to be the norm, rather than Augmented Intelligence (where decision rights sit solely with the human) or Autonomous Intelligence (where they sit solely with the machine). This will only amplify the value of existing activity, and allow humans to perform higher-level tasks rather than focusing on the minutiae. Look at telephones, for example: prior to phones, humans were still able to communicate. However, the telephone created a new opportunity for humans to communicate on a larger and more efficient scale. Today, machines are continuing to come into our lives extensively across industries, forging the way ahead for co-automation. While jobs across industries will leverage AI technology advantageously, it is important to note that there will always be aspects of jobs that cannot be automated. For doctors and nurses, a certain level of compassion and empathy will always be necessary when working with patients; in education, people don’t want a machine teaching their children, they look for human interaction and warmth. While co-automation will be beneficial, it cannot take away the necessity of humans in the workplace. Final Thoughts A co-automated future is already beginning, and eventually, every domain will be affected. In order to prepare for this digital era, upcoming generations must learn to communicate with both AI and humans – knowing how to speak the same language as machines will only help to remove the fear around AI. Teaching children programming and computer fluency will allow them, from a young age, to start working side by side with machines and to be ready for co-automation in the workplace, and even outside of it. There have been leaps and bounds forward in the abilities of AI. Ultimately, there is no need for mass anxiety about the capabilities of emerging technology – at least in our lifetime, machines will not be able to work towards their own self-interests to take over the job market, despite what some of the media hype would like you to believe. We should rather view AI as a helpful partner or tool that bolsters our roles in the workplace, and work towards that vision rather than fear it. Source: Mar Tech Series Read more »
  • A Promising future: How AI is making big strides in breast cancer treatment
    Breast cancer is the most common cancer in the UK. It accounts for 15% of all new cases in the country, and about one in eight women will be diagnosed with it during their lifetime. In the NHS, breast cancer screening routinely includes a mammogram, which is essentially an X-ray of the breast. But the future of this early test is at risk as the number of specialists able to read them declines. While this skills shortfall can’t be made up immediately, promising advances in artificial intelligence may be able to help. Interpreting a mammogram is a complex process normally performed by specially trained radiologists and radiographers. Their skills are vital to the early detection and diagnosis of this cancer. They visually scrutinise batches of mammograms in a session for signs of breast cancer. But these signs are often ambiguous or hard to see. False negative rates – where cancers are incorrectly diagnosed or missed – are between 20 and 30% for mammography. These are either errors in perception or errors in interpretation, and can be attributed to the sensitivity or specificity of the reader. It is widely believed that the key to developing the expertise needed to interpret mammograms is rigorous training, extensive practice and experience. And while researchers like myself are looking into training strategies and perceptual learning modules which can expedite the transition from novice reader to expert, others have been investigating how AI could be used to speed up diagnosis and improve its accuracy. Machine diagnosis As in countless other fields, the potential for AI algorithms to help with cancer diagnosis has not gone unrecognised. Along with breast cancer, researchers have been looking at how AI can improve the efficacy and efficiency of care for lung, brain and prostate cancer, in order to meet ever-increasing diagnosis demands. Even Google is looking at how AI can be used to diagnose cancer. The search giant has trained an algorithm to detect tumours which have metastasised, with a 99% success rate. For breast cancer, the focus so far has been on how AI can help diagnose the disease from mammograms. Every mammogram is read by two specialists, which can lead to potential delays in diagnosis if there is a shortfall in expertise. But researchers have been looking at introducing AI systems at the time of the screening. The idea is that they would support a specialist’s findings without waiting for the second opinion of another professional. This would reduce the waiting time and associated anxiety for the women who have been tested. AI has already made substantial strides in cancer image recognition. In late 2018, researchers reported that one commercial system matched the accuracy of over 28,000 interpretations of screening mammograms by 101 radiologists. This means it achieved a cancer detection accuracy comparable to an expert radiologist. In another study led by the same researcher, radiologists using an AI system for support showed an improved rate of breast cancer detection – rising from 83% to 86%. This research also found that using an AI system reduced the amount of time radiologists spent analysing the images on screen. Fine tuning But while the potential of AI has been welcomed by some radiologists, it has brought suspicion from others. 
    And though other researchers have also found that AI is just as good at detecting breast cancers from mammograms as its human counterparts, this comes with the caveat that more fine-tuning and software improvement are needed before it can be safely introduced into breast screening programmes. Exciting as it may be to think that AI could be used to help detect such a prevalent cancer, specialist and public confidence needs to be taken into consideration before it can be introduced. Acceptance of the technology is vital so that patients and medical professionals know they are receiving the correct results. As yet, there has been little research into the public perception of AI in breast cancer screening, but more general studies into AI and healthcare have found that 39% of people are willing to engage with artificial intelligence/robotics for healthcare. This rises to 55% for the 18- to 24-year-old demographic. The AI systems are still in the research phase, with no firm plans to use them to diagnose patients in the UK yet. But these promising results show there is a tremendous opportunity for the delivery of radiology healthcare services, and ultimately the potential to detect more patients with breast and other cancers. (A minimal sketch of the kind of image classifier such studies evaluate follows this article.) Source: Business Standard News Read more »
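    As flagged above, here is a minimal sketch of the kind of convolutional classifier such studies evaluate, assuming 224x224 grayscale mammogram patches labelled normal or suspicious. The architecture, input size and labels are illustrative assumptions; it does not reflect the design of any commercial system mentioned in the article.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_patch_classifier(input_shape=(224, 224, 1)):
    """Toy binary classifier for grayscale mammogram patches (hypothetical)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(patch is suspicious)
    ])
    # AUC is the usual headline metric in screening studies because it
    # captures the sensitivity/specificity trade-off the article describes.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

model = build_patch_classifier()
model.summary()
```

    In practice such a model would be trained on curated, clinically validated datasets and judged against reader studies like the 101-radiologist comparison above; the sketch only shows the shape of the approach.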
  • Amazing AI Generates Entire Bodies of People Who Don’t Exist
    Embodied AI: A new deep learning algorithm can generate high-resolution, photorealistic images of people — faces, hair, outfits, and all — from scratch. The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media. Catalog From Hell: The algorithm was developed by DataGrid, a tech company housed on the campus of Japan’s Kyoto University, according to a press release. In a video showing off the tech, the AI morphs and poses model after model as their outfits transform, bomber jackets turning into winter coats and dresses melting into graphic tees. Specifically, the new algorithm is a Generative Adversarial Network (GAN). That’s the kind of AI typically used to churn out new imitations of something that exists in the real world, whether they be video game levels or images that look like hand-drawn caricatures.   Photorealistic Media: Past attempts to create photorealistic portraits with GANs focused just on generating faces. These faces had flaws like asymmetrical ears or jewelry, bizarre teeth, and glitchy blotches of color that bled out from the background. DataGrid’s system does away with all of that extraneous info that can confuse algorithms, instead posing the AI models in front of a nondescript white background and shining realistic-looking light down on them. Each time scientists build a new algorithm that can generate realistic images or deep fakes that are indistinguishable from real photos, it seems like a new warning that AI-generated media could be readily misused to create manipulative propaganda. Here’s hoping that this algorithm stays confined within the realm of fashion catalogs. Source: Futurism Read more »
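    The adversarial structure the article names can be shown compactly. Below is a toy-scale sketch in PyTorch with invented tensor sizes; DataGrid has not published its architecture, so this illustrates only the general GAN recipe, not their system.

```python
import torch
from torch import nn

latent_dim, data_dim = 16, 64  # toy sizes, stand-ins for image tensors

# Generator: noise -> fake sample. Discriminator: sample -> real/fake score.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # 1) Discriminator step: push real samples toward 1, fakes toward 0.
    fake = G(torch.randn(b, latent_dim)).detach()  # freeze G for this step
    loss_d = (bce(D(real_batch), torch.ones(b, 1)) +
              bce(D(fake), torch.zeros(b, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Generator step: make D score fresh fakes as real.
    fake = G(torch.randn(b, latent_dim))
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on stand-in "real" data in [-1, 1], matching the Tanh output.
print(train_step(torch.rand(32, data_dim) * 2 - 1))
```

    The two-player loop is the whole trick: the discriminator improves only while fakes remain detectable, and the generator improves only by making them harder to detect, which is what pushes GAN outputs toward photorealism.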

Artificial Intelligence Weekly News

  • Artificial Intelligence Weekly - Artificial Intelligence News #107 - May 16th 2019
    In the News Facing up to AI’s failings Intel’s ZombieLoad flaw, Alibaba, Tencent sales rise, OnePlus’s new pop-up ft.com Why San Francisco’s ban on face recognition is only the start of a long fight The city government can’t use the technology, but private companies still can, and regulating those uses is a thornier problem. technologyreview.com Sponsor Two Leading Newsletters, One Simple Objective: Generate Leads For Your Sales Team. Our engaged audiences are made up of 100K subscribers, of which 40%+ are executives from 10K companies in the US/UK, including most of the Forbes 500 companies. Put your message in front of the right B2B audience of highly engaged business leaders who search for innovative solutions. essentials.news Learning European Union AI Ecosystem A thorough summary of AI policy in the EU charlottestix.com Google AI Impact Challenge names 2019 grantees List of 20 projects that won this challenge in addressing social, humanitarian and environmental problems using AI. blog.google Five questions you can use to cut through AI hype Here’s a checklist for assessing the quality and validity of a company’s machine-learning product. technologyreview.com Software tools & code Technical debt for data scientists How to identify technical debt and how to get back on track once projects get mired in debt. shotwell.ca Data version control with DVC Data versioning is one of the most ignored features in data science projects, but that has to change. Here I’ll discuss a recent podcast by one of the creators of the best tool for this purpose, DVC. towardsdatascience.com A new way to build tiny neural networks could create powerful AI on your phone We’ve been wasting our processing power to train neural networks that are ten times too big. technologyreview.com Some thoughts The Empty Promise of Data Moats Having more data may not be the silver bullet that many business leaders have long assumed it to be. a16z.com Will Artificial Intelligence Enhance or Hack Humanity? wired.com This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #106 - May 2nd 2019
    Welcome Will Artificial Intelligence Enhance or Hack Humanity? This week, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence. wired.com In the News Vue AI raises $17 million for AI-driven retail products AI in commerce is a fast-growing — and highly lucrative — industry. venturebeat.com The once-hot robotics startup Anki is shutting down after raising more than $200 million It’s a hard, hard fall. vox.com Real or artificial? Tech titans declare AI ethics concerns idahobusinessreview.com Sponsor Ladder Spotlight: Performance Marketing & Automated Data Science Take the guesswork out of growth: see what creative/message combos to double down on, access anomaly detection across any metric, clearly see full-funnel performance by channel, and more. Meet Ladder Spotlight, technology for marketing growth. ladder.io Featured Forget about artificial intelligence, extended intelligence is the future We should challenge the cult of the Singularity. AI won’t take over the world wired.co.uk I Quit My Job to Protest My Company’s Work on Building Killer Robots I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war. aclu.org Here Come AI-Enabled Cameras Meant to Sense Crime Before it Occurs Imagine it were possible to recognize not the faces of people who had already committed crimes, but the behaviors indicating a crime that was about to occur. defenseone.com Learning What Is AI Anyway? Together with Hannah Fry, Murray Shanahan, Ali Eslami and Kimberly Stachenfeld from British AI company DeepMind guide you through the world of AI to find out what it really is (and what it’s not). cheltenhamfestivals.com Studying the behavior of AI A new paper frames the emerging interdisciplinary field of machine behavior medium.com Workplace The Methodology and Ethics of Targeting Governments and companies can now model and predict the beliefs, preferences, and behaviour of small groups and even individuals, allowing them to “target” interventions, messages, and services much more narrowly. These new forms of targeting present huge opportunities to make valuable interventions more effective, for example by delivering public services to those most in need of them. However, the use of more fine-grained information about individuals and groups also raises huge risks, challenging key notions of privacy, fairness, and autonomy. eventbrite.co.uk This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #105 - Apr 25th 2019
    In the News Bloomberg review of TikTok and its data-backed approach to social The video-sharing app by the Chinese-owned Bytedance, the world’s most valuable startup, has a younger audience than Facebook, an algorithm that learns you, and different ideas about free speech. bloomberg.com Also in the news this week... The top 5 AI trends. More An AI startup's approach to solving fashion's data problem. More Mind-reading device uses ML to turn brainwaves into audible speech. More Sponsor AI Hardware Asia Summit launches in Beijing, June 4-5 2019 The AI Hardware Asia Summit is the second in a global series focusing on AI accelerator technologies and the design and application of silicon and systems for processing deep learning, neural networks and computer vision. Discover the Agenda now! aihardwareasiasummit.com Learning Data Science at the young Uber Discussion with Bradley Voytek, the first data scientist at Uber superdatascience.com Interview with Google Photos’ Product Lead On managing and improving an app that uses ML at its core and has the potential to serve a few billion people. ycombinator.com Lecture by Demis Hassabis on the state of AI youtube.com Software tools & code One model to rule them all Predictive performance is just the beginning of how you should be evaluating your models. bentoml.com A visual proof that Neural Networks can compute any function You might have heard that many times, but here is an interesting way to visualize it and progressively build the NNs computing any function neuralnetworksanddeeplearning.com Advanced NLP with spaCy Online course covering NLP topics like basic statistical models, extracting information from large volumes of text and pipelines. spacy.io Hardware What Machine Learning needs from Hardware More Arithmetic, Inference, Low Precision, Compatibility and Codesign petewarden.com This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #104 - Apr 18th 2019
    Welcome Dear readers, It has now been close to 3 years since I started AI Weekly. From an initial group of just a few readers, it has grown into one of the largest AI news sources, with more than 16,000 readers. During those 3 years, the AI & ML fields have evolved a lot, and so has the need for curated information to better understand what is going on and stay up-to-date. To continue on this mission, AI Weekly is now merging with Essentials, a news platform dedicated to bringing curated and personalized news feeds. This will help make the newsletter even better by sourcing from a larger and wider pool of news & articles. Expect to see those improvements rolling out in the coming few weeks. Hang on, sit tight and read on :). David, AI Weekly Hello to all fellow AI Weekly subscribers. It's an honor for us at Faveeo Essentials to get in touch with you and join forces with AI Weekly, using our AI-driven engine to help curate the content on this list in the next few months. Essentials has been built on the principle that there has never been so much great content out there; the thing is, it's also harder than ever to find it in the distracting world of social media. Even for a great curator like David, the task becomes harder by the day. To solve this, we developed Essentials, an AI that first identifies and monitors the top experts and curators in any industry and, second, highlights the great content that is vetted and hand-picked by these great minds of the industry. Think of it as a "Best of Social Media Digest" driven not by popularity but by quality and trust. Our engine curates the curators, to bring to the surface the links and news you don't want to miss. This offers a deeper curation experience by combining the best of both worlds: highly curated content, while still benefiting from the discovery aspect of social media! We currently cover five deep-dive topics (among them Use cases, Research, and Ethics & Robotics), and you can test-drive them here! In the next few weeks we'll also keep you informed about the transition phase and work together to make sure that you still get the same great value as you had in the past! Best, Alexis @ Essentials This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #103 - Apr 11th 2019
    In the News Bengio: ‘The dangers of abuse are very real’ Yoshua Bengio, winner of the prestigious Turing award for his work on deep learning, is establishing international guidelines for the ethical use of AI. nature.com The Animal-AI Olympics is going to treat AI like a lab rat The $10,000 competition will test AI with challenges that were originally designed to test animal cognition, to see how close we are to machines that have common sense. technologyreview.com Also in the news this week... Google pulls the plug on its AI council, just one week after its launch... More ... and launches an "end-to-end" AI platform to train and deploy ML models, but keep in mind that there are specific service terms Sponsor Save 40% off your Manning order! AI, machine learning, and deep learning are exploding, driving everything from autonomous vehicles to real-time computer vision and speech recognition. Are you ready to be a part of it? Manning Publications are offering 40% off your entire order, including their entire range of data books and videos! bit.ly Learning The Human Side of Tesla Autopilot Eye-opening study on how people use Tesla’s Autopilot mit.edu Smart home, machine learning and discovery As always, a thoughtful and interesting piece by Ben Evans. ben-evans.com Software tools & code Teaching machines to reason about what they see Researchers combine statistical and symbolic artificial intelligence techniques to speed learning and improve transparency. mit.edu Introducing TensorFlow Privacy Learning with Differential Privacy for Training Data medium.com A Guide to Learning with Limited Labeled Data fastforwardlabs.com Alexa AI scientists reduce speech recognition errors up to 22% with semi-supervised learning venturebeat.com Some thoughts MIT is using AI to invent new flavor combinations and foods ... it suggested a shrimp, jelly, and sausage pizza businessinsider.fr This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #102 - Apr 4th 2019
    In the News Three pioneers in Artificial Intelligence win Turing award Yann LeCun, Geoffrey Hinton and Yoshua Bengio worked on key developments for neural networks, which are reshaping how computer systems are built. Yann LeCun is from Paris, France. nytimes.com McDonald's acquires Machine-Learning startup Dynamic Yield for $300 million wired.com Sponsor Train towards a guaranteed machine learning job The first online course to offer unlimited 1:1 mentorship from machine learning experts, career coaching, and partnerships to land you a guaranteed machine learning job. Get a job or your tuition back. springboard.com Learning A survey of the European Union’s A.I. ecosystem Compared to other global powers, the European Union (EU) is rarely considered a leading player in the development of artificial intelligence (AI). Why is this, and does this in fact accurately reflect the EU’s activities related to AI? What would it take for the EU to take a more leading role in AI, and to be internationally recognised as such? charlottestix.com Towards Robust and Verified AI: specification testing, robust training, and formal verification Today, the prevailing practice in machine learning is to train a system on a training data set, and then test it on another set. While this reveals the average-case performance of models, it is also crucial to ensure robustness, or acceptably high performance even in the worst case. In this article, we describe three approaches for rigorously identifying and eliminating bugs in learned predictive models: adversarial testing, robust learning, and formal verification. deepmind.com How malevolent machine learning could derail AI AI security expert Dawn Song warns that “adversarial machine learning” could be used to reverse-engineer systems—including those used in defense. technologyreview.com Software tools & code How to teach stats Common statistical tests are linear models (a short demonstration follows this issue) github.io TensorFlow is dead, long live TensorFlow! If you’re an AI enthusiast and you didn’t see the big news this month, you might have just snoozed through an off-the-charts earthquake. Everything is about to change! hackernoon.com Some thoughts Who Owns Your Health Data? Companies are denying people access to their own data as security risks run rampant medium.com About AI Ethics: Seven Traps As researchers try to come up with principles to apply when seeking to build and deploy AI systems in an ethical way, what problems might they need to be aware of? That's the question that researchers from Princeton try to answer in a blog post on seven "AI ethics traps" that people might stumble into. freedom-to-tinker.com This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
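    As noted in the issue above, the "common statistical tests are linear models" idea can be demonstrated in a few lines: an independent two-sample t-test with pooled variance gives exactly the same p-value as regressing the outcome on a 0/1 group indicator. The data below is synthetic; only numpy and scipy are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 40)  # group A outcomes (synthetic)
b = rng.normal(0.5, 1.0, 40)  # group B outcomes (synthetic)

# Classic two-sample t-test (equal variances assumed).
t, p_ttest = stats.ttest_ind(a, b)

# The same test as a linear model: y ~ intercept + slope * group_dummy.
y = np.concatenate([a, b])
x = np.concatenate([np.zeros(40), np.ones(40)])
slope, intercept, r, p_slope, se = stats.linregress(x, y)

print(f"t-test p = {p_ttest:.6f}")
print(f"regression slope p = {p_slope:.6f}")  # identical to the t-test p
```

    The linked guide extends the same translation to ANOVA and to rank-based nonparametric tests, which is what makes it a useful teaching device.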
  • Artificial Intelligence Weekly - Artificial Intelligence News #101 - Mar 27th 2019
    In the News Artificial intelligence: art’s weird and wonderful new medium The brave new world of AI-generated artworks is captivating venerable institutions and pioneering collectors, says Francesca Gavin ft.com DeepMind and Google: the battle to control artificial intelligence Demis Hassabis founded a company to build the world’s most powerful AI. Then Google bought him out. Hal Hodson asks who is in charge 1843magazine.com China is about to overtake America in AI research China will publish more of the most-cited 50 percent of papers than America for the first time this year theverge.com Sponsor Download the SwissBorg Community App. Predict, learn and earn Bitcoin with zero risk! New to cryptos or an early adopter? This game puts every player into real-life, risk-return investment decisions with a single prediction feature. The best players will receive BTC & priority access to the upcoming SwissBorg Wealth App. adjust.com Learning Stores get smart about AI Artificial intelligence is infiltrating physical retail, helping stores maximise marketing investments, personalise the customer experience and optimise store inventory. voguebusiness.com The Little Black Dress What happens if AI were to design one of the essential items of women’s wardrobes, the Little Black Dress? lbd-ai.com How Artificial Intelligence Is Changing Science The latest AI algorithms are probing the evolution of galaxies, calculating quantum wave functions, discovering new chemical compounds and more. Is there anything that scientists do that can’t be automated? quantamagazine.org Software tools & code Global dataset version control system (GDVCS) Qri is a version control system for datasets built on top of the distributed web. Like Git for data, it allows datasets to be easily discovered and kept up-to-date. github.com An all-neural on-device Speech Recognizer End-to-end, all-neural, on-device speech recognizer to power speech input in Gboard, based on an RNN-transducer googleblog.com A Data Scientist designed a Social Media influencer account that's 100% automated A data engineer who created a 100% automated Instagram account to earn free meals at restaurants looking for promotion is offering his services to clients too. buzzfeednews.com Workplace Why Data Science teams need generalists, not specialists hbr.org This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #100 - Mar 13th 2019
    In the News The AI-art gold rush is here An artificial-intelligence “artist” got a solo show at a Chelsea gallery. Will it reinvent art, or destroy it? theatlantic.com Also in the news... OpenAI created OpenAI LP, a new “capped-profit” company to invest in projects that align with its vision towards AGI. More Google Duplex rolls out to Pixel phones in 43 US states. More Sponsor Machine Learning for Marketers: We Built a Marketing Tactic Recommendation Engine Ladder’s mission is to remove the guesswork from growth. Machine learning, data science, and automated intelligence are their latest leap forward. Learn how they built a marketing tactic recommendation engine, and how they’re currently working to automate marketing strategy. Learn More ladder.io Learning Humanity + AI: Better Together Frank Chen on how A.I. should "help humanity": creativity, decision making, understanding etc. Many examples too — mainly companies they invested in. a16z.com You created a machine learning application. Now make sure it’s secure The software industry has demonstrated, all too clearly, what happens when you don’t pay attention to security oreilly.com Technical details on Facebook Portal's smart camera facebook.com Software tools & code Lessons learned building natural language processing systems in health care NLP systems in health care are hard—they require broad general and medical knowledge, must handle a large variety of inputs, and need to understand context. oreilly.com Using deep learning to “read your thoughts” With Keras and an EEG sensor medium.com Jupyter Lab: Evolution of the Jupyter Notebook An overview of JupyterLab, the next generation of the Jupyter Notebook. towardsdatascience.com Cocktail similarity Fun project to generate a cocktail similarity map based on common ingredients observablehq.com Hardware Launching TensorFlow Lite for Microcontrollers petewarden.com Workplace 12 things I wish I’d known before starting as a Data Scientist Useful and practical advice by someone who has worked as a data scientist for a few years at Airbnb. medium.com Some thoughts Driver Behaviours in a world of Autonomous Mobility These are the behaviours and practices that will become mainstream in our self-driving urban landscape. medium.com This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #99 - Feb 28th 2019
    In the News This is why AI has yet to reshape most businesses For many companies, deploying AI is slower and more expensive than it might seem. This applies across industries, from dating sites to retail, insurance and telcos. technologyreview.com We have stumbled into the era of machine psychology wordpress.com Nvidia's got a cunning plan to keep powering the AI revolution Nvidia’s artificial intelligence journey started with cats. Now it's heading to the kitchen wired.co.uk Sponsor Partner with Neon to Develop New Conversational AI Apps Take your apps and devices to the next level. Neon’s time-tested, white-label product delivers easy-to-install code with endless possibilities. Our Polylingual AI offers real-time translation, transcription, natural language understanding, home automation and more. Let's build the future together. neongecko.com Learning 14 NLP Research Breakthroughs You Can Apply To Your Business BERT, sequence classification with human attention, SWAG, meta-learning, multi-task learning etc. topbots.com The technology behind OpenAI’s fiction-writing, fake-news-spewing AI, explained The language model can write like a human, but it doesn’t have a clue what it’s saying. technologyreview.com Data Versioning "Productionizing machine learning/AI/data science is a challenge. Not only are the outputs of machine-learning algorithms often compiled artifacts that need to be incorporated into existing production services, the languages and techniques used to develop these models are usually very different from those used in building the actual service. In this post, I want to explore how the degrees of freedom in versioning machine learning systems pose a unique challenge. I'll identify four key axes on which machine learning systems have a notion of version, along with some brief recommendations for how to simplify this a bit." emilygorcenski.com Foundations Built for a General Theory of Neural Networks Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural network’s form will influence its function. quantamagazine.org Software tools & code How 20th Century Fox uses ML to predict a movie audience google.com How Hayneedle created its visual search engine medium.com Some thoughts Meet 2 women transforming the AI Ecosystem in Africa forbes.com About This newsletter is a collection of AI news and resources curated by @dlissmyr. If you find it worthwhile, please forward to your friends and colleagues, or share on your favorite network! Share on Twitter · Share on Linkedin Suggestions or comments are more than welcome, just reply to this email. Thanks! This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »
  • Artificial Intelligence Weekly - Artificial Intelligence News #98 - Feb 21st 2019
    In the News The Rise of the Robot Reporter As reporters and editors find themselves the victims of layoffs at digital publishers and traditional newspaper chains alike, journalism generated by machine is on the rise. nytimes.com Getting smart about the future of AI Artificial intelligence is a primary driver of possibilities and promise as the Fourth Industrial Revolution unfolds. technologyreview.com Sponsor Add Audible AI to Any Web Page in Just 5 Lines of HTML Neon adds Audible AI to all your pages quickly and easily! Empower your website users to gather helpful information by using voice commands or by typing. Equip your site so users can ask for real-time Q&A, conversions, math solutions, language translation, transcription & more! Customizable! Watch our Audible AI demo to learn how. neongecko.com Learning Better Language Models and Their Implications OpenAI trained GPT-2, a language generation model that achieved surprisingly good results (see article for examples). Seeing this performance, OpenAI decided not to open-source its best model for fear it might be misused (online trolling, fake news, cyberbullying, spam...) openai.com List of Machine Learning / Deep Learning conferences in 2019 tryolabs.com Perspectives on issues in AI Governance Report by Google focusing on 5 areas for clarification: explainability, fairness, safety, human-AI collaboration and liability ai.google Software tools & code Introducing PlaNet Instead of using traditional RL approaches, Google has trained an agent to "learn a world model" and thus become more efficient at planning ahead. googleblog.com Troubleshooting Deep Neural Networks A field guide to fixing your model josh-tobin.com Hardware Edge TPU Devices The Edge TPU is a small ASIC designed by Google that performs ML inference on low-power devices. For example, it can execute MobileNet V2 at 100+ fps in a power-efficient manner. withgoogle.com Facebook is Working on Its Own Custom AI Silicon extremetech.com Workplace Succeeding as a data scientist in small companies/startups medium.com Some thoughts Will AI achieve consciousness? Wrong question When Norbert Wiener, the father of cybernetics, wrote his book The Human Use of Human Beings in 1950, vacuum tubes were still the primary electronic building blocks, and there were only a few actual computers in operation. wired.com About This newsletter is a collection of AI news and resources curated by @dlissmyr. If you find it worthwhile, please forward to your friends and colleagues, or share on your favorite network! Share on Twitter · Share on Linkedin Suggestions or comments are more than welcome, just reply to this email. Thanks! This RSS feed is published on http://aiweekly.co/. You can also subscribe via email. Read more »