Archive for the ‘Machine Learning’ Category
23 AI predictions for the enterprise in 2023 – VentureBeat
Posted: December 29, 2022 at 12:20 am
It's that time of year again, when artificial intelligence (AI) leaders, consultants and vendors look at enterprise trends and make their predictions. After a whirlwind 2022, it's no easy task this time around.
You may not agree with every one of them, but in honor of 2023, here are 23 top AI and ML predictions that experts think will be spot-on for the coming year:
In 2023, we're going to see more organizations start to move away from deploying siloed AI and ML applications that replicate human actions for highly specific purposes and begin building more connected ecosystems with AI at their core. This will enable organizations to take data from throughout the enterprise to strengthen machine learning models across applications, effectively creating learning systems that continually improve outcomes. For enterprises to be successful, they need to think about AI as a business multiplier, rather than simply an optimizer.
Vinod Bidarkoppa, CTO of Sam's Club and SVP of Walmart
The hype about generative AI becomes reality in 2023. That's because the foundations for true generative AI are finally in place, with software that can transform large language models and recommender systems into production applications that go beyond images to intelligently answer questions, create content and even spark discoveries. This new creative era will fuel massive advances in personalized customer service, drive new business models and pave the way for breakthroughs in healthcare.
Manuvir Das, senior vice president, enterprise computing, Nvidia
We're seeing AI and powerful data capabilities redefine the security models and capabilities for companies. Security practitioners and the industry as a whole will have much better tools and much faster information at their disposal, and they should be able to isolate security risks with much greater precision. They'll also be using more marketing-like techniques to understand anomalous behavior and bad actions. In due time, we may very well see parties using AI to infiltrate systems, attempt to take over software assets through ransomware and take advantage of the cryptocurrency markets.
Ashok Srivastava, senior vice president and chief data officer, Intuit
Next year teams that focus on ML operations, management and governance will have to do more with less. Because of this, businesses will adopt more off-the-shelf solutions because they are less expensive to produce, require less research time and can be customized to fit most needs. MLOps teams will also need to consider open-source infrastructure instead of getting locked into long-term contracts with cloud providers. Open source delivers flexible customization, cost savings and efficiency. Especially with teams shrinking across tech, this is becoming a much more viable option.
Moses Guttmann, CEO, ClearML
The biggest source of improvement in AI has been the deployment of deep learning and especially transformer models in training systems, which are meant to mimic the action of a brain's neurons and the tasks of humans. These breakthroughs require tremendous compute power to analyze vast structured and unstructured datasets. Unlike CPUs, graphics processing units (GPUs) can support the parallel processing that deep learning workloads require. That means in 2023, as more applications founded on deep learning technology emerge to do everything from translating menus to curing disease, demand for GPUs will continue to soar.
Nick Elprin, CEO, Domino Data Lab
Modern AI technology is already being used to help managers, coaches and executives with real-time feedback to better interpret inflection, emotion and more, and provide recommendations on how to improve future interactions. The ability to interpret meaningful resonance as it happens is a level of coaching no human being can provide.
Zayd Enam, CEO, Cresta
As fear and protectionism create barriers to data movement and processing locations, AI adoption will slow down. Macroeconomic instability, including rising energy costs and a looming recession, will hobble the advancement of AI initiatives as companies struggle just to keep the lights on.
Rich Potter, CEO, Peak
Since model deployment, scaling AI across the enterprise, reducing time to insight and reducing time to value will become the key success criteria, AI/ML engineers will become critical to meeting these criteria. Today, a lot of AI projects fail because they are not built to scale or [to] integrate with business workflows.
Nicolas Sekkaki, GM of applications, Data and AI, Kyndryl
As the AI/ML market continues to flood with new solutions, as evidenced by the volume of startups and VC capital deployed in the space, enterprises have found themselves with a collection of niche, disparate tools at their disposal. In 2023, enterprises will be more conscious of selecting solutions that will be more interoperable with the rest of their ecosystem, including their on-premises footprint and across cloud providers (AWS, Azure, GCP). Additionally, enterprises will gravitate toward a handful of leading solutions as the disparate tools mature and come together in bundles as standalone solutions.
Anay Nawathe, principal consultant, ISG
Advanced machine learning technologies will enable no-code developers to innovate and create applications never seen before. This evolution may pave the way for a new breed of development tools. In a likely scenario, application developers will program the application by describing their intent, rather than describing the data and the logic as they'd do with low-code tools of today.
Esko Hannula, SVP of product management, Copado
This past year was filled with incredibly impressive technological advancements, popularized by ChatGPT, DALL-E 2, Galactica and Facebook's Make-A-Video. These massive models were made possible largely due to the availability of endless volumes of training data, and huge compute and infrastructure resources. Heading into 2023, funding for true blue-sky research will slow down as organizations become more conservative in spending to brace for the looming recession and will shift from investing in fundamental research to more practical applications. With more companies becoming increasingly frugal to mitigate this imminent threat, we can anticipate increased use of pre-trained models and more focus on applying the advancements from previous years to more concrete applications.
John Kane, head of signal processing and machine learning, Cogito
Chatbots are the obvious application for ChatGPT, but they are probably not going to be the first one. First, ChatGPT today can answer questions, but it cannot take actions. When a user contacts a brand, they sometimes just want answers, but often they want something done: process a return, cancel an account or transfer funds. Secondly, when used to answer questions, ChatGPT can answer based on knowledge [found] on the internet. But it doesn't have access to knowledge which is not online. Finally, ChatGPT excels at generation of text, creating new content derived from existing online information. When a user contacts a brand, they don't want creative output; they want immediate action. All of these issues will get addressed, but it does mean that the first use case is probably not chatbots.
Jonathan Rosenberg, CTO, Five9
Digital engagement has become the default rather than the fallback, and every interaction counts. While the emergence of automation initially resolved basic FAQs, it's now providing more advanced capabilities: personalizing interactions based on customer intent, empowering people to take action and self-serve, and making predictions on their next best action.
The only way for businesses to scale a VIP digital experience for everyone is with an AI-driven automation solution. This will become a C-level priority for brands in 2023, as they determine how to evolve from a primarily live agent-based interaction model to one that can be primarily serviced through automated interactions. AI will be necessary to scale operations and properly understand and respond to what customers are saying, so brands can learn what their customers want and plan accordingly.
Jessica Popp, CTO of Ada
Coming soon are industry-specific AI model marketplaces that enable businesses to easily consume and integrate AI models in their business without having to create and manage the model lifecycle. Businesses will simply subscribe to an AI model store. Think of the Apple Music store or Spotify, but for AI models, broken down by industry and the data they process.
Bryan Harris, executive vice president and chief technology officer, SAS
As individuals continue to worry about how businesses and employers will use AI and machine learning technology, it will become more important than ever for companies to provide transparency into how their AI is applied to worker and finance data. Explainable AI will increasingly help to advance enterprise AI adoption by establishing greater trust. More providers will start to disclose how their machine learning models lead to their outputs (e.g. recommendations) and predictions, and well see this expand even further to the individual user level with explainability built right into the application being used.
Jim Stratton, CTO, Workday
Federated learning is a machine learning technique that can be used to train machine learning models at the location of data sources, by communicating only the trained models from individual data sources to reach a consensus for a global model. Therefore, instead of using the traditional approach of collecting data from multiple sources into a centralized location for model training, this technique learns a collaborative model. Federated learning addresses some of the major issues that prevail in current machine learning practice, such as data privacy, data security, data access rights and access to data from heterogeneous sources.
David Murray, chief business officer, Devron
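As a hedged illustration of the federated learning idea described above, the following Python sketch simulates federated averaging: each hypothetical data source fits a local linear model on its own data, and only the trained weights (never the raw data) are communicated and averaged into a global model. The data, the local_train routine and the three "sites" are invented stand-ins for illustration, not Devron's implementation.

import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y):
    # Ordinary least squares solved locally; the raw data never leaves the site.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three hypothetical data sources sharing the same underlying relationship y ~ X @ [2, -1].
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

local_weights = [local_train(X, y) for X, y in sites]  # trained at the data's location
global_weights = np.mean(local_weights, axis=0)        # consensus global model
print("Global model weights:", global_weights)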
While most people write scrapers today to get data off of websites, natural language processing (NLP) has progressed to the point where, soon, you will be able to describe in natural language what you want to extract from a given web page and the machine will pull it for you. For example, you could say, "Search this travel site for all the flights from San Francisco to Boston and put all of them in a spreadsheet, along with price, airline, time and day of travel." It's a hard problem, but we could actually solve it in the next year.
Varun Ganapathi, CTO and co-founder, AKASA
With remote work, boundaries are becoming increasingly blurred. Today it's common for people to work and converse with colleagues across borders, even if they don't share a common language. Manual translation can become a hindrance that slows down productivity and innovation. We now have the technology to use communication tools such as Zoom that allow someone in Turkey, for example, to speak their native language while someone in the U.S. hears what they're saying in English. This real-time speech translation ultimately helps with efficiency and productivity while also giving businesses more of an opportunity to operate globally.
Manoj Chaudhary, CTO and SVP of engineering, Jitterbit
By now, everyone has seen AI-created deepfake videos. They are leveraged for a variety of purposes, ranging from reanimating a lost loved one to disseminating political propaganda or enhancing a marketing campaign. However, imagine receiving a phishing email with a deepfake video of your CEO instructing you to go to a malicious URL. Or an attacker constructing more believable, legitimate-seeming phishing emails by using AI to better mimic corporate communications. Modern AI capabilities could completely blur the lines between legitimate and malicious emails, websites, company communications and videos. Cybercrime AI-as-a-service could be the next monetized tactic.
Heather Gantt-Evans, CISO, SailPoint
In the year ahead, we will see enterprises turn to a hybrid approach to natural language processing, combining symbolic AI with ML, which has been shown to produce explainable, scalable and more accurate results while leaving a smaller carbon footprint. Companies will expand automation to more complex processes, requiring accurate understanding of documents, and extend their data analytics activities to include data embedded in text and documents. Therefore, investments in AI-based natural language technologies will grow. These solutions will have to be accurate, efficient, environmentally sustainable, explainable and not subject to bias. This requires enterprises to abandon single-technique approaches, such as machine learning (ML) or deep learning (DL) alone, because of their intrinsic limitations.
Luca Scagliarini, chief product officer, Expert.ai
Advancements in AI-generated music will be a particularly interesting development. Now [that] tools exist that generate visual art from text prompts, these same tools will be improved to do the same for music. There are already models available that use text prompts to generate music and realistic human voices. Once these models start performing well enough that the public takes notice, progress in the field of generative audio will accelerate even further. It's not unreasonable to think, within the next few years, that AI-generated music videos could become reality, with AI-generated video, music and vocals.
Ulrik Stig Hansen, president, Encord
There will be less investment within Fortune 500 organizations allocated to internal ML and data science teams to build solutions from the ground up. It will be replaced with investments in fully productized applications or platform interfaces to deliver the desired data analytics and customer experience outcomes in focus. [That's because] in the next five years, nearly every application will be powered by LLM-based, neural network-powered data pipelines to help classify, enrich, interpret and serve.
[But] productization of neural network technology is one of the hardest tasks in the computer science field right now. It is an incredibly fast-moving space, and without dedicated focus and exposure to many different types of data and use cases, it will be hard for internal ML teams to excel at leveraging these technologies.
Amr Awadallah, CEO, Vectara
When it comes to devops, experts are confident that AI is not going to replace jobs; rather, it will empower developers and testers to work more efficiently. AI integration is augmenting people and empowering exploratory testers to find more bugs and issues upfront, streamlining the process from development to deployment. In 2023, well see already-lean teams working more efficiently and with less risk as AI continues to be implemented throughout the development cycle.
Specifically, AI-augmentation will help inform decision-making processes for devops teams by finding patterns and pointing out outliers, allowing applications to continuously self-heal and freeing up time for teams to focus their brain power on the tasks that developers actually want to do and that are more strategically important to the organization.
Kevin Thompson, CEO, Tricentis
How artificial intelligence is helping us explore the solar system – Space.com
Posted: at 12:20 am
Let's be honest it's much easier for robots to explore space than us humans. Robots don't need fresh air and water, or to lug around a bunch of food to keep themselves alive. They do, however, require humans to steer them and make decisions. Advances in machine learning technology may change that, making computers a more active collaborator in planetary science.
Last week at the 2022 American Geophysical Union (AGU) Fall Meeting, planetary scientists and astronomers discussed how new machine-learning techniques are changing the way we learn about our solar system, from planning for future mission landings on Jupiter's icy moon Europa to identifying volcanoes on tiny Mercury.
Machine learning is a way of training computers to identify patterns in data, then harness those patterns to make decisions, predictions or classifications. Another major advantage to computers besides not requiring life-support is their speed. For many tasks in astronomy, it can take humans months, years or even decades of effort to sift through all the necessary data.
Related: Our solar system: A photo tour of the planets
One example is identifying boulders in pictures of other planets. For a few rocks, it's as easy as saying "Hey, there's a boulder!" but imagine doing that thousands of times over. The task would get pretty boring, and eat up a lot of scientists' valuable work time.
"You can find up to 10,000, hundreds of thousands of boulders, and it's very time consuming," Nils Prieur, a planetary scientist at Stanford University in California said during his talk at AGU. Prieur's new machine-learning algorithm can detect boulders across the whole moon in only 30 minutes. It's important to know where these large chunks of rock are to make sure new missions can land safely at their destinations. Boulders are also useful for geology, providing clues to how impacts break up the rocks around them to create craters.
Computers can identify a number of other planetary phenomena, too: explosive volcanoes on Mercury, vortexes in Jupiter's thick atmosphere and craters on the moon, to name a few.
During the conference, planetary scientist Ethan Duncan, from NASA's Goddard Space Flight Center in Maryland, demonstrated how machine learning can identify not chunks of rock, but chunks of ice on Jupiter's icy moon Europa. The so-called chaos terrain is a messy-looking swath of Europa's surface, with bright ice chunks strewn about a darker background. With its underground ocean, Europa is a prime target for astronomers interested in alien life, and mapping these ice chunks will be key to planning future missions.
Upcoming missions could also incorporate artificial intelligence as part of the team, using this tech to empower probes to make real-time responses to hazards and even land autonomously. Landing is a notorious challenge for spacecraft, and always one of the most dangerous times of a mission.
"The 'seven minutes of terror' on Mars [during descent and landing], that's something we talk about a lot," Bethany Theiling, a planetary scientist at NASA Goddard, said during her talk. "That gets much more complicated as you get further into the solar system. We have many hours of delay in communication."
A message from a probe landing on Saturn's methane-filled moon Titan would take a little under an hour and a half to get back to Earth. By the time humans' response arrived at its destination, the communication loop would be almost three hours long. In a situation like landing where real-time responses are needed, this kind of back-and-forth with Earth just won't cut it. Machine learning and AI could help solve this problem, according to Theiling, providing a probe with the ability to make decisions based on its observations of its surroundings.
"Scientists and engineers, we're not trying to get rid of you," Theiling said. "What we're trying to do is say, the time you get to spend with that data is going to be the most useful time we can manage." Machine learning won't replace humans, but hopefully, it can be a powerful addition to our toolkit for scientific discovery.
How Does TensorFlow Work and Why is it Vital for AI? – Spiceworks News and Insights
Posted: at 12:20 am
TensorFlow is defined as an open-source platform and framework for machine learning, which includes libraries and tools based on Python and Java designed with the objective of training machine learning and deep learning models on data. This article explains the meaning of TensorFlow and how it works, discussing its importance in the world of computing.
Google's TensorFlow is an open-source package designed for applications involving deep learning. Additionally, it supports conventional machine learning. TensorFlow was initially created for large numerical calculations rather than with deep learning specifically in mind. However, it proved valuable for deep learning development as well, so Google made it available to the public.
TensorFlow handles data in the form of tensors, which are multidimensional arrays. Arrays with several dimensions are highly useful for managing enormous volumes of data.
TensorFlow uses the concept of dataflow graphs with nodes and edges. Because the execution model is expressed as graphs, spreading TensorFlow code over a cluster of GPU-equipped machines is more straightforward.
Though TensorFlow supports other programming languages, Python and JavaScript are the most popular. Additionally, TensorFlow supports Swift, C, Go, C#, and Java. Python is not required to work with TensorFlow; however, it makes working with TensorFlow extremely straightforward.
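A minimal Python sketch of the tensor idea described above, using the standard tf.constant, tf.ones and tf.matmul operations from the public TensorFlow 2.x API (assuming TensorFlow is installed):

import tensorflow as tf

# Tensors are multidimensional arrays with a shape and a data type.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank-2 tensor, shape (2, 2)
b = tf.ones((2, 2))

c = tf.matmul(a, b) + 1.0                  # operations consume tensors and produce new tensors
print(c.shape, c.dtype)                    # (2, 2) float32
print(c.numpy())                           # convert back to a NumPy array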
TensorFlow follows in the footsteps of Google's closed-source DistBelief framework, which was deployed internally in 2012. Based on extensive neural networks and the backpropagation method, it was utilized to conduct unsupervised feature learning and deep learning applications.
TensorFlow is distinct from DistBelief in many aspects. TensorFlow was meant to operate independently from Google's computational infrastructure, making its code more portable for external usage. It is also a more general machine learning framework that is less neural network-centric than DistBelief.
Under the Apache 2.0 license, Google published TensorFlow as an open-source technology in 2015. Ever since, the framework has attracted a large number of supporters outside Google. TensorFlow tools are provided as add-on modules for IBM, Microsoft and other machine learning or AI development suites.
TensorFlow attained release 1.0.0 early in 2017, and developers shipped four further releases that year. A version of TensorFlow geared for smartphone usage and embedded devices was also released as a developer preview.
TensorFlow 2.0, launched in October 2019, redesigned the framework in several ways to make it simpler and more efficient based on user input. A new application programming interface (API) facilitates distributed training, with support for TensorFlow Lite enabling the deployment of models on a broader range of systems. However, code developed for older iterations of TensorFlow must be modified to use the new capabilities in TensorFlow 2.0.
TensorFlow models may also be deployed on edge devices or smartphones, such as iOS or Android devices. TensorFlow Lite lets you trade off model size and accuracy to optimize TensorFlow models for performance on such devices. A more compact model (12MB versus 25MB, or even 100+MB) is less precise, but the loss in precision is often negligible and more than compensated for by the model's energy efficiency and speed.
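As a rough sketch of the size-versus-accuracy trade-off described above, the snippet below converts a Keras model to TensorFlow Lite with default optimizations (post-training quantization). The model is an untrained placeholder used purely for illustration; a real deployment would convert a trained model.

import tensorflow as tf

# Placeholder Keras model standing in for a real, trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite, trading some precision for a smaller, faster model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)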
TensorFlow applications are often complex, large-scale artificial intelligence (AI) projects in deep learning and machine learning. Using TensorFlow to power Google's RankBrain system for machine learning has enhanced the data-gathering abilities of the company's search engine.
Google has also utilized the platform for applications such as automated email answer creation, picture categorization, optical character recognition, and a drug-discovery program developed in collaboration with Stanford University academics.
The TensorFlow website lists Airbnb, Coca-Cola, eBay, Intel, Qualcomm, SAP, Twitter, Uber and Snap Inc. as framework users. STATS LLC, a sports consultancy firm, uses TensorFlow-led deep learning frameworks to monitor player movements during professional sports events, among other things.
TensorFlow enables developers to design dataflow graphs, which are structures that define how data flows through a graph or set of processing nodes. Each node in the graph symbolizes a mathematical operation, and each edge between nodes is a tensor, a multidimensional data array.
TensorFlow applications can execute on almost any convenient target, including a local PC, a cloud cluster, iOS and Android phones, CPUs, and GPUs. Using Google's cloud, you may run TensorFlow on Google's custom Tensor Processing Unit (TPU) hardware for additional acceleration. However, TensorFlow-generated models may be installed on almost any machine on which they will be utilized to make predictions.
TensorFlow's architecture consists of three components:
TensorFlow is so named because it accepts inputs in the form of multidimensional arrays, often known as tensors. One may create a flowchart-like diagram (a dataflow graph) representing the operations you want to conduct on the input. Input comes in at one end, passes through a system of various operations, and exits the opposite end as output. It is named TensorFlow because a tensor enters it, travels through a series of processes, and finally exits.
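To make the dataflow idea concrete, here is a hedged sketch using TensorFlow 2.x: wrapping a Python function in tf.function traces it into a graph whose nodes are operations (Square, Sum) and whose edges are the tensors flowing between them. The function and its inputs are invented for illustration.

import tensorflow as tf

@tf.function  # traces the Python function into a TensorFlow dataflow graph
def flow(x):
    y = tf.square(x)      # node: Square; the tensor y is the edge to the next node
    z = tf.reduce_sum(y)  # node: Sum
    return z

x = tf.constant([1.0, 2.0, 3.0])
print(flow(x).numpy())                      # 14.0
print(flow.get_concrete_function(x).graph)  # the underlying graph object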
A trained model may offer predictions as a service using REST or gRPC APIs in a Docker container. For more complex serving situations, Kubernetes may be used.
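For the serving path mentioned above, here is a hedged example of how a client might call a model hosted by a TensorFlow Serving container over its REST API. The host, the default REST port 8501, the model name "my_model" and the input shape are assumptions for illustration only.

import json
import requests

# Assumes a TensorFlow Serving container is running with --rest_api_port=8501
# and serving a model registered under the name "my_model".
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # one input row; shape depends on the model

resp = requests.post(url, data=json.dumps(payload))
resp.raise_for_status()
print(resp.json()["predictions"])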
TensorFlow employs the following components to accomplish the features mentioned above:
TensorFlow employs a graph-based architecture. The graph collects and describes all the series of calculations performed during training. The graph offers several benefits. It was initially designed to operate on multiple CPUs or GPUs and on mobile operating systems. Additionally, the graph's portability enables you to save calculations for current or future usage; one may store the graph for later execution.
All calculations in the graph are accomplished by connecting tensors together. Each node performs a mathematical operation and generates output endpoints, while the edges describe the input/output connections between nodes.
TensorFlow derives its name directly from its essential foundation, the tensor. All calculations in TensorFlow use tensors. Tensors are n-dimensional vectors or matrices that represent all forms of data. Each value in a tensor has the same data type and a known (or partly known) shape. The dimensionality of the matrix or array is the data's shape.
A tensor may be derived from raw data or the outcome of a calculation. All operations in TensorFlow are executed inside a graph. The graph is a sequence of calculations that occur in order. Each operation is referred to as an op node, and the nodes are interconnected.
The graph depicts the operations and relationships between the nodes. However, it does not display the values. The edges between the nodes are tensors, which are the means of feeding data to the operations.
As we have seen, TensorFlow accepts input in the format of tensors, which are n-dimensional arrays or matrices. This input passes through a series of operations before becoming output. For instance, as input, we obtain a large collection of numbers representing the pixels of an image, and as output, we receive text such as "this is a dog."
TensorFlow provides a way to view what is occurring in your graph. This tool is known as TensorBoard; it is a web page that allows you to debug your graph by checking its parameters, node connections, etc. To utilize TensorBoard, you must label the graphs with the parameters you want to examine, such as the loss value, and then write out the corresponding summaries.
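A hedged sketch of how such summaries are typically produced in TensorFlow 2.x; the log directory and the loss values below are placeholders.

import tensorflow as tf

writer = tf.summary.create_file_writer("logs/demo")  # placeholder log directory

# Log a scalar (here, a stand-in loss value) at each step so TensorBoard can plot it.
with writer.as_default():
    for step, loss in enumerate([0.9, 0.6, 0.4, 0.3]):
        tf.summary.scalar("loss", loss, step=step)
writer.flush()

# Inspect the run in a browser with:  tensorboard --logdir logs/demo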
Other essential components that enable TensorFlow's functionality are:
Python has become the most common programming language for TensorFlow or machine learning as a whole. However, JavaScript is now a best-in-class language for TensorFlow, and among its enormous benefits is that it works in any web browser.
TensorFlow.js, the JavaScript TensorFlow library, accelerates calculations using all available GPUs. It is also possible to use a WebAssembly backend for execution, which is quicker on a CPU than the standard JavaScript backend. Pre-built models allow you to begin with easy tasks to understand how things function.
TensorFlow delivers all of this to programmers through the Python programming language. Python is simple to pick up and run, and it offers straightforward methods to represent the coupling of high-level abstractions. TensorFlow is compatible with Python 3.7 through 3.10.
TensorFlow nodes and tensors are Python objects; therefore, TensorFlow applications are also Python programs. However, real mathematical calculations are not done in Python. The transformation libraries accessible through TensorFlow are created as efficient C++ binaries. Python only controls the flow of information between the components and offers high-level coding frameworks to connect them.
Keras is used for higher-level TensorFlow activities such as constructing layers and linking them together. A basic three-layer model may be developed with less than ten lines of code, and the training code for the same model takes just a few extra lines.
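For instance, a basic three-layer Keras model really does fit in a handful of lines. The sketch below uses synthetic data and assumes TensorFlow 2.x; the layer sizes are arbitrary choices for illustration.

import numpy as np
import tensorflow as tf

# Three-layer model built with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training takes just a few more lines (synthetic data for illustration).
X = np.random.rand(256, 10).astype("float32")
y = (X.sum(axis=1) > 5).astype("float32")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)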
You may, however, peek underneath the hood and perform even more granular tasks, such as designing an individualized training loop, if you like.
TensorFlow is important for users due to several reasons:
Abstraction (a key concept in object-oriented programming) is the most significant advantage of TensorFlow for machine learning development. Instead of concentrating on developing algorithms or figuring out how to link one component's output to another's parameters, the programmer may focus on the overall application logic. TensorFlow takes care of the nuances in the background.
Using an interactive, web-based interface, the TensorBoard visualization package enables you to examine and analyze the execution of graphs. Google's TensorBoard.dev service allows you to host and share machine learning experiments built using TensorFlow. It can retain a maximum of 100 million scalars, a gigabyte of tensor data and a gigabyte of binary object data for free. (Note that any data stored on TensorBoard.dev is accessible to the public.)
TensorFlow provides further advantages for programmers who need to debug and gain insight into TensorFlow applications. Each graph action may be evaluated and updated independently and openly instead of the whole graph being constructed as a monolithic opaque object and evaluated simultaneously. This eager execution mode, available as an option in older iterations of TensorFlow, has become the default.
TensorFlow also benefits from Google's patronage as an A-list commercial enterprise. Google has accelerated the project's development and provided many essential products that make TensorFlow simpler to install and use. TPU silicon for increased performance in Google's cloud is but one example.
TensorFlow works with a wide variety of devices. In addition, the inclusion of TensorFlow Lite increases its adaptability by making it compatible with additional devices. One may access TensorFlow from almost anywhere, on almost any device.
Learning and problem-solving are two cognitive activities associated with the human brain that are simulated by artificial intelligence. TensorFlow features a robust and adaptable ecosystem of tools, libraries, and resources that facilitate the development and deployment of AI-powered applications. The advancement of AI provides new possibilities to address complex, real-world issues.
One may use TensorFlow to create deep neural networks for handwritten character recognition, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for translation software, natural language processing and a variety of other applications.
Applications based on deep learning are complex, with training processes that need a great deal of computation. Training requires many iterative procedures, mathematical computations, matrix multiplications and so on, and it is time-consuming due to the vast amount of data. These tasks take an extraordinarily long time on a typical CPU. TensorFlow therefore supports GPUs, which dramatically accelerate the training process.
Because of its parallel execution model, TensorFlow also acts as a hardware acceleration library. It employs distinct distribution strategies for GPU and CPU platforms. Based on the model's requirements, users may execute their code on either architecture; if no device is specified, the system selects a GPU when one is available. This approach also minimizes memory allocation to some degree.
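A short sketch of how device selection looks in practice with the public TensorFlow 2.x API: list the visible GPUs and, if desired, pin a computation to a specific device. The choice of /GPU:0 is an assumption for illustration.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# Without an explicit device, TensorFlow places ops on a GPU when one is available.
device = "/GPU:0" if gpus else "/CPU:0"
with tf.device(device):
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)
print("Computed on:", c.device)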
The true significance of TensorFlow is that it is applicable across sectors. Among its most important uses are:
The TensorFlow framework is most important for two roles: data scientists and software developers.
Data scientists have several options for developing models using TensorFlow. This implies that the appropriate tool is always accessible, allowing for the rapid expression of creative methods and ideas. As one of the most popular libraries for constructing machine learning models, TensorFlow code from earlier researchers is often straightforward to locate when attempting to recreate their work.
Software developers may use TensorFlow on a wide range of standard hardware, operating systems, and platforms. With the introduction of TensorFlow 2.0 in 2019, one may deploy TensorFlow models on a broader range of platforms. The interoperability of TensorFlow-created models makes deployment an easy process.
TensorFlow is consistently ranked among the best Python libraries for machine learning. Individuals, companies, and governments worldwide rely on its capabilities to develop AI innovations. It is one of the foundational tools used for AI experiments before you can take the product to the market, owing to its low dependency and investment footprint. As AI becomes more ubiquitous in consumer and enterprise apps, TensorFlows importance will continue to grow.
AI in the hands of imperfect users | npj Digital Medicine – Nature.com
Posted: at 12:20 am
AI-as-a-service makes artificial intelligence and data analytics more accessible and cost effective – VentureBeat
Posted: at 12:20 am
Artificial intelligence (AI) has made significant progress in the past decade and has been able to solve various problems through extensive research, from self-driving cars to intuitive chatbots like OpenAI's ChatGPT.
AI solutions are becoming a norm for businesses that wish to gain insights from their valuable company data. Enterprises are looking to implement a broad spectrum of AI applications, from text analysis software to more complex predictive analytics tools. But building an in-house AI solution makes sense only for some businesses, as it's a long and complex process.
With emerging data science use cases, organizations now require continuous AI experimentation and the ability to test machine learning algorithms on several cloud platforms simultaneously. Processing data through such methods requires massive upfront costs, which is why businesses are now turning toward AIaaS (AI-as-a-service): third-party solutions that provide ready-to-use platforms.
AIaaS is becoming an ideal option for anyone who wants access to AI without needing to establish an ultra-expensive infrastructure for themselves. With such a cost-effective solution available for anyone, it's no surprise that AIaaS is starting to become a standard in most industries. An analysis by Research and Markets estimated that the global market for AIaaS is expected to grow by around $11.6 billion by 2024.
AIaaS allows companies to access AI software from a third-party vendor rather than hiring a team of experts to develop it in-house. This allows companies to get the benefits of AI and data analytics with a smaller initial investment, and they can also customize the software to meet their specific needs. AIaaS is similar to other as-a-service offerings like infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS), which are all hosted by third-party vendors.
In addition, AIaaS models encompass disparate technologies, including natural language processing (NLP), computer vision, machine learning and robotics; you can pay for the services you require and upgrade to higher plans when your data and business scale.
AIaaS is an optimal solution for smaller and mid-sized companies to access AI capabilities without building and implementing their own systems from scratch. This allows these companies to focus on their core business and still benefit from AI's value, without becoming experts in data and machine learning. Using AIaaS can help companies increase profits while reducing the risk of investment in AI. In the past, companies often had to make significant financial investments in AI in order to see a return on their investment.
Moses Guttmann, CEO and cofounder of ClearML, says that AIaaS allows companies to focus their data science teams on the unique challenges to their product, use case, customers and other essential requirements.
"Essentially, using AIaaS can take away all the off-the-shelf problem-solving AI can help with, allowing the data science teams to concentrate on the unique and custom scenarios and data that can make an impact on the business of the company," Guttmann told VentureBeat.
Guttmann said that the crux of AI services is essentially outsourcing talent, i.e., having an external vendor build the internal company's AI infrastructure and customize it to their needs.
"The problem is always maintenance, where the know-how is still held by the AI service provider and rarely leaks into the company itself," he said. "AIaaS, on the contrary, provides a service platform, with simple APIs and access workflows, that allows companies to quickly adapt off-the-shelf working models and integrate them into the company's business logic and products."
Guttmann says that AIaaS can be great for tech organizations that either have pretrained models or real-time data use cases, enhancing legacy data science architectures.
"I believe that the real value in ML for a company is always a unique combination of its constraints, use case and data, and this is why companies should have some of their data scientists in-house," said Guttmann. "To materialize the potential of those data scientists, a good software infrastructure needs to be put in place, doing the heavy lifting in operations and letting the data science team concentrate on the actual value they bring to the company."
AIaaS is a proven approach that facilitates all aspects of AI innovation. The platform provides an all-in-one solution for modern business requirements, from ideating on how AI can provide value to actual, scaled implementation across a business, with tangible outcomes in a matter of weeks.
AIaaS enables a structured, beneficial way of balancing data science, IT and business consulting competencies, as well as balancing the technical delivery with the role of ongoing change management that comes with AI. It also decreases the risk of AI innovation, improving time-to-market, product outcomes and value for the business. At the same time, AIaaS provides organizations with a blueprint for AI going forward, thereby accelerating internal know-how and ability to execute, ensuring an agile delivery framework alignment, and transparency in creating the AI.
"AIaaS platforms can quickly scale up or down as needed to meet changing business needs, providing organizations with the flexibility to adjust their AI capabilities as needed," Yashar Behzadi, CEO and founder of Synthesis AI, told VentureBeat.
Behzadi said AIaaS platforms can integrate with a wide range of other technologies, such as cloud storage and analytics tools, making it easier for organizations to leverage AI in conjunction with other tools and platforms.
"AIaaS platforms often provide organizations with access to the latest and most advanced AI technologies, including machine learning algorithms and tools. This can help organizations build more accurate and effective machine learning models because AIaaS platforms often have access to large amounts of data," said Behzadi. "This can be particularly beneficial for organizations with limited data available for training their models."
AIaaS platforms can process and analyze large volumes of text data, such as customer reviews or social media posts, to help computers and humans communicate more clearly. These platforms can also be used to build chatbots that can handle customer inquiries and requests, providing a convenient way for organizations to interact with customers and improve customer service. Computer vision training is another large use case, as AIaaS platforms can analyze and interpret images and video data, such as facial recognition or object detection; this can be incorporated into various applications, including security and surveillance, marketing and manufacturing.
"Recently, we've seen a boom in the popularity of generative AI, which is another case of AIaaS being used to create content," said Behzadi. "These services can create text or image content at scale with near-zero variable costs. Organizations are still figuring out how to practically use generative AI at scale, but the foundations are there."
Talking about the current challenges of AIaaS, Behzadi explained that company use cases are often nuanced and specialized, and generalized AIaaS systems may need to be revised for unique use cases.
"The inability to fine-tune the models for company-specific data may result in lower-than-expected performance and ROI. However, this also ties into the lack of control organizations that use AIaaS may have over their systems and technologies, which can be a concern," he said.
Behzadi said that while integration can benefit the technology, it can also be complex and time-consuming to integrate with an organization's existing systems and processes.
"Additionally, the capabilities and biases inherent in AIaaS systems are unknown and may lead to unexpected outcomes. Lack of visibility into the black box can also lead to ethical concerns of bias and privacy, and organizations do not have the technical insight and visibility to fully understand and characterize performance," said Behzadi.
He suggests that CTOs should first consider the organization's specific business needs and goals and whether an AIaaS solution can help meet these needs. This may involve assessing the organization's data resources and the potential benefits and costs of incorporating AI into their operations.
"By leveraging AIaaS, a company is not investing in building core capabilities over time. Efficiency and cost-saving in the near term have to be weighed against capability in the long term. Additionally, a CTO should assess the ability of the more generalized AIaaS offering to meet the company's potentially customized needs," he said.
Behzadi says that AIaaS systems are maturing and allowing customers to fine-tune the models with company-specific data, and this expanded capability will enable enterprises to create more targeted models for their specific use cases.
"Providers will likely continue to specialize in various industries and sectors, offering tailored solutions for specific business needs. This may include the development of industry-specific AI tools and technologies," he said. "As foundational NLP and computer vision models continue to evolve rapidly, they will increasingly power the AIaaS offerings. This will lead to faster capability development, lower cost of development, and greater capability."
Likewise, Guttmann predicts that we will see many more NLP-based models with simple APIs that companies can integrate directly into their products.
"I think that, surprisingly enough, a lot of companies will realize they can do more with their current data science teams and leverage AIaaS for the simple tasks. We have witnessed a huge jump in capabilities over the last year, and I think the upcoming year is when companies capitalize on those new offerings," he said.
What We Know So Far About Elon Musks OpenAI, The Maker Of ChatGPT – AugustMan Thailand
Posted: at 12:20 am
Speak of Elon Musk and, in all probability, companies like Twitter, Tesla or SpaceX will come to your mind. But little do people know about Elon Musk's company OpenAI, an artificial intelligence (AI) research and development firm that is behind the disruptive chatbot ChatGPT.
The brainchild of Musk and former Y Combinator president Sam Altman, OpenAI launched ChatGPT in November 2022, and within a week the application saw a spike of over a million users. Able to do anything from coding to conversation in a way that mimics human intelligence, ChatGPT has surpassed previous standards of AI capabilities and has introduced a new chapter in AI technologies and machine learning systems.
If you are intrigued by artificial intelligence and take an interest in deep learning and how they can benefit humanity, then you must know about the history of OpenAI and the levels AI development has reached.
Launched in 2015 and headquartered in San Francisco, this altruistic artificial intelligence company was founded by Musk and Altman. They collaborated with other Silicon Valley tech experts like Peter Thiel and LinkedIn co-founder Reid Hoffman, who pledged USD 1 billion for OpenAI that year.
To quote an OpenAI blog, "OpenAI is a non-profit artificial intelligence research company." It further said that OpenAI's mission is to ensure artificial general intelligence benefits all of humanity in a holistic way, with no hope for profit.
Today, OpenAI LP is governed by the board of the OpenAI non-profit. The board comprises OpenAI LP employees Greg Brockman (chairman and president), Ilya Sutskever (chief scientist) and Sam Altman (chief executive officer). It also has non-employees Adam D'Angelo, Reid Hoffman, Will Hurd, Tasha McCauley, Helen Toner and Shivon Zilis on board as investors and Silicon Valley supporters.
Key strategic investors include Microsoft, Hoffman's charitable foundation and Khosla Ventures.
In 2018, three years after the company came into being, Elon Musk resigned from OpenAI's board to avoid any future conflict of interest as Tesla continued to expand in the artificial intelligence field. At the time, OpenAI said Musk would continue to donate to its non-profit cause and remain a strong advisor.
Although OpenAI announced Elon Musk's resignation on grounds of a conflict of interest, the current Twitter supremo later said that he quit because he couldn't agree with certain company decisions and that he hadn't been involved with the artificial intelligence firm for over a year.
Plus, Tesla was also looking to hire some of the same employees as OpenAI. Add that all up & it was just better to part ways on good terms, he tweeted.
However, things did not end there. In 2020, Musk tweeted OpenAI should be more open imo in response to an MIT Technology Review investigation that described a deep-rooted culture of secrecy at odds with the company's non-profit ideals and commitment to transparency.
OpenAI should be more open imo
Elon Musk (@elonmusk) February 17, 2020
Musk has also raised questions over safety, tweeting about Dario Amodei, a former Google engineer who now leads OpenAI's strategy, I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high.
Over the years, OpenAI has set a high benchmark in the artificial general intelligence segment with innovations and products aimed at mimicking human behaviour and even surpassing human intelligence.
In April 2016, the company announced the launch of the OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms. Wondering what it is?
Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn how to achieve goals in a complex, uncertain environment, says an OpenAI blog. These environments range from simulated robots to Atari Games and algorithmic evaluations.
To put it in simple terms, OpenAI Gym gives researchers and research organisations a common set of environments in which to train reinforcement learning agents and compare their results. In fact, the toolkit was initially built to further the company's own deep reinforcement learning research and to make evaluations easier to reproduce and compare.
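For readers who want a feel for the toolkit, here is a minimal sketch of the classic Gym interaction loop. It assumes the gym package is installed and uses the older, pre-0.26 API in which step returns a single done flag; newer releases split it into terminated and truncated.

```python
import gym

# Create a simple control environment and run a short episode with a
# placeholder random policy.
env = gym.make("CartPole-v1")
obs = env.reset()  # initial observation of the environment state

for _ in range(100):
    action = env.action_space.sample()           # random action as a stand-in policy
    obs, reward, done, info = env.step(action)   # apply the action, observe the result
    if done:                                     # episode ended (pole fell or time limit)
        obs = env.reset()

env.close()
```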
In December 2016, OpenAI announced another product called Universe. An OpenAI blog says it is a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.
The ambition is for an AI system to successfully complete any task a human being can do using a computer. Universe helps train a single AI agent to carry out such computer tasks and, when coupled with OpenAI Gym, lets the agent draw on its experience to adapt to difficult or unseen environments and complete the task at hand.
Bringing artificial intelligence into the realm of human interaction is a path-breaking step, and OpenAI's chatbot ChatGPT is a disruptive name in this sector. A chatbot is an artificial intelligence-based software application that can hold human-like conversations. ChatGPT was launched on 30 November and within a week it garnered more than a million users.
An OpenAI blog post states that their ChatGPT model is trained with a deep machine learning technique called Reinforcement Learning from Human Feedback (RLHF) that helps simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.
While Musk initially chimed in to praise the chatbot, tweeting, ChatGPT is scary good. We are not far from dangerously strong AI, he later took to the microblogging site to say that OpenAI had access to Twitter's database, which it used to train the tool. He added, OpenAI was started as open-source & non-profit. Neither are still true.
The Generative Pre-trained Transformer (GPT)-3 model has gained a lot of buzz. It is essentially a language model that leverages deep learning to generate human-like text. Beyond plain text, it can also produce stories, poems and even code. It is considered an upgrade on the previous GPT-2 model, released in 2019, which is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. To put it simply, a language model is a set of statistical tools that predicts the next word in a sequence.
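GPT-3 itself is only available through OpenAI's API, but the next-word-prediction idea is easy to see with its openly released predecessor GPT-2. The sketch below assumes the Hugging Face transformers library is installed; it simply asks the model to continue a prompt.

```python
from transformers import pipeline

# Load a small GPT-2 model and let it continue a prompt; each generated
# token is the model's guess at the most plausible next word.
generator = pipeline("text-generation", model="gpt2")
result = generator("OpenAI was founded in 2015 to", max_length=30, num_return_sequences=1)
print(result[0]["generated_text"])
```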
Interestingly, in 2019, OpenAI also went from being a non-profit organisation to a for-profit entity. In a blog post, OpenAI said, We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit which we are calling a capped-profit company.
Under this structure, investors can earn up to 100 times their principal amount but no more, with any profit beyond that cap going to the non-profit's work.
Over the years, OpenAI has made itself a pioneering name in developing AI algorithms that can benefit society and, in this regard, it has partnered with other institutions.
In 2019, the company joined hands with Microsoft, which invested USD 1 billion, while the AI firm said it would exclusively license its technology to the tech giant, as per a Business Insider report. This would give Microsoft an edge over organisations like Google's DeepMind.
In 2021, OpenAI took a futuristic leap and created DALL-E, an AI tool capable of producing strikingly original images. Just a year later, it launched DALL-E 2, which generates images with 4x greater resolution and precision.
DALL-E 2 is an AI system that can create realistic images and art from a description in natural language. It can produce artworks that merge concepts, attributes and styles, extend an existing piece into a new, expanded canvas, make realistic edits to an existing image, and generate different variations of a given image.
Such intensive AI innovation and long-term research go to show how close machines have come to acquiring human-like attributes. However, some experts also see advanced AI as the biggest existential threat to humanity, a view Elon Musk has shared.
Though humans are the ones who created it, Stephen Hawking once told the BBC that AI could potentially re-design itself at an ever-increasing rate and supersede humans, who are limited by slow biological evolution.
There is no denying that artificial intelligence has been taking giant leaps and making its impact felt in almost every field. From churning out daily news stories to creating world-class classical art and even holding full-fledged conversations, artificial intelligence has incredible potential, but what the future holds remains to be seen.
Here is the original post:
What We Know So Far About Elon Musk's OpenAI, The Maker Of ChatGPT - AugustMan Thailand
Rapid Adaptation of Deep Learning Teaches Drones to Survive Any Weather – Caltech
Posted: May 5, 2022 at 1:44 am
To be truly useful, drones (that is, autonomous flying vehicles) will need to learn to navigate real-world weather and wind conditions.
Right now, drones are either flown under controlled conditions, with no wind, or are operated by humans using remote controls. Drones have been taught to fly in formation in the open skies, but those flights are usually conducted under ideal conditions and circumstances.
However, for drones to autonomously perform necessary but quotidian tasks, such as delivering packages or airlifting injured drivers from a traffic accident, drones must be able to adapt to wind conditions in real time, rolling with the punches, meteorologically speaking.
To face this challenge, a team of engineers from Caltech has developed Neural-Fly, a deep-learning method that can help drones cope with new and unknown wind conditions in real time just by updating a few key parameters.
Neural-Fly is described in a study published on May 4 in Science Robotics. The corresponding author is Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and Jet Propulsion Laboratory Research Scientist. Caltech graduate students Michael O'Connell (MS '18) and Guanya Shi are the co-first authors.
Neural-Fly was tested at Caltech's Center for Autonomous Systems and Technologies (CAST) using its Real Weather Wind Tunnel, a custom 10-foot-by-10-foot array of more than 1,200 tiny computer-controlled fans that allows engineers to simulate everything from a light gust to a gale.
"The issue is that the direct and specific effect of various wind conditions on aircraft dynamics, performance, and stability cannot be accurately characterized as a simple mathematical model," Chung says. "Rather than try to qualify and quantify each and every effect of turbulent and unpredictable wind conditions we often experience in air travel, we instead employ a combined approach of deep learning and adaptive control that allows the aircraft to learn from previous experiences and adapt to new conditions on the fly with stability and robustness guarantees."
Time-lapse photo shows a drone equipped with Neural-Fly maintaining a figure-eight course amid stiff winds at Caltech's Real Weather Wind Tunnel.
O'Connell adds: "We have many different models derived from fluid mechanics, but achieving the right model fidelity and tuning that model for each vehicle, wind condition, and operating mode is challenging. On the other hand, existing machine learning methods require huge amounts of data to train yet do not match state-of-the-art flight performance achieved using classical physics-based methods. Moreover, adapting an entire deep neural network in real time is a huge, if not currently impossible task."
Neural-Fly, the researchers say, gets around these challenges by using a so-called separation strategy, through which only a few parameters of the neural network must be updated in real time.
"This is achieved with our new meta-learning algorithm, which pre-trains the neural network so that only these key parameters need to be updated to effectively capture the changing environment," Shi says.
After obtaining as little as 12 minutes of flying data, autonomous quadrotor drones equipped with Neural-Fly learned to respond to strong winds so well that their performance improved significantly, as measured by their ability to precisely follow a flight path. Their flight-path tracking error was roughly 2.5 to 4 times smaller than that of current state-of-the-art drones equipped with similar adaptive control algorithms that identify and respond to aerodynamic effects but do not use deep neural networks.
Out of the lab and into the sky: engineers test Neural-Fly in the open air on Caltech's campus
Neural-Fly, which was developed in collaboration with Caltech's Yisong Yue, Professor of Computing and Mathematical Sciences, and Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences, is based on earlier systems known as Neural-Lander and Neural-Swarm. Neural-Lander also used a deep-learning method to track the position and speed of the drone as it landed and modify its landing trajectory and rotor speed to compensate for the rotors' backwash from the ground and achieve the smoothest possible landing; Neural-Swarm taught drones to fly autonomously in close proximity to each other.
Though landing might seem more complex than flying, Neural-Fly, unlike the earlier systems, can learn in real time. As such, it can respond to changes in wind on the fly, and it does not require tweaking after the fact. Neural-Fly performed as well in flight tests conducted outside the CAST facility as it did in the wind tunnel. Further, the team has shown that flight data gathered by an individual drone can be transferred to another drone, building a pool of knowledge for autonomous vehicles.
(L to R) Guanya Shi, Soon-Jo Chung, and Michael O'Connell, in front of the wall of fans at Caltech's Center for Autonomous Systems and Technologies
At the CAST Real Weather Wind Tunnel, test drones were tasked with flying in a pre-described figure-eight pattern while they were blasted with winds up to 12.1 meters per second, roughly 27 miles per hour, or a six on the Beaufort scale of wind speeds. This is classified as a "strong breeze" in which it would be difficult to use an umbrella. It ranks just below a "moderate gale," in which it would be difficult to move and whole trees would be swaying. This wind speed is twice as fast as the speeds encountered by the drone during neural network training, which suggests Neural-Fly could extrapolate and generalize well to unseen and harsher weather.
The drones were equipped with a standard, off-the-shelf flight control computer that is commonly used by the drone research and hobbyist community. Neural-Fly was implemented in an onboard Raspberry Pi 4 computer that is the size of a credit card and retails for around $20.
The Science Robotics paper is titled "Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds." Coauthors include Anandkumar and Yue, as well as Xichen Shi (PhD '21), and former Caltech postdoc Kamyar Azizzadenesheli, now an assistant professor of computer science at Purdue University. Funding for this research came from the Defense Advanced Research Projects Agency (DARPA) and Raytheon.
The rest is here:
Rapid Adaptation of Deep Learning Teaches Drones to Survive Any Weather - Caltech
What's the transformer machine learning model? And why should you care? – The Next Web
Posted: at 1:44 am
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. (In partnership with Paperspace)
In recent years, the transformer model has become one of the main highlights of advances in deep learning and deep neural networks. It is mainly used for advanced applications in natural language processing. Google is using it to enhance its search engine results. OpenAI has used transformers to create its famous GPT-2 and GPT-3 models.
Since its debut in 2017, the transformer architecture has evolved and branched out into many different variants, expanding beyond language tasks into other areas. They have been used for time series forecasting. They are the key innovation behind AlphaFold, DeepMind's protein structure prediction model. Codex, OpenAI's source code generation model, is based on transformers. More recently, transformers have found their way into computer vision, where they are slowly replacing convolutional neural networks (CNN) in many complicated tasks.
Researchers are still exploring ways to improve transformers and use them in new applications. Here is a brief explainer about what makes transformers exciting and how they work.
The classic feed-forward neural network is not designed to keep track of sequential data and maps each input into an output. This works for tasks such as classifying images but fails on sequential data such as text. A machine learning model that processes text must not only compute every word but also take into consideration how words come in sequences and relate to each other. The meaning of words can change depending on other words that come before and after them in the sentence.
Before transformers, recurrent neural networks (RNN) were the go-to solution for natural language processing. When provided with a sequence of words, an RNN processes the first word and feeds back the result into the layer that processes the next word. This enables it to keep track of the entire sentence instead of processing each word separately.
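A minimal PyTorch sketch of that recurrence, with toy dimensions chosen purely for illustration: each step consumes one word embedding together with the hidden state left behind by the previous step.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)

sentence = torch.randn(1, 5, 16)   # 1 sentence, 5 words, 16-dimensional embeddings
output, hidden = rnn(sentence)     # hidden state carries context across the sequence

print(output.shape)  # torch.Size([1, 5, 32]): one hidden vector per word
print(hidden.shape)  # torch.Size([1, 1, 32]): final state summarizing the sentence
```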
Recurrent neural nets had disadvantages that limited their usefulness. First, they were very slow. Since they had to process data sequentially, they could not take advantage of parallel computing hardware and graphics processing units (GPU) in training and inference. Second, they could not handle long sequences of text. As the RNN got deeper into a text excerpt, the effects of the first words of the sentence gradually faded. This problem, known as vanishing gradients, became especially severe when two linked words were very far apart in the text. And third, they only captured the relations between a word and the words that came before it. In reality, the meaning of words depends on the words that come both before and after them.
Long short-term memory (LSTM) networks, the successor to RNNs, were able to solve the vanishing gradients problem to some degree and were able to handle larger sequences of text. But LSTMs were even slower to train than RNNs and still couldn't take full advantage of parallel computing. They still relied on the serial processing of text sequences.
Transformers, introduced in the 2017 paper Attention Is All You Need, made two key contributions. First, they made it possible to process entire sequences in parallel, making it possible to scale the speed and capacity of sequential deep learning models to unprecedented rates. And second, they introduced attention mechanisms that made it possible to track the relations between words across very long text sequences in both forward and reverse directions.
Before we discuss how the transformer model works, it is worth discussing the types of problems that sequential neural networks solve.
A vector to sequence model takes a single input, such as an image, and produces a sequence of data, such as a description.
A sequence to vector model takes a sequence as input, such as a product review or a social media post, and outputs a single value, such as a sentiment score.
A sequence to sequence model takes a sequence as input, such as an English sentence, and outputs another sequence, such as the French translation of the sentence.
Despite their differences, all these types of models have one thing in common. They learn representations. The job of a neural network is to transform one type of data into another. During training, the hidden layers of the neural network (the layers that stand between the input and output) tune their parameters in a way that best represents the features of the input data type and maps it to the output.
The original transformer was designed as a sequence-to-sequence (seq2seq) model for machine translation (of course, seq2seq models are not limited to translation tasks). It is composed of an encoder module that compresses an input string from the source language into a vector that represents the words and their relations to each other. The decoder module transforms the encoded vector into a string of text in the destination language.
The input text must be processed and transformed into a unified format before being fed to the transformer. First, the text goes through a tokenizer, which breaks it down into chunks of characters that can be processed separately. The tokenization algorithm can depend on the application. In most cases, every word and punctuation mark roughly counts as one token. Some suffixes and prefixes count as separate tokens (e.g., -ize, -ly, and pre-). The tokenizer produces a list of numbers that represent the token IDs of the input text.
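As an illustration (assuming the Hugging Face transformers library and the bert-base-uncased vocabulary, both external to this article), a tokenizer splits text into sub-word pieces and maps each piece to an ID:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

tokens = tokenizer.tokenize("Transformers tokenize text into subword pieces.")
ids = tokenizer.convert_tokens_to_ids(tokens)

print(tokens)  # e.g. ['transformers', 'token', '##ize', 'text', 'into', 'sub', '##word', 'pieces', '.']
print(ids)     # the list of token IDs that will be fed to the model
```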
The tokens are then converted into word embeddings. A word embedding is a vector that tries to capture the value of words in a multi-dimensional space. For example, the words cat and dog can have similar values across some dimensions because they are both used in sentences that are about animals and house pets. However, cat is closer to lion than wolf across some other dimension that separates felines from canids. Similarly, Paris and London might be close to each other because they are both cities. However, London is closer to England and Paris to France on a dimension that separates countries. Word embeddings usually have hundreds of dimensions.
Word embeddings are created by embedding models, which are trained separately from the transformer. There are several pre-trained embedding models that are used for language tasks.
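The geometric intuition can be shown with a toy example: similar words end up with nearby vectors, and closeness is commonly measured with cosine similarity. The three four-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions and come from a trained model.

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction, values near 0 mean unrelated
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

cat = np.array([0.8, 0.1, 0.9, 0.2])
dog = np.array([0.7, 0.2, 0.8, 0.3])
paris = np.array([0.1, 0.9, 0.2, 0.8])

print(cosine_similarity(cat, dog))    # high: both are household animals
print(cosine_similarity(cat, paris))  # lower: unrelated concepts
```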
Once the sentence is transformed into a list of word embeddings, it is fed into the transformer's encoder module. Unlike RNN and LSTM models, the transformer does not receive one input at a time. It can receive an entire sentence's worth of embedding values and process them in parallel. This makes transformers more compute-efficient than their predecessors and also enables them to examine the context of the text in both forward and backward sequences.
To preserve the sequential nature of the words in the sentence, the transformer applies positional encoding, which basically means that it modifies the values of each embedding vector to represent its location in the text.
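One common choice, used in the original transformer paper, is sinusoidal positional encoding: each position gets a unique pattern of sine and cosine values that is simply added to its word embedding. A NumPy sketch:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Each position gets sin/cos values at geometrically spaced frequencies.
    pos = np.arange(seq_len)[:, None]      # shape (seq_len, 1)
    i = np.arange(d_model)[None, :]        # shape (1, d_model)
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
    return pe

embeddings = np.random.randn(10, 512)                 # 10 tokens, 512-dim embeddings
encoded = embeddings + positional_encoding(10, 512)   # position-aware encoder input
```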
Next, the input is passed to the first encoder block, which processes it through an attention layer. The attention layer tries to capture the relations between the words in the sentence. For example, consider the sentence The big black cat crossed the road after it dropped a bottle on its side. Here, the model must associate it with cat and its with bottle. Accordingly, it should establish other associations such as big and cat or crossed and cat. Otherwise put, the attention layer receives a list of word embeddings that represent the values of individual words and produces a list of vectors that represent both individual words and their relations to each other. The attention layer contains multiple attention heads, each of which can capture different kinds of relations between words.
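At the core of the attention layer is scaled dot-product attention. The single-head NumPy sketch below shows the mechanics: the query, key and value matrices are projections of the word embeddings, and the softmax weights say how strongly each word attends to every other word.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how relevant each word is to every other word
    weights = softmax(scores)         # each row is a probability distribution over words
    return weights @ V                # context-aware mix of the value vectors

n_words, d_k = 6, 64
Q = np.random.randn(n_words, d_k)
K = np.random.randn(n_words, d_k)
V = np.random.randn(n_words, d_k)

print(scaled_dot_product_attention(Q, K, V).shape)  # (6, 64): one vector per word
```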
The output of the attention layer is fed to a feed-forward neural network that transforms it into a vector representation and sends it to the next attention layer. Transformers contain several blocks of attention and feed-forward layers to gradually capture more complicated relationships.
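PyTorch ships ready-made building blocks that bundle an attention layer with a feed-forward layer, so stacking encoder blocks as described above can be sketched in a few lines (a generic illustration, not the configuration of any particular production model):

```python
import torch
import torch.nn as nn

# One encoder block = multi-head attention + feed-forward network.
block = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(block, num_layers=6)  # six stacked blocks

# Input shape: (sequence length, batch size, embedding dimension).
tokens = torch.randn(10, 1, 512)
print(encoder(tokens).shape)  # torch.Size([10, 1, 512])
```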
The task of the decoder module is to translate the encoder's attention vector into the output data (e.g., the translated version of the input text). During the training phase, the decoder has access both to the attention vector produced by the encoder and the expected outcome (e.g., the translated string).
The decoder uses the same tokenization, word embedding, and attention mechanism to process the expected outcome and create attention vectors. It then passes these attention vectors, together with the output of the encoder module, through an attention layer that establishes relations between the input and output values. In the translation application, this is the part where the words from the source and destination languages are mapped to each other. Like the encoder module, the decoder attention vector is passed through a feed-forward layer. Its result is then mapped to a very large vector the size of the target vocabulary (in the case of language translation, this can span tens of thousands of words).
During training, the transformer is provided with a very large corpus of paired examples (e.g., English sentences and their corresponding French translations). The encoder module receives and processes the full input string. The decoder, however, receives a masked version of the output string, one word at a time, and tries to establish the mappings between the encoded attention vector and the expected outcome. The decoder tries to predict the next word and makes corrections based on the difference between its output and the expected outcome. This feedback enables the transformer to modify the parameters of the encoder and decoder and gradually create the right mappings between the input and output languages.
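The masking mentioned above is usually implemented as a causal (look-ahead) mask: position i may only attend to positions 0 through i, so the decoder cannot peek at the words it is being trained to predict. A small sketch:

```python
import torch

seq_len = 5
# True marks positions the decoder is NOT allowed to attend to.
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
print(mask)
# tensor([[False,  True,  True,  True,  True],
#         [False, False,  True,  True,  True],
#          ...
# Masked positions are set to -inf before the softmax in the attention
# layer, which drives their attention weights to zero.
```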
The more training data and parameters the transformer has, the more capacity it gains to maintain coherence and consistency across long sequences of text.
In the machine translation example that we examined above, the encoder module of the transformer learns the relations between English words and sentences, and the decoder learns the mappings between English and French.
But not all transformer applications require both the encoder and decoder module. For example, the GPT family of large language models uses stacks of decoder modules to generate text. BERT, another variation of the transformer model developed by researchers at Google, only uses encoder modules.
The advantage of some of these architectures is that they can be trained through self-supervised or unsupervised learning. BERT, for example, does much of its training by taking large corpora of unlabeled text, masking parts of it, and trying to predict the missing parts. It then tunes its parameters based on how close its predictions were to the actual data. By continuously going through this process, BERT captures the statistical relations between different words in different contexts. After this pretraining phase, BERT can be fine-tuned for a downstream task such as question answering, text summarization, or sentiment analysis by training it on a small number of labeled examples.
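The masked-word objective is easy to try with a pre-trained BERT model through the Hugging Face fill-mask pipeline (assumed to be installed; the exact predictions and scores depend on the downloaded model):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the most likely tokens for the [MASK] position.
for candidate in fill_mask("The transformer is a deep learning [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```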
Using unsupervised and self-supervised pretraining reduces the manual effort required to annotate training data.
A lot more can be said about transformers and the new applications they are unlocking, which is out of the scope of this article. Researchers are still finding ways to squeeze more out of transformers.
Transformers have also created discussions about language understanding and artificial general intelligence. What is clear is that transformers, like other neural networks, are statistical models that capture regularities in data in clever and complicated ways. They do not understand language in the way that humans do. But they are exciting and useful nonetheless and have a lot to offer.
This article was originally written by Ben Dickson and published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
Read more:
What's the transformer machine learning model? And why should you care? - The Next Web
BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events – Business Wire
Posted: at 1:44 am
COLUMBIA, Md.--(BUSINESS WIRE)--BigBear.ai (NYSE: BBAI), a leader in AI-powered analytics and cyber engineering solutions, announced company executives are embarking on a thought-leadership campaign across multiple global industry events. The campaign will emphasize how the company's advancements in AI technologies will impact the federal and commercial markets in the coming months.
At these events, BigBear.ai leaders will highlight the capabilities of BigBear.ai's newly acquired company, ProModel Corporation, the importance of defining responsible AI usage, and how federal and commercial organizations leverage AI and ML.
The events BigBear.ai is scheduled to address include:
CTMA Partners Meeting May 3-5, 2022: Virginia Beach, VA
Due to the rapid deployment and advancement of sensor technologies, artificial intelligence, and data science, the Department of Defense has turned to a more predictive-based approach to maintaining technology assets. The agency's recently revamped condition-based maintenance plus (CBM+) policy will accelerate the adoption, integration, and use of these emerging technologies while shifting its strategic approach from largely reactive maintenance to proactive maintenance. Participating as part of a panel session to address this trend, BigBear.ai Senior Vice President of Analytics Carl Napoletano will highlight ProModel's commercial capabilities and ProModel Government Services' legacy capabilities in the federal space.
DIA Future Technologies Symposium May 11-12, 2022: Virtual Event
BigBear.ai's Senior Vice President of Analytics, Frank Porcelli, will brief the DIA community about BigBear.ai's AI-powered solutions at this virtual presentation. After providing a high-level overview and demonstration of the company's AI products (Observe, Orient, and Dominate), Frank will also offer insights into how AI technologies are being leveraged in the federal sector.
Conference on Governance of Emerging Technologies and Science May 19-20, 2022: Phoenix, Arizona
Newly appointed BigBear.ai General Counsel Carolyn Blankenship will attend the ninth edition of Arizona State's annual conference, which examines how to create sustainable governance solutions that address new technologies' legal, regulatory, and policy ramifications. During her presentation, Carolyn will detail the importance of Intellectual Property (IP) law in AI and the responsible use of AI and other emerging technologies. Prior to starting as General Counsel, Carolyn organized and led the Thomson Reuters cross-functional team that outlined the organization's first set of Data Ethics Principles.
Automotive Innovation Forum May 24-25, 2022: Munich, Germany
ProModel was among the select few organizations invited to attend Autodesk's The Automotive Innovation Forum 2022. This premier industry event celebrates new automotive plant design and manufacturing technology solutions. Michael Jolicoeur of ProModel, Director of the Autodesk Business Division, will headline a panel at the conference and highlight the latest industry trends in automotive factory design and automation.
DAX 2022 June 4, 2022: University of Maryland, Baltimore County, Baltimore, Maryland
Three BigBear.ai experts - Zach Casper, Senior Director of Cyber; Leon Worthen, Manager of Strategic Operations; and Sammy Hamilton, Data Scientist/Engagement Engineer - will headline a panel discussion exploring the variety of ways AI and ML are deployed throughout the defense industry. The trio of experts will discuss how AI and ML solve pressing cybersecurity problems facing the Department of Defense and intelligence communities.
To connect with BigBear.ai at these events, send an email to events@bigbear.ai.
About BigBear.ai
BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ai's customers, which include the US Intelligence Community, Department of Defense, the US Federal Government, as well as customers in manufacturing, logistics, commercial space, and other sectors, rely on BigBear.ai's solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai has additional locations in Virginia, Massachusetts, Michigan, and California. For more information, please visit: http://bigbear.ai/ and follow BigBear.ai on Twitter: @BigBearai.
Go here to read the rest:
BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events - Business Wire
Machine learning predicts who will win "The Bachelor" – Big Think
Posted: at 1:44 am
First airing in 2002, The Bachelor is a titan in the world of Reality TV and has kept its most loyal viewers hooked for a full 26 seasons. To the uninitiated, the show follows 30 female contestants as they battle for the heart of a lone male bachelor, who proposes to the winner.
The contest begins the moment the women step out of a limo to meet the lead on Night One which culminates in him handing the First Impression Rose to the lady with whom he had the most initial chemistry. Over eight drama-fuelled weeks, the contestants travel to romantic destinations for their dates. At the end of each week, the lead selects one or two women for a one-on-one date, while eliminating up to five from the competition.
As self-styled mega-fans of The Bachelor, Abigail Lee and her colleagues at the University of Chicago's unofficial Department of Reality TV Engineering have picked up on several recurring characteristics in the women who tend to make it further in the competition. Overall, younger, white contestants are far more likely to succeed, with just one 30-something and one woman of color winning the lead's heart in The Bachelor's 20-year history, a long-standing source of controversy.
The researchers are less clear on how other factors affect the contestants' chances of success, such as whether they receive the First Impression Rose or are selected earlier for their first one-on-one date. Hometown and career also seem to have an unpredictable influence, though contestants with questionable job descriptions like Dog Lover, Free Spirit, and Chicken Enthusiast have rarely made it far.
For Lee's team, such a diverse array of contestant parameters makes the show ripe for analysis with machine learning. In their study, Lee's team compiled a dataset of contestant parameters that included all 422 contestants who participated in seasons 11 through 25. The researchers obviously encountered some adversity, as they note that they consum[ed] multiple glasses of wine per night during data collection.
Despite this setback, they used the data to train machine learning algorithms whose aim was to predict how far a given contestant will progress through the competition given her characteristics. In searching for the best algorithm, the team tried neural networks, linear regression, and random forest classification.
While the team's neural network performed the best overall in predicting the parameters of the most successful contestants, all three models were consistent with each other. This allowed them to confidently predict the characteristics of a woman with the highest probability of progressing far through the contest: 26 years of age, white, from the Northwest, works as a dancer, received her first one-on-one date in week 6, and didn't receive the First Impression Rose.
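For readers curious how such a model might be put together, here is a rough scikit-learn sketch in the spirit of the study. The features, encoding and labels below are made up for illustration; the authors' real dataset and preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Made-up contestant features: age, received First Impression Rose (0/1),
# week of first one-on-one date.
X = np.column_stack([
    rng.integers(21, 36, size=200),
    rng.integers(0, 2, size=200),
    rng.integers(1, 9, size=200),
])
y = rng.integers(1, 11, size=200)  # stand-in label: week the contestant was eliminated

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Predicted elimination week for a hypothetical 26-year-old who got her
# first one-on-one date in week 6 and no First Impression Rose.
print(model.predict([[26, 0, 6]]))
```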
Lee's team laments that The Bachelor's viewership has steadily declined over the past few seasons. They blame a variety of factors, including influencer contestants (who are more concerned with growing their online following than finding true love) and the production crew increasingly meddling in the show's storylines, such as the infamous Champagne-gate of season 24.
By drawing on the insights gathered through their analysis, which the authors emphasize was done in their free time, the researchers hope that The Bachelor's producers could think of new ways to shake up its format, while improving chances for contestants across a more diverse range of backgrounds, ensuring the show remains an esteemed cultural institution for years to come.
Of course, as a consolation prize, there's always Bachelor in Paradise.
Go here to read the rest:
Machine learning predicts who will win "The Bachelor" - Big Think