23 AI predictions for the enterprise in 2023 – VentureBeat
Posted: December 29, 2022 at 12:20 am
It's that time of year again, when artificial intelligence (AI) leaders, consultants and vendors look at enterprise trends and make their predictions. After a whirlwind 2022, it's no easy task this time around.
You may not agree with every one of these, but in honor of 2023, these are 23 top AI and ML predictions that experts think will be spot-on for the coming year:
In 2023, we're going to see more organizations start to move away from deploying siloed AI and ML applications that replicate human actions for highly specific purposes and begin building more connected ecosystems with AI at their core. This will enable organizations to take data from throughout the enterprise to strengthen machine learning models across applications, effectively creating learning systems that continually improve outcomes. For enterprises to be successful, they need to think about AI as a business multiplier, rather than simply an optimizer.
Vinod Bidarkoppa, CTO of Sam's Club and SVP of Walmart
The hype about generative AI becomes reality in 2023. That's because the foundations for true generative AI are finally in place, with software that can transform large language models and recommender systems into production applications that go beyond images to intelligently answer questions, create content and even spark discoveries. This new creative era will fuel massive advances in personalized customer service, drive new business models and pave the way for breakthroughs in healthcare.
Manuvir Das, senior vice president, enterprise computing, Nvidia
We're seeing AI and powerful data capabilities redefine the security models and capabilities for companies. Security practitioners and the industry as a whole will have much better tools and much faster information at their disposal, and they should be able to isolate security risks with much greater precision. They'll also be using more marketing-like techniques to understand anomalous behavior and bad actions. In due time, we may very well see parties using AI to infiltrate systems, attempt to take over software assets through ransomware and take advantage of the cryptocurrency markets.
Ashok Srivastava, senior vice president and chief data officer, Intuit
Next year teams that focus on ML operations, management and governance will have to do more with less. Because of this, businesses will adopt more off-the-shelf solutions because they are less expensive to produce, require less research time and can be customized to fit most needs. MLOps teams will also need to consider open-source infrastructure instead of getting locked into long-term contracts with cloud providers. Open source delivers flexible customization, cost savings and efficiency. Especially with teams shrinking across tech, this is becoming a much more viable option.
Moses Guttmann, CEO, ClearML
The biggest source of improvement in AI has been the deployment of deep learning and especially transformer models in training systems, which are meant to mimic the action of a brain's neurons and the tasks of humans. These breakthroughs require tremendous compute power to analyze vast structured and unstructured datasets. Unlike CPUs, graphics processing units (GPUs) can support the parallel processing that deep learning workloads require. That means in 2023, as more applications founded on deep learning technology emerge to do everything from translating menus to curing disease, demand for GPUs will continue to soar.
Nick Elprin, CEO, Domino Data Lab
Modern AI technology is already being used to help managers, coaches and executives with real-time feedback to better interpret inflection, emotion and more, and provide recommendations on how to improve future interactions. The ability to interpret meaningful resonance as it happens is a level of coaching no human being can provide.
Zayd Enam, CEO, Cresta
As fear and protectionism create barriers to data movement and processing locations, AI adoption will slow down. Macroeconomic instability, including rising energy costs and a looming recession, will hobble the advancement of AI initiatives as companies struggle just to keep the lights on.
Rich Potter, CEO, Peak
Since model deployment, scaling AI across the enterprise, and reducing time to insight and time to value will become the key success criteria, AI/ML engineers will become critical in meeting them. Today, a lot of AI projects fail because they are not built to scale or [to] integrate with business workflows.
Nicolas Sekkaki, GM of applications, Data and AI, Kyndryl
As the AI/ML market continues to flood with new solutions, as evidenced by the volume of startups and VC capital deployed in the space, enterprises have found themselves with a collection of niche, disparate tools at their disposal. In 2023, enterprises will be more conscious of selecting solutions that will be more interoperable with the rest of their ecosystem, including their on-premises footprint and across cloud providers (AWS, Azure, GCP). Additionally, enterprises will gravitate towards a handful of leading solutions as the disparate tools mature and come together in bundles as standalone solutions.
Anay Nawathe, principal consultant, ISG
Advanced machine learning technologies will enable no-code developers to innovate and create applications never seen before. This evolution may pave the way for a new breed of development tools. In a likely scenario, application developers will program the application by describing their intent, rather than describing the data and the logic as they'd do with low-code tools of today.
Esko Hannula, SVP of product management, Copado
This past year was filled with incredibly impressive technological advancements, popularized by ChatGPT, DALL-E 2, Galactica and Facebook's Make-A-Video. These massive models were made possible largely due to the availability of endless volumes of training data, and huge compute and infrastructure resources. Heading into 2023, funding for true blue-sky research will slow down as organizations become more conservative in spending to brace for the looming recession and will shift from investing in fundamental research to more practical applications. With more companies becoming increasingly frugal to mitigate this imminent threat, we can anticipate increased use of pre-trained models and more focus on applying the advancements from previous years to more concrete applications.
John Kane, head of signal processing and machine learning, Cogito
Chatbots are the obvious application for ChatGPT, but they are probably not going to be the first one. First, ChatGPT today can answer questions, but it cannot take actions. When a user contacts a brand, they sometimes just want answers, but often they want something done: process a return, cancel an account, or transfer funds. Secondly, when used to answer questions, ChatGPT can answer based on knowledge [found] on the internet. But it doesn't have access to knowledge which is not online. Finally, ChatGPT excels at generation of text, creating new content derived from existing online information. When a user contacts a brand, they don't want creative output; they want immediate action. All of these issues will get addressed, but it does mean that the first use case is probably not chatbots.
Jonathan Rosenberg, CTO, Five9
Digital engagement has become the default rather than the fallback, and every interaction counts. While the emergence of automation initially resolved basic FAQs, it's now providing more advanced capabilities: personalizing interactions based on customer intent, empowering people to take action and self-serve, and making predictions on their next best action.
The only way for businesses to scale a VIP digital experience for everyone is with an AI-driven automation solution. This will become a C-level priority for brands in 2023, as they determine how to evolve from a primarily live agent-based interaction model to one that can be primarily serviced through automated interactions. AI will be necessary to scale operations and properly understand and respond to what customers are saying, so brands can learn what their customers want and plan accordingly.
Jessica Popp, CTO of Ada
Coming soon are industry-specific AI model marketplaces that enable businesses to easily consume and integrate AI models in their business without having to create and manage the model lifecycle. Businesses will simply subscribe to an AI model store. Think of the Apple Music store or Spotify, but for AI models, broken down by industry and the data they process.
Bryan Harris, executive vice president and chief technology officer, SAS
As individuals continue to worry about how businesses and employers will use AI and machine learning technology, it will become more important than ever for companies to provide transparency into how their AI is applied to worker and finance data. Explainable AI will increasingly help to advance enterprise AI adoption by establishing greater trust. More providers will start to disclose how their machine learning models lead to their outputs (e.g. recommendations) and predictions, and well see this expand even further to the individual user level with explainability built right into the application being used.
Jim Stratton, CTO, Workday
Federated learning is a machine learning technique that can be used to train machine learning models at the location of the data sources, by communicating only the trained models from individual data sources to reach a consensus for a global model. Therefore, instead of using the traditional approach of collecting data from multiple sources into a centralized location for model training, this technique learns a collaborative model. Federated learning addresses some of the major issues that prevail in current machine learning techniques, such as data privacy, data security, data access rights and access to data from heterogeneous sources.
David Murray, chief business officer, Devron
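To make the pattern concrete, here is a minimal, hypothetical sketch of federated averaging in Python with NumPy: each client trains on its own private data, and only the resulting model weights travel to the aggregator. The linear model, synthetic datasets and weighting scheme are illustrative stand-ins, not any particular vendor's implementation.

```python
import numpy as np

def local_train(weights, local_data, lr=0.1, epochs=5):
    """One client updates a shared linear model using only its private data."""
    X, y = local_data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Aggregate client models, weighting each by how much data it holds."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical private datasets held by three separate organizations.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):  # a few federated rounds
    updates = [local_train(global_w, data) for data in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print("Global model after federated training:", global_w)
```

Only the model weights leave a client in this loop; the raw `X` and `y` stay where they were generated, which is the privacy property the technique is built around.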
While most people write scrapers today to get data off of websites, natural language processing (NLP) has progressed to the point where soon you will be able to describe in natural language what you want to extract from a given web page and the machine will pull it for you. For example, you could say, "Search this travel site for all the flights from San Francisco to Boston and put all of them in a spreadsheet, along with price, airline, time and day of travel." It's a hard problem, but we could actually solve it in the next year.
Varun Ganapathi, CTO and co-founder, AKASA
With remote work, boundaries are becoming increasingly blurred. Today it's common for people to work and converse with colleagues across borders, even if they don't share a common language. Manual translation can become a hindrance that slows down productivity and innovation. We now have the technology to use communication tools such as Zoom that allows someone in Turkey, for example, to speak their native language but allows someone in the U.S. to hear what they're saying in English. This real-time speech translation ultimately helps with efficiency and productivity while also giving businesses more of an opportunity to operate globally.
Manoj Chaudhary, CTO and SVP of engineering, Jitterbit
By now, everyone has seen AI-created deepfake videos. They are leveraged for a variety of purposes, ranging from reanimating a lost loved one to disseminating political propaganda or enhancing a marketing campaign. However, imagine receiving a phishing email with a deepfake video of your CEO instructing you to go to a malicious URL. Or an attacker constructing more believable, legitimate-seeming phishing emails by using AI to better mimic corporate communications. Modern AI capabilities could completely blur the lines between legitimate and malicious emails, websites, company communications and videos. Cybercrime AI-as-a-service could be the next monetized tactic.
Heather Gantt-Evans, CISO, SailPoint
In the year ahead, we will see enterprises turn to a hybrid approach to natural language processing combining symbolic AI with ML, which has been shown to produce explainable, scalable and more accurate results while leaving a smaller carbon footprint. Companies will expand automation to more complex processes requiring accurate understanding of documents, and will extend their data analytics activities to include data embedded in text and documents. Therefore, investments in AI-based natural language technologies will grow. These solutions will have to be accurate, efficient, environmentally sustainable, explainable and not subject to bias. This requires enterprises to abandon single-technique approaches, such as machine learning (ML) or deep learning (DL) alone, because of their intrinsic limitations.
Luca Scagliarini, chief product officer, Expert.ai
Advancements in AI-generated music will be a particularly interesting development. Now [that] tools exist that generate visual art from text prompts, these same tools will be improved to do the same for music. There are already models available that use text prompts to generate music and realistic human voices. Once these models start performing well enough that the public takes notice, progress in the field of generative audio will accelerate even further. It's not unreasonable to think, within the next few years, that AI-generated music videos could become reality, with AI-generated video, music and vocals.
Ulrik Stig Hansen, president, Encord
There will be less investment within Fortune 500 organizations allocated to internal ML and data science teams to build solutions from the ground up. It will be replaced with investments in fully productized applications or platform interfaces to deliver the desired data analytics and customer experience outcomes in focus. [That's because] in the next five years, nearly every application will be powered by LLM-based, neural network-powered data pipelines that help classify, enrich, interpret and serve data.
[But] productization of neural network technology is one of the hardest tasks in the computer science field right now. It is an incredibly fast-moving space; without dedicated focus and exposure to many different types of data and use cases, it will be hard for internal ML teams to excel at leveraging these technologies.
Amr Awadallah, CEO, Vectara
When it comes to devops, experts are confident that AI is not going to replace jobs; rather, it will empower developers and testers to work more efficiently. AI integration is augmenting people and empowering exploratory testers to find more bugs and issues upfront, streamlining the process from development to deployment. In 2023, we'll see already-lean teams working more efficiently and with less risk as AI continues to be implemented throughout the development cycle.
Specifically, AI-augmentation will help inform decision-making processes for devops teams by finding patterns and pointing out outliers, allowing applications to continuously self-heal and freeing up time for teams to focus their brain power on the tasks that developers actually want to do and that are more strategically important to the organization.
Kevin Thompson, CEO, Tricentis
How artificial intelligence is helping us explore the solar system – Space.com
Posted: at 12:20 am
Let's be honest: it's much easier for robots to explore space than it is for us humans. Robots don't need fresh air and water, or to lug around a bunch of food to keep themselves alive. They do, however, require humans to steer them and make decisions. Advances in machine learning technology may change that, making computers a more active collaborator in planetary science.
Last week at the 2022 American Geophysical Union (AGU) Fall Meeting, planetary scientists and astronomers discussed how new machine-learning techniques are changing the way we learn about our solar system, from planning for future mission landings on Jupiter's icy moon Europa to identifying volcanoes on tiny Mercury.
Machine learning is a way of training computers to identify patterns in data, then harness those patterns to make decisions, predictions or classifications. Another major advantage to computers besides not requiring life-support is their speed. For many tasks in astronomy, it can take humans months, years or even decades of effort to sift through all the necessary data.
One example is identifying boulders in pictures of other planets. For a few rocks, it's as easy as saying "Hey, there's a boulder!" but imagine doing that thousands of times over. The task would get pretty boring, and eat up a lot of scientists' valuable work time.
"You can find up to 10,000, hundreds of thousands of boulders, and it's very time consuming," Nils Prieur, a planetary scientist at Stanford University in California said during his talk at AGU. Prieur's new machine-learning algorithm can detect boulders across the whole moon in only 30 minutes. It's important to know where these large chunks of rock are to make sure new missions can land safely at their destinations. Boulders are also useful for geology, providing clues to how impacts break up the rocks around them to create craters.
Computers can identify a number of other planetary phenomena, too: explosive volcanoes on Mercury, vortexes in Jupiter's thick atmosphere and craters on the moon, to name a few.
During the conference, planetary scientist Ethan Duncan, from NASA's Goddard Space Flight Center in Maryland, demonstrated how machine learning can identify not chunks of rock, but chunks of ice on Jupiter's icy moon Europa. The so-called chaos terrain is a messy-looking swath of Europa's surface, with bright ice chunks strewn about a darker background. With its underground ocean, Europa is a prime target for astronomers interested in alien life, and mapping these ice chunks will be key to planning future missions.
Upcoming missions could also incorporate artificial intelligence as part of the team, using this tech to empower probes to make real-time responses to hazards and even land autonomously. Landing is a notorious challenge for spacecraft, and always one of the most dangerous times of a mission.
"The 'seven minutes of terror' on Mars [during descent and landing], that's something we talk about a lot," Bethany Theiling, a planetary scientist at NASA Goddard, said during her talk. "That gets much more complicated as you get further into the solar system. We have many hours of delay in communication."
A message from a probe landing on Saturn's methane-filled moon Titan would take a little under an hour and a half to get back to Earth. By the time humans' response arrived at its destination, the communication loop would be almost three hours long. In a situation like landing where real-time responses are needed, this kind of back-and-forth with Earth just won't cut it. Machine learning and AI could help solve this problem, according to Theiling, providing a probe with the ability to make decisions based on its observations of its surroundings.
"Scientists and engineers, we're not trying to get rid of you," Theiling said. "What we're trying to do is say, the time you get to spend with that data is going to be the most useful time we can manage." Machine learning won't replace humans, but hopefully, it can be a powerful addition to our toolkit for scientific discovery.
How Does TensorFlow Work and Why is it Vital for AI? – Spiceworks News and Insights
Posted: at 12:20 am
TensorFlow is defined as an open-source platform and framework for machine learning, which includes libraries and tools based on Python and Java designed with the objective of training machine learning and deep learning models on data. This article explains the meaning of TensorFlow and how it works, discussing its importance in the world of computing.
Google's TensorFlow is an open-source package designed for applications involving deep learning. Additionally, it supports conventional machine learning. TensorFlow was initially created for large numerical calculations rather than with deep learning in mind. However, it also proved valuable for deep learning development, so Google made it available to the public.
TensorFlow supports data in the shape of tensors, which are multidimensional arrays. Arrays with several dimensions are highly useful for managing enormous volumes of data.
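As a quick illustration (a minimal sketch assuming a TensorFlow 2.x installation), here is how data of increasing dimensionality is represented as tensors:

```python
import tensorflow as tf

scalar = tf.constant(3.0)                        # rank-0 tensor: a single value
vector = tf.constant([1.0, 2.0, 3.0])            # rank-1 tensor
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # rank-2 tensor
batch = tf.zeros([32, 224, 224, 3])              # rank-4 tensor, e.g. a batch of images

print(scalar.shape, vector.shape, matrix.shape, batch.shape)
print(batch.dtype)  # every element of a tensor shares a single data type
```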
TensorFlow uses the concept of dataflow graphs with nodes and edges. Because computation is expressed in the form of graphs, spreading TensorFlow code over a cluster of GPU-equipped machines is more straightforward.
Though TensorFlow supports other programming languages, Python and JavaScript are the most popular. Additionally, TensorFlow supports Swift, C, Go, C#, and Java. Python is not required to work with TensorFlow; however, it makes working with TensorFlow extremely straightforward.
TensorFlow follows in the footsteps of Google's closed-source DistBelief framework, which was deployed internally in 2012. Based on extensive neural networks and the backpropagation method, it was utilized to conduct unsupervised feature learning and deep learning applications.
TensorFlow is distinct from DistBelief in many aspects. TensorFlow was meant to operate independently from Google's computational infrastructure, making its code more portable for external usage. It is also a more general machine learning architecture that is less neural network-centric than DistBelief.
Under the Apache 2.0 license, Google published TensorFlow as an open-source technology in 2015. Ever since, the framework has attracted a large number of supporters outside Google. TensorFlow tools are provided as add-on modules for IBM, Microsoft and other machine learning or AI development suites.
TensorFlow attained the Release 1.0.0 level early in 2017, and developers issued four further releases that year. A version of TensorFlow geared for smartphone usage and embedded machines was also released as a developer preview.
TensorFlow 2.0, launched in October 2019, redesigned the framework in several ways based on user input to make it simpler and more efficient. A new application programming interface (API) facilitates distributed training, and support for TensorFlow Lite enables the deployment of models on a broader range of systems. However, code developed for older iterations of TensorFlow must be modified to use the new capabilities in TensorFlow 2.0.
TensorFlow models may also be deployed on edge devices or smartphones, such as iOS or Android devices. TensorFlow Lite allows you to trade off model size and accuracy to optimize TensorFlow models for performance on such devices. A more compact model (12MB versus 25MB, or even 100+MB) is less precise, but the loss in precision is often negligible and is more than compensated for by the smaller version's energy efficiency and speed.
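As a rough sketch of that size-versus-accuracy trade-off, a trained Keras model can be converted into a compact TensorFlow Lite model with optional optimization. The tiny untrained model below is a hypothetical stand-in for a real trained one, and the snippet assumes a TensorFlow 2.x installation:

```python
import tensorflow as tf

# Hypothetical small Keras model standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite, asking the converter to shrink the model
# (for example via weight quantization) at a possible small cost in precision.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```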
TensorFlow applications are often complex, large-scale artificial intelligence (AI) projects in deep learning and machine learning. Using TensorFlow to power Google's RankBrain system for machine learning has enhanced the data-gathering abilities of the company's search engine.
Google has also utilized the platform for applications such as automated email answer creation, picture categorization, optical character recognition, and a drug-discovery program developed in collaboration with Stanford University academics.
The TensorFlow website lists Airbnb, Coca-Cola, eBay, Intel, Qualcomm, SAP, Twitter, Uber and Snap Inc. as framework users. STATS LLC, a sports consultancy firm, uses TensorFlow-led deep learning frameworks to monitor player movements during professional sports events, among other things.
TensorFlow enables developers to design dataflow graphs, which are structures that define how data flows via a graph or set of processing nodes. Each node in the graph symbolizes a mathematical process, and each edge between nodes is a tensor, a multidimensional data array.
TensorFlow applications can execute on almost any handy target, including a local PC, a cloud cluster, iOS and Android phones, CPUs, and GPUs. Using Google's cloud, you may run TensorFlow on Google's custom Tensor Processing Unit (TPU) hardware for additional acceleration. However, TensorFlow-generated models may be installed on almost any machine on which they will be utilized to make predictions.
TensorFlow's architecture consists of three components: preprocessing the data, building the model, and training and estimating the model.
TensorFlow is so named because it accepts inputs in the form of multidimensional arrays, also known as tensors. One may create a flowchart-like diagram (a dataflow graph) representing the actions to be performed on the input. Input comes in at one end, passes across a system of various operations, and exits the opposite end as output. It is named TensorFlow because a tensor enters it, travels through a series of processes, and finally exits.
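In TensorFlow 2.x, that "tensor in, operations, output out" flow is usually written as an ordinary Python function that TensorFlow traces into a dataflow graph. A minimal sketch:

```python
import tensorflow as tf

@tf.function  # traces the Python function into a TensorFlow graph
def pipeline(x):
    x = tf.matmul(x, tf.transpose(x))  # node: matrix multiplication
    x = tf.nn.relu(x)                  # node: non-linearity
    return tf.reduce_sum(x)            # node: reduce to a single output value

inputs = tf.random.uniform([4, 3])  # a tensor enters at one end...
output = pipeline(inputs)           # ...flows through the graph of operations...
print(output)                       # ...and exits the other end as output
```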
A trained model may offer prediction as a service utilizing REST or gRPC APIs in a Docker container. For more complex serving situations, Kubernetes may be used.
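For a sense of what calling such a served model can look like, TensorFlow Serving exposes a REST predict endpoint. This is a hedged sketch; the host, port, model name and input row below are assumptions for illustration, not values from the article:

```python
import json
import requests

# Hypothetical TensorFlow Serving container listening on its default REST port.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # one input row for the model

response = requests.post(url, data=json.dumps(payload))
print(response.json())  # e.g. {"predictions": [[0.87]]}
```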
TensorFlow employs the following components to accomplish the features mentioned above:
TensorFlow employs a graph-based architecture. The graph collects and describes all of the series of calculations performed during training. The graph offers several benefits. It was initially designed to operate on several CPUs or GPUs and on mobile operating systems. Additionally, the graph's portability enables you to save calculations for current or future usage; one may store the graph for future execution.
All calculations in the graph are accomplished by connecting tensors together. Each operation involves a node and edges: the node performs the mathematical action and generates output endpoints, while the edges describe the input/output connections between nodes.
TensorFlow derives its name directly from its essential foundation, the tensor. All calculations in TensorFlow use tensors. Tensors are n-dimensional vectors or matrices that represent all forms of data. Each value in a tensor has the same data type and a known (or partly known) shape. The dimension of the matrix or array is the data's shape.
A tensor may be derived from raw data or be the outcome of a calculation. All operations in TensorFlow are executed inside a graph. The graph is a sequence of calculations that occur in order. Each operation is referred to as an op node, and the nodes are interconnected.
The graph depicts the operations and relationships between the nodes. However, it does not display the values. The edges between the nodes are the tensors, which are the means of supplying data to the operations.
As we have seen, TensorFlow accepts input in the format of tensors, which are n-dimensional arrays or matrices. This input passes through a series of operations before becoming output. For instance, as input we might obtain a large number of values representing the bits of an image, and as output we might receive text such as "this is a dog."
TensorFlow provides a way to view what is occurring in your graph. This tool is known as TensorBoard; it is just a web page that allows you to debug your graph by checking its parameters, node connections, etc. To utilize TensorBoard, you must label the graphs with the parameters you want to examine, such as the loss value, and then generate the corresponding summaries.
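A minimal sketch of labeling a value such as the training loss so that TensorBoard can chart it (TensorFlow 2.x summary API; the log directory and the made-up loss curve are arbitrary):

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/demo")  # where TensorBoard will look

with writer.as_default():
    for step in range(100):
        loss = 1.0 / (step + 1)  # a made-up, steadily decreasing loss value
        tf.summary.scalar("loss", loss, step=step)

# Inspect the run in a browser with:  tensorboard --logdir logs/demo
```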
Other essential components that enable TensorFlow's functionality are:
Python has become the most common programming language for TensorFlow or machine learning as a whole. However, JavaScript is now a best-in-class language for TensorFlow, and among its enormous benefits is that it works in any web browser.
TensorFlow.js, which is the name of the JavaScript TensorFlow library, speeds up calculations using all available GPUs. It is also possible to utilize a WebAssembly backend for execution, which is quicker on a CPU than the standard JavaScript backend. Pre-built models allow you to begin with easy tasks to understand how things function.
TensorFlow delivers all of this to programmers through the Python programming language. Python is simple to pick up and run, and it offers straightforward methods to represent the coupling of high-level abstractions. TensorFlow is compatible with Python 3.7 through 3.10.
TensorFlow nodes and tensors are Python objects; therefore, TensorFlow applications are also Python programs. However, real mathematical calculations are not done in Python. The transformation libraries accessible through TensorFlow are created as efficient C++ binaries. Python only controls the flow of information between the components and offers high-level coding frameworks to connect them.
Keras is used for sophisticated TensorFlow activities such as constructing nodes or layers and linking them. A basic three-layer model may be developed in less than ten lines of code, and training that same model takes just a few extra lines of code.
You may, however, peek underneath the hood and perform even more granular tasks, such as designing a custom training loop, if you like, as shown after the example below.
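As a sketch of how compact this can be, here is a basic three-layer Keras model with a short training run on synthetic stand-in data (the layer sizes and data are arbitrary placeholders, not a recommended architecture):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data; a real project would load its own dataset here.
x = tf.random.uniform([1000, 20])
y = tf.cast(tf.reduce_mean(x, axis=1) > 0.5, tf.float32)

model.fit(x, y, epochs=5, batch_size=32)
```

Going more granular typically means replacing `model.fit` with a hand-written training loop built on `tf.GradientTape`.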
TensorFlow is important for users due to several reasons:
Abstraction (a key concept in object-oriented programming) is the most significant advantage of TensorFlow for machine learning development. Instead of concentrating on developing algorithms or figuring out how to link one component's output to another's parameters, the programmer may focus on the overall application logic. TensorFlow takes care of the nuances in the background.
Using an interactive, web-based interface, the TensorBoard visualization package enables you to examine and analyze the execution of graphs. Google's Tensorboard.dev service allows you to host and share machine learning experiments built using TensorFlow. It can retain a maximum of 100 million scalars, a gigabyte of tensor data, and a gigabyte of binary object data for free. (Note that any data stored on Tensorboard.dev is accessible to the public.)
TensorFlow provides further advantages for programmers who need to debug and gain insight into TensorFlow applications. Each graph action may be evaluated and updated independently and openly instead of the whole graph being constructed as a monolithic opaque object and evaluated simultaneously. This eager execution mode, available as an option in older iterations of TensorFlow, has become the default.
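A minimal sketch of what eager execution looks like in practice (TensorFlow 2.x, where it is the default): each operation runs immediately and its result can be inspected on the spot, with no separate graph-build-then-run step.

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

c = tf.matmul(a, b)      # evaluated immediately; no session or explicit graph needed
print(c.numpy())         # the concrete result is available right away

d = tf.nn.relu(c - 3.0)  # each intermediate operation can be inspected independently
print(d.numpy())
```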
TensorFlow also benefits from Google's patronage as an A-list commercial enterprise. Google has accelerated the project's development and provided many essential products that make TensorFlow simpler to install and use. TPU silicon for increased performance in Google's cloud is but one example.
TensorFlow works with a wide variety of devices. In addition, the inclusion of TensorFlow lite helps increase its adaptability by making it compatible with additional devices. One may access TensorFlow from anywhere with a device.
Learning and problem-solving are two cognitive activities associated with the human brain that are simulated by artificial intelligence. TensorFlow features a robust and adaptable ecosystem of tools, libraries, and resources that facilitate the development and deployment of AI-powered applications. The advancement of AI provides new possibilities to address complex, real-world issues.
One may use TensorFlow to create deep neural networks for handwritten character classification, image recognition, word embedding, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and a variety of other applications.
Applications based on deep learning are complex, with training processes needing a great deal of computation. Training requires many iterative procedures, mathematical computations, matrix multiplication and division, and so on, and it is time-consuming due to the vast amount of data. These tasks take an extraordinarily long time on a typical CPU. TensorFlow therefore supports GPUs, which dramatically accelerates the training process.
Because of the parallelism of its work models, TensorFlow is used as a special hardware acceleration library. It employs distinct distribution strategies for GPU and CPU platforms. Based on the modeling rule, users may execute their code on either architecture; the system selects a GPU if no device is specified. This approach minimizes memory allocation to some degree.
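In practice that device selection is visible, and controllable, from a few lines of code. A sketch assuming a machine that may or may not have a GPU attached:

```python
import tensorflow as tf

print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))

# By default TensorFlow places operations on a GPU when one is available,
# but placement can also be pinned explicitly.
with tf.device("/CPU:0"):
    cpu_result = tf.reduce_sum(tf.random.uniform([1000, 1000]))

if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):
        gpu_result = tf.reduce_sum(tf.random.uniform([1000, 1000]))
```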
The true significance of TensorFlow is that it is applicable across sectors. Among its most important uses are:
The TensorFlow framework is most important for two roles: data scientists and software developers.
Data scientists have several options for developing models using TensorFlow. This implies that the appropriate tool is always accessible, allowing for the rapid expression of creative methods and ideas. As one of the most popular libraries for constructing machine learning models, TensorFlow code from earlier researchers is often straightforward to locate when attempting to recreate their work.
Software developers may use TensorFlow on a wide range of standard hardware, operating systems, and platforms. With the introduction of TensorFlow 2.0 in 2019, one may deploy TensorFlow models on a broader range of platforms. The interoperability of TensorFlow-created models makes deployment an easy process.
TensorFlow is consistently ranked among the best Python libraries for machine learning. Individuals, companies, and governments worldwide rely on its capabilities to develop AI innovations. It is one of the foundational tools used for AI experiments before you can take the product to the market, owing to its low dependency and investment footprint. As AI becomes more ubiquitous in consumer and enterprise apps, TensorFlows importance will continue to grow.
AI in the hands of imperfect users | npj Digital Medicine – Nature.com
Posted: at 12:20 am
AI-as-a-service makes artificial intelligence and data analytics more accessible and cost effective – VentureBeat
Posted: at 12:20 am
Artificial intelligence (AI) has made significant progress in the past decade and has been able to solve various problems through extensive research, from self-driving cars to intuitive chatbots like OpenAI's ChatGPT.
AI solutions are becoming a norm for businesses that wish to gain insights from their valuable company data. Enterprises are looking to implement a broad spectrum of AI applications, from text analysis software to more complex predictive analytics tools. But building an in-house AI solution makes sense only for some businesses, as it's a long and complex process.
With emerging data science use cases, organizations now require continuous AI experimentation and need to test machine learning algorithms on several cloud platforms simultaneously. Processing data through such methods requires massive upfront costs, which is why businesses are now turning toward AI-as-a-service (AIaaS): third-party solutions that provide ready-to-use platforms.
AIaaS is becoming an ideal option for anyone who wants access to AI without needing to establish an ultra-expensive infrastructure for themselves. With such a cost-effective solution available to anyone, it's no surprise that AIaaS is starting to become a standard in most industries. An analysis by Research and Markets estimated that the global market for AIaaS is expected to grow by around $11.6 billion by 2024.
AIaaS allows companies to access AI software from a third-party vendor rather than hiring a team of experts to develop it in-house. This allows companies to get the benefits of AI and data analytics with a smaller initial investment, and they can also customize the software to meet their specific needs. AIaaS is similar to other as-a-service offerings like infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS), which are all hosted by third-party vendors.
In addition, AIaaS models encompass disparate technologies, including natural language processing (NLP), computer vision, machine learning and robotics; you can pay for the services you require and upgrade to higher plans as your data and business scale.
AIaaS is an optimal solution for smaller and mid-sized companies to access AI capabilities without building and implementing their own systems from scratch. This allows these companies to focus on their core business and still benefit from AI's value without becoming experts in data and machine learning. Using AIaaS can help companies increase profits while reducing the risk of investment in AI. In the past, companies often had to make significant financial investments in AI in order to see a return on their investment.
Moses Guttmann, CEO and cofounder of ClearML, says that AIaaS allows companies to focus their data science teams on the unique challenges to their product, use case, customers and other essential requirements.
"Essentially, using AIaaS can take away all the off-the-shelf problem-solving AI can help with, allowing the data science teams to concentrate on the unique and custom scenarios and data that can make an impact on the business of the company," Guttmann told VentureBeat.
Guttmann said that the crux of AI services is essentially outsourcing talent, i.e., having an external vendor build the company's internal AI infrastructure and customize it to its needs.
"The problem is always maintenance, where the know-how is still held by the AI service provider and rarely leaks into the company itself," he said. "AIaaS, on the contrary, provides a service platform, with simple APIs and access workflows, that allows companies to quickly adapt off-the-shelf working models and quickly integrate them into the company's business logic and products."
Guttmann says that AIaaS can be great for tech organizations that either have pretrained models or real-time data use cases, enhancing legacy data science architectures.
"I believe that the real value in ML for a company is always a unique combination of its constraints, use case and data, and this is why companies should have some of their data scientists in-house," said Guttmann. "To materialize the potential of those data scientists, a good software infrastructure needs to be put in place, doing the heavy lifting in operations and letting the data science team concentrate on the actual value they bring to the company."
AIaaS is a proven approach that facilitates all aspects of AI innovation. The platform provides an all-in-one solution for modern business requirements, from ideating on how AI can provide value to actual scaled implementation across a business, with tangible outcomes in a matter of weeks.
AIaaS enables a structured, beneficial way of balancing data science, IT and business consulting competencies, as well as balancing the technical delivery with the role of ongoing change management that comes with AI. It also decreases the risk of AI innovation, improving time-to-market, product outcomes and value for the business. At the same time, AIaaS provides organizations with a blueprint for AI going forward, thereby accelerating internal know-how and ability to execute, ensuring an agile delivery framework alignment, and transparency in creating the AI.
"AIaaS platforms can quickly scale up or down as needed to meet changing business needs, providing organizations with the flexibility to adjust their AI capabilities as needed," Yashar Behzadi, CEO and founder of Synthesis AI, told VentureBeat.
Behzadi said AIaaS platforms can integrate with a wide range of other technologies, such as cloud storage and analytics tools, making it easier for organizations to leverage AI in conjunction with other tools and platforms.
"AIaaS platforms often provide organizations with access to the latest and most advanced AI technologies, including machine learning algorithms and tools. This can help organizations build more accurate and effective machine learning models because AIaaS platforms often have access to large amounts of data," said Behzadi. "This can be particularly beneficial for organizations with limited data available for training their models."
AIaaS platforms can process and analyze large volumes of text data, such as customer reviews or social media posts, to help computers and humans communicate more clearly. These platforms can also be used to build chatbots that can handle customer inquiries and requests, providing a convenient way for organizations to interact with customers and improve customer service. Computer vision training is another large use case, as AIaaS platforms can analyze and interpret images and video data, such as facial recognition or object detection; this can be incorporated into various applications, including security and surveillance, marketing and manufacturing.
"Recently, we've seen a boom in the popularity of generative AI, which is another case of AIaaS being used to create content," said Behzadi. "These services can create text or image content at scale with near-zero variable costs. Organizations are still figuring out how to practically use generative AI at scale, but the foundations are there."
Talking about the current challenges of AIaaS, Behzadi explained that company use cases are often nuanced and specialized, and generalized AIaaS systems may need to be revised for unique use cases.
"The inability to fine-tune the models for company-specific data may result in lower-than-expected performance and ROI. However, this also ties into the lack of control organizations that use AIaaS may have over their systems and technologies, which can be a concern," he said.
Behzadi said that while integration can benefit the technology, it can also be complex and time-consuming to integrate AIaaS with an organization's existing systems and processes.
"Additionally, the capabilities and biases inherent in AIaaS systems are unknown and may lead to unexpected outcomes. Lack of visibility into the black box can also lead to ethical concerns of bias and privacy, and organizations do not have the technical insight and visibility to fully understand and characterize performance," said Behzadi.
He suggests that CTOs should first consider the organization's specific business needs and goals and whether an AIaaS solution can help meet them. This may involve assessing the organization's data resources and the potential benefits and costs of incorporating AI into its operations.
"By leveraging AIaaS, a company is not investing in building core capabilities over time. Efficiency and cost-saving in the near term have to be weighed against capability in the long term. Additionally, a CTO should assess the ability of the more generalized AIaaS offering to meet the company's potentially customized needs," he said.
Behzadi says that AIaaS systems are maturing and allowing customers to fine-tune the models with company-specific data, and this expanded capability will enable enterprises to create more targeted models for their specific use cases.
"Providers will likely continue to specialize in various industries and sectors, offering tailored solutions for specific business needs. This may include the development of industry-specific AI tools and technologies," he said. "As foundational NLP and computer vision models continue to evolve rapidly, they will increasingly power the AIaaS offerings. This will lead to faster capability development, lower cost of development, and greater capability."
Likewise, Guttmann predicts that we will see many more NLP-based models with simple APIs that companies can integrate directly into their products.
"I think that, surprisingly enough, a lot of companies will realize they can do more with their current data science teams and leverage AIaaS for the simple tasks. We have witnessed a huge jump in capabilities over the last year, and I think the upcoming year is when companies capitalize on those new offerings," he said.
What We Know So Far About Elon Musk's OpenAI, The Maker Of ChatGPT – AugustMan Thailand
Posted: at 12:20 am
Speak of Elon Musk and in all probability, companies like Twitter, Tesla or SpaceX will come to your mind. But little do people know about Elon Musk's company OpenAI, an artificial intelligence (AI) research and development firm that is behind the disruptive chatbot ChatGPT.
The brainchild of Musk and former Y Combinator president Sam Altman, OpenAI launched ChatGPT in November 2022 and within a week, the application saw a spike of over a million users. Being able to do anything between coding and interacting that mimics human intelligence, ChatGPT has surpassed previous standards of AI capabilities and has introduced a new chapter in AI technologies and machine learning systems.
If you are intrigued by artificial intelligence and take an interest in deep learning and how they can benefit humanity, then you must know about the history of OpenAI and the levels AI development has reached.
Launched in 2015 and headquartered in San Francisco, this altruistic artificial intelligence company was founded by Musk and Altman. It drew collaborations with other Silicon Valley tech experts like Peter Thiel and LinkedIn co-founder Reid Hoffman, who pledged USD 1 billion for OpenAI that year.
To quote an OpenAI blog, "OpenAI is a non-profit artificial intelligence research company." It further said that "OpenAI's mission is to ensure artificial general intelligence benefits all of humanity" in a holistic way, with no hope for profit.
Today, OpenAI LP is governed by the board of the OpenAI non-profit. It comprises OpenAI LP employees Greg Brockman (chairman and president), Ilya Sutskever (chief scientist) and Sam Altman (chief executive officer). It also has non-employees Adam D'Angelo, Reid Hoffman, Will Hurd, Tasha McCauley, Helen Toner and Shivon Zilis on board as investors and Silicon Valley support.
Key strategic investors include Microsoft, Hoffman's charitable foundation and Khosla Ventures.
In 2018, three years after the company came into being, Elon Musk resigned from OpenAI's board to avoid any future conflict of interest as Tesla expanded into the artificial intelligence field. Musk said he would continue to donate to the non-profit and remain a close advisor.
Although OpenAI announced Elon Musk's resignation on grounds of conflict of interest, the current Twitter supremo later said that he quit because he couldn't agree with certain company decisions and that he hadn't been involved with the artificial intelligence firm for over a year.
Plus, Tesla was also looking to hire some of the same employees as OpenAI. "Add that all up & it was just better to part ways on good terms," he tweeted.
However, things did not end there. In 2020, Musk tweeted "OpenAI should be more open imo" in response to an MIT Technology Review investigation that described a deeply secretive culture at the company, at odds with its professed non-profit ideals and transparency.
OpenAI should be more open imo
Elon Musk (@elonmusk) February 17, 2020
Musk has also raised safety concerns. Mentioning Dario Amodei, a former Google engineer who led OpenAI's strategy at the time, he tweeted, "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high."
Over the years, OpenAI has set a high benchmark in the artificial general intelligence segment with innovations and products aimed at mimicking human behaviour and even surpassing human intelligence.
In April 2016, the company announced the launch of the OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms. Wondering what it is?
"Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn how to achieve goals in a complex, uncertain environment," says an OpenAI blog. These environments range from simulated robots to Atari games and algorithmic tasks.
To put it in simple terms, OpenAI Gym gives researchers and research organisations a common set of environments for training agents and comparing their results. The toolkit was initially built to further the company's own deep reinforcement learning research and to provide a standard way of evaluating algorithms.
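For readers curious what "developing and comparing reinforcement learning algorithms" looks like in practice, here is a minimal sketch of the agent-environment loop that Gym standardises, using the classic CartPole environment with a random policy standing in for a learned one. The exact return values of reset() and step() vary slightly between Gym versions, so treat the snippet as an illustration rather than a version-exact recipe.

```python
# Minimal sketch of the OpenAI Gym loop: observe, act, receive a reward.
# Written against the classic Gym API; newer releases return extra values
# from reset() and step().
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # a random policy stands in for a trained agent
    observation, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode finished with total reward {total_reward}")
env.close()
```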
In December 2016, OpenAI announced another product, called Universe. An OpenAI blog describes it as "a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications."
The ambition is for an AI system to be able to complete any task a human being can do using a computer. Universe helps train a single AI agent across many such computer tasks, and when coupled with OpenAI Gym, the agent can use its accumulated experience to adapt to difficult or unseen environments.
Bringing machine learning into the realm of human interaction is a path-breaking step, and OpenAI's chatbot ChatGPT is the disruptive name in this sector. A chatbot is an artificial intelligence-based software application that can hold human-like conversations. ChatGPT was launched on 30 November 2022, and within a week it garnered a whopping million users.
An OpenAI blog post states that the ChatGPT model is trained with a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), which helps it hold a dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.
Musk chimed in to praise the chatbot, tweeting, "ChatGPT is scary good. We are not far from dangerously strong AI." He later took to the microblogging site to say that OpenAI had access to Twitter's database, which it used to train the tool, adding, "OpenAI was started as open-source & non-profit. Neither are still true."
The Generative Pre-trained Transformer (GPT)-3 model has also generated plenty of buzz. It is essentially a language model that leverages deep learning to produce human-like text, and it can turn out stories, poems and even code. It is an upgrade on the previous GPT-2 model, released in 2019, a large transformer-based language model with 1.5 billion parameters trained on a dataset of 8 million web pages. Put simply, language models are statistical tools that predict the next word in a sentence from the words that came before it.
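As a small illustration of "predicting the next word," the snippet below uses the openly available GPT-2 model, the 2019 predecessor mentioned above, via the Hugging Face transformers library to continue a prompt. The model choice and generation settings are illustrative, not something prescribed by OpenAI.

```python
# Minimal sketch of a language model continuing a prompt word by word.
# Uses the freely available GPT-2 model via Hugging Face transformers;
# settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
outputs = generator("Artificial intelligence will", max_length=20, num_return_sequences=1)
print(outputs[0]["generated_text"])
```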
Interestingly, in 2019 OpenAI also went from being a pure non-profit organisation to a "capped-profit" entity. In a blog post, the company said, "We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit, which we are calling a capped-profit company."
Under this structure, investors can earn up to 100 times their principal but no more; any returns beyond that cap flow back to the non-profit's work.
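A back-of-the-envelope sketch of how such a return cap works, with entirely invented figures (the actual terms of OpenAI LP's agreements are not public at this level of detail):

```python
# Hypothetical illustration of a 100x capped return; all figures are invented.
principal = 10_000_000          # an investor's stake
cap_multiple = 100              # the cap described in OpenAI's announcement
gross_proceeds = 1_500_000_000  # imagined proceeds attributable to that stake

investor_payout = min(gross_proceeds, principal * cap_multiple)  # capped at 100x principal
to_nonprofit = gross_proceeds - investor_payout                  # everything above the cap

print(f"Investor receives: ${investor_payout:,}")     # $1,000,000,000 in this example
print(f"Flows to the non-profit: ${to_nonprofit:,}")  # the remaining $500,000,000
```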
Over the years, OpenAI has made itself a pioneering name in developing AI algorithms that can benefit society and, in this regard, it has partnered with other institutions.
In 2019, the company joined hands with Microsoft, which invested USD 1 billion, while the AI firm said it would exclusively license its technology to the tech giant, as per a Business Insider report. The deal gives Microsoft an edge over rival organisations such as Google's DeepMind.
In 2021, OpenAI took a futuristic leap with DALL-E, one of the most capable AI tools for generating striking images. Just a year later, it launched DALL-E 2, which produces images with 4x greater resolution and precision.
DALL-E 2 is an AI system that can create realistic images and art from a description in natural language. It can generate artworks that merge concepts, attributes and styles, extend an existing piece onto a new, larger canvas, make strikingly realistic edits to an existing image, and produce different variations of a given image.
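For those who want to try the system programmatically, the sketch below shows roughly how image generation looked in the openai Python library around the time of writing; the method names and response format depend on the library version, so treat this as an assumption to verify against OpenAI's current documentation rather than a definitive recipe.

```python
# Rough sketch of generating an image with DALL-E via the legacy openai
# Python library (0.x era); method names may differ in newer versions.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder credential

response = openai.Image.create(
    prompt="an astronaut lounging in a tropical resort, digital art",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the generated image
```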
Such intensive AI innovation and long-term research go to show how close machines have come to acquiring human-like attributes. However, some experts see this as one of the biggest existential threats to humanity, and Elon Musk has voiced the same concern.
Although humans are the ones who created it, Stephen Hawking once told the BBC that AI could re-design itself at an ever-increasing rate and supersede humans, who are constrained by slow biological evolution.
There is no denying that artificial intelligence has been taking giant leaps and making its impact felt in almost every field. From churning out daily news stories to creating classical art and holding full-fledged conversations, artificial intelligence has incredible potential, but what the future holds remains to be seen.
(Hero image credit: Possessed Photography/ @possessedphotography/ Unsplash; Feature image credit: Andrea De Santis/ @santesson89/ Unsplash)
Grant Cardone is back on the hook in a class action suit – The Real Deal
Posted: at 12:19 am
A photo illustration of Grant Cardone (Getty, Google Maps, United States District Court for the Central District of California)
Grant Cardone has said, "Most opportunities are disguised as problems." If that's true, he has had his share of opportunities in 2022.
In May, unknown robbers allegedly stole his expensive designer watch in a VIP area at Hard Rock Stadium during the Miami Grand Prix.
A recent Palm Beach Post investigation found that Cardone Capital had overcharged tenants at a workforce-housing apartment complex owned by the Aventura-based firm.
And last week, a federal appeals judge reversed last year's dismissal of a Los Angeles class action lawsuit against Cardone and his company alleging that he misled investors on social media about the profits they could make from his multifamily deals. Cardone Capital owns about $5 billion in apartment rental complexes in South Florida and across the country.
U.S. Appeals Judge Barbara Lynn's decision means plaintiff Luis Pino's complaint can move forward, and other investors can join the lawsuit or file their own claims against Cardone and Cardone Capital. In siding with Pino, Lynn determined that Cardone's social media posts promoting his crowd-funded investments are subject to federal securities regulations that guard against misstatements and omissions.
Cardone did not respond to a request for comment.
To his millions of followers, Cardone flaunts his personal wealth while doling out advice on how they, too, can become rich by putting their savings, 401(k) funds and other investments into apartment buildings and communities, particularly the properties he owns and the ones he is looking to purchase. Cardone also promotes his real estate deals at conferences and forums he hosts around the country.
Investors place their money into real estate funds overseen by Cardone and his firm, which generate fees from the acquisition, management and disposition of the real estate assets.
In 2020, Pino sued Cardone and Cardone Capital, alleging that he violated securities laws through misleading statements about his real estate funds on social media. A year earlier, Pino, who resides in Inglewood, California, had invested $10,000 in two Cardone Capital real estate funds after attending a Cardone summit in Anaheim, the complaint states.
In May of last year, U.S. District Judge John F. Walter ruled in Cardone's favor, concluding that Pino failed to adequately allege that Cardone made material misstatements and omissions. Lynn, the appeals judge, disagreed. She determined that Cardone's Instagram posts and YouTube videos are the types of potentially injurious solicitations that are intended to command attention and persuade potential investors.
"Pino fairly alleges that the nature of social media presents dangers that investors will be persuaded to purchase securities without full and fair information," Lynn wrote.
Dave Has Questions about WTF is Going on With AMC Stock… I … – Barstool Sports
Posted: at 12:19 am
Bah gawd, that's AMC's music.
It felt a lot like AMC was getting on the straight and narrow (as I mentioned in yesterday's newsletter).
The Water Coolest- Mouth breathers with "MOASS" tattoos who consider Adam Aron a father figure aren't going to like AMC's latest move.
The movie theater chain made a... wait for it... sound business decision. You might recall AMC made headlines for some recent head-scratchers, like buying a huge stake in a literal gold mining company and effectively splitting its stock via the issuance of preferred shares called APEs.
But this time around it wasn't something it did. It's what it didn't do. The theater chain decided against buying up some theaters owned by Cineworld. Cineworld recently filed for bankruptcy.
Pumping the brakes would have made sense considering shares had fallen below where they were trading when meme stonk mania kicked off in January 2021.
The Water Coolest- AMC shares closed at $4.89 on Monday. That's below where the stonk was trading in January 2021, meaning the cinema chain has lost all of its meme stonk mania gainz (pours out $17 ICEE in a commemorative Avatar cup).
Andddd it took less than 24 hours for head crayon eater, AMC CEO Adam Aron, to prove me wrong.
"But just when I thought I was out, they pull me back in"
Today AMC did what it does best: act like an absolute wildcard. Which, to be fair, has worked out ok so far.
Bloomberg- AMC Entertainment Inc. sank after proposing to convert preferred equity units into common shares along with a 10-to-1 reverse stock split.
The changes would stop investors from pushing AMC toward penny stock territory, Adam Aron, chief executive officer of the world's largest movie theater chain, said Thursday. The preferred equity units debuted in August and were quickly caught up in volatility linked to retail trading of so-called meme stocks.
AMC also said Thursday that it raised $110 million through the sale of preferred equity units to debt holder Antara Capital LP at a weighted average price of 66 cents each, below market value.
There are really three things happening here.
1) AMC is proposing to convert its "APE" preferred shares to AMC stock. A little history lesson: back in August, AMC offered up a "special dividend" to shareholders. For each AMC share you held, you got an APE preferred share (spoiler: they can be converted back to AMC shares). This effectively acted like a stock split. But it was viewed as shady as fuck by Wall Street. AMC shareholders wouldn't vote to allow the chain to create more AMC stock out of thin air to sell and raise more money (because it would dilute the outstanding shares). So the big brains over in AMC's creative accounting department dreamt up the APE shares, which didn't require a shareholder vote to sell. And sell they did. We learned this week that the company printed $162 million in straight cash homie via its (totally legal) scheme.
2) The company is proposing a 10-to-1 reverse stock split on its AMC shares. For every 10 shares you own, you get 1 shiny new stonk that goes all Grant Cardone (read: worth 10x the original price). The individual share price goes up while the total number of outstanding shares drops, and the value of the company stays the same (there's a quick math sketch just below this list). This will help avoid penny stock territory and could attract some institutional investors, I guess. So, essentially, polishing a fucking turd.
3) AMC also sold a boatload ($110 million worth) of APEs to Antara Capital. Remember, this is EXACTLY why AMC created these preferred shares. The nine-figure payday will help pay down debt and probably finance some other batshit ideas Adam Aron has up his sleeve. One issue, if you're a shareholder, is that the APEs were sold to Antara at a 3% discount. Investors like this sorta thing as much as they like calling their mom's new husband "dad."
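If the reverse split math isn't intuitive, here's a back-of-the-napkin sketch with a made-up position size (the 1,000 shares are hypothetical; the $4.89 price is Monday's close quoted above), showing that the split changes the share count and the price but not the total value:

```python
# Back-of-the-napkin math on a 10-for-1 reverse stock split.
# Position size is invented; the point is that total value doesn't change.
import math

shares_before = 1_000      # hypothetical number of shares held
price_before = 4.89        # Monday's close, per the newsletter above

shares_after = shares_before // 10   # 10 old shares become 1 new share
price_after = price_before * 10      # each new share is worth 10x

value_before = shares_before * price_before
value_after = shares_after * price_after
assert math.isclose(value_before, value_after)  # same total value either way

print(f"Before: {shares_before} shares @ ${price_before:.2f} = ${value_before:,.2f}")
print(f"After:  {shares_after} shares @ ${price_after:.2f} = ${value_after:,.2f}")
```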
All of this had an impact on AMC and APE prices, obviously. And people had questions, understandably so.
At the time of writing, AMC shares were down 12% and APE shares were up 77%. Keep in mind that in theory these shares should be trading around the same level, since the "dividend" was a stock split of sorts. But yesterday APE closed below a dollar and AMC finished above $4. The market currently sees an opportunity: APE shares are likely to be converted into AMC shares, and trading a thing that costs $1 for something that's worth $4 is never not a good deal. Because math.
The other plans AMC has muddy the waters a bit. Selling the APE shares at a discount is a red flag for investors. And looking to avoid penny stock territory by completing a 10-for-1 reverse split doesn't exactly scream blue-chip investment.
Want more business and markets news? I write a daily email newsletter that hits your inbox at 6 AM covering all the finance and markets stories you need to know. Subscribe below
Oh, and if you can't wait for the newsletter I'm tweeting about this stuff in real time (@JPMorinChase).
Sobornost? Or Ego? Two Women and Two Paths – Catholic Exchange
Posted: at 12:18 am
The chronicle of human events often seems to be an unrelenting tale of woe, emphasizing the presence of sin in our world. But God can bring good out of seemingly irremediable circumstances. Furthermore, he often uses pairs of people and their choices to teach us lessons about good and evil, or the right and wrong paths. That divine catechesis begins with Abel and Cain, and continues with Jacob and Esau, Moses and Pharaoh, David and Saul, and Peter and Judas.
We can also find historical examples closer to our era. The 1917 Russian Revolution was unmistakably a curse; paradoxically, it was also a blessing. For the Russians, Ukrainians, Byelorussians, and others who eventually comprised the populace of the Soviet Union, the revolution meant oppression, deportation, collectivization, and a host of other evils. Externally, the world received Russian-Soviet refugees bearing the ills of occultism, atheism, non-Bolshevik Communism, and schism. But the Bolsheviks also forced out thinkers, writers, and religious people who enriched the West. The Russian-Soviet loss was the world's gain.

The bulk of the Russian-Soviet thinking, good or bad, that flowed into the diaspora can be distilled down to spiritual anthropology, or how we understand the human being. Is he the highest form of animal, able to be analyzed by his component parts, instincts, and physical needs? Does she have a soul? How do body and soul interact, and why? Russian-Soviet refugees provided multiple irreconcilable answers.
According to the esoteric thinkers George Gurdjieff (1877-1949) and his disciple P.D. Ouspensky (1878-1947), humans are beings mostly unaware of the ways of higher consciousness and ignorant of large troves of hidden knowledge. The duo attempted, like their counterparts in Theosophy and Anthroposophy, to guide people into those purported higher levels of consciousness. Their school of thought is not compatible with a classical Christian understanding of humans, creatures of body and soul living under the burden of original sin.
Much closer to the traditional view were the works of writer Yevgeny Zamyatin and philosopher Nikolai Berdyaev. Zamyatin (1884-1937) is best known for his novel We, which centers on a nameless character in a future collectivist dystopia. The novel presents a technocratic world with a rigidly rational form of collective life in which food, clothing, sex, work, and time are all tightly regulated. This society exists in a sterile city protected by an encircling wall from nature and the remaining (primitive) humans. The protagonist, D-503, is a gifted engineer and lead builder of a spaceship that will allow this society to spread its truths to other planets. D-503 appears to be a scrupulously logical and mathematical type; however, he also has a poetic and sensuous side.

His world is upset when he meets a bold, mysterious woman, I-330. His deepening involvement with her takes him farther and farther from the truths he thinks he knows, even outside the wall and amongst the primitive people. By saying yes to her, he becomes more fully alive, even to the extent of betraying society's values and breaking its laws. A doctor he sees gives him a startling diagnosis: "Apparently, you have developed a soul" (89). In the meantime he has also said yes to O-90, his state-sanctioned sex partner and the maternal counterpart to I-330. His yes to her results in the conception of their child, even though that act is punishable by her death. In the end he helps O-90 escape outside the wall carrying their unborn child. D-503 cannot live with the chaos of freedom resulting from his yes and submits to a lobotomy-like operation. As the novel ends, he is a more pliant member of society, but we are left with an ambiguous conclusion as the forces of nature and the primitive people seem to be toppling the city society. The key fact revealed is that a person is one who says yes to another: love overcomes ego.
Like Zamyatin, Berdyaev (1874-1948) thought much about the relationship between individuals and society. He critiqued what he called the bourgeois spirit, whether capitalist or Marxist, by which he meant that which focuses on the material and rejects the spiritual. "Freedom is a difficult thing," Berdyaev writes in Slavery and Freedom. "It is easier to remain in slavery" (247). According to Berdyaev: "The real we, that is, the community of people, communion in freedom, in love and mercy, has never been able to enslave man, on the contrary it is the realization of the fullness of the life of personality, its transcension [sic] towards another" (104).
That understanding of human relationship brings us to two Russian women, both émigrés to North America, who proposed divergent approaches to life: Alisa Zinovyevna Rosenbaum (better known as Ayn Rand, 1905-1982) and Catherine de Hueck Doherty (1896-1985). Rand was a major libertarian thinker, novelist, and founder of the philosophy of Objectivism. Doherty was a Catholic activist and mystic who eventually settled in Canada and began the Madonna House apostolate. Rand grew up with religion (Judaism in her case) but became an ardent atheist, while Doherty was raised in a Russian Orthodox family.

Influenced by Zamyatin's We, Rand thought about the person and society under totalitarianism, but came to different conclusions in her early novel Anthem. The world of Anthem is if anything more collectivist and totalitarian than that of We, but also much more primitive and quasi-religious. Rand's protagonist is named Equality 7-2521. Despite his intellectual promise he has been designated a street sweeper, in part because of his rebellious nature. While performing his duties he discovers technology from a previous age, and his experiments lead him to re-discover electric light; he is thus a new Prometheus. He also spots and meets an attractive female, Liberty 5-3000. Accused of being an evildoer, Equality flees into the forest. Eventually Liberty joins him and they begin a new life with new names. Liberty is not his equal; she is drawn to him by his demigod characteristics. Equality says no to society and yes only to himself: "And here, over the portals of my fort, I shall cut in the stone the word which is to be my beacon and my banner. The word which will not die, should we all perish in battle. The word which can never die on this earth, for it is the heart of it and meaning and the glory. The sacred word: EGO" (122-123).
Rand developed her thinking on ego into Objectivism, elaborated in a number of non-fiction books as well as in the well-known novels The Fountainhead and Atlas Shrugged. In Anthem, Rand's Equality 7-2521 hopes to build, and indeed becomes a builder, solely through the force of his own will. Her protagonist is also persecuted by the ruling authorities: reviled (79-80), threatened with burning at the stake (80), and anathematized (82). "What is not done collectively cannot be good," said [one of the ruling Council members] International 1-5537 (81). Rand's views have remained influential, especially in North America, in no small part because of her defiance of all external authority.
Her counterpart Doherty is not as well known to the public at large, but her books are influential nonetheless. Her best-known work is Poustinia, an overarching look at Eastern Christian beliefs and practices for a Western audience. Doherty, unlike Rand, emphasized obedience to constituted authority. According to the author notes for her book Molchanie: "At the beginning of her new life in the West, Catherine accepted the teachings of the Catholic Church, without rejecting the spiritual wealth of her Orthodox heritage" (87). In the same book Doherty supplies the antidote to the alienation inherent in Rand's ego-driven philosophy: "[T]here is only one way to bring people to God, and that is to love each individual personally. It is to love one totally, completely, utterly. ... Yes, love must be communicated person to person, otherwise it will not be effective" (77). In her book on pilgrimage, Strannik, Doherty insists that a prerequisite for pilgrimage is sobornost, which is reminiscent of solidarity in Western Catholic teaching. "Sobornost reunites you to God and man and it is a unity that must not be broken" (47). This unity also requires kenosis, or self-emptying, a scriptural concept much appreciated by Russian theologians.
The whole of Doherty's thought and mission is found in The Little Mandate, which reads in part: "Arise go! Sell all you possess. Give it directly, personally to the poor. Take up My cross (their cross) and follow Me [Christ], going to the poor, being poor, being one with them, one with Me." This is sobornost manifested and incarnate, the antithesis of Rand's praise of ego.
Archbishop Fulton J. Sheen (1895-1979), writing during the Cold War, discerned the societal and spiritual consequences of rising individualism in our culture: "As persons surrender a sense of responsibility to God, to the state, to family and to their vocation in life, they dissolve into atoms; atoms exist only for themselves. To say we live in the atomic age may be a more unfortunate characterization than we know; for if we are nothing but atomic individuals, then we are ready either to be split or fissioned mentally, or else collectivized into a socialistic dictatorship. The latter is nothing but the forcible organization of the chaos created by a conflict of individual egotism" (213-214). Sheen correctly interpreted the signs of the times, foreseeing that a godless society reliant on science for guidance would be a society adrift, prone to either individualism or collectivism, both paths to a soul-crushing and dehumanizing existence.
Image by AwesProduction on Shutterstock
References:
Berdyaev, Nikolai. Slavery and Freedom. Trans. R.M. French. London: Geoffrey Bles, 1944.
Doherty, Catherine. Molchanie: Experiencing the Silence of God. Combermere, ON: Madonna House, 2009.
Doherty, Catherine. Poustinia: Christian Spirituality of the East for Western Man. South Bend, IN: Ave Maria Press, 1981.
Doherty, Catherine. Strannik: The Call to the Pilgrimage of the Heart. Combermere, ON: Madonna House, 1991.
Rand, Ayn. Anthem. New York: Signet, 1946.
Sheen, Fulton J. Guide to Contentment. Canfield, OH: Alba House, 1996.
Zamyatin, Yevgeny. We. Trans. Mirra Ginsburg. New York: Harper Voyager, 2012.
MPL 59th National Senior R3: The Systematic Pawn Structure … – ChessBase India
Posted: at 12:17 am
Three GMs, four IMs and one FM have made a hat-trick start of 3/3. They are GM Sethuraman S P (PSPB), GM Abhijeet Gupta (PSPB), GM Iniyan P (TN), IM Aronyak Ghosh (RSPB), IM Koustav Chatterjee (WB), IM Harshavardhan G B (TN), IM Nitin S (RSPB) and FM Vedant Panesar (MAH). Who will be among the leaders after the fourth round?
IM Nitin S scored a fantastic victory against GM Leon Luke Mendonca | Photo: Aditya Sur Roy
IM Nitin S (2372) traded the queens on the eleventh move in the Caro-Kann against GM Leon Luke Mendonca (2566). He then started fragmenting Black's pawn structure and kept at it.
Position after 27.e5!
Black has six pawns, four pawn islands, two isolated pawns and two isolated doubled pawns - exactly the kind of pawn structure one should not have. White found the perfect 27.e5!, even though 27.Rxf4 was also fine, with the idea of e5 on the next move. What followed was the attraction of the black king toward White's side of the board: 27...fxe5 28.Nxe5+ Ke6 29.Nxc6 Rc8 30.Re1+ Kf6 31.Rxf4+ Kg5 32.Rf2 Kg4 33.Ne5+ Kg3 34.Rf3+ Kg2 35.Re2+ Kg1 36.Rd3, and Black resigned as Rd1# is unstoppable.
Final position after 36.Rd3
Position after 36...Rh8
White's king is much safer than Black's. Keeping that in mind, find out how White could have finished things off here. The position certainly screams that something's gotta give.
Position after 48...Qh6
Sometimes it becomes difficult for a player to accept a draw even in a drawn position because he has already conceded a draw against another, relatively lower-rated player. The reason is quite simple: the current Elo rating system does not favor adults. Thus the desperation to score a win increases, resulting in human errors. 48...Qh6 was uncalled for. Black has zero breakthroughs; his pieces act like furniture, much like White's dark-squared bishop. Just keeping the black queen on the back rank is enough to draw the game. 48...Qh6 invited trouble. White did not notice it at first with 49.Bf1, and then ...Kc8 made it that much more obvious. Find out why Black's last two moves were erroneous.
IM Nitin S (RSPB) - GM Leon Luke Mendonca (Goa): 1-0
IM Vardaan Nagpal (HAR) - GM Karthik Venkataraman (AP): 0.5-0.5
Subhayan Kundu (WB) - GM Mitrabha Guha (WB): 0.5-0.5
GM Deep Sengupta (PSPB) - Utkal Ranjan Sahoo (ODI): 0.5-0.5
IM Mehar Chinna Reddy C H (RSPB) - GM Karthikeyan P (RSPB): 0.5-0.5
GM Neelotpal Das (RSPB) - FM Ritvik Krishnan (MAH): 0.5-0.5
IM Avinash Ramesh (TN) - GM Shyam Sundar M (TN): 0.5-0.5
FM Anees M (TN) - IM Vignesh N R (RSPB): 0.5-0.5
GM Venkatesh M R (PSPB) - CM Aaditya Dhingra (HAR): 0.5-0.5
Shreyansh Daklia (CHT) - IM Neelash Saha (WB): 1-0
IM Srihari L R (TN) - Kartavya Anadkat (GUJ): 0.5-0.5
CM Gaurang Bagwe (MAH) - IM Ameya Audi (Goa): 0.5-0.5
Kishan Gangolli (KAR) - GM Laxman R R (RSPB): 0.5-0.5
GM Deepan Chakkravarthy (RSPB) - Laishram Imocha (PSPB): 0-1
S Badrinath (PUD) - IM Arghyadip Das (RSPB): 0.5-0.5
Rupam Mukherjee (WB) - IM D K Sharma (LIC): 0.5-0.5
A total of 196 players, including 18 GMs and 27 IMs, are taking part in this tournament organized by the Delhi Chess Association. The event is taking place in New Delhi from 22nd December 2022 to 3rd January 2023. The 13-round Swiss league tournament has a time control of 90 minutes for 40 moves, followed by 30 minutes for the rest of the game, with an increment of 30 seconds from move no. 1.
Delhi Chess Association
Tournament Regulations