What is Data Mesh?

The book raised an interesting question: is robotics considered a branch of AI? I'd love to hear your thoughts beyond what the book suggests. What's your perspective? How do you believe AI and robotics can benefit business developers?

Who is the author?

Ulrika Jägare is an M.Sc. Director at Ericsson AB with 22 years of telecommunications experience in various leadership roles, including research & development, product management, services, and sales. For the past 12 years she has focused on AI and data science, working to bridge the gap between technology and business for practical applications. Ulrika established Ericsson's first AI strategy and played a key role in implementing a data-driven approach through global initiatives. She initiated Ericsson's first AI-based commercial offerings and currently leads a global AI and automation initiative in the Internet of Things (IoT) sector. She is passionate about helping other companies leverage data science...

What's The Doability Method?

 

Beyond Algorithms: Delivering AI for Business


Who are the authors?

Dr. James Luke, David Porter, and Dr. Padmanabhan Santhanam share their experiences and journeys in the field of artificial intelligence (AI) and data science.

1. Dr. Luke recalls his lifelong dream of creating intelligent machines, which led him to a career focused on building AI systems that solve real issues. He faced skepticism throughout his career, particularly in the late 1990s when AI was not widely accepted. Despite this, he now sees a surge of interest in AI, which brings both excitement and concern. He wishes to share his insights in this book to help others avoid the pain of project failures, emphasizing the importance of learning effectively rather than through mistakes.

2. David Porter began his education in IT and developed a strong passion for data and analytics, graduating in 1995. He has since worked in data science, holding senior roles at various firms and specializing in counter-fraud systems. His work has involved collaboration with governments worldwide, highlighting the universal nature of financial crime. He co-invented the NetReveal software that helped establish the UK's first Insurance Fraud Bureau and contributed to the design of a significant tax compliance system. His career reflects a love for the challenges posed by criminals, as each new AI advancement inspires him to enhance his strategies. Porter joined IBM in 2016, drawn by the concept of using AI for crime detection, and has utilized Natural Language Processing (NLP) to leverage insights from written data.

3. Dr. Santhanam began his career in software engineering research at IBM after graduate school. His work aimed to improve the quality and productivity of software development through various tools and strategies. He developed a successful automated tool for assessing software project maturity and, for the past 15 years, has focused on leveraging NLP in software projects to extract necessary information automatically. His interests lie in developing trustworthy AI systems and improving traditional software systems. In this book, he aims to provide practical insights into AI Engineering.

The authors of the book Beyond Algorithms: Delivering AI for Business also include Murray Campbell, Laura Chiticariu, Daphne Coates, Richard Hairsine, Roy Hepper, Kai Mumford, and Michael Nicholson, each contributing their expertise to the content of the book.

Why this book? What is this book about?

The book "Beyond Algorithms: Delivering AI for Business" is designed to address the increasing significance of artificial intelligence (AI) in various business contexts.

-It aims to provide a comprehensive understanding of how AI can be effectively integrated into enterprises to achieve practical value.

-The authors emphasize the importance of appreciating both the strengths and limitations of AI, offering a balanced perspective rather than succumbing to the hype often surrounding the technology.

-One of the key motivations for writing this book is the need for a practical guide that helps stakeholders navigate the complexities of AI implementation.

-It endeavors to demystify AI concepts for a broad audience, ensuring that leaders and practitioners can make informed decisions about AI projects.

-The book discusses critical themes such as project management, expectation setting, and the potential for creating new business opportunities through AI.

-Furthermore, it highlights the historical context of AI, including reflections on the early days of AI research and development, while providing insights into contemporary challenges and future possibilities in the field.

-This book addresses the growing interest in artificial intelligence (AI) within the business sector by providing a practical guide on how organizations can effectively leverage AI to achieve business value. It focuses on practical advice for selecting AI projects, managing them efficiently, and understanding the strategic implications of AI in a business context.

-The book contains case studies of actual AI implementations that illustrate what works in practice and what does not. These real-world examples are meant to provide readers with insights into successful AI strategies and applications, enhancing their understanding of the challenges and successes related to AI projects in business.

This positions the book as a valuable resource for anyone engaged in the transition to AI-driven solutions within business environments.

Who is the book's target audience? What is the book's structure?

Target Audience:

-The book aims to assist individuals interested in applying AI across various sectors by bridging the gap between technology capabilities and business needs. It encompasses a broad audience, from technical experts to business stakeholders.

Book Structure:

The book will cover core AI concepts, including practical examples, technical insights, philosophical considerations, and case studies of AI applications. Subsequent chapters will delve into algorithm choices, project selection, measuring business value, and the engineering aspects of AI systems.

How do business professionals and developers benefit from the book?

Business professionals and developers benefit from this book in two ways:

1. The book offers a roadmap for realizing the full potential of AI technologies. It equips them with practical tools and frameworks to evaluate and implement AI projects effectively, guiding them through the complexities and potential pitfalls of AI initiatives.

2. By focusing on the need for a solid strategy and understanding various AI applications, the book helps ensure that stakeholders can make informed decisions.

What are the real business cases covered in the book?

Here are some notable examples highlighted within the book:

1. Customer Support Automation:
A case study discusses the implementation of customer service bots designed to handle simpler informational queries during off-hours. This allowed companies to enhance customer support without needing to match the full functionality of human agents initially. As the performance of the bots improved, businesses could gradually expand their capabilities.

2. AI in Code Breaking:

The historical example of Bletchley Park during World War II illustrates the importance of investing in technological innovation without a clear business case upfront. The British government's commitment to code-breaking work, particularly under the leadership of figures like Alan Turing, exemplifies the transformative potential of trusting in AI-like advancements for crucial decision-making.

3. Predictive Maintenance in Manufacturing:

One case details how manufacturers implemented AI for predictive maintenance, using data analytics to foresee equipment failures before they occurred. This proactive approach reduced downtime and maintenance costs, showcasing the economic efficiency AI can add to traditional manufacturing processes.

4. Healthcare Diagnosis:

AI applications in the healthcare sector are explored through examples of diagnostic systems that assist medical professionals in identifying diseases. These applications underscore the collaborative potential of AI in augmenting human expertise rather than replacing it

5. Supply Chain Optimization:

The book discusses how companies have utilized AI technologies to optimize their supply chain operations. By employing machine learning algorithms to analyze and predict demand, organizations could more efficiently manage inventory and logistics, thereby improving service levels and reducing costs

Moreover, the book covers some notable companies:

1. IBM: The book explores IBM's efforts in AI-driven healthcare solutions, particularly through its Watson platform, which has been used to assist in medical diagnosis and treatment recommendations.

2. Google: Google's utilization of AI for improving search algorithms, language translation, and various machine learning applications illustrates how a tech giant leverages data to enhance services.

3.General Electric (GE): GE is highlighted for its use of predictive maintenance in the industrial sector, where AI is employed to analyze data from manufacturing equipment, ultimately aiming to prevent downtime and reduce maintenance costs.

4. Amazon: Amazon's recommendation systems and supply chain optimizations are explored in the context of AI's role in personalizing customer experiences and ensuring efficient logistics.

5. Microsoft: Microsoft’s initiatives in AI, particularly through Azure's AI services for enterprises, are discussed to emphasize the integration of AI into business practices.


What are the origins and definitions of artificial intelligence (AI) research in light of the book?


The book outlines the origins and definitions of AI. Here's a summary of some key points:

1.The Beginning:

-Alan Turing's pivotal 1950 paper "Computing Machinery and Intelligence" introduced the concept of machines emulating human intelligence, leading to the formulation of the "Turing Test".

-The 1956 Dartmouth Summer Workshop, organized by key figures including John McCarthy and Marvin Minsky, marked the official introduction of the term "Artificial Intelligence" as a field of research.

2.Defining AI:

-John McCarthy described AI as “the science and engineering of making intelligent machines.” He defined "intelligence" as the computational ability to achieve goals.

-Marvin Minsky provided a similar definition, emphasizing tasks that would require intelligence if performed by humans.

-The Encyclopedia Britannica defines AI as the capacity of computers or robots to perform tasks typically requiring intelligent beings.


What’s The AI effect?


The AI Effect:

-This concept refers to the phenomenon where successful AI systems go unnoticed, leaving researchers focused on the more challenging tasks that machines cannot yet perform.

-Larry Tesler’s theorem posits that "Intelligence is whatever machines haven’t done yet," highlighting society's reluctance to acknowledge that machines can perform traditionally human tasks.

What's the current state of AI?

Current State of AI:

1.Digital Infrastructure:

Society has developed an extensive digital framework filled with data, unlike two decades ago when foundational IT infrastructure was still being established.

2. Data Utilization:

The ability to leverage vast amounts of data presents new business opportunities. Many organizations need automation and intelligence to analyze this data effectively.

3.Technological Advancements: Improvements in computing power, memory, and cloud resources have made previously impractical algorithms applicable today, fueling AI innovation.

4.Performance of Machine Learning Algorithms:

There has been considerable progress in machine learning, especially in specific tasks where these algorithms can match or outperform human capabilities.

5.Cautionary Notes:

Risk of Another AI Winter:

Despite optimism, many AI projects still fail beyond the prototyping phase, and the causes of those failures are often misdiagnosed. A common misinterpretation is assuming failures stem from AI's limits rather than from poor project choices or engineering.

6.Call for Collaboration: Successful AI applications require more than just advanced algorithms; they need a cohesive effort from data scientists, engineers, ethicists, and other specialists to build reliable end-to-end solutions.

7.Societal Impact:

Wider Understanding of AI: There is a pressing need for more people in society to comprehend AI's workings and implications, given its pervasive impact on decision-making processes.

What is the author's insight on business applications in light of the book?

-The author discusses the complexities of building applications, with a particular focus on Artificial Intelligence (AI) and its differences from traditional software systems. The discussion begins with a humorous situation in which a team had to deal with a complex requirements document written in Arabic. Due to budget constraints, they used a free online translation tool, and the resulting amusing translations highlighted the challenges of AI.

-The author emphasizes that AI applications are fundamentally different from conventional applications, as the behavior of AI is heavily defined by data rather than just software code. In AI, particularly in Machine Learning, the data is integrated with the code and plays a crucial role in determining functionality.
 

-Extensive data cleansing is necessary for both training and production environments, as maintaining data integrity is integral to operational success.

-A significant point made is the acceptance of mistakes in AI applications. These systems are designed to perform tasks traditionally done by humans, which means they can also make errors—errors that are often perceived more negatively by the public compared to human-made mistakes, as exemplified by the media's reaction to accidents involving driverless cars.

-The ability to generalize is another critical aspect of AI. The author notes that like humans, AI systems should be capable of applying learned knowledge to new and unfamiliar situations. This generalization is a hallmark of intelligence, allowing both humans and AI to operate effectively in varying contexts.

In conclusion, the authors explain that to successfully build AI applications, one must consider the nuances of data, the design of human-like decision-making processes, and the importance of generalization, highlighting that these factors are crucial for understanding and enhancing AI functionality.

What is the difference between AI application and Traditional Application?


AI Applications vs. Traditional Applications:

-AI applications differ significantly from traditional software systems, particularly in how functionality is defined by data rather than software code.

Differences in Building AI Applications

1.Data-Driven Functionality: In AI applications, especially those using Machine Learning (ML), data determines functionality, unlike conventional applications where code defines behavior.

2.Data Cleansing: Effective training data requires extensive cleansing; this must continue consistently in production, often requiring automation to avoid manual interventions.

3.Mistakes in AI: AI systems may make incorrect decisions, leading to challenges in accountability, especially in high-stakes scenarios like self-driving cars.

4.Generalization vs. Memorization: Unlike humans, AI must learn to generalize rather than memorize, or it risks "overfitting" to training data.

5.Need for Experimentation: AI technology evolves, necessitating an adaptable architecture that allows for continuous evaluations and updates.

6.Determining Right Decisions: Evaluating AI decisions can be challenging, particularly in complex cases (e.g., medical treatments).

7.AI Ethics: Ethical considerations must guide AI development, ensuring applications are competent and adhere to ethical standards.

8.AI Accountability: Responsibility for AI failures (e.g., in autonomous vehicles) is complex, involving developers, owners, and systems.

9.Determinism in AI: Unlike conventional software, AI systems might not behave the same way with identical inputs due to various factors, including training data presentation.

10.Impact on the Environment: AI applications can significantly affect their operational environment, leading to risks of unexpected outcomes.
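Point 4 above (generalization vs. memorization) can be made concrete with a toy experiment. The sketch below contrasts a hypothetical "memorizer" model, which stores training pairs verbatim, with a simple threshold rule that generalizes; the task and both models are invented for illustration, not taken from the book.

```python
# Toy illustration of generalization vs. memorization ("overfitting").
# The "memorizer" stores exact (input, label) pairs; the "generalizer"
# learns a single threshold rule that carries over to unseen inputs.

def train_memorizer(data):
    """Memorize every (x, label) pair verbatim."""
    table = dict(data)
    # On unseen inputs the memorizer can only guess a default label.
    return lambda x: table.get(x, 0)

def train_generalizer(data):
    """Learn the threshold that best separates the training labels."""
    xs = sorted(x for x, _ in data)
    best_t, best_acc = xs[0], 0.0
    for t in xs:
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x: int(x >= best_t)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# True rule: label is 1 when x >= 10. Train on even x; test on odd (unseen) x.
train = [(x, int(x >= 10)) for x in range(0, 20, 2)]
test  = [(x, int(x >= 10)) for x in range(1, 20, 2)]

memorizer = train_memorizer(train)
generalizer = train_generalizer(train)

print(accuracy(memorizer, train), accuracy(memorizer, test))      # 1.0 0.5
print(accuracy(generalizer, train), accuracy(generalizer, test))  # 1.0 1.0
```

The memorizer is perfect on its training data yet no better than chance on unseen inputs, which is exactly the overfitting risk the list describes.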

What are the Prominent AI Applications?

Prominent AI Applications of the Last Seven Decades

-Board Games: AI has a rich history in game playing, from early programs like Checkers to advanced systems like AlphaGo, showcasing AI's strategic capabilities.

-Key Projects:

1.Checkers Program (1951-1959): Introduced "Machine Learning." As the author mentions, "Arthur Samuel at IBM used IBM 701 and 704 machines to create a program that played checkers. He introduced the phrase 'Machine Learning' in the literature for the first time."

2.TD-Gammon (1993): Advanced reinforcement learning.

3. Deep Blue (1997): Defeated a world chess champion.

4. AlphaGo (2016): Beat the Go world champion using deep learning techniques.

-Natural Language Understanding (NLU): The evolution of AI in processing and responding to human language.

-Key Projects:

1.ELIZA (1966): The first chatbot for basic human interaction.

2.SHRDLU (1968-1970): Interacted with virtual environments based on human commands.

3.Watson-Jeopardy! (2007-2011): Open-domain question-answering system that beat human champions.

4.Project Debater (2019): Engaged in live debates using extensive knowledge.


What are the three stages of AI application?

The three stages of an enterprise AI application go beyond development alone; they include deployment and an ongoing "sustain" phase to maximize business value. Here they are clearly explained:

1. Development: This stage focuses on creating the AI application, which requires integrating multiple elements such as defining business requirements, acquiring and preparing data, building and training the AI model, and conducting comprehensive testing.

2. Deployment: After development, the application is deployed in a production environment. This phase includes ensuring that the AI model integrates well with other components and systems, and that it functions as intended under real-world conditions.

3. Sustain: Unlike traditional applications, AI applications require continuous monitoring and updating due to their heavy reliance on data. This stage ensures that the model performs well over time, adapting to new data inputs and maintaining accuracy, which is crucial for realizing the intended business benefits. This involves not only maintenance but also enhancements as data and requirements evolve

Important information:

Sustain vs. Maintain: The term 'sustain' is highlighted due to the heavy data dependency of AI applications. Continuous monitoring and updating are essential for optimal performance.

What should we know about Building business applications?

1. AI Model Deployment Challenges: A key challenge highlighted is the potential for machine learning models to misbehave during deployment due to 'Data Skew', where the training data may not match the live data encountered in production. This can arise from inadequate selection of training data or changes in the available features over time, necessitating consistent monitoring and adjustments to the models.

2.Importance of Monitoring: Monitoring AI systems is depicted as essential rather than optional. The text emphasizes that systematic monitoring helps to manage unexpected model behaviors and ensures that the model continues to perform effectively as conditions change.

3.Trustworthiness Testing: The need for independent testing teams to assess AI applications for trustworthiness is discussed. This includes evaluations of fairness, robustness, and transparency. The emphasis is placed on documenting outcomes and identifying shortcomings, marking a shift in how potential defects are recognized in AI systems.

4. Operational Integration: The book outlines three main areas in which the inclusion of ML models complicates traditional operations:
   - The necessity for parallel management of evolving ML models alongside other application components.
   - The difficulty in assessing whether the observed behavior of applications is correct, often due to the lack of a definitive correct answer for comparison.
   - The increased frequency of updates required for ML models compared to static components of the application

5. Algorithmic Insights and Classification: The book includes insights on algorithmic functions, such as the distinction between the types of algorithms for different tasks and the trade-offs between rule-based systems and machine learning approaches. This segment highlights the practical considerations in choosing appropriate algorithms based on specific business needs, including case studies and examples .
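The "Data Skew" risk in point 1 above can be sketched as a concrete monitoring check. The example below compares a feature's distribution in training data against live production data using the Population Stability Index (PSI); the bucket edges, the data, and the 0.2 alert threshold are common rules of thumb chosen for illustration, not prescriptions from the book.

```python
import math

# Minimal sketch of data-skew monitoring: bucket a feature's values and
# compute the Population Stability Index (PSI) between the training
# distribution and what the model actually sees in production.

def psi(expected, actual, edges):
    """PSI over fixed buckets; higher means more distribution drift."""
    def fractions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            i = sum(v > e for e in edges)     # bucket index
            counts[i] += 1
        total = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]
    exp, act = fractions(expected), fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(exp, act))

training   = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]   # e.g. customer age
production = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]   # same distribution
shifted    = [50, 55, 60, 65, 70, 75, 80, 85, 90, 95]   # population drifted

edges = [30, 45, 60]
print(psi(training, production, edges))  # 0.0: no skew
print(psi(training, shifted, edges))     # large: alert and retrain
```

A scheduled job running a check like this is one way to make the "monitoring is essential, not optional" point operational.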


What are the advances in Robotics?

Summary of Advances in Robotics

Origins of the Term "Robot":

-Coined by Czech author Josef Čapek in 1917 as "automat," meaning artificial worker.

-Karel Čapek's play "R.U.R." in 1921 popularized the term, derived from "robota" (forced labor).

-Isaac Asimov's Contributions:

Established three laws for robots:

1.A robot cannot harm a human or allow harm through inaction.

2.A robot must obey human orders unless it conflicts with the first law.

3.A robot must protect its own existence unless it conflicts with the first two laws.

Industrial Application of Robotics:


In 1958, General Motors introduced the first industrial robot, Unimate, revolutionizing automobile production.

Beginning of widespread use of robots across various industries.

Goals of Robotics:


-Eliminate repetitive tasks for humans.

-Enhance human capabilities.

-Operate in hazardous environments unsuitable for human workers.

Complex Functions and Sensors:

Robots require various sensors (visual, infrared) for complex tasks.

Mobility types:

1.Fixed Robots: Operate in a fixed coordinate system for specific tasks.

2.Mobile Robots: Navigate open environments using sensors for location and orientation.

Examples of Robotics Categories and Applications:

Industrial/Fixed:

YASKAWA: Welding, Painting robots.

KAWASAKI: Assembly, Material handling robots.

Service/Fixed:

Intuitive Surgical Inc.: Da Vinci surgical robot for precise movements.

Service/Mobile:

iRobot Corp: Roomba for domestic floor cleaning.

SoftBank Robotics: Pepper, social humanoid robot for basic human interaction.

Service/Mobile (Space):

NASA Mars Pathfinder: Sojourner Rover, advanced sensors, and independent decision-making.

NASA Robonaut: Humanoid robot assisting astronauts.

Service/Mobile (Commercial):

Boston Dynamics: Legged robots for difficult terrains and unstructured environments.

Let’s explore together important terminologies in light of the book:

1.Ubiquity of AI: The term AI is often misused in product marketing; many products claim to use AI but may not genuinely incorporate it.

2.Impact of IoT: The Internet of Things (IoT) is generating vast amounts of data, enabling algorithms to appear intelligent even though they may be very simple.

Examples of Simple AI: Algorithms can predict travel times, remind users of errands, and even assist with daily tasks through basic data tracking.

3.AI Effect: A problem is considered an AI problem only until it is solved; once resolved, it no longer falls under the category of AI.

4.Dominance of Web Companies: Major web companies (Google, Facebook, Amazon) heavily influence public perception of AI through data collection and the development of mobile and web applications.

5.Public Perception: Many successful AI applications are based on simple algorithms and rely on vast amounts of user data collected for free through their services.

6.Emergence of APIs: Basic question-and-answer systems and speech recognition functionalities exemplify how AI is being deployed in everyday applications.

7.Challenges in Enterprise AI: Enterprises face higher complexity compared to web companies, as they deal with specific business problems and must navigate stakeholder approval and regulatory compliance.

8.Complexity Factors: Enterprise AI applications must consider business relevance, stakeholder agreement, application complexity, correctness, consistency, and data governance.

9. Integration Complexity: As enterprise solutions grow in complexity, integrating multiple AI components with interdependencies can result in compounded errors.

Why does the author mention "It’s Not Just the Algorithms, Really!"?

During the development of a Question Answering system, the project team often blamed the algorithm for failures.

The actual issues stemmed from conflicting data in the training set, not from the AI's intelligence.

Data Quality Over Algorithm Quality:

The training data was inconsistent, similar to giving conflicting messages to a child and blaming them for misunderstanding.

A better algorithm was not the solution; cleaning the training data was essential.

Algorithm Addiction:

There's a tendency among team members to overemphasize the importance of algorithms when problems occur in AI systems.

Recognizing the risks of "algorithm addiction" is crucial for successful AI delivery.

Understanding Algorithms:

Algorithms are systematic steps to transform input into output, akin to cooking recipes.

They must have defined inputs and outputs, include clear steps, be effective, and terminate after a finite number of operations.

Types of AI Algorithms:

Key properties of algorithms include definiteness, effectiveness, and finiteness.

AI algorithms adapt, learn from experience, interact with humans, and tolerate errors.

Diverse Applications of AI:

AI can automate tasks, improve efficiency, or perform new tasks.

Categories include Narrow AI (specializing in one data type) and Broad AI (integrating multiple data sources).

Historical Examples of Algorithms:


Babylonian algorithm for calculating diagonals, Euclid’s algorithm for finding GCD, and sorting algorithms (like Bubble Sort).
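Two of the historical algorithms mentioned above can be written out to show the defining properties listed earlier: defined inputs and outputs, clear steps, and termination after a finite number of operations. These are standard textbook implementations, not code from the book.

```python
# Euclid's algorithm and Bubble Sort, illustrating the properties of an
# algorithm: defined inputs/outputs, clear steps, and guaranteed termination.

def euclid_gcd(a, b):
    """Euclid's algorithm: the remainder strictly shrinks, so it terminates."""
    while b != 0:
        a, b = b, a % b
    return a

def bubble_sort(items):
    """Bubble Sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)          # defined input -> defined output (a new list)
    for n in range(len(items) - 1, 0, -1):
        for i in range(n):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

print(euclid_gcd(48, 36))         # 12
print(bubble_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
```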


What's AI Development?
1.Goal of AI Development: Enhance systems with more intelligence for better decision-making and competitive advantage in various operational environments.

2.Terminology Confusion: AI, Machine Learning (ML), and Deep Learning (DL) are often used interchangeably but have distinct meanings.

3.Scope of AI: Encompasses a wide range of applications, including:

Sensory tasks (e.g., speech recognition, visual perception)

Interpretation and decision-making (e.g., medical diagnosis, resource scheduling)

4.Popular AI Applications:

Driverless vehicles

Personal assistants (e.g., Siri, Alexa)

Game-playing machines (e.g., Google’s AlphaZero)

Automated trading tools

Robotics (e.g., product sorting in warehouses)

5.Diversity of AI Technologies:

AI applications can be developed using various underlying technologies and algorithms.

Use of Algorithms in AI:

Single algorithms may perform simple tasks (e.g., detecting tumors in X-rays).

Complex tasks often require multiple algorithms integrated for enhanced intelligence.

Case Study - Medical Diagnosis:

An AI system might utilize:

Neural networks for X-ray classification

Rules-based systems for analyzing patient notes

Conventional mathematical models for additional analytics.

Algorithm Selection:

Critical to choose the appropriate algorithm for each specific task to ensure effective application delivery.


What is the AI discipline? What are the AI Terminologies?

Interdisciplinary Nature: AI encompasses various fields such as cognitive science, neuroscience, mathematics, computer science, engineering, and ethics.

Key Topics and Terminology in AI

-Perception: Involves deriving information from sensory inputs (like text, vision, speech) to build knowledge.

-Knowledge Representation: Concerns the accumulation and storage of semantic knowledge in a structured format (ontologies, graphs) for practical use.

-Learning: Refers to the capability of an AI system to learn from data and human inputs to enhance its knowledge base.

-Reasoning: Utilizes the knowledge base for making practical decisions.

-Problem Solving by Search: Involves using the knowledge base to find answers for specific tasks.

-Common Sense: Incorporates assumptions about the world that are usually evident to humans without explicit training.

-Rule-Based Systems: Creation of systems based on a set of rules and relevant data, often derived from expert knowledge.

-Planning: Involves designing a sequence of actions to reach a goal.

Machine Learning (ML) in AI

Supervised Learning: Techniques like Logistic Regression and Decision Trees where the model is trained on labeled data.

Unsupervised Learning: Techniques such as clustering where patterns are found without labeled outcomes.

Reinforcement Learning: Learning through trial and error to maximize rewards.
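The unsupervised learning entry above can be illustrated with a minimal clustering sketch: 1-D k-means in plain Python, finding group structure in data without any labels. The data and parameters are invented for illustration.

```python
import random

# Minimal sketch of unsupervised learning: 1-D k-means clustering.
# No labels are given; the algorithm discovers the groups itself.

def kmeans_1d(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]   # two obvious groups
print(kmeans_1d(data, k=2))             # centers near 1.0 and 9.0
```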

Artificial Neural Networks (ANNs)

Describes the structure inspired by the human brain, consisting of interconnected neurons.

Deep Neural Networks (DNNs): ANNs with multiple hidden layers, enhancing the capacity for learning complex patterns.
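The interconnected-neuron structure described above can be shown in a few lines: a tiny two-layer network with hand-picked weights that computes XOR, a function no single neuron can represent. The weights and the sigmoid activation are a standard textbook construction, not an example from the book.

```python
import math

# A tiny hand-wired neural network: two hidden neurons feeding one output
# neuron. Adding the hidden layer is what makes XOR representable.

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    h1 = neuron([x1, x2], [10, 10], -5)     # fires if x1 OR x2
    h2 = neuron([x1, x2], [-10, -10], 15)   # fires unless x1 AND x2
    return neuron([h1, h2], [10, 10], -15)  # fires if both hidden fire

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))       # 0, 1, 1, 0 respectively
```

In a real ANN these weights would be learned from data rather than set by hand; the point here is only the layered structure.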


What is the C5 rule?

Summary of the C5 Rule-Based Model

Overview: C5, developed by Ross Quinlan, is a machine learning (ML) system that generates explainable rules from training data.

Decision Tree Construction:

The model generates a decision tree by evaluating parametric tests on input features (e.g., height, mass, type).

Information Gain is calculated for each potential test to determine which test divides the training set into subsets most effectively.

The process is applied recursively to build a complete decision tree.

Rule Generation:

After the decision tree is constructed, C5 converts it into a set of rules.

A pruning algorithm is used to generalize the rules by removing overly specific elements.

Rules are applied sequentially, with the first rule that "fires" being the output decision.
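The Information Gain step above can be sketched directly: it measures how much a candidate test reduces label uncertainty (Shannon entropy). The toy dataset and the "mass > 50" test below are invented for illustration and are not from C5 itself.

```python
import math

# Sketch of the Information Gain computation behind C5-style tree building:
# pick the test that most reduces the entropy of the class labels.

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    total = len(labels)
    probs = [labels.count(c) / total for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(rows, labels, test):
    """Entropy reduction from splitting the rows with a boolean test."""
    yes = [l for r, l in zip(rows, labels) if test(r)]
    no  = [l for r, l in zip(rows, labels) if not test(r)]
    weighted = (len(yes) * entropy(yes) + len(no) * entropy(no)) / len(labels)
    return entropy(labels) - weighted

rows   = [{"mass": 30}, {"mass": 40}, {"mass": 60}, {"mass": 70}]
labels = ["light", "light", "heavy", "heavy"]

# A perfect split removes all uncertainty: gain equals the full 1 bit.
print(information_gain(rows, labels, lambda r: r["mass"] > 50))  # 1.0
```

C5 evaluates many such candidate tests, keeps the best, and recurses on each subset, which is exactly the construction described above.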

Why does understanding the algorithm matter to business professionals?

The significance of understanding AI algorithms was emphasized in a recent client meeting.

A powerful AI system was showcased; however, the client's senior engineer dismissed it in favor of deep learning, based on his own biases.

This dismissal illustrates a prevalent misconception, often exacerbated by media portrayals of AI.

Deep learning (DL) is a method for defining and enhancing deep neural networks (DNNs) rather than being an independent system.

It is inaccurate to assert that machine learning (ML) algorithms cannot generate rules-based systems; for instance, Quinlan’s C5 can produce interpretable rules.

The engineer’s firm belief in his flawed understanding raises concerns, particularly regarding substantial financial implications.

The core issue involves not only a knowledge gap but also a lack of recognition of that gap.



What is "Human Interpretation of Artificial Neural Networks"?

Research Overview:

MIT researchers examined Deep Convolutional Neural Networks (CNNs) to understand their learning processes and mapping to human interpretations.

Intermediate layers in Deep Neural Networks (DNNs) represent a hierarchy of concepts from colors to objects and scenes.

Findings:

DNNs are effective at automatically discovering and defining features related to image and speech data.

The ability to learn features is significantly enabled by large volumes of data.

Factors Influencing DNN Performance:

Complexity of the problem: Simple problems may be learned well, but complex ones (e.g., stock market predictions) may not be realistically modeled without extensive data.

Volume of training data: More data increases the likelihood of accurately learning underlying models.

Efficiency of deep learning algorithms: Algorithms need to efficiently explore parameter combinations to find optimal solutions.

Algorithm Suitability:

No single best algorithm exists; suitability depends on the specific problem.

DNNs excel in data-rich contexts like image/speech recognition; conventional modeling is preferred where domain knowledge is rich.

Transfer Learning:

Useful for reusing knowledge across domains, leveraging pre-trained models to adapt to new tasks.

Example in sentiment analysis where models from one domain improve performance in related domains.
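The transfer-learning idea above can be sketched in miniature: fit a text representation on a data-rich source domain, then reuse it (and the model) on a related, data-poor target domain. This is a hedged illustration only; all texts and labels below are invented, and real transfer learning typically fine-tunes large pre-trained models.

```python
# Sketch of transfer learning: reuse a representation fitted on a
# data-rich source domain (movie reviews) for a related, data-poor
# target domain (product reviews). Example data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

source_texts = ["great film, loved it", "terrible movie, boring plot",
                "wonderful acting and story", "awful script, fell asleep"]
source_labels = [1, 0, 1, 0]  # 1 = positive sentiment

# "Pre-train" the representation and classifier on the source domain.
vectorizer = TfidfVectorizer().fit(source_texts)
clf = LogisticRegression().fit(vectorizer.transform(source_texts), source_labels)

# Reuse the same representation on the target domain, where only a
# handful of examples exist.
target_texts = ["great product, loved it", "awful purchase, boring design"]
preds = clf.predict(vectorizer.transform(target_texts))
```

Because sentiment vocabulary overlaps across the two review domains, the source-trained model can transfer useful signal without any target-domain labels.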

Reinforcement Learning:

Applies when AI must decide sequential actions for optimal outcomes, utilizing historical data for training.

Setting appropriate rewards for AI actions is crucial for effective learning.
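The reward point can be made concrete with a minimal, hypothetical example: tabular Q-learning on a five-state corridor where only the rightmost state pays a reward. Where the reward is placed determines what behavior the agent learns; the environment and parameters below are invented for illustration.

```python
# Minimal tabular Q-learning sketch: an agent learns to walk right
# because the reward is placed at the rightmost state.
import random

random.seed(0)
n_states, actions = 5, [-1, +1]      # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                 # training episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# The greedy policy after training: move right (+1) in every state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)}
```

If the reward were placed badly (say, for merely moving), the same algorithm would learn a useless policy, which is the chapter's point about choosing rewards carefully.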

Comparison with Human Intelligence:

Artificial neurons are simplistic compared to complex human neurons and their interrelationships.

Human brains have evolved architectures supporting learning and processing, unmatched by current AI capabilities.

Conclusion:

DNNs are powerful tools in AI, but their application must be grounded in understanding their limits and the nuances of the problems they aim to solve.

What are the key points for business professionals to remember about AI Algorithm?



Algorithms are crucial for AI delivery but are only one component.

Key points to remember:

1. Algorithms transform inputs to outputs through various types, such as neural networks and rule-based models.

2. Focusing too much on specific algorithms can derail a project.

3. Business applications may require a mix of different algorithms, both AI and non-AI.

4. Recent AI successes are largely due to advancements in Supervised Machine Learning (ML), particularly using Artificial Neural Networks (ANNs) and Deep Neural Networks (DNNs).

5. Effective DNN model development requires large amounts of high-quality labeled training data.

6. ANNs are not the sole form of ML; other algorithms derive rules from data.

7. ANN models are generally opaque, making it difficult to interpret outputs.

8. Unsupervised ML aids in data understanding.

9. Rule-based algorithms are more intuitive and understandable.

10. Domain-specific models offer richer descriptions of behavior compared to generic ANN models.

11. ANNs are especially advantageous when developing detailed discipline-specific models is challenging.

12. Transfer Learning can utilize knowledge from one domain to aid related domains with less data.


How can business professionals select the right AI project?

Let's start with the author's personal story.

Context: The author reflects on a career shift in 1994, leaving a secure job to pursue a fortune in Formula 1.

Proposal to Formula 1 Team: The author approached a top Formula 1 team, suggesting AI-driven ideas to revolutionize motor racing.

Initial Ideas: Two ideas emerged:

Using AI for car setup, a complex problem lacking sufficient data.

Using AI for race strategy, deemed less important but easier to test.

Project Execution:

Worked on the car setup project but faced challenges due to data insufficiency and complexity.

Shifted focus to race strategy, which could be simulated to overcome data limitations.

Eventually, the project was shut down due to the wrong initial problem selection, leading to failure in achieving financial goals.

AI Project Success: Most AI projects in enterprises often end as proofs of concept, failing to transition to production.
In this context: What's the Doability Method?

The Doability Method: Developed at IBM, this method helps prioritize AI projects and manage their execution.

Step 1: Assess candidate business ideas for their suitability for current AI technology.

Step 2: Evaluate suitable ideas based on business value and technical feasibility using five themes: Business Problem, Stakeholders, Trust, Data, and AI Expectation.


Innovation and AI: Innovation involves assessing which emerging technology projects deliver real operational value; many ideas fail to realize benefits in AI due to unpredictability and the need for experimentation.

What's the Portfolio-Based Approach and Doability Method?

Portfolio-Based Approach:

Flexible framework to enhance chances of success in project management.

Typically starts with 10-20 ideas; only 2-3 may be immediately actionable.

Projects may be dropped due to:

Lack of business value.

Technical infeasibility.

Projects can be shelved and revisited when obstacles are resolved.

It is important to keep watch for technological advancements (e.g., new sensors) that may unblock shelved projects.

Recognizing when to stop a project can reveal insights about business opportunities and workforce expertise.

Doability Method Overview:

Step 1: Evaluating AI Application

Assess if AI is the right solution for proposed projects.

Supervised machine learning (ML) is fundamental for enterprise AI.

Key Evaluation Questions:

Can Humans Perform the Task at Some Scope and Speed?

AI best used for automating existing tasks with proven human performance.

Can They Explain the Reasoning?

Existing algorithms or rules assist AI implementation.

Is the Reasoning Behind the Task Practical to Encode?

Some tasks are complex and difficult to encode effectively.

Can the Task Be Broken into Smaller Tasks?

Preferable to simplify complex tasks into manageable sub-tasks for AI.

Can Humans Evaluate the Task if Done by AI?

Essential for validation of AI results.

Is it Feasible to Get Sufficient Labeled Data?

Quality and consistency of labeled data are crucial for supervised ML success.
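One quick, practical probe of label consistency is to have two people label the same items and measure how often they agree; low agreement is a warning sign before any supervised ML begins. A hypothetical sketch, with invented labels:

```python
# Simple inter-annotator agreement check on invented fraud labels.
annotator_a = ["fraud", "ok", "ok", "fraud", "ok", "fraud"]
annotator_b = ["fraud", "ok", "fraud", "fraud", "ok", "ok"]

# Count items where both annotators assigned the same label.
matches = sum(a == b for a, b in zip(annotator_a, annotator_b))
agreement = matches / len(annotator_a)
print(f"raw agreement: {agreement:.2f}")
```

If two domain experts cannot agree on labels, a supervised model trained on those labels cannot be expected to do better.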

Important Considerations:

Continuous evaluation needed for project feasibility regarding AI integration.

The method underscores the necessity of understanding both data management and domain knowledge.

The success of any AI initiative heavily relies on the quality and relevance of data utilized.

What is the Business Value and Impact of AI in light of the book?


The Story of Bletchley Park in WWII:

Bletchley Park was established by the British Government to break German codes.

Initially started with a small group of codebreakers that expanded rapidly.

Bletchley Park achieved significant advancements in codebreaking methods, allowing for the industrial-scale conversion of coded messages into plain text.

The success of Bletchley Park is argued to have shortened WWII by two years and laid foundations for the computer industry.

Alan Turing played a pivotal role at Bletchley Park, known as a founding figure in AI and computing.

Business Models:

The business case for Bletchley Park would have seemed unrealistic: investing in theoretical academics for developing new technologies.

The British Government recognized the importance and potential of the project, investing heavily in code-breaking without a formal business case.

Challenges of AI Applications:

AI's ability to make decisions traditionally reserved for humans raises ethical considerations:

Ethics, explainability, and transparency must be integral to AI projects.

Stakeholder engagement is crucial due to social and moral implications.

AI projects require more experimentation due to uncertainty in outcomes and reliance on data quality.

Building AI Business Cases:

AI projects must justify investment with clear business value.

Business cases may aim for efficiency, enhancements, or new capabilities:

Efficiency: AI automates tasks done by humans, reducing cost and time.

Enhancements: AI improves existing technologies or performs tasks better than humans.

New Capabilities: AI enables tasks previously impossible for humans, such as pattern recognition in large datasets.

Importance of Measurability:

AI success must be measurable, comparing performance before and after AI implementation.

Indirect benefits, such as reputation impact, can also be significant but are harder to quantify.

Stakeholders in AI Projects:

AI projects involve numerous stakeholders, including:

Enterprise staff (engineers, investors) involved in the AI development.

Societal factors (regulators, general public, media).

End consumers who benefit from AI applications.

Engaging directly impacted employees is critical for successful AI integration.

Conclusions:

The successful integration of AI requires careful consideration of ethical issues, social impact, stakeholder engagement, and rigorous definitions of business value.

Organizations need clear strategies for measuring both direct and indirect returns on AI investments.

Ethical practices in AI development are paramount, ensuring technology serves to enhance human skills rather than replace them.


What is Trustworthy AI?

Ethical Considerations in AI:

Technology misuse and societal changes require careful human judgment; AI is not to blame for these issues.

Important ethical topics include:

Fairness

Explainability

Transparency of AI systems.

AI can make mistakes and is vulnerable to adversarial attacks.

Business Value and Ethical Impact:

Enterprises must prioritize delivering trustworthy AI applications for client trust.

Ethical considerations enhance project value and financial return.

AI is expected to perform at higher standards than human judgment, particularly in sensitive areas (e.g., driverless cars).

Society's acceptance of technology changes over time, influencing ethical perceptions.

Stakeholders (customers, suppliers, press, regulators, public) will assess projects.

There is a need for genuine commitment to trustworthiness in AI, not just workaround solutions to ethical concerns.

Fairness and Bias in AI:

Bias in AI originates from human constructs and data representation.

Diversity in development teams is essential to mitigate bias in AI applications.

Notable examples of bias in AI include:

Amazon's biased recruitment AI.

Google ads favoring certain demographics.

Biased outcomes in criminal justice systems.

Historical prejudices can be embedded in AI through biased training data.

Explainability in AI:

Explainability is critical for high-stakes AI decisions (e.g., medical, legal, financial).

Different stakeholders (engineers, decision-makers, regulators, consumers) require tailored explanations.

The 2020 exam results scandal in the UK highlights the risks of uncontextualized AI model application.

What are the Weaknesses of ML Systems?


Despite advancements, ML systems have fundamental weaknesses in critical applications.

The two primary issues are the inevitability of mistakes and susceptibility to adversarial attacks.

Making Mistakes

ML models are built on 'Statistical Learning' and learn from training data.

The mapping from inputs to outputs is not unique and can exhibit scatter.

More complex statistical models require more data but may struggle with generalization on unseen data.

Even confident predictions can fail on specific instances.

It's crucial to quantify uncertainties in model predictions.

In critical systems, even a small percentage of errors can have severe consequences (e.g., healthcare decisions).

Management of tolerable error rates and their consequences is essential in business applications.

Susceptibility to Adversarial Attacks

Deep Neural Networks are prone to adversarial attacks that can mislead outcomes.

Attacks depend on how much access adversaries have to the model and training data.

If adversaries possess training data, they can exploit it to design effective attacks.

Common attack type: "Evasion Attack," wherein slight modifications to input can drastically change output.

Examples include misclassifying stop signs or images with imperceptible noise.

These problems extend beyond visual data to other formats like text.

Toolkits are available to detect and mitigate such adversarial attacks.

Developers of AI applications must consider potential attacks and their implications.
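The evasion-attack principle can be illustrated with a hedged sketch on a simple linear classifier: nudge an input against the model's weight vector until the predicted class flips. This is the same idea behind imperceptible-noise image attacks, reduced to two dimensions; the data below is synthetic.

```python
# Sketch of an evasion attack: perturb an input along the direction that
# most decreases the model's decision function until the label flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)

x = np.array([[2.0, 2.0]])            # clearly class 1
w = clf.coef_[0]                      # weight vector of the linear model
eps = 0.5

# Step against the weight vector; each step lowers the decision score,
# so the loop is guaranteed to terminate with a flipped prediction.
x_adv = x.copy()
while clf.predict(x_adv)[0] == 1:
    x_adv = x_adv - eps * w / np.linalg.norm(w)

print(clf.predict(x)[0], clf.predict(x_adv)[0])   # original vs attacked label
```

Real attacks on deep networks use the same gradient-following logic in far higher dimensions, which is why tiny per-pixel changes can flip an image classifier's output.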


How Do You Know That the AI Model Is Working?

1. Accuracy Perception:

Achieving high accuracy (e.g., 95%) is often considered excellent, as in school grades.

Real-world context can alter the perception of accuracy (e.g., a pilot claiming to crash only once in every twenty flights would not inspire confidence).

Challenges in AI Accuracy:

AI needs to deliver performance comparable to human decision-making, not just meet specifications.

2. Traditional Software Quality Management:

Quality is managed through explicit specifications, defect management, and various testing levels (unit, functional, etc.).

Incident reports help diagnose and fix bugs post-deployment.

3. Assessing AI Performance:

Questions to assess AI effectiveness include:

Is it trained for the right situation?

Was it properly trained with appropriate data?

Are there governance safeguards in place?

Complex Behavior in AI Testing:

AI systems exhibit complex behavior that can't always be predicted, especially with machine learning (ML).

Errors in outputs are inevitable due to statistical nature of ML, leading to the need for careful management of data and models.

4. Measurement of AI Performance:

Importance of understanding various accuracy metrics related to two prediction types: numerical and classification.

AI predictions often require multiple metrics beyond simple accuracy measurements.

5. Precision and Recall:

Evaluation of AI systems in sensitive applications (e.g., medical, security) requires considering True Positives, False Positives, True Negatives, and False Negatives to assess quality.

6. Cost of Errors:

Costs related to false positives and negatives vary greatly by application, impacting how systems are tuned.

7. Complex Classification Decisions:

Introducing scenarios with multiple classification outcomes increases the complexity of quality assessments (e.g., PASS, FIX, FAIL).

Natural Language Understanding (NLU):

Quality metrics in NLU applications need to reflect the value of different types of entities being extracted.
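The precision and recall measures named in point 5 can be worked through with invented confusion-matrix counts for a hypothetical screening system (e.g., flagging suspect cases):

```python
# Worked example: accuracy alone hides the trade-off that precision
# and recall expose. Counts below are invented for illustration.
tp, fp, tn, fn = 80, 20, 890, 10     # true/false positives and negatives

accuracy = (tp + tn) / (tp + fp + tn + fn)
precision = tp / (tp + fp)           # of everything flagged, how much was right
recall = tp / (tp + fn)              # of everything that mattered, how much was found

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```

Here accuracy is 0.97, yet one flagged case in five is a false alarm (precision 0.80), which is why a single "accuracy" number is rarely enough in sensitive applications.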


What is the significance of the data workflow?

Importance of Data Workflow:

Essential for managing AI applications throughout their operational life.

Impacts model performance and maintenance costs more than the modeling step itself.

Requires optimization and cross-validation alongside data modeling.

Data Workflow Activities:

Requirements (Define): Start with clear business objectives to determine data needs.

Acquisition (Get): Identify sources of data; consider data wrangling and ownership.

Ingestion (Load): Load data into infrastructure, ensuring formats and volumes are appropriate.

Preparation (Clean-Up): Decide which data to use; requires understanding of the AI application.

Merging (Combine): Address problems with data merging and ensure key identifiers link datasets.

Augmentation (Enhance): Enhance data as needed for AI algorithms, especially for labeling.

Feature Engineering (Transform): Identify critical features for the model while managing dimensionality.

Modeling (Use): Create the best model using various approaches and validate with quality data.

Testing (Check): Check that the model generalizes well with unseen data.

DevOps (Deploy): Ensure version tracking of models and data for effective governance.

Operations (Evolve): Monitor AI model behavior and adapt to evolving data distributions.
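Several of the steps above (Get, Transform, Use, Check) can be compressed into a small scikit-learn sketch; this is an illustration only, and a real workflow adds ingestion, merging, governance, and monitoring around these steps.

```python
# Compressed sketch of get -> transform -> use -> check on a bundled dataset.
from sklearn.datasets import load_breast_cancer          # Acquisition (Get)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler         # Feature Eng. (Transform)
from sklearn.linear_model import LogisticRegression      # Modeling (Use)

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)                 # hold out unseen data

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

score = model.score(X_test, y_test)                      # Testing (Check)
print(f"hold-out accuracy: {score:.2f}")
```

Note that the modeling call is one line; almost everything else in the sketch, and far more in production, is data handling, which is the chapter's central claim.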

Data Governance:

Critical for managing overall data workflow; covers business and technical aspects.

Ensures data quality, addresses biases, and maintains traceability.

Improving Data Quality:

Measure data to identify and address quality issues.

Utilize tools for data cleansing and fix issues at source to enhance ongoing data quality.



What is the AI Factory case study?



Problem: Scaling issues in building multiple AI applications with a small team; lack of resources and cumbersome evaluation processes.

Solution: Introduction of the "AI Factory".

A small, dedicated team was formed, including a lead architect and three developers.

Infrastructure Development:

Established the "AI Factory Floor" for easy engagement.

Created a repository for AI tools, services, models, and data as a starting point.

Configured cloud servers using Red Hat's multi-cloud technology to reduce startup efforts.

Tool Development:

Developed various tools to enhance efficiency, such as:

Tools for comparing output with truth data.

Tools for file ingestion and segmentation.

Standards Definition:

Established standards to integrate multiple technologies and partners.

Efficiency Goals:

Aim to drastically reduce engagement time from months to days for AI application requests.

Comparison with Cloud Companies:

The AI Factory is non-proprietary, utilizing services from multiple cloud (and other) suppliers.

Created a powerful environment without large investments.

Requirements for an AI Factory:

A specialized team for building and operating AI services.

Provisioned infrastructure for rapid configuration and evaluation.

A library of reusable assets.

Tools for quick evaluation and development.

DevOps processes for operational deployment.

Benefits of AI Factory:

Significantly improves the effectiveness of evaluating and delivering AI applications.

Enables the reuse of existing capabilities, reducing the need to start from scratch for new ideas.

Suitable for enterprises with extensive analytics needs; smaller enterprises may benefit from consulting services.

This overview highlights the challenges faced, the strategic solutions implemented, and the advantages of establishing an AI Factory for efficient and rapid AI application development.


What do the authors want us to know before starting an AI project?

Getting Your Priorities Right

Workshop Experience:

Conducted a workshop for a major airline, aiming to lighten the mood.

Teased pilots, suggesting they are boring and focused on routine, which led to an engaging interaction with a pilot who pointed out their systematic approach.

Introduction to the Doability Method:

Introduces Doability Method Step 2, also known as the Doability Matrix.

Aims to assist in assessing AI projects for value and feasibility, focusing on preventing project failures.

Key Components of the Chapter:

AI Project Assessment Checklist:

Comprises 21 questions: 10 related to Value and 11 related to Doability.

Helps identify potential pitfalls in AI projects.

Doability Matrix:

Visual tool to evaluate responses from the checklist, crossing Value and Doability.

Identifies project viability based on count of 'Yes' answers.

A single ‘No’ response can significantly undermine the project's chances of success.

Types of Projects:

Sweet Spot: High Value and high Doability; ideal for scaling existing human tasks.

Deceptively Seductive: Risky projects with questionable value due to unrealistic expectations about AI capabilities.

Ambitious Initiatives: High-value projects that are difficult to execute, often requiring significant investment and resources.
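The matrix scoring just described can be sketched as a small calculation: count 'Yes' answers on the two axes and map the pair to a quadrant. The cutoff values and answers below are invented for illustration; the book's actual thresholds and question texts may differ.

```python
# Hypothetical Doability Matrix scoring: 10 Value and 11 Doability questions.
value_answers = ["yes"] * 9 + ["no"]          # 10 Value questions (invented)
doability_answers = ["yes"] * 11              # 11 Doability questions (invented)

value_score = sum(a == "yes" for a in value_answers)
doability_score = sum(a == "yes" for a in doability_answers)
# Every 'No' is a risk that needs explicit mitigation before proceeding.
risks = (len(value_answers) - value_score) + (len(doability_answers) - doability_score)

# Illustrative cutoffs only: high on both axes lands in the Sweet Spot.
quadrant = ("Sweet Spot" if value_score >= 8 and doability_score >= 9
            else "Needs work")
print(f"value={value_score}/10 doability={doability_score}/11 "
      f"risks={risks} -> {quadrant}")
```

The point of the matrix is not the arithmetic but the discipline: a project scoring high on one axis and low on the other is exactly the "deceptively seductive" or "ambitious" case the chapter warns about.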

Checklist Highlights:

Emphasizes understanding business problems, stakeholder alignment, moral acceptability, legal considerations, and data access.

Questions address operational effectiveness, stakeholder roles, bias in data, and the technical challenges of AI implementation.

Conclusion:

Continuous assessment using the checklist and matrix ensures that AI projects are aligned with business needs and capabilities, helping to avoid common pitfalls and ensuring stakeholder buy-in.

What are the AI project Phases?

AI Project Phases

1. Proof of Concept (PoC): Demonstrates the feasibility of applying AI to specific tasks.

2. Prototyping: Tests integration and user-interface requirements.

3. Implementation: Builds and deploys AI applications in real-world conditions.

4. Monitoring: Observes application performance for any deviations requiring intervention.

What are the business requirements in the AI application lifecycle?

Summary of Business Requirements in AI Application Lifecycle

Critical Activity: Aligns application needs with AI capabilities, reflecting business goals, use cases, and performance expectations.

Requirements Inclusion: Should encompass multi-language support, accuracy, runtime performance, robustness, security, ethics, and system transparency.

Uncertainty Management: In scenarios with uncertain requirements (e.g., open world applications), define and characterize essential system properties (safety, security) using a three-phase framework:

Identify: Define uncertainties.

Assess: Evaluate uncertainty relevance to critical properties.

Address: Options to tolerate or reduce uncertainty.

Functional Decomposition

Task Evaluation: Determine suitable tasks for ML based on consequences and confidence levels.

Data Quality: Ensure the availability of adequate quality and quantity of data before starting ML model building.

Application Design

Component Consideration: Address all necessary components beyond ML, factoring in application complexity and architectural requirements.

User Interaction

Collaboration Support: Enhance user experiences through various interaction modes, like speech recognition. Ensure robust error handling and consider additional AI components for user interaction.

Non-AI Implementation & Testing

Infrastructure Needs: Recognize the significance of non-AI components (e.g., data pipelines, user interfaces) for AI applications integration.

Application DevOps

Iterative Development: Track versions of AI models alongside non-AI components, ensuring proper alignment throughout iterations.

Application Integration & Deployment

Implementation Activities: Integrate AI components and other application elements, potentially utilizing cloud services.

AI Application Testing

Black Box Testing: Requires good understanding of business data needs. Involves evaluating model performance using separate hold-out data sets.

Inevitability of Errors: Acknowledge that errors stem from statistical learning and the complexities of model behavior.

Monitoring AI Systems

Mandatory Activity: Critical for ensuring model performance during deployment, addressing potential data skew, effects of changes in context, social behavior, and adversarial attacks.

Continuous Learning

Practical Considerations: Emphasize the challenges of continuous learning in AI applications, including model verification, testing, and the risk of learning biases.

Sustenance

Ongoing Maintenance: Continuous monitoring and assessment required from deployment to application withdrawal.

Project Management

Dual Lifecycles: Manage both application lifecycle and AI model lifecycle effectively.

Data Governance: Attention to data quality, privacy, and compliance issues is essential.

Auditability & Explainability

Critical Aspects: Ensure retrospective reviews of decisions made by AI systems; maintain thorough logs for accountability.

Security Concerns

Emerging Risks: Address security challenges related to the sensitivity of AI training data and potential adversarial influences.

Conclusion

Uniqueness of AI Applications: Recognize the distinct challenges posed by ML components compared to traditional applications and the necessity for structured monitoring and management throughout the lifecycle.

How do the authors envision the future of AI?

Challenge of Future Predictions: Predicting the future, particularly regarding AI, is complex and fraught with uncertainty, as noted by Niels Bohr.

Diverse Perspectives: Various experts, including futurologists and prominent thinkers like Kurzweil, Hawking, and Musk, have made bold predictions about AI, notably concerning the Singularity.

Singularity Explained: The Singularity concept suggests a future point when AI can autonomously improve itself, leading to an intelligence surpassing human comprehension.

Human-AI Relationship: The essential question is: What will happen to humanity when AI reaches such a level of autonomy?

Current AI State: Presently, AI lacks true intelligence, functioning instead as a product of programmed capabilities; it can perform tasks effectively but does not possess self-awareness or consciousness.

AI in Business: The book emphasizes practical applications of AI in resolving real-world business problems rather than hypothetical scenarios.

Singularity Timeline Predictions: Historical predictions, like Turing's and Minsky's, have proven overly optimistic, with enhanced capabilities like reasoning still far from realization.

Data's Role: The importance of data in AI applications is highlighted; the future of AI research will likely focus on improving data management and application.

Big Data and AI Synergy: Combining big data analytics with AI offers significant advantages but also presents challenges around data management and processing complexity.

Challenges in Data Acquisition: Many enterprises struggle to acquire necessary data, often relying on external sources or facing privacy constraints.

Emerging Data Solutions:

Synthetic Data: Using anonymized or synthetic data to mitigate privacy concerns.

Federated Learning: Allowing model training without centralizing data, enhancing privacy and efficiency.

Advancements in Computing: The need for efficient computing to support increasingly complex AI workloads is critical as new algorithms demand more resources.

Algorithm Developments: There is a rapid growth in new algorithms targeting challenges in data management and ML effectiveness.

Zero-Shot and Few-Shot Learning: These approaches allow systems to learn with minimal training data, making AI applications more adaptable and efficient.

Neuro-Symbolic AI: Aiming to combine neural network learning with symbolic reasoning for enhanced comprehension and decision-making capabilities.

Future Outlook: Rapid advancements in AI technologies continue to evolve, potentially leading to transformative changes in various industries and everyday life.


Emergence of AI Engineering:

AI applications differ from conventional software applications.

Delivering real AI applications requires more than just understanding algorithms.

Key differences include the importance of training data, unpredictable behavior in unseen situations, and the necessity for trustworthy AI.

AI Lifecycle and Management:

AI lifecycle includes traditional software lifecycles along with model management during deployment.

It is essential to detect drifts in model performance and to have infrastructure for data, models, and code persistence.

AI Engineering Discipline:

The field of AI Engineering is emerging, focusing on designing and building trustworthy AI systems.

There is a push for standard practices in AI development, monitoring, and governance.

Human-Machine Interaction:

The evolution of human-machine interaction has progressed from command line interfaces to sophisticated multimodal interfaces.

There is a cultural shift from transactional to relational interactions with computers.

Augmented Intelligence:

AI is expected to work alongside humans to enhance decision-making and collaborative efforts.

Current applications show AI performing routine tasks, allowing humans to focus on more complex decisions.

Trust and Risk:

Building trustworthy AI is crucial, especially when decisions have serious consequences.

Transparency in AI decision-making processes is necessary to mitigate risks and biases.

Impact on Disabilities:

AI can significantly improve assistive technologies for people with disabilities.

However, careful consideration is required to avoid replicating existing biases.

Final Thoughts:

Emphasizes the importance of data in AI projects and ongoing advancements in algorithms and computing.

Encourages the maturation of AI Engineering to support trustworthy systems and effective human-machine collaboration.


Many Worlds Interpretation: Every decision creates alternate realities; each represents different outcomes.

CEO's Focus: Developing a corporate strategy for exploiting AI in response to media interest and competitor claims.

Team Composition: Assembling a knowledgeable team that understands AI's core capabilities and limitations without needing deep expertise.

Business Process Integration: Exploring AI applications within existing business processes, balancing innovation and feasibility.

Data Importance: Emphasizing the need for quality data and future availability to support AI initiatives.

Structured Approach: Aiming for a proper infrastructure and development process, contrasting competitors’ superficial efforts.

Skill Development: Assembling a skilled team including technical engineers and business specialists to create a reliable AI strategy and leadership position in the field.


Please share your thoughts: What's your next big AI project?