© 2019 - 2023 Copyright : helpdice.com



## Reinforcement Learning: What It Is, Algorithms, Types & Examples

## What is Reinforcement Learning?

Reinforcement Learning is defined as a Machine Learning method concerned with how software agents should take actions in an environment. Reinforcement Learning helps an agent maximize some portion of the cumulative reward.

This learning method helps an agent attain a complex objective or maximize a specific dimension over many steps.

In this Reinforcement Learning tutorial, you will learn:

- How reinforcement learning works
- Reinforcement learning algorithms
- Characteristics of reinforcement learning
- Types of reinforcement learning
- Learning models of reinforcement
- Reinforcement learning vs. supervised learning
- Applications of reinforcement learning
- Why use reinforcement learning
- When not to use reinforcement learning
- Challenges of reinforcement learning

## Important Components of the Reinforcement Learning Method

Here are some important terms used in Reinforcement Learning:

- Agent: An entity that performs actions in an environment to gain some reward.
- Environment (e): The scenario that an agent has to face.
- Reward (R): An immediate return given to an agent when it performs a specific action or task.
- State (s): The current situation returned by the environment.
- Policy (π): The strategy the agent applies to decide the next action based on the current state.
- Value (V): The expected long-term return with discounting, as opposed to the short-term reward.
- Value Function: Specifies the value of a state, i.e. the total amount of reward an agent can expect to accumulate starting from that state.
- Model of the environment: Mimics the behavior of the environment. It lets the agent make inferences about how the environment will behave.
- Model-based methods: Methods that solve reinforcement learning problems using a model of the environment.
- Q value or action value (Q): The Q value is quite similar to the value V. The only difference between the two is that it takes the current action as an additional parameter.
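The distinction above between the immediate reward R and the long-term value V can be made concrete with a few lines of Python. This is a minimal sketch; the reward sequence and discount factor are illustrative, not taken from the article:

```python
# Value vs. reward: the value of a state depends on the discounted sum
# of future rewards, not just the immediate return R.

def discounted_return(rewards, gamma=0.9):
    """Return the sum of gamma^t * r_t over a trajectory of rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A trajectory of three rewards of 1: 1 + 0.9 + 0.81 ≈ 2.71
print(round(discounted_return([1, 1, 1]), 2))  # 2.71
```

With gamma close to 0 the agent values only immediate reward; with gamma close to 1 it weighs distant rewards almost as heavily as the next one.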

Let’s look at a simple example that illustrates the reinforcement learning mechanism.

Consider the scenario of teaching new tricks to your cat.

- As the cat doesn’t understand English or any other human language, we can’t tell her directly what to do. Instead, we follow a different strategy.
- We emulate a situation, and the cat tries to respond in many different ways. If the cat’s response is the desired one, we give her fish.
- Now whenever the cat is exposed to the same situation, she performs a similar action even more enthusiastically, expecting more reward (food).
- That’s how the cat learns “what to do” from positive experiences.
- At the same time, the cat also learns what not to do from negative experiences.

## Example of Reinforcement Learning

In this case,

- Your cat is an agent that is exposed to the environment. In this case, it is your house. An example of a state could be your cat sitting, and you using a specific word to make the cat walk.
- Our agent reacts by performing an action that transitions it from one “state” to another “state.”
- For example, your cat goes from sitting to walking.
- The reaction of an agent is an action, and the policy is a method of selecting an action given a state in expectation of better outcomes.
- After the transition, the agent may get a reward or penalty in return.

There are three approaches to implement a Reinforcement Learning algorithm.

## Value-Based:

In a value-based Reinforcement Learning method, you try to maximize a value function V(s). In this method, the agent expects a long-term return of the current state under policy π.

## Policy-based:

In a policy-based RL method, you try to come up with such a policy that the action performed in every state helps you to gain maximum reward in the future.

Two types of policy-based methods are:

- Deterministic: For any state, the same action is produced by the policy π.
- Stochastic: Every action has a certain probability, determined by the policy: π(a|s) = P[Aₜ = a | Sₜ = s]
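A minimal sketch of the two policy types in Python. The state and action names here are made up for illustration:

```python
import random

def deterministic_policy(state):
    # pi(s): the same state always maps to the same action.
    table = {"sitting": "walk", "walking": "sit"}
    return table[state]

def stochastic_policy(state):
    # pi(a|s) = P[A_t = a | S_t = s]: sample an action from a
    # probability distribution conditioned on the state.
    dist = {"sitting": {"walk": 0.8, "stay": 0.2}}
    actions = list(dist[state])
    weights = list(dist[state].values())
    return random.choices(actions, weights=weights)[0]

print(deterministic_policy("sitting"))  # walk
```

Calling `stochastic_policy("sitting")` returns "walk" about 80% of the time and "stay" about 20% of the time.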

## Model-Based:

In this Reinforcement Learning method, you need to create a virtual model for each environment. The agent learns to perform in that specific environment.

Here are important characteristics of reinforcement learning:

- There is no supervisor, only a real number or reward signal
- Sequential decision making
- Time plays a crucial role in Reinforcement problems
- Feedback is always delayed, not instantaneous
- Agent’s actions determine the subsequent data it receives

Two types of reinforcement are:

## Positive Reinforcement

Positive Reinforcement is defined as an event that occurs because of specific behavior. It increases the strength and frequency of the behavior and has a positive impact on the actions taken by the agent.

This type of Reinforcement helps you maximize performance and sustain change for a more extended period. However, too much Reinforcement may lead to over-optimization of state, which can affect the results.

## Negative Reinforcement

Negative Reinforcement is defined as the strengthening of a behavior that occurs because a negative condition is stopped or avoided. It helps you define the minimum standard of performance. However, the drawback of this method is that it provides only enough to meet the minimum behavior.

There are two important learning models in reinforcement learning:

## Markov Decision Process

The following parameters are used to get a solution:

- Set of actions: A
- Set of states: S

The mathematical approach for mapping a solution in Reinforcement Learning is known as a Markov Decision Process (MDP).

## Q-Learning

Q-learning is a value-based method of supplying information to inform which action an agent should take.

Let’s understand this method by the following example:

- There are five rooms in a building, connected by doors.
- Each room is numbered 0 to 4.
- The outside of the building is one big area, numbered 5.
- Rooms 1 and 4 have doors leading into the building from area 5.

Next, you need to associate a reward value to each door:

- Doors that lead directly to the goal have a reward of 100.
- Doors not directly connected to the target room give zero reward.
- Because doors are two-way, two arrows are assigned for each room.
- Every arrow carries an instant reward value.

Explanation:

In this example, each room represents a state.

The agent’s movement from one room to another represents an action.

A state can be described as a node, while the arrows show the actions.

For example, an agent traverses from room number 2 to 5:

- Initial state = state 2
- State 2 → state 3
- State 3 → states (2, 1, 4)
- State 4 → states (0, 5, 3)
- State 1 → states (5, 3)
- State 0 → state 4
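The room-navigation example above maps directly onto tabular Q-learning. The sketch below assumes the classic adjacency for this problem (rooms 0–4 plus the outside area 5) and a discount factor of 0.8; both are illustrative choices, not stated in the article:

```python
import random

# Doors between rooms; moving into the goal state 5 earns reward 100.
edges = {0: [4], 1: [3, 5], 2: [3], 3: [1, 2, 4],
         4: [0, 3, 5], 5: [1, 4, 5]}
GOAL, GAMMA = 5, 0.8
Q = {(s, a): 0.0 for s in edges for a in edges[s]}

random.seed(0)
for _ in range(2000):                 # training episodes
    s = random.randrange(6)           # start in a random room
    while s != GOAL:
        a = random.choice(edges[s])   # explore: pick a random door
        r = 100 if a == GOAL else 0
        # Q-learning update (learning rate 1 is fine for this
        # deterministic world): Q(s,a) = r + gamma * max_a' Q(s',a')
        Q[(s, a)] = r + GAMMA * max(Q[(a, a2)] for a2 in edges[a])
        s = a

# Follow the learned greedy policy from room 2 to the outside.
s, path = 2, [2]
while s != GOAL:
    s = max(edges[s], key=lambda a: Q[(s, a)])
    path.append(s)
print(path)  # a shortest route out, e.g. [2, 3, 1, 5]
```

After training, Q values such as Q(1, 5) = 100 and Q(3, 1) = 80 reproduce the reward propagation the arrows in the original example describe.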

Here are applications of Reinforcement Learning:

- Robotics for industrial automation.
- Business strategy planning
- Machine learning and data processing
- It helps you to create training systems that provide custom instruction and materials according to the requirement of students.
- Aircraft control and robot motion control

Here are prime reasons for using Reinforcement Learning:

- It helps you find which situations need an action.
- It helps you discover which action yields the highest reward over a longer period.
- Reinforcement Learning also provides the learning agent with a reward function.
- It also allows the agent to figure out the best method for obtaining large rewards.

You can’t apply a reinforcement learning model in every situation. Here are some conditions when you should not use a reinforcement learning model:

- When you have enough data to solve the problem with a supervised learning method
- When compute is limited: Reinforcement Learning is computing-heavy and time-consuming, in particular when the action space is large

Here are the major challenges you will face while doing Reinforcement Learning:

- Feature/reward design can be very involved.
- Parameters may affect the speed of learning.
- Realistic environments can have partial observability.
- Too much Reinforcement may lead to an overload of states which can diminish the results.
- Realistic environments can be non-stationary.
## Summary

- Reinforcement Learning is a Machine Learning method.
- Three methods for reinforcement learning are: 1) Value-based, 2) Policy-based, and 3) Model-based learning.
- Agent, State, Reward, Environment, Value function, Model of the environment, and Model-based methods are some important terms used in the RL learning method.
- An example of reinforcement learning is your cat, an agent exposed to the environment.
- The biggest characteristic of this method is that there is no supervisor, only a real number or reward signal.
- Two types of reinforcement learning are 1) Positive and 2) Negative.
- Two widely used learning models are 1) Markov Decision Process and 2) Q-learning.
- The Reinforcement Learning method works by interacting with the environment, whereas the supervised learning method works on given sample data or examples.
- Applications of reinforcement learning methods include robotics for industrial automation and business strategy planning.
- You should not use this method when you have enough data to solve the problem with supervised learning.
- The biggest challenge of this method is that parameters may affect the speed of learning.

## What Kinds of Controls are Possible in Reinforcement Learning Problems?

Caleb M. Bowyer, Ph.D. Candidate

Level Up Coding

## 1. What is a reinforcement learning problem and why are controls important for them?

Reinforcement learning is a problem where an agent tries to learn how to maximize its reward by interacting with its environment. Decisions, also known as controls, are important for reinforcement learning problems because they help agents learn which control in a given state is associated with the greatest rewards. There are many different reinforcement learning algorithms that help discover the best controls (those controls that lead to the greatest rewards); future tutorials will cover these algorithms in greater depth.

Different reinforcement learning control algorithms can be combined to overcome some of their individual weaknesses. For example, reinforcement learning algorithms could encourage exploration early on during an agent’s training in order to avoid getting stuck in local optima or suboptimal policies. In addition, reinforcement learning control algorithms can be used to help an agent recover from mistakes and return to the task at hand. Properly tuning reinforcement learning control selection is an important part of solving reinforcement learning problems. After an agent has been trained sufficiently, it would switch to exploiting its knowledge to obtain the greatest possible sum of rewards from any given state during the implementation or testing phase. For more on state, read my previous post:

https://medium.com/@CalebMBowyer/what-is-state-in-reinforcement-learning-it-is-what-the-engineer-says-it-is-47add99a1121

The set of all possible controls is referred to as the control space. This space, in most modeled problems, is finite, especially for games. However, the control space can easily be infinite in problems where continuous controls are required. In reinforcement learning, the state-control space is often too large for the agent to search exhaustively, so it must use some form of approximation in order to focus its search. There are two main types of reinforcement learning algorithms: value-based and policy-based. Value-based methods aim to directly estimate the value function, which gives the expected return for taking a given control in a given state. Policy-based methods aim to directly learn a policy, which is a mapping from states to controls. Also, policies can be either deterministic or stochastic. Both types of reinforcement learning algorithms have their advantages and disadvantages, and there is significant ongoing research into developing new and improved reinforcement learning control algorithms.

## 2. Some reinforcement learning control algorithms:

Reinforcement learning algorithms can be used to solve a variety of different types of control problems. Some of the most common reinforcement learning control algorithms include:

- Q-learning: this algorithm is used to find the optimal policy for an agent in a Markov Decision Process (MDP). Q-learning is typically model-free, meaning that it does not use a model of the environment in its Q-function update, though model-based variants (such as Dyna-Q) also exist.
- SARSA: the SARSA algorithm is similar to Q-learning, but it uses a slightly different update rule. SARSA is also a model-free reinforcement learning algorithm.
- Monte Carlo methods: Monte Carlo methods are reinforcement learning algorithms that can be used to solve both MDPs and Partially Observable MDPs (POMDPs). Monte Carlo methods are typically either on-policy or off-policy. On-policy reinforcement learning algorithms learn from experience sampled from the current policy, while off-policy reinforcement learning algorithms can learn from experience sampled from any arbitrary policy, even if that policy is not the optimal policy. Each of these reinforcement learning algorithms has its own strengths and weaknesses, so it is important to choose the right algorithm for the specific problem that you are trying to solve.
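The difference between the Q-learning and SARSA update rules mentioned above comes down to a single term in the target. A minimal sketch, with illustrative learning-rate and discount values:

```python
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount (illustrative)

def q_learning_update(Q, s, a, r, s_next, actions):
    # Off-policy: bootstrap from the best next action, regardless of
    # which action the behavior policy will actually take.
    target = r + GAMMA * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action the current policy actually
    # selected in the next state (hence the name S, A, R, S', A').
    target = r + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

Because SARSA's target depends on the exploratory action actually taken, it tends to learn safer policies under an exploring behavior policy, while Q-learning learns the greedy policy directly.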

## 3. How to choose the right type of control algorithm for your RL problem?

There are many different types of reinforcement learning control algorithms, and the choice of which one to use depends on the specific optimization problem that needs to be solved. For example, if the goal is to find the optimal path through a maze, then an algorithm such as tabular Q-learning would be well suited as the problem can be broken down into a finite set of states and a finite set of controls. The states are the maze positions, and the controls could be move up, down, left, or right.

However, if the goal is to control a robotic arm to move objects around in a workspace, then a different reinforcement learning control algorithm, such as an inverse reinforcement learning algorithm, would be more appropriate. The choice of reinforcement learning control algorithm is an important one, and it is important to select the right algorithm for the specific optimization problem that needs to be solved.

Some reinforcement learning problems are more suited to tabular methods, while others are more suited to function approximation methods such as using neural networks for control. The second thing to consider when selecting a reinforcement learning control algorithm for your problem is the size of the state space and control space. Tabular methods are only feasible for small state spaces, while function approximation methods can be used for larger state spaces. The third thing to consider is the nature of the reward function. Some reinforcement learning problems have sparse rewards, while others have dense rewards. An example of sparse rewards is when an agent is only rewarded for winning a game that carries on for many stages of decision making, and for every other stage receives a zero reward signal.
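The sparse-versus-dense distinction can be made concrete with two toy reward functions. Both are made up for illustration; neither comes from the article:

```python
def sparse_reward(done, won):
    # Zero signal at every step except a terminal win.
    return 1.0 if done and won else 0.0

def dense_reward(prev_distance, distance_to_goal):
    # A shaped reward: positive whenever the agent moves closer.
    return float(prev_distance - distance_to_goal)

print(sparse_reward(done=False, won=False))  # 0.0 for every mid-game step
print(dense_reward(5, 3))                    # 2.0: moved two units closer
```

With the sparse version the agent must stumble onto a win before it sees any learning signal at all, which is why sparse-reward problems are typically much harder.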

Finally, you should consider the computational resources available to you. Some reinforcement learning algorithms are more computationally intensive than others. If you have limited resources, you may need to use a less computationally intensive algorithm. For example, I have been training some very complex Deep Q-networks recently, and they take a very long time to train on my personal computer, so I am migrating my neural network code to run using Google’s Colab tool. That is one option for speeding up complex reinforcement learning algorithms that are computationally intensive. Google’s Colab tool is a very nice and FREE resource for deep learning engineers and hence deep RL engineers.

## 4. Examples of how different controls can be used in real-world reinforcement learning problems:

Reinforcement learning controls can be used in a variety of real-world problems. For example, consider a robotic arm that needs to be trained to reach for a specific object. In this case, reinforcement learning can be used to tune the arm’s control parameters so that it reaches the target with the highest probability. In robotic controls, reinforcement learning is used to optimize the control inputs of a robot so that it achieves some goal. For example, a robot might be programmed to wander around a room and collect all the objects in the room. The reinforcement learning algorithm would learn the best way to control the robot’s motors so that it can collect all the objects as quickly as possible.

In game playing, reinforcement learning is used to train an artificial intelligence (AI) to play a game such as chess or Go. Here, the control would be where to place what game piece, adhering to the rules of the game. The AI agent is programmed with a reinforcement learning algorithm, which allows it to learn from its mistakes and gradually get better at playing the game. Eventually, the AI agent will be able to beat human opponents at the game.

Another example of reinforcement learning controls in action is autonomous driving. Here, reinforcement learning can be used to optimize the control system so that the car drives safely and efficiently. Additionally for autonomous driving, the car could also learn the best way to navigate through traffic, based on information such as the current traffic conditions and the destination.

In an agricultural setting, reinforcement learning could be used to control irrigation systems. The system would learn the optimal amount of water to provide to the plants, based on factors such as weather conditions and the type of plant. Other applications of reinforcement learning in a gardening setting could be where to place seeds and at what depth to plant them.

In a factory setting, reinforcement learning could be used to control assembly line robots. The system would learn the most efficient way to assemble products, based on data such as the number of products to be assembled and the order in which they need to be assembled. Reinforcement learning is a powerful tool that can be used in a variety of different settings, including industrial settings.

In the medical space, reinforcement learning can also be used to control robotic prostheses. In this application, reinforcement learning can be used to adjust the control parameters of the prosthesis so that it functions optimally for the user. Thus, reinforcement learning controls can have a wide range of real-world applications. I am most excited about this for future research as it seems to be the least explored, but one of the most useful for helping humanity in myriad ways.

In neural network training, reinforcement learning is used to optimize the weights and biases of a neural network so that it performs well on some task. For example, a reinforcement learning algorithm might be used to train a neural network to recognize handwritten digits with high accuracy as is the case with the MNIST dataset. MNIST is a benchmark dataset, and while this is a toy problem at this point, the application of neural networks to harder problems is ever growing.

In conclusion, reinforcement learning controls can be used in a variety of ways to optimize performance or solve specific control problems for very challenging problems where the best strategy for control is probably not known currently. For instance, they can be used to calculate the optimal control inputs for any system given a set of constraints and objectives, or to find the best way to control a possibly nonlinear system. As such, reinforcement learning control algorithms represent a powerful tool that can be used in many different ways to improve the performance of highly complex systems that are nonlinear, noisy, or continuous.

Until next time,


Deep RL Course documentation

## Two main approaches for solving RL problems


In other words, how do we build an RL agent that can select the actions that maximize its expected cumulative reward?

## The Policy π: the agent’s brain

The Policy π is the brain of our Agent: it’s the function that tells us what action to take given the state we are in, so it defines the agent’s behavior at a given time.

This Policy is the function we want to learn. Our goal is to find the optimal policy π*, the policy that maximizes expected return when the agent acts according to it. We find this π* through training.

There are two approaches to train our agent to find this optimal policy π*:

- Directly, by teaching the agent to learn which action to take, given the current state: Policy-Based Methods.
- Indirectly, by teaching the agent to learn which state is more valuable and then take the action that leads to the more valuable states: Value-Based Methods.

## Policy-Based Methods

In Policy-Based methods, we learn a policy function directly.

This function will define a mapping from each state to the best corresponding action. Alternatively, it could define a probability distribution over the set of possible actions at that state.

We have two types of policies:

- Deterministic: a policy that, at a given state, will always return the same action.

- Stochastic: a policy that outputs a probability distribution over actions.


## Value-based methods

In value-based methods, instead of learning a policy function, we learn a value function that maps a state to the expected value of being at that state.

The value of a state is the expected discounted return the agent can get if it starts in that state, and then acts according to our policy.

“Act according to our policy” just means that our policy is “going to the state with the highest value”.

Here our value function defines a value for each possible state.

Thanks to our value function, at each step our policy will select the state with the biggest value defined by the value function: -7, then -6, then -5 (and so on) to attain the goal.
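The greedy, value-driven behavior described above can be sketched as a short loop. The corridor of values below stands in for the course's figure and is purely illustrative:

```python
# State values along a corridor leading to a goal state with value 0.
V = {0: -7, 1: -6, 2: -5, 3: -4, 4: -3, 5: -2, 6: -1, 7: 0}
GOAL = 7
neighbors = {s: [v for v in (s - 1, s + 1) if 0 <= v <= GOAL]
             for s in range(GOAL)}

s, trajectory = 0, [0]
while s != GOAL:
    s = max(neighbors[s], key=V.get)  # step to the highest-value neighbor
    trajectory.append(s)
print(trajectory)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Notice the policy itself is trivial ("go to the highest-value neighbor"); all the learned knowledge lives in the value function V.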

The following question and answer are from Artificial Intelligence Stack Exchange.

## What are the various problems RL is trying to solve?

I have read most of Sutton and Barto's introductory text on reinforcement learning. I thought I would try to apply some of the RL algorithms in the book to a previous assignment I had done on Sokoban, in which you are in a maze-like grid environment, trying to stack three snowballs into a snowman on a predefined location on the grid.

The basic algorithms (MC control, Q-learning, or Dyna-Q) all seemed to be based on solving whichever specific maze the agent was trained on. For example, the transition probabilities of going from coordinate (1,2) to (1,3) would be different for different mazes (since in one maze, we could have an obstacle at (1,3)). An agent that calculates its rewards based on one maze using these algorithms doesn't seem like it would know what to do given a totally different maze. It would have to retrain: 1) either take real-life actions to relearn from scratch how to navigate a maze, or 2) be given the model of the maze, either exact or approximate (which seems infeasible in a real-life setting), so that planning without taking actions is possible.

When I started learning RL, I thought that it would be more generalizable. This leads me to the question: Is this problem covered in multi-task RL? How would you categorize the various areas of RL in terms of the general problem that each is looking to solve?


> The basic algorithms (MC control, Q-learning, or Dyna-Q) seemed to all be based on solving whichever specific maze the agent was trained on.

All RL algorithms are based on creating solutions to a defined state and action space. If you limit your state space representation and training to a single maze, then that is what will be learned. This is no different from other machine learning approaches - they learn the traits of a population by being shown samples from that population (not just one example). They also need to be built for the range of input parameters that you need them to solve.

In the case of RL, and your maze solver, that means the state representation needs to cover all possible mazes, not just a location in a single maze (there are ways to internalise some of the representation to the learning process such as using RNNs, but that is not relevant to the main answer here).

The toy environments in Sutton & Barto are often trivial to solve using non-RL approaches. They are not demonstrations of what RL can do; instead, they have been chosen to explain how a particular issue related to learning works. Sutton & Barto does include a chapter on more interesting and advanced uses of RL: chapter 16, "Applications and Case Studies", in the second edition.

> When I started learning RL, I thought that it would be more generalizable.

It is, but without some kind of pre-training to support generalisation from a low number of examples, you have to:

1. Model the general problem
2. Train the agent on the general problem

RL agents trained from scratch on new problems can seem very inefficient compared to learning by living creatures that RL roughly parallels. However, RL is not a model for general intelligence, but for learning through trial and error. Most examples start from no knowledge, not even basic priors for a maze such as the grid layout or the generalisable knowledge of movement and location.

If you do provide a more general problem definition and training examples, and use a function approximator that can generalise internally (such as a neural network), then an agent can learn to solve problems in a more general sense and may also generate internal representations that (approximately) match up to common factors in the general problem.


- _Multi DropDown
- __DropDown 1
- __DropDown 2
- __DropDown 3
- Documentation

## MCQ Question of Machine learning

- What is Machine Learning (ML)?
- The autonomous acquisition of knowledge through the use of manual programs
- The selective acquisition of knowledge through the use of computer programs
- The selective acquisition of knowledge through the use of manual programs
- The autonomous acquisition of knowledge through the use of computer programs

Correct option is D

- Father of Machine Learning (ML)
- Geoffrey Chaucer
- Geoffrey Hill
- Geoffrey Everest Hinton
- None of the above

Correct option is C

- Which is FALSE regarding regression?
- It may be used for interpretation
- It is used for prediction
- It discovers causal relationships
- It relates inputs to outputs
- Choose the correct option regarding machine learning (ML) and artificial intelligence (AI)
- ML is a set of techniques that turns a dataset into a software
- AI is a software that can emulate the human mind
- ML is an alternate way of programming intelligent machines
- All of the above
- Factors that affect the performance of the learner system do not include:
- Good data structures
- Representation scheme used
- Training scenario
- Type of feedback

Correct option is A

- In general, to have a well-defined learning problem, we must identify which of the following
- The class of tasks
- The measure of performance to be improved
- The source of experience
- Successful applications of ML
- Learning to recognize spoken words
- Learning to drive an autonomous vehicle
- Learning to classify new astronomical structures
- Learning to play world-class backgammon

Correct option is E

- Which of the following is not among the different learning methods?
- Introduction
- Memorization
- Deduction

Correct option is B

- In language understanding, the levels of knowledge do not include:
- Phonological
- Syntactic
- Designing a machine learning approach involves:-
- Choosing the type of training experience
- Choosing the target function to be learned
- Choosing a representation for the target function
- Choosing a function approximation algorithm
- Concept learning infers a ___-valued function from training examples of its input and output.
- Hexadecimal
- Which of the following is not a supervised learning method?
- Naive Bayesian
- Linear Regression
- Decision Tree (Answer)
- What is Machine Learning?
- Artificial Intelligence
- Deep Learning
- Data Statistics

A. Only (i)

B. (i) and (ii)

C. All

D. None

- Which kind of learning is used for “Facial identities or facial expressions”?
- Recognizing Patterns
- Generating Patterns
- Recognizing Anomalies (Answer)
- Which of the following is not type of learning?
- Unsupervised Learning
- Supervised Learning
- Semi-unsupervised Learning
- Reinforcement Learning
- Real-Time Decisions, Game AI, Learning Tasks, Skill Acquisition, and Robot Navigation are applications of which of the following
- Supervised Learning: Classification
- Reinforcement Learning
- Unsupervised Learning: Clustering
- Unsupervised Learning: Regression
- Targeted Marketing, Recommender Systems, and Customer Segmentation are applications of which of the following
- Unsupervised Learning: Regression
- Fraud Detection, Image Classification, Diagnostic, and Customer Retention are applications in which of the following
- Which of the following is not a symbolic function representation in Machine Learning?
- Rules in propositional logic
- Hidden-Markov Models (HMM)
- Rules in first-order predicate logic
- Decision Trees
- Which of the following is not a numerical function representation in Machine Learning?
- Neural Network
- Support Vector Machines
- Linear Regression
- The FIND-S Algorithm starts from the most specific hypothesis and generalizes it by considering only
- Negative or Positive
- FIND-S algorithm ignores
- The Candidate-Elimination Algorithm represents the .
- Solution Space
- Version Space
- Elimination Space
- All of the above
- Inductive learning is based on the knowledge that if something happens a lot, it is likely to be generally true
- False (Answer)
- Inductive learning takes examples and generalizes rather than starting with existing knowledge
- None of these
- A drawback of FIND-S is that it assumes consistency within the training set
- False
- What strategies can help reduce overfitting in decision trees?
- Enforce a maximum depth for the tree
- Enforce a minimum number of samples in leaf nodes
- Make sure each leaf node is one pure class

A. All

B. (i), (ii) and (iii)

C. (i), (iii), (iv)

D. None

- Which of the following is a widely used and effective machine learning algorithm based on the idea of bagging?
- Decision Tree
- Random Forest
- Classification
- To find the minimum or the maximum of a function, we set the gradient to zero because:
- Depends on the type of problem
- The value of the gradient at extrema of a function is always zero
- Both (A) and (B)
- Which of the following is a disadvantage of decision trees?
- Decision trees are prone to be overfit
- Decision trees are robust to outliers
- Factor analysis
- None of the above
- What is perceptron?
- A single layer feed-forward neural network with pre-processing
- A neural network that contains feedback
- A double layer auto-associative neural network
- An auto-associative neural network
- Which of the following is true for neural networks?
- The training time depends on the size of the network
- Neural networks can be simulated on a conventional computer
- Artificial neurons are identical in operation to biological neurons

B. Only (ii)

C. (i) and (ii)
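The perceptron definition given above (a single-layer feed-forward network) can be sketched in a few lines. The AND-gate training data and learning rate are illustrative assumptions: a weighted sum of the inputs plus a bias is passed through a step function, and weights are adjusted with the classic perceptron update rule.

```python
# Minimal single-layer perceptron sketch trained on the AND truth table.

def predict(w, b, x):
    """Step activation over the weighted input sum."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few passes suffice for AND
    for x, target in data:
        err = target - predict(w, b, x)  # perceptron rule: w += lr*err*x
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches zero training error.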

- What are the advantages of neural networks over conventional computers?
- They have the ability to learn by
- They are more fault
- They are more suited for real time operation due to their high „computational ‟

A. (i) and (ii)

B. (i) and (iii)

C. Only (i)

D. All

E. None

- What is Neuro software?
- It is software used by Neurosurgeon
- Designed to aid experts in real world
- It is powerful and easy neural network
- A software used to analyze neurons
- Which is true for neural networks?
- Each node computes its weighted input
- Node could be in excited state or non-excited state
- It has set of nodes and connections
- What is the objective of backpropagation algorithm?
- To develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
- To develop learning algorithm for multilayer feedforward neural network
- To develop learning algorithm for single layer feedforward neural network
- Which of the following is true?

Single layer associative neural networks do not have the ability to:-

- Perform pattern recognition
- Find the parity of a picture
- Determine whether two or more shapes in a picture are connected or not

A. (ii) and (iii)

- The backpropagation law is also known as generalized delta rule
- On average, neural networks have higher computational rates than conventional computers.
- Neural networks learn by example
- Neural networks mimic the way the human brain works

B. (ii) and (iii)

C. (i), (ii) and (iii)
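Several questions above concern backpropagation as the "generalized delta rule". The delta rule it generalizes can be sketched for a single linear neuron: weights move along the negative gradient of the squared error, and backpropagation extends exactly this update to hidden-layer units. The data and learning rate below are illustrative assumptions.

```python
# Delta-rule sketch for one linear neuron: learn y = 2*x1 + 1*x2
# by gradient descent on the squared error, one example at a time.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 3.0)]
w, lr = [0.0, 0.0], 0.1

for _ in range(200):
    for x, target in data:
        y = w[0] * x[0] + w[1] * x[1]          # forward pass
        delta = target - y                     # error term ("delta")
        w = [wi + lr * delta * xi for wi, xi in zip(w, x)]  # w += lr*delta*x

print([round(wi, 2) for wi in w])  # [2.0, 1.0]
```

Backpropagation's contribution, as one of the options above states, is that this delta can also be computed for hidden units by propagating the output error backwards through the network.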

- What is true regarding backpropagation rule?
- Error in output is propagated backwards only to determine weight updates
- There is no feedback of signal at any stage
- It is also called generalized delta rule
- There is feedback in final stage of backpropagation
- An auto-associative network is
- A neural network that has only one loop
- A neural network that contains no loops
- A 3-input neuron has weights 1, 4 and 3. The transfer function is linear with the constant of proportionality being equal to 3. The inputs are 4, 8 and 5 respectively. What will be the output?
- Which of the following is true regarding the backpropagation rule?
- Hidden layers output is not all important, they are only meant for supporting input and output layers
- Actual output is determined by computing the outputs of units for each hidden layer
- It is a feedback neural network
- What is back propagation?
- It is another name given to the curvy function in the perceptron
- It is the transmission of error back through the network to allow weights to be adjusted so that the network can learn
- The general limitations of back propagation rule is/are
- Slow convergence
- Local minima problem
- What is the meaning of generalized in statement “backpropagation is a generalized delta rule” ?
- Because delta is applied to only input and output layers, thus making it more simple and generalized
- It has no significance
- Because delta rule can be extended to hidden layer units
- Neural Networks are complex functions with many parameters
- Exponential
- The general tasks that are performed with backpropagation algorithm
- Pattern mapping
- Function approximation
- Backpropagation learning is based on gradient descent along the error surface.
- In the backpropagation rule, how do we stop the learning process?
- No heuristic criteria exist
- On basis of average gradient value
- There is convergence involved
- Applications of NN (Neural Network)
- Risk management
- Data validation
- Sales forecasting
- The network that involves backward links from output to the input and hidden layers is known as
- Recurrent neural network
- Self organizing maps
- Perceptrons
- Single layered perceptron
- Decision Tree is a display of an Algorithm?
- Which of the following is/are the decision tree nodes?
- Decision Nodes
- Chance Nodes
- End Nodes are represented by which of the following
- Solar street light
- Squares
- Decision Nodes are represented by which of the following
- Chance Nodes are represented by which of the following
- Advantage of Decision Trees
- Possible Scenarios can be added
- Use a white box model, if given result is provided by a model
- Worst, best and expected values can be determined for different scenarios
- ___ terms are required for building a Bayes model.
- Which of the following is the consequence between a node and its predecessors while creating a Bayesian network?
- Conditionally independent
- Functionally dependent
- Both Conditionally dependent & Dependent
- Dependent
- What is needed to make probabilistic systems feasible in the world?
- Feasibility
- Reliability
- Crucial robustness
- Bayes rule can be used for:-
- Solving queries
- Increasing complexity
- Answering probabilistic query
- Decreasing complexity
- ___ provides ways and means of weighing up the desirability of goals and the likelihood of achieving them
- Utility theory
- Decision theory
- Bayesian networks
- Probability theory
- Which of the following is provided by a Bayesian network?
- Complete description of the problem
- Partial description of the domain
- Complete description of the domain
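One question in the list above asks for the output of a 3-input neuron with weights 1, 4, 3, inputs 4, 8, 5, and a linear transfer function whose constant of proportionality is 3. With a linear transfer function the output is just the scaled weighted sum, so the answer can be checked in one line:

```python
# Worked computation for the 3-input neuron question above:
# output = k * sum(w_i * x_i) for a linear transfer with constant k.
weights, inputs, k = [1, 4, 3], [4, 8, 5], 3
output = k * sum(w * x for w, x in zip(weights, inputs))
print(output)  # 3 * (1*4 + 4*8 + 3*5) = 3 * 51 = 153
```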

65. Probability provides a way of summarizing the ___ that comes from our laziness and ignorance.

- Uncertainty
- Joint probability distributions
- Randomness
- The entries in the full joint probability distribution can be calculated as
- Using variables
- Both Using variables & information
- Using information
- A causal chain (for example, smoking causes cancer) gives rise to:-
- Conditional Independence
- Conditional Dependence
- The Bayesian network can be used to answer any query by using:-
- Full distribution
- Joint distribution
- Partial distribution
- Bayesian networks allow compact specification of:-
- Propositional logic statements
- The compactness of the Bayesian network can be described by
- Fully structured
- Locally structured
- Partially structured
- The Expectation-Maximization Algorithm has been used to identify conserved domains in unaligned proteins only. State True or False.
- Which of the following is correct about the Naive Bayes?
- Assumes that all the features in a dataset are independent
- Assumes that all the features in a dataset are equally important
- Which of the following is false regarding EM Algorithm?
- The alignment provides an estimate of the base or amino acid composition of each column in the site
- The column-by-column composition of the site already available is used to estimate the probability of finding the site at any position in each of the sequences
- The row-by-column composition of the site already available is used to estimate the probability
- Naïve Bayes Algorithm is a ___ learning algorithm.
- Reinforcement
- Unsupervised
- The EM algorithm includes two repeated steps; here, step 2 is ___.
- The normalization
- The maximization step
- The minimization step
- Examples of Naïve Bayes Algorithm is/are
- Spam filtration
- Sentimental analysis
- Classifying articles
- In the intermediate steps of “EM Algorithm”, the number of each base in each column is determined and then converted to
- The Naïve Bayes algorithm is based on ___ and used for solving classification problems.
- Bayes Theorem
- Candidate elimination algorithm
- EM algorithm
- Types of Naïve Bayes Model:
- Multinomial
- Disadvantages of Naïve Bayes Classifier:
- Naive Bayes assumes that all features are independent or unrelated, so it cannot learn the relationship between features
- It performs well in multi-class predictions as compared to the other algorithms
- Naïve Bayes is one of the fast and easy ML algorithms to predict a class of datasets
- It is the most popular choice for text classification problems.
- The benefit of Naïve Bayes:-
- It can be used for binary as well as multi-class classifications
- In which of the following types of sampling the information is carried out under the opinion of an expert?
- Convenience sampling
- Judgement sampling
- Quota sampling
- Purposive sampling
- Full form of MDL?
- Minimum Description Length
- Maximum Description Length
- Minimum Domain Length
- For the analysis of ML algorithms, we need
- Computational learning theory
- Statistical learning theory
- Both A & B
- None of these
- PAC stand for
- Probably Approximate Correct
- Probably Approx Correct
- Probably Approximate Computation
- Probably Approx Computation
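The Naïve Bayes questions above all reduce to one formula: pick the class maximizing P(c) * Π P(x_i | c), which is Bayes' theorem plus the "naive" assumption that features are independent given the class. A toy spam-filtration sketch (the data is a hypothetical example, not a library API):

```python
# Minimal Naive Bayes sketch for spam filtration, one of the listed uses.
from collections import Counter

train = [(("win", "money"), "spam"), (("win", "prize"), "spam"),
         (("meeting", "notes"), "ham"), (("project", "notes"), "ham")]

labels = [y for _, y in train]
prior = {c: n / len(train) for c, n in Counter(labels).items()}
word_counts = {c: Counter(w for x, y in train if y == c for w in x)
               for c in prior}
vocab = {w for x, _ in train for w in x}

def likelihood(word, c):
    # Laplace smoothing so an unseen word never zeroes out the product.
    return (word_counts[c][word] + 1) / (sum(word_counts[c].values()) + len(vocab))

def classify(words):
    score = {c: prior[c] for c in prior}     # start from the class prior
    for c in prior:
        for w in words:
            score[c] *= likelihood(w, c)     # naive independence assumption
    return max(score, key=score.get)

print(classify(("win", "prize")))        # spam
print(classify(("project", "meeting")))  # ham
```

The independence assumption is exactly the disadvantage listed above: the model cannot learn relationships between features, yet it is fast and often strong for text classification.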

86. The ___ of hypothesis h with respect to target concept c and distribution D is the probability that h will misclassify an instance drawn at random according to D.

- Type 1 Error
- Type 2 Error
- Statement: True error is defined over the entire instance space, not just the training data
- Which areas is CLT comprised of?
- Sample Complexity
- Computational Complexity
- Mistake Bound
- All of these
- What area of CLT tells “How many examples we need to find a good hypothesis ?”?
- What area of CLT tells “How much computational power we need to find a good hypothesis ?”?
- What area of CLT tells “How many mistakes we will make before finding a good hypothesis ?”?
- (For question no. 9 and 10) Can we say that concept described by conjunctions of Boolean literals are PAC learnable?
- How large is the hypothesis space when we have n Boolean attributes?
- |H| = 3^n
- |H| = 2^n
- |H| = 1^n
- The VC dimension of hypothesis space H1 is larger than the VC dimension of hypothesis space H2. Which of the following can be inferred from this?
- The number of examples required for learning a hypothesis in H1 is larger than the number of examples required for H2
- The number of examples required for learning a hypothesis in H1 is smaller than the number of examples required for H2
- No relation to number of samples required for PAC learning.
- For a particular learning task, if the requirement of error parameter changes from 0.1 to 0.01. How many more samples will be required for PAC learning?
- 10 times
- Computational complexity of classes of learning problems depends on which of the following?
- The size or complexity of the hypothesis space considered by learner
- The accuracy to which the target concept must be approximated
- The probability that the learner will output a successful hypothesis
- The instance-based learner is a
- Lazy-learner
- Eager learner
- Can't say
- When to consider nearest neighbour algorithms?
- Instances map to points in R^n
- Not more than 20 attributes per instance
- Lots of training data
- A, B & C
- What are the advantages of the nearest neighbour algorithm?
- Training is very fast
- Can learn complex target functions
- Don't lose information
- What are the difficulties with the k-nearest neighbour algorithm?
- Calculate the distance of the test case from all training cases
- Curse of dimensionality
- What if the target function is real-valued in the k-NN algorithm?
- Calculate the mean of the k nearest neighbours
- Calculate the SD of the k nearest neighbours
- What is/are true about Distance-weighted KNN?
- The weight of the neighbour is considered
- The distance of the neighbour is considered
- What is/are advantage(s) of Distance-weighted k-NN over k-NN?
- Robust to noisy training data
- Quite effective when a sufficiently large set of training data is provided
- What is/are advantage(s) of Locally Weighted Regression?
- Pointwise approximation of complex target function
- Earlier data has no influence on the new ones
- The quality of the result depends on (LWR)
- Choice of the function
- Choice of the kernel function K
- Choice of the hypothesis space H
- How many types of layers are there in radial basis function neural networks?
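The hypothesis-space questions above can be checked directly. For conjunctions over n Boolean attributes, each attribute can appear as a positive literal, a negative literal, or be absent, giving |H| = 3^n (ignoring the empty-hypothesis corner case). Plugging |H| into the standard PAC bound m >= (1/eps)(ln|H| + ln(1/delta)) also shows why tightening eps from 0.1 to 0.01 needs roughly 10x more samples, since m scales as 1/eps. The values of n, eps, and delta below are illustrative assumptions.

```python
# Hypothesis-space size and PAC sample-complexity sketch.
import math

def hypothesis_space_size(n):
    # {positive literal, negative literal, absent} per attribute -> 3^n
    return 3 ** n

def pac_samples(n, eps, delta):
    # m >= (1/eps) * (ln|H| + ln(1/delta)), the standard consistent-learner bound
    H = hypothesis_space_size(n)
    return math.ceil((1 / eps) * (math.log(H) + math.log(1 / delta)))

print(hypothesis_space_size(4))                                  # 81
print(pac_samples(10, 0.1, 0.05))                                # 140
print(pac_samples(10, 0.01, 0.05) / pac_samples(10, 0.1, 0.05))  # ~10x
```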

Correct option is A, Input layer, Hidden layer, and Output layer

- The neurons in the hidden layer contain a Gaussian transfer function whose outputs are inversely proportional to the distance from the centre of the neuron.
- PNN/GRNN networks have one neuron for each point in the training file, while RBF networks have a variable number of neurons that is usually
- less than the number of training points
- greater than the number of training points
- equal to the number of training points
- Which network is more accurate when the size of training set between small to medium?
- K-means clustering
- What is/are true about RBF network?
- A kind of supervised learning
- Design of NN as curve fitting problem
- Use of multidimensional surface to interpolate the test data
- Application of CBR
- All of these
- What is/are advantages of CBR?
- A local approx. is found for each test case
- Knowledge is in a form understandable to humans
- Fast to train

112. In the k-NN algorithm, given a set of training examples and a value of k < the size of the training set (n), the algorithm predicts the class of a test example to be the:

- Least frequent class among the classes of the k closest training examples
- Most frequent class among the classes of the k closest training examples
- Class of the closest training example
- Most frequent class among the classes of the k farthest training examples
- Which of the following statements is true about PCA?
- We must standardize the data before applying PCA
- We should select the principal components which explain the highest variance
- We should select the principal components which explain the lowest variance
- We can use PCA for visualizing the data in lower dimensions

A. (i), (ii) and (iv).

B. (ii) and (iv)

C. (iii) and (iv)

D. (i) and (iii)
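The PCA statements above (standardize first, keep the components explaining the highest variance, use the result for low-dimensional visualization) can be sketched with an eigendecomposition of the covariance matrix. The toy data and the 95% variance threshold are illustrative assumptions.

```python
# PCA sketch: standardize, diagonalize the covariance, rank components by
# explained variance, and keep the fewest that explain >= 95% of it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# Column 1 is almost a copy of column 0, column 2 is independent noise.
data = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=200),
                        rng.normal(size=200)])

z = (data - data.mean(0)) / data.std(0)        # standardize first
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]              # highest variance first
ratio = eigvals[order] / eigvals.sum()         # explained-variance ratios

# Keep the fewest components explaining >= 95% of the variance.
k = int(np.searchsorted(np.cumsum(ratio), 0.95) + 1)
print(ratio.round(2), k)
```

Here the first two components carry essentially all the variance, so k comes out as 2: the redundant third dimension is dropped, which is exactly how PCA supports visualization in lower dimensions.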

- Genetic algorithm is a
- Search technique used in computing to find true or approximate solutions to optimization and search problems
- Sorting technique used in computing to find true or approximate solutions to optimization and sorting problems
- GA techniques are inspired by
- Evolutionary
- Ecology
- When would the genetic algorithm terminate?
- Maximum number of generations has been produced
- Satisfactory fitness level has been reached for the population
- The algorithm operates by iteratively updating a pool of hypotheses, called the population
- What is the correct representation of GA?
- GA(Fitness, Fitness_threshold, p)
- GA(Fitness, Fitness_threshold, p, r )
- GA(Fitness, Fitness_threshold, p, r, m)
- GA(Fitness, Fitness_threshold)
- Genetic operators includes
- Producing two new offspring from two parent strings by copying selected bits from each parent is called
- Inheritance
- Each schema represents the set of bit strings containing the indicated
- 0s, 1s, *s
- 0*10 represents the set of bit strings that includes exactly (A) 0010, 0110
- If correct(h) is the percent of all training examples correctly classified by hypothesis h, then the fitness function equals
- Fitness(h) = (correct(h))^2
- Fitness(h) = (correct(h))^3
- Fitness(h) = (correct(h))
- Fitness(h) = (correct(h))^4
- Statement: In Genetic Programming, individuals in the evolving population are computer programs rather than bit strings
- evolution over many generations was directly influenced by the experiences of individual organisms during their lifetime
- Search through the hypothesis space cannot be characterized. Why?
- Hypotheses are created by crossover and mutation operators that allow radical changes between successive generations
- Hypotheses are not created by crossover and mutation
- ILP stand for
- Inductive Logical programming
- Inductive Logic Programming
- Inductive Logical Program
- Inductive Logic Program
- What is/are the requirement for the Learn-One-Rule method?
- Input, accepts a set of +ve and -ve training examples.
- Output, delivers a single rule that covers many +ve examples and few -ve.
- Output rule has a high accuracy but not necessarily a high coverage
- A, B & C
- A ___ is any predicate (or its negation) applied to any set of terms.
- A ground literal is a literal that
- Contains only variables
- Does not contain any functions
- Does not contain any variables
- Contains only functions (Answer)
- ___ emphasizes learning feedback that evaluates the learner's performance without providing standards of correctness in the form of behavioural targets
- Reinforcement learning
- Features of Reinforcement learning
- Set of problem rather than set of techniques
- RL is training by reward and punishment
- RL is learning from trial and error with the world
- Which type of feedback used by RL?
- Purely Instructive feedback
- Purely Evaluative feedback
- What is/are the problem solving methods for RL?
- Dynamic programming
- Monte Carlo Methods
- Temporal-difference learning
- The FIND-S Algorithm

A. Starts from the most specific hypothesis (Answer)

B. It considers negative examples

C. It considers both negative and positive examples

D. None of these
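The FIND-S behaviour asked about above (start from the most specific hypothesis, generalize just enough to cover each positive example, ignore negatives) fits in a few lines. The attribute encoding and the EnjoySport-style toy data are illustrative assumptions: None stands for the maximally specific hypothesis and "?" for "any value".

```python
# FIND-S sketch: generalize a most-specific hypothesis over positives only.

def find_s(examples):
    h = None                                  # most specific hypothesis
    for attrs, label in examples:
        if label != "yes":                    # FIND-S ignores negatives
            continue
        if h is None:
            h = list(attrs)                   # first positive: copy it
        else:                                 # generalize mismatches to "?"
            h = [hi if hi == ai else "?" for hi, ai in zip(h, attrs)]
    return h

train = [(("sunny", "warm", "normal"), "yes"),
         (("sunny", "warm", "high"),   "yes"),
         (("rainy", "cold", "high"),   "no"),     # ignored
         (("sunny", "warm", "high"),   "yes")]
print(find_s(train))  # ['sunny', 'warm', '?']
```

Ignoring negatives is also the drawback noted earlier in the list: FIND-S can only work if the training data is consistent.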

136. The hypothesis space has a general-to-specific ordering of hypotheses, and the search can be efficiently organized by taking advantage of a naturally occurring structure over the hypothesis space

A. TRUE

B. FALSE

137. The Version space is:

- The subset of all hypotheses consistent with the training examples is called the version space with respect to the hypothesis space H and the training examples D, because it contains all plausible versions of the target concept
- The version space consists of only specific hypotheses
- The Candidate-Elimination Algorithm

A. The key idea in the Candidate-Elimination algorithm is to output a description of the set of all hypotheses consistent with the training examples

B. The Candidate-Elimination algorithm computes the description of this set without explicitly enumerating all of its members

C. This is accomplished by using the more-general-than partial ordering and maintaining a compact representation of the set of consistent hypotheses

D. All of these

- Concept learning is basically acquiring the definition of a general category from given sample positive and negative training examples of the category

B. FALSE

- The hypothesis h1 is more-general-than hypothesis h2 ( h1 > h2) if and only if h1≥h2 is true and h2≥h1 is false. We also say h2 is more-specific-than h1

A. The statement is true

B. The statement is false

C. We cannot say

D. None of these

- The List-Then-Eliminate Algorithm

A. The List-Then-Eliminate algorithm initializes the version space to contain all hypotheses in H, then eliminates any hypothesis found inconsistent with any training example

B. The List-Then-Eliminate algorithm does not initialize the version space

C. None of these (Answer)

- What will take place as the agent observes its interactions with the world?

A. Learning

B. Hearing

C. Perceiving

D. Speech

- Which modifies the performance element so that it makes better decisions?

A. Performance element

B. Changing element

C. Learning element

D. None of the mentioned

- Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples. This is called:

A. Inductive Learning Hypothesis

B. Null Hypothesis

C. Actual Hypothesis

- Feature of ANN in which ANN creates its own organization or representation of information it receives during learning time is

A. Adaptive Learning

B. Self Organization

C. What-If Analysis

D. Supervised Learning

- How does the decision tree reach its decision?

A. Single test

B. Two test

C. Sequence of tests

D. No test


- Tree/Rule based classification algorithms generate which rule to perform the classification.

A. if-then.

B. then

C. do

D. Answer

- What is Gini Index?

A. It is a type of index structure

B. It is a measure of purity

C. None of the options
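As option B above states, the Gini index is a measure of purity: for class proportions p_i, Gini = 1 - Σ p_i². It is 0 for a pure node and grows toward 1 - 1/k for a maximally mixed node with k classes. A quick sketch with illustrative labels:

```python
# Gini impurity: 1 - sum of squared class proportions.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini(["a", "a", "a", "a"]))  # 0.0 (pure node)
print(gini(["a", "a", "b", "b"]))  # 0.5 (maximally mixed, 2 classes)
```

Decision-tree learners pick splits that reduce this impurity the most, which connects to the overfitting-control options (maximum depth, minimum leaf size) listed earlier.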

- What is not an RNN in machine learning?

A. One output to many inputs

B. Many inputs to a single output

C. RNNs for nonsequential input

D. Many inputs to many outputs

- Which of the following sentences are correct in reference to Information gain?

A. It is biased towards multi-valued attributes

B. ID3 makes use of information gain

C. The approach used by ID3 is greedy

- A Neural Network can answer

A. For Loop questions

B. what-if questions

C. If-Then-Else Analysis Questions

D. None of these (Answer)

- Artificial neural networks are used for

A. Pattern Recognition

B. Classification

C. Clustering

D. All (Answer)

- Which of the following are the advantage/s of Decision Trees?
- Use a white box model, If given result is provided by a model
- All of the mentioned
- What is the mathematical likelihood that something will occur?

A. Classification

B. Probability

C. Naïve Bayes Classifier

D. None of the other

- What does the Bayesian network provide?
- None of the mentioned
- Where can the Bayes rule be used?

A. Solving queries

B. Increasing complexity

C. Decreasing complexity

D. Answering probabilistic query

- How many terms are required for building a Bayes model?

A. 2

B. 3

C. 4

D. 1

- What is needed to make probabilistic systems feasible in the world?

A. Reliability

B. Crucial robustness

C. Feasibility

- It was shown that the Naive Bayesian method

A. Can be much more accurate than the optimal Bayesian method

B. Is always worse off than the optimal Bayesian method

C. Can be almost optimal only when attributes are independent

D. Can be almost optimal when some attributes are dependent

- What is the consequence between a node and its predecessors while creating Bayesian network?

A. Functionally dependent

B. Dependent

C. Conditionally independent

D. Both Conditionally dependent & Dependent

- How can the compactness of the Bayesian network be described?

A. Locally structured

B. Fully structured

C. Partial structure

D. All of the mentioned

- How can the entries in the full joint probability distribution be calculated?

A. Using variables

B. Using information

C. Both Using variables & information

- How can the Bayesian network be used to answer any query?

A. Full distribution

B. Joint distribution

C. Partial distribution

D. All of the mentioned

- Sample Complexity is

A. The sample complexity is the number of training-samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1

B. How many training examples are needed for learner to converge to a successful hypothesis.

C. All of these

- PAC stands for

A. Probability Approximately Correct

B. Probability Applied Correctly

C. Partition Approximately Correct

- Which of the following will be true about k in k-NN in terms of variance

A. When you increase the k the variance will increase

B. When you decrease the k the variance will increase

C. Can't say

D. None of these

- Which of the following option is true about k-NN algorithm?

A. It can be used for classification

B. It can be used for regression

C. It can be used in both classification and regression (Answer)

- In k-NN it is very likely to overfit due to the curse of dimensionality. Which of the following option would you consider to handle such problem? 1). Dimensionality Reduction 2). Feature selection
- When you find noise in data which of the following option would you consider in k- NN

A. I will increase the value of k

B. I will decrease the value of k

C. Noise cannot be dependent on the value of k

- Which of the following will be true about k in k-NN in terms of Bias?

A. When you increase the k the bias will increase

B. When you decrease the k the bias will increase
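The k-NN questions above all rest on one rule: predict the most frequent class among the k closest training examples. A larger k averages over more neighbours (smoother boundary, higher bias, lower variance, more robust to noise); a smaller k does the opposite. The 1-D toy data below is an illustrative assumption.

```python
# k-NN sketch: majority vote among the k nearest training examples.
from collections import Counter

train = [(1.0, "a"), (1.5, "a"), (2.0, "a"),
         (8.0, "b"), (8.5, "b"), (9.0, "b")]

def knn_predict(x, k):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict(2.2, 3))  # "a": all 3 closest neighbours are class "a"
print(knn_predict(7.5, 3))  # "b"
```

Since training is just storing the data, k-NN is the lazy, instance-based learner referred to earlier; all the work happens at prediction time.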

- What is used to mitigate overfitting in a test set?

A. Overfitting set

B. Training set

C. Validation dataset

D. Evaluation set

- A radial basis function is a

A. Activation function

B. Weight

C. Learning rate

D. none

- Mistake Bound is
- How many training examples are needed for learner to converge to a successful hypothesis.
- How much computational effort is needed for a learner to converge to a successful hypothesis
- How many training examples will the learner misclassify before converging to a successful hypothesis
- All of the following are suitable problems for genetic algorithms EXCEPT

A. dynamic process control

B. pattern recognition with complex patterns

C. simulation of biological models

D. simple optimization with few variables

- Adding more basis functions in a linear model… (Pick the most probable option)

A. Decreases model bias

B. Decreases estimation bias

C. Decreases variance

D. Doesn't affect bias and variance

- Which of these are types of crossover

A. Single point

B. Two point

C. Uniform
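The three crossover types listed above can be demonstrated on bit strings. The cut points and mask below are fixed for reproducibility; in a real genetic algorithm they would be drawn at random.

```python
# Single-point, two-point, and uniform crossover on bit strings.

def single_point(p1, p2, cut):
    # Swap everything after one cut point.
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2, c1, c2):
    # Swap only the segment between two cut points.
    return (p1[:c1] + p2[c1:c2] + p1[c2:],
            p2[:c1] + p1[c1:c2] + p2[c2:])

def uniform(p1, p2, mask):
    # Mask bit "1" -> gene from p1, else from p2 (and vice versa).
    a = "".join(x if m == "1" else y for x, y, m in zip(p1, p2, mask))
    b = "".join(y if m == "1" else x for x, y, m in zip(p1, p2, mask))
    return a, b

p1, p2 = "111111", "000000"
print(single_point(p1, p2, 3))   # ('111000', '000111')
print(two_point(p1, p2, 2, 4))   # ('110011', '001100')
print(uniform(p1, p2, "101010")) # ('101010', '010101')
```

Each operator produces two new offspring by copying selected bits from each parent, which is the definition of crossover asked about earlier in the list.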

- A feature F1 can take certain values: A, B, C, D, E, & F and represents the grade of students from a college. Which of the following statements is true in this case?

A. Feature F1 is an example of a nominal variable

B. Feature F1 is an example of an ordinal variable

C. It doesn't belong to any of the above categories.

- You observe the following while fitting a linear regression to the data: As you increase the amount of training data, the test error decreases and the training error increases. The train error is quite low (almost what you expect it to), while the test error is much higher than the train error. What do you think is the main reason behind this behaviour? Choose the most probable option.

A. High variance

B. High model bias

C. High estimation bias

D. None of the above Answer

- Genetic algorithms are heuristic methods that do not guarantee an optimal solution to a problem
- Which of the following statements about regularization is not correct?

A. Using too large a value of lambda can cause your hypothesis to underfit the data

B. Using too large a value of lambda can cause your hypothesis to overfit the data

C. Using a very large value of lambda cannot hurt the performance of your hypothesis.

D. None of the above

- Consider the following: (a) Evolution (b) Selection (c) Reproduction (d) Mutation Which of the following are found in genetic algorithms?

B. a, b, c

C. a, b

D. b, d

- Genetic Algorithm are a part of

A. Evolutionary Computing

B. inspired by Darwin’s theory about evolution – “survival of the fittest”

C. are adaptive heuristic search algorithm based on the evolutionary ideas of natural selection and genetics

D. All of the above

- Genetic algorithms belong to the family of methods in the

A. artificial intelligence area

B. optimization

C. complete enumeration family of methods

D. Non-computer based (human) solutions area

- For a two player chess game, the environment encompasses the opponent

A. True

B. False

- Which among the following is not a necessary feature of a reinforcement learning solution to a learning problem?

A. exploration versus exploitation dilemma

B. trial and error approach to learning

C. learning based on rewards

D. representation of the problem as a Markov Decision Process

- Which of the following sentences is FALSE regarding reinforcement learning

A. It relates inputs to

B. It is used for

C. It may be used for

D. It discovers causal relationships.

- The EM algorithm is guaranteed to never decrease the value of its objective function on any iteration

A. TRUE

B. FALSE

- Consider the following modification to the tic-tac-toe game: at the end of game, a coin is tossed and the agent wins if a head appears regardless of whatever has happened in the game.Can reinforcement learning be used to learn an optimal policy of playing Tic-Tac-Toe in this case?

A. Yes

B. No

- Out of the two repeated steps in the EM algorithm, step 2 is _____

A. the maximization step

B. the minimization step

C. the optimization step

D. the normalization step
- Suppose the reinforcement learning player was greedy, that is, it always played the move that brought it to the position that it rated the best. Might it learn to play better, or worse, than a non greedy player?

A. Worse

B. Better

- A chess agent using Reinforcement Learning can be trained by playing against a copy of itself
- The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate of the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found in the E step.
- The expectation–maximization (EM) algorithm is an _____ algorithm

A. Iterative

B. Incremental

C. None
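The E-step/M-step alternation described in the questions above can be sketched concretely. Below is a minimal EM loop for a 1-D mixture of two unit-variance Gaussians; the data, initial values, and variable names are our own illustration, not from the source. Note how the log-likelihood never decreases between iterations, as the earlier true/false question states:

```python
import math
import random

# Synthetic data: two clusters, roughly around 0 and 5 (our own example).
random.seed(0)
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(5, 1) for _ in range(50)]

def loglik(xs, mu1, mu2, w):
    """Log-likelihood of a two-component, unit-variance Gaussian mixture."""
    total = 0.0
    for x in xs:
        p1 = w * math.exp(-0.5 * (x - mu1) ** 2) / math.sqrt(2 * math.pi)
        p2 = (1 - w) * math.exp(-0.5 * (x - mu2) ** 2) / math.sqrt(2 * math.pi)
        total += math.log(p1 + p2)
    return total

mu1, mu2, w = -1.0, 1.0, 0.5          # crude initial guesses
prev = loglik(data, mu1, mu2, w)
for _ in range(25):
    # E step: responsibility of component 1 for each point
    resp = []
    for x in data:
        p1 = w * math.exp(-0.5 * (x - mu1) ** 2)
        p2 = (1 - w) * math.exp(-0.5 * (x - mu2) ** 2)
        resp.append(p1 / (p1 + p2))
    # M step: re-estimate means and mixing weight from the responsibilities
    n1 = sum(resp)
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)
    w = n1 / len(data)
    cur = loglik(data, mu1, mu2, w)
    assert cur >= prev - 1e-9          # EM never decreases its objective
    prev = cur
```

After a few iterations the estimated means separate toward the two cluster centers, while the objective climbs monotonically.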

- Features that need to be identified for a Well-Posed Learning Problem:

A. Class of tasks

B. Performance measure

C. Training experience

D. All of these

- A computer program that learns to play checkers might improve its performance as:

A. Measured by its ability to win at the class of tasks involving playing checkers

B. Experience obtained by playing games against itself

C. Both a & b

- Learning symbolic representations of concepts is known as:

A. Artificial Intelligence

B. Machine Learning

- The field of study that gives computers the capability to learn without being explicitly programmed

A. Machine Learning

B. Artificial Intelligence

C. Deep Learning

D. Both a & b

- The autonomous acquisition of knowledge through the use of computer programs is called

C. Deep learning

- Learning that makes use of massive quantities of data is known as
- Different learning methods do not include

A. Memorization

B. Analogy

C. Deduction

D. Introduction

- Types of learning used in machine learning:

A. Supervised

B. Unsupervised

C. Reinforcement

- A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. This is known as a:

A. Supervised learning problem

B. Un Supervised learning problem

C. Well posed learning problem

A. Decision Tree

B. Regression

C. Classification

D. Random Forest

- How many types are available in machine learning?

A. 1

B. 2

C. 3

D. 4

- A model that learns based on the rewards it received for its previous actions is known as:

A. Supervised learning

B. Unsupervised learning

C. Reinforcement learning

D. Concept learning

- A subset of machine learning that involves systems that think and learn like humans using artificial neural networks.
- A learning method in which a training data contains a small amount of labeled data and a large amount of unlabeled data is known as

A. Supervised Learning

B. Semi Supervised Learning

C. Unsupervised Learning

D. Reinforcement Learning

- Methods used for the calibration in Supervised Learning

A. Platt Calibration

B. Isotonic Regression

C. All of these

D. None of above

- The basic design issues for designing a learning system include:

A. Choosing the Training Experience

B. Choosing the Target Function

C. Choosing a Function Approximation Algorithm

D. Estimating Training Values

E. All of these

- In Machine learning the module that must solve the given performance task is known as:

A. Critic

B. Generalizer

C. Performance system

- A learning method in which, to solve a particular computational problem, multiple models such as classifiers or experts are strategically generated and combined is known as

D. Reinforcement Learning

E. Ensemble learning

- In a learning system, the component that takes as input the current hypothesis (currently learned function) and outputs a new problem for the Performance System to explore is known as:

D. Experiment generator

- Learning method that is used to improve the classification, prediction, function approximation etc of a model
- In a learning system the component that takes as input the history or trace of the game and produces as output a set of training examples of the target function is known as:
- The most common issue when using ML is

A. Lack of skilled resources

B. Inadequate Infrastructure

C. Poor Data Quality

- How do you ensure that your model is not overfitting?

A. Cross validation

B. Regularization

- A way to ensemble multiple classification or regression models

A. Stacking

B. Bagging

C. Blending

D. Boosting

- How well a model is going to generalize to a new environment is known as

A. Data Quality

B. Transparent

C. Implementation

- Common classes of problems in machine learning include

B. Clustering

C. Regression

- Cost complexity pruning algorithm is used in?

A. CART

B. C4.5

C. ID3

D. All of these

- Which one of these is not a tree based learner?

D. Bayesian Classifier

- Which one of these is a tree based learner?

A. Rule based

B. Bayesian Belief Network

C. Bayesian classifier

- What is the approach of basic algorithm for decision tree induction?

A. Greedy

B. Top Down

C. Procedural

D. Step by Step

- Which of the following classifications would best suit the student performance classification systems?

A. If-then analysis

B. Market-basket analysis

C. Regression analysis

D. Cluster analysis

- What are the two steps of tree pruning?

A. Pessimistic pruning and Optimistic pruning

B. Post pruning and Pre pruning

C. Cost complexity pruning and time complexity pruning

- How will you counter over-fitting in decision tree?

A. By pruning the longer rules

B. By creating new rules

C. Both 'By pruning the longer rules' and 'By creating new rules'

D. None of these

- Which of the following sentences are true?

A. In pre-pruning a tree is ‘pruned’ by halting its construction early

B. A pruning set of class labeled tuples is used to estimate cost

C. The best pruned tree is the one that minimizes the number of encoding bits

A. Factor analysis

B. Decision trees are robust to outliers

C. Decision trees are prone to be over fit

- In which of the following scenario a gain ratio is preferred over Information Gain?

A. When a categorical variable has a very large number of categories

B. When a categorical variable has a very small number of categories

C. Number of categories is not the reason

- Major pruning techniques used in decision tree are

A. Minimum error

B. Smallest tree

- What does the central limit theorem state?

A. If the sample size increases, the sampling distribution must approach a normal distribution

B. If the sample size decreases, the sampling distribution must approach a normal distribution

C. If the sample size increases, the sampling distribution must approach an exponential distribution

D. If the sample size decreases, the sampling distribution must approach an exponential distribution

- The difference between the sample value expected and the estimated value of the parameter is called?

A. Bias

B. Error

C. Contradiction

D. Difference

A. Quota sampling

B. Convenience sampling

C. Purposive sampling

D. Judgment sampling

- Which of the following is a subset of population?

A. Distribution

B. Sample

C. Data

D. Set

- The sampling error is defined as?

A. Difference between population and parameter

B. Difference between sample and parameter

C. Difference between population and sample

D. Difference between parameter and sample

- Machine learning is interested in the best hypothesis h from some space H, given observed training data D. Here best hypothesis means

A. Most general hypothesis

B. Most probable hypothesis

C. Most specific hypothesis

- Practical difficulties with Bayesian Learning :

A. Initial knowledge of many probabilities is required

B. No consistent hypothesis

C. Hypotheses make probabilistic predictions

- Bayes’ theorem states that the relationship between the probability of the hypothesis before getting the evidence P(H) and the probability of the hypothesis after getting the evidence P(H ∣ E) is
- [P(E ∣ H)P(H)] / P(E)
- [P(E ∣ H) P(E) ] / P(H)
- [P(E) P(H) ] / P(E ∣ H)
- A doctor knows that Cold causes fever 50% of the time. Prior probability of any patient having cold is 1/50,000. Prior probability of any patient having fever is 1/20. If a patient has fever, what is the probability he/she has cold?
- P(C/F)= 0.0003
- P(C/F)=0.0004
- P(C/F)= 0.0002
- P(C/F)=0.0045
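The arithmetic behind the fever/cold question can be checked directly with Bayes' theorem, P(C|F) = P(F|C)P(C)/P(F); a quick sketch, with the probability values taken from the question above:

```python
# Bayes' theorem applied to the fever/cold question:
# P(Cold | Fever) = P(Fever | Cold) * P(Cold) / P(Fever)
p_fever_given_cold = 0.5      # cold causes fever 50% of the time
p_cold = 1 / 50_000           # prior probability of cold
p_fever = 1 / 20              # prior probability of fever

p_cold_given_fever = p_fever_given_cold * p_cold / p_fever
print(round(p_cold_given_fever, 4))  # 0.0002
```

So the correct option is P(C/F) = 0.0002.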
- Which of the following will be true about k in K-Nearest Neighbor in terms of Bias?
- When you find noise in data which of the following option would you consider in K- Nearest Neighbor?

C. Noise cannot be dependent on value of k

- In K-Nearest Neighbor it is very likely to overfit due to the curse of dimensionality. Which of the following option would you consider to handle such problem?
A. Dimensionality Reduction

B. Feature selection

C. Both A and B

- Radial basis functions are closely related to distance-weighted regression, but they are

A. lazy learning

B. eager learning

C. concept learning

D. none of these

- Radial basis function networks provide a global approximation to the target function, represented by _____ of many local kernel functions.

A. a series combination

B. a linear combination

C. a parallel combination

D. a non linear combination

- The most significant phase in a genetic algorithm is

A. Crossover

B. Mutation

C. Selection

D. Fitness function

- The crossover operator produces two new offspring from

A. Two parent strings, by copying selected bits from each parent

B. One parent string, by copying selected bits from the selected parent

C. Two parent strings, by copying selected bits from one parent

- The evolution over time of the population within a GA can be mathematically characterized based on the concept of

A. Schema

B. Crossover

C. Don't care

- In genetic algorithm process of selecting parents which mate and recombine to create off-springs for the next generation is known as:

A. Tournament selection

B. Rank selection

C. Fitness sharing

D. Parent selection

- Crossover operations are performed in genetic programming by replacing

A. Randomly chosen sub tree of one parent program by a sub tree from the other parent program.

B. Randomly chosen root node tree of one parent program by a sub tree from the other parent program

C. Randomly chosen root node tree of one parent program by a root node tree from the other parent program

## Reinforcement learning

Reinforcement learning is an area of Machine Learning. It is about taking suitable actions to maximize reward in a particular situation. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation. Reinforcement learning differs from supervised learning: in supervised learning the training data carries the answer key, so the model is trained with the correct answers, whereas in reinforcement learning there is no answer key and the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its own experience.

Reinforcement Learning (RL) is the science of decision making. It is about learning the optimal behavior in an environment so as to obtain maximum reward. In RL, data is accumulated by the learning system itself through trial and error; it is not supplied up front as part of the input, as it would be in supervised or unsupervised machine learning.

Reinforcement learning uses algorithms that learn from outcomes and decide which action to take next. After each action, the algorithm receives feedback that helps it determine whether the choice it made was correct, neutral or incorrect. It is a good technique to use for automated systems that have to make a lot of small decisions without human guidance.

Reinforcement learning is an autonomous, self-teaching system that essentially learns by trial and error. It performs actions with the aim of maximizing rewards, or in other words, it is learning by doing in order to achieve the best outcomes.

Example:

The problem is as follows: we have an agent and a reward, with many hurdles in between. The agent is supposed to find the best possible path to reach the reward. The example below illustrates this more concretely.

The above image shows a robot, a diamond, and fire. The goal of the robot is to reach the reward, the diamond, while avoiding the hurdles, the fire. The robot learns by trying all the possible paths and then choosing the path that reaches the reward with the fewest hurdles. Each right step gives the robot a reward, and each wrong step subtracts from the robot's reward. The total reward is calculated when it reaches the final reward, the diamond. Main points in Reinforcement learning –

- Input: The input should be an initial state from which the model will start.
- Output: There are many possible outputs, as there are a variety of solutions to a particular problem.
- Training: The training is based upon the input. The model returns a state, and the user decides whether to reward or punish the model based on its output.
- The model continues to learn.
- The best solution is decided based on the maximum reward.
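The robot-and-diamond setup described above can be sketched as a tiny grid environment. The layout, reward values, and names below are our own illustration of the idea (right steps earn reward, wrong steps lose it), not code from the article:

```python
# Toy version of the robot/diamond/fire example (layout is our own).
# 'S' start, 'D' diamond (+10), 'F' fire (-10), '.' ordinary step (-1).
GRID = [
    "S..",
    ".F.",
    "..D",
]
REWARDS = {"D": 10, "F": -10, ".": -1, "S": -1}

def run_path(path, start=(0, 0)):
    """Follow moves ('U','D','L','R') from start and accumulate total reward.

    Moves are assumed to stay on the grid; the episode ends at the
    diamond or the fire.
    """
    moves = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
    r, c = start
    total = 0
    for m in path:
        dr, dc = moves[m]
        r, c = r + dr, c + dc
        cell = GRID[r][c]
        total += REWARDS[cell]
        if cell in "DF":
            break
    return total

print(run_path("RRDD"))  # right, right, down, down -> diamond, total 7
print(run_path("DR"))    # down, right -> fire, total -11
```

Comparing total rewards over candidate paths is exactly how the robot "chooses the path that reaches the reward with the fewest hurdles."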

Difference between Reinforcement learning and Supervised learning:

Types of Reinforcement:

There are two types of Reinforcement: positive and negative.

Positive reinforcement strengthens a behavior by adding a stimulus when the behavior occurs. Its advantages are that it maximizes performance and sustains change for a long period of time; its drawback is that too much reinforcement can lead to an overload of states, which can diminish the results.

Negative reinforcement strengthens a behavior by stopping or avoiding a negative condition. Its advantages are that it increases behavior and provides defiance to a minimum standard of performance; its drawback is that it only provides enough to meet the minimum behavior.

## Elements of Reinforcement Learning

Reinforcement learning elements are as follows:

- Policy
- Reward function
- Value function
- Model of the environment

Policy: Policy defines the learning agent behavior for given time period. It is a mapping from perceived states of the environment to actions to be taken when in those states.

Reward function: A reward function is used to define the goal in a reinforcement learning problem. It provides a numerical score based on the state of the environment.

Value function: Value functions specify what is good in the long run. The value of a state is the total amount of reward an agent can expect to accumulate over the future, starting from that state.

Model of the environment: Models are used for planning.
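The value function described above can be made concrete: for a known sequence of future rewards, the value is their discounted sum. The discount factor gamma below is our own added parameter, not something defined in the article:

```python
def discounted_return(rewards, gamma=0.9):
    """Discounted sum G = r0 + gamma*r1 + gamma^2*r2 + ...

    This is the quantity a value function estimates in expectation,
    starting from a given state.
    """
    g = 0.0
    for r in reversed(rewards):   # fold from the last reward backwards
        g = r + gamma * g
    return g

# Three ordinary steps (-1 each) followed by reaching a +10 goal:
print(discounted_return([-1, -1, -1, 10]))
```

Because gamma < 1, rewards received sooner count for more, which is why a state close to the goal has a higher value than one far from it.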

Credit assignment problem: Reinforcement learning algorithms learn to generate an internal value for the intermediate states reflecting how good they are at leading to the goal. The learning decision maker is called the agent. The agent interacts with the environment, which includes everything outside the agent.

The agent has sensors to decide on its state in the environment and takes action that modifies its state.

The reinforcement learning problem is modeled as an agent continuously interacting with an environment. The agent and the environment interact in a sequence of time steps. At each time step t, the agent receives the state of the environment and a scalar numerical reward for the previous action, and then selects an action.

Reinforcement learning is a technique for solving Markov decision problems.

Reinforcement learning uses a formal framework defining the interaction between a learning agent and its environment in terms of states, actions, and rewards. This framework is intended to be a simple way of representing essential features of the artificial intelligence problem.

Various Practical Applications of Reinforcement Learning –

- RL can be used in robotics for industrial automation.
- RL can be used in machine learning and data processing
- RL can be used to create training systems that provide custom instruction and materials according to the requirement of students.

Applications of Reinforcement Learning

1. Robotics: Robots with pre-programmed behavior are useful in structured environments, such as the assembly line of an automobile manufacturing plant, where the task is repetitive in nature.

2. A master chess player makes a move. The choice is informed by planning: anticipating possible replies and counter-replies.

3. An adaptive controller adjusts parameters of a petroleum refinery’s operation in real time.

RL can be used in large environments in the following situations:

- A model of the environment is known, but an analytic solution is not available;
- Only a simulation model of the environment is given (the subject of simulation-based optimization)
- The only way to collect information about the environment is to interact with it.

## Advantages and Disadvantages of Reinforcement Learning

Advantages of Reinforcement learning

1. Reinforcement learning can be used to solve very complex problems that cannot be solved by conventional techniques.

2. The model can correct the errors that occurred during the training process.

3. In RL, training data is obtained via the direct interaction of the agent with the environment

4. Reinforcement learning can handle environments that are non-deterministic, meaning that the outcomes of actions are not always predictable. This is useful in real-world applications where the environment may change over time or is uncertain.

5. Reinforcement learning can be used to solve a wide range of problems, including those that involve decision making, control, and optimization.

6. Reinforcement learning is a flexible approach that can be combined with other machine learning techniques, such as deep learning, to improve performance.

Disadvantages of Reinforcement learning

1. Reinforcement learning is not preferable to use for solving simple problems.

2. Reinforcement learning needs a lot of data and a lot of computation

3. Reinforcement learning is highly dependent on the quality of the reward function. If the reward function is poorly designed, the agent may not learn the desired behavior.

4. Reinforcement learning can be difficult to debug and interpret. It is not always clear why the agent is behaving in a certain way, which can make it difficult to diagnose and fix problems.

## Implementation:
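The page's original implementation is not reproduced here. As a stand-in, the sketch below shows minimal tabular Q-learning on a five-state chain; the environment, constants, and all names are our own assumptions, not the article's code:

```python
import random

# Minimal tabular Q-learning sketch (our own illustration).
# Environment: states 0..4 on a line; action 0 moves left, action 1
# moves right; reaching state 4 gives +10, every other step costs -1.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 10 if nxt == GOAL else -1
    return nxt, reward, nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(500):                       # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection (explore vs. exploit)
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        nxt, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# Greedy policy after training: 1 means "move right" in that state.
policy = [Q[s].index(max(Q[s])) for s in range(N_STATES)]
print(policy)
```

After enough episodes the greedy policy should move right in every non-terminal state, i.e. straight toward the goal; this is the trial-and-error, reward-driven learning the article describes.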


## Reinforcement Learning Applications

- RL is used in Robot navigation, Robo-soccer, walking, juggling , etc.
- RL can be used for adaptive control, such as factory processes, admission control in telecommunications, and helicopter piloting.
- RL can be used in Game playing such as tic-tac-toe, chess, etc.
- RL can be used for optimizing the chemical reactions.
- RL is now used for business strategy planning.
- In various automobile manufacturing companies, the robots use deep reinforcement learning to pick goods and put them in some containers.
- The RL is currently used in the finance sector for evaluating trading strategies.

## Conclusion:

From the above discussion, we can say that Reinforcement Learning is one of the most interesting and useful parts of Machine Learning. In RL, the agent explores the environment without any human intervention. It is one of the main learning paradigms used in Artificial Intelligence. However, there are cases where it should not be used: if you have enough data to solve the problem, other ML algorithms can be used more efficiently. The main issue with RL algorithms is that some factors, such as delayed feedback, may affect the speed of learning.

