What you'll learn
- Fundamentals of Machine learning.
- Data exploration, data preprocessing, handling missing values.
- Feature engineering and exploratory data analysis.
- Data visualization techniques.
- Descriptive and inferential statistics.
- Cross-validation techniques.
- Model selection, model training, model evaluation and model prediction.
- Supervised learning, Unsupervised learning and Reinforcement learning.
- Regression, Classification, Clustering, Association rules.
- Linear regression, Logistic regression, Support vector machines, Naïve Bayes algorithm, Decision trees, Random forests, K-nearest neighbors and others.
- Ensemble learning – bagging and boosting.
- K-means, DBSCAN, Hierarchical clustering.
- Content based filtering and Collaborative filtering.
- Recommendation system and its working process.
- AdaBoost, XGBoost, CatBoost, Gradient boosting, etc.
- Deep learning and Neural networks.
- Perceptron, Artificial neural networks, Feed forward neural network.
- Back-propagation algorithm.
- Weights, biases and the bias-variance tradeoff.
- Overfitting and underfitting.
- Activation functions, optimizers and loss / cost functions.
- Epochs, steps per epoch, batch size, validation steps, learning rate, etc.
- Project management, development and deployment.
- Web scraping techniques.
- API development using the FastAPI framework.
- Working with scikit-learn, TensorFlow, Pandas, NumPy, Matplotlib, Seaborn and Plotly.
- Hands-on experience with real-world projects.
- Machine learning interview questions.
- Machine learning mock interview preparation.
- Help with resume creation.
Requirements
- Bring your own laptop with a decent configuration.
- Knowledge of the Python programming language.
Course content
- Course overview
- Course outcome
- Installing Anaconda and Jupyter Notebook
- Working with environments
- Introduction to Machine learning
- Importance of Machine learning
- Industrial applications of Machine learning
- Problem statement analysis
- Numerical and categorical variables
- Types of Machine learning
- Machine learning pipeline
- Various data exploration methods
- Methods for handling missing values
- Working with the Pandas, NumPy and scikit-learn libraries
- One-hot encoding and label encoding (see the preprocessing sketch below)
- Data normalization, data standardization and quantiles
- Handling outliers
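
A minimal sketch of the preprocessing steps listed above, assuming a toy pandas DataFrame with made-up columns (age, salary, city): missing values are imputed, the categorical column is one-hot encoded, and the numeric columns are standardized with scikit-learn.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy dataset with a missing value and a categorical column (hypothetical data).
df = pd.DataFrame({
    "age": [25, 32, None, 41],
    "salary": [40000, 52000, 61000, 58000],
    "city": ["Delhi", "Mumbai", "Delhi", "Pune"],
})

numeric_cols = ["age", "salary"]
categorical_cols = ["city"]

# Numeric columns: fill missing values with the median, then standardize.
numeric_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])

# Categorical columns: one-hot encode.
preprocess = ColumnTransformer([
    ("num", numeric_pipe, numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

X = preprocess.fit_transform(df)
print(X.shape)  # (4, 5) -> 2 scaled numeric columns + 3 one-hot city columns
```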
- Descriptive statistics: mean, mode, median, standard deviation, variance, etc.
- Data distributions, skewness and kurtosis
- Inferential statistics: feature selection techniques, statistical tests, hypothesis testing
- Dimensionality reduction techniques: PCA, LDA, etc. (see the sketch below)
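
A short sketch of the statistics and dimensionality reduction items above, run on randomly generated data: pandas for summary statistics, skewness and kurtosis, and scikit-learn's PCA to reduce four features to two components.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["f1", "f2", "f3", "f4"])

# Descriptive statistics: mean, std, quartiles, etc.
print(df.describe())

# Shape of each distribution.
print(df.skew())   # skewness per column
print(df.kurt())   # (excess) kurtosis per column

# PCA: project the 4 features onto the 2 directions of highest variance.
pca = PCA(n_components=2)
reduced = pca.fit_transform(df)
print(reduced.shape)                  # (200, 2)
print(pca.explained_variance_ratio_)  # variance captured by each component
```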
- Grid search and random search
- Cross-validation techniques (see the sketch below)
- Group by and pivot tables
- Performing exploratory data analysis (EDA)
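
A minimal sketch of cross-validation and grid search using scikit-learn's built-in Iris dataset; the hyperparameter grid is just an example.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation of a single model.
scores = cross_val_score(SVC(), X, y, cv=5)
print(scores.mean())

# Grid search: try every combination of these hyperparameters with 5-fold CV.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```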
- Project 1
- Project 2
- Introduction to Data visualization
- Importance of Data visualization
- Explanation of various graphs and charts
- Practicals with the Matplotlib, Seaborn and Plotly libraries (see the sketch below)
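
A small sketch of two common charts with Matplotlib and Seaborn, using randomly generated data.

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

rng = np.random.default_rng(42)
x = rng.normal(size=500)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Histogram with a KDE curve (Seaborn).
sns.histplot(x, kde=True, ax=axes[0])
axes[0].set_title("Distribution")

# Scatter plot (Matplotlib).
y = 2 * x + rng.normal(scale=0.5, size=500)
axes[1].scatter(x, y, s=10)
axes[1].set_title("Relationship between x and y")

plt.tight_layout()
plt.show()
```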
- Project 1
- Project 2
- Getting started with Supervised learning
- Types of Supervised learning
- Explanation of the Regression technique
- Algorithms for Regression
- Model evaluation methods for Regression (see the sketch below)
- Use cases of Regression
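
A minimal regression sketch on synthetic data: fit a linear regression model and evaluate it with MSE and R².

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic regression data for illustration.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Common regression evaluation metrics.
print("MSE:", mean_squared_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))
```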
- Explanation of the Classification technique
- Algorithms for Classification
- Model evaluation methods for Classification (see the sketch below)
- Use cases of Classification
- Linear regression
- Logistic regression
- Support vector machines
- Naïve Bayes algorithm
- Decision trees
- Random forests
- K-nearest neighbors and others
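
A minimal classification sketch on scikit-learn's breast cancer dataset: train a random forest and report accuracy, the confusion matrix and a classification report.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Common classification evaluation metrics.
print("Accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
```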
- Project 1
- Project 2
- Project 3
- Introduction to Ensemble learning
- Types of Ensemble learning
- Workflow of Ensemble learning
- Explanation of the Bagging technique
- Algorithms for Bagging
- Explanation of the Boosting technique
- Types of Boosting
- Algorithms for Boosting
- AdaBoost
- XGBoost
- CatBoost
- Gradient boosting and others (see the sketch below)
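
A short sketch comparing bagging and boosting with scikit-learn's built-in ensembles; XGBoost and CatBoost are separate libraries and are left out of this sketch.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    # Bagging: many trees trained on bootstrap samples, predictions averaged.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    # Boosting: models trained sequentially, each correcting the previous ones' errors.
    "adaboost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```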
- Project 1
- Project 2
- Getting started with Unsupervised learning
- Types of Unsupervised learning
- Explanation of the Clustering technique
- Algorithms for Clustering
- Model evaluation methods for Clustering
- Use cases of Clustering
- K-means, DBSCAN and Hierarchical clustering (see the sketch below)
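
A minimal clustering sketch on synthetic blob data: K-means evaluated with the silhouette score, plus DBSCAN with an eps value chosen for this toy data.

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with 3 well-separated groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# K-means: partition the points into k clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print("k-means silhouette:", silhouette_score(X, labels))

# DBSCAN: density-based clustering, no k required (eps tuned for this toy data).
db_labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
print("DBSCAN clusters found:", len(set(db_labels) - {-1}))
```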
- Explanation of the Association rules technique
- Algorithms for Association rules
- Model evaluation methods for Association rules
- Use cases of Association rules
- Content-based filtering and Collaborative filtering
- Recommendation systems and how they work (see the sketch below)
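
A toy sketch of user-based collaborative filtering, assuming a small hypothetical user-item rating matrix: similar users are found with cosine similarity and an unrated item from the closest user is recommended.

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical user-item rating matrix (0 = not rated).
ratings = pd.DataFrame(
    [[5, 4, 0, 1],
     [4, 5, 3, 0],
     [0, 1, 5, 4],
     [1, 0, 4, 5]],
    index=["u1", "u2", "u3", "u4"],
    columns=["item_a", "item_b", "item_c", "item_d"],
)

# User-based collaborative filtering: find users with similar rating patterns.
similarity = cosine_similarity(ratings)
sim_df = pd.DataFrame(similarity, index=ratings.index, columns=ratings.index)

# Recommend to u1 an item its most similar user rated but u1 has not rated yet.
most_similar = sim_df["u1"].drop("u1").idxmax()
unrated = ratings.columns[ratings.loc["u1"] == 0]
recommendation = ratings.loc[most_similar, unrated].idxmax()
print(most_similar, "->", recommendation)
```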
- Project 1
- Project 2
- Introduction to Reinforcement learning
- How Reinforcement learning works
- Algorithms for Reinforcement learning
- Applications of Reinforcement learning
- Practical (see the Q-learning sketch below)
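
A compact sketch of tabular Q-learning, one classic reinforcement learning algorithm, on a made-up five-state corridor environment.

```python
import numpy as np

# Tabular Q-learning on a toy 1-D world: states 0..4, start in state 0,
# reward 1 for reaching state 4. Actions: 0 = move left, 1 = move right.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)
Q = 0.01 * rng.random((n_states, n_actions))   # tiny random init breaks ties
alpha, gamma, epsilon = 0.1, 0.9, 0.2          # learning rate, discount, exploration

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(300):
    state, done = 0, False
    for _ in range(200):  # cap episode length
        # Epsilon-greedy action selection.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update rule.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
        if done:
            break

print(np.argmax(Q, axis=1))  # greedy policy: should prefer action 1 ("right") in states 0-3
```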
- Introduction to Deep learning
- Importance of Deep learning
- Explanation of Neural networks
- Types of Neural networks
- Architecture of Neural networks
- Workflow of Neural networks
- Feed-forward propagation
- Back-propagation
- Weights and biases
- Weight and bias initialization techniques
- Handling overfitting and underfitting
- Regularization and dropout
- Batch normalization
- Explanation of activation functions
- Various types of activation functions
- Explanation of loss / cost functions
- Various types of loss / cost functions
- Explanation of optimizers
- Various types of optimizers
- Hyperparameters: epochs, steps per epoch, batch size, validation steps, learning rate, etc.
- Working with the TensorFlow library
- Building a custom artificial neural network (see the sketch below)
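
A minimal sketch of a custom feed-forward network in TensorFlow/Keras, trained on scikit-learn's breast cancer dataset; the layer sizes, dropout rate, epochs and batch size are arbitrary choices for illustration.

```python
import tensorflow as tf
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)

# A small feed-forward network: two hidden layers with ReLU, dropout for
# regularization, and a sigmoid output for binary classification.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The optimizer, loss function and metric are set at compile time.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Epochs, batch size and validation data are the hyperparameters listed above.
history = model.fit(X_train, y_train, epochs=20, batch_size=32,
                    validation_data=(X_val, y_val), verbose=0)
print(history.history["val_accuracy"][-1])
```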
- Project 1
- Project 2
- Introduction to Computer vision
- Introduction to Natural Language Processing
- Web scraping techniques (see the sketch below)
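
A minimal web scraping sketch with requests and BeautifulSoup; example.com is a placeholder URL.

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page and pull out its title and all link targets.
# example.com is a placeholder; always check a site's robots.txt / terms first.
response = requests.get("https://example.com", timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
print(soup.title.string)          # page title
for link in soup.find_all("a"):
    print(link.get("href"))       # every hyperlink on the page
```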
- FastAPI development (see the sketch below)
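
A minimal FastAPI sketch with a health check and a hypothetical /predict endpoint; the input schema and scoring rule are made up for illustration.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    # Hypothetical input schema for a prediction endpoint.
    age: float
    salary: float

@app.get("/")
def health():
    return {"status": "ok"}

@app.post("/predict")
def predict(features: Features):
    # In a real service, a trained model loaded at startup would be called here.
    score = 1.0 if features.salary > 50000 else 0.0
    return {"prediction": score}

# Run with: uvicorn main:app --reload  (assuming this file is named main.py)
```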
- GitHub management
- Project deployment
- Level up your Kaggle profile
Download The File
- file name
- file name