Implementation of Machine Learning Algorithm from Scratch

Learn Machine Learning from basic to advanced and develop Machine Learning models from scratch in Python

1. WHAT YOU WILL LEARN

  • Obtain a solid understanding of machine learning in general, from basic to advanced
  • Complete tutorials on basic packages like NumPy and Pandas
  • Data preprocessing and data visualization
  • Understand Machine Learning and how to apply it in your own programs
  • Understand the concepts behind the algorithms
  • Know how to optimize the hyperparameters of your models
  • Learn how to develop models based on the requirements of your future business
  • Improve your potential for a new job in the future

2. DESCRIPTION

Are you interested in Data Science and Machine Learning but don’t have any background and find the concepts confusing?
Are you interested in programming in Python but have always been afraid of coding?

😊I think this repo is for you!😊

Even if you are already familiar with machine learning, this repo can help you review all the techniques and understand the concept behind each term. The repo is completely categorized, and I don’t start from the middle! I start with the concept behind every term and then implement it in Python step by step. The structure of the repo is as follows:

3. WHO THIS REPO IS FOR:

  • Anyone from any background who is interested in Data Science and Machine Learning and has at least high school (+2) knowledge of mathematics
  • Beginners, intermediate, and even advanced students in the fields of Artificial Intelligence (AI), Data Science (DS), and Machine Learning (ML)
  • College students looking to secure their future jobs
  • Students who want to excel in their Final Year Project by learning Machine Learning
  • Anyone who is afraid of coding in Python but is interested in Machine Learning concepts
  • Anyone who wants to create new knowledge from different datasets using machine learning
  • Students who want to apply machine learning models in their projects

4. CONTENTS

Useful Resources

| Title | Repository |
| --- | --- |
| USEFUL GIT COMMANDS FOR EVERYDAY USE | 🔗 |
| MOST USEFUL LINUX COMMANDS EVERYONE SHOULD KNOW | 🔗 |
| AWESOME ML TOOLBOX | 🔗 |

Installation

| Title | Repository |
| --- | --- |
| INSTALL THE ANACONDA PYTHON ON WINDOWS AND LINUX | 🔗 |

Reality vs Expectation

| Title | Repository |
| --- | --- |
| IS AI OVERHYPED? REALITY VS EXPECTATION | 🔗 |

Machine Learning from Beginner to Advanced

| Title | Repository |
| --- | --- |
| HISTORY OF MATHEMATICS, AI & ML - HISTORY & MOTIVATION | 🔗 |
| INTRODUCTION TO ARTIFICIAL INTELLIGENCE & MACHINE LEARNING | 🔗 |
| KEY TERMS USED IN MACHINE LEARNING | 🔗 |
| PERFORMANCE METRICS IN MACHINE LEARNING CLASSIFICATION MODEL | 🔗 |
| PERFORMANCE METRICS IN MACHINE LEARNING REGRESSION MODEL | 🔗 |
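
The classification-metrics article above works with accuracy, precision, recall, and F1 score. As a quick illustration of how those quantities follow from the confusion-matrix counts, here is a minimal NumPy sketch for the binary case; the function name and example labels are my own and are not taken from the repository's notebooks:

```python
import numpy as np

def binary_classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Toy example: 5 predictions against 5 true labels
print(binary_classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```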

Scratch Implementation

| Title | Repository |
| --- | --- |
| LINEAR REGRESSION FROM SCRATCH | 🔗 |
| LOGISTIC REGRESSION FROM SCRATCH | 🔗 |
| NAIVE BAYES FROM SCRATCH | 🔗 |
| DECISION TREE FROM SCRATCH | 🔗 |
| RANDOM FOREST FROM SCRATCH | 🔗 |
| K NEAREST NEIGHBOUR | 🔗 |
| NAIVE BAYES | 🔗 |
| K MEANS CLUSTERING | 🔗 |
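
To give a feel for what the scratch implementations involve, here is a minimal linear regression trained with batch gradient descent in NumPy. It is a simplified sketch for illustration only; the class name and hyperparameter defaults are placeholders, and the repository's own implementation may differ:

```python
import numpy as np

class LinearRegressionScratch:
    """Linear regression fit by batch gradient descent on mean squared error."""

    def __init__(self, learning_rate=0.01, n_iters=1000):
        self.learning_rate = learning_rate
        self.n_iters = n_iters
        self.weights = None
        self.bias = 0.0

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.weights = np.zeros(n_features)
        for _ in range(self.n_iters):
            y_pred = X @ self.weights + self.bias
            # Gradients of MSE with respect to the weights and the bias
            dw = (2 / n_samples) * X.T @ (y_pred - y)
            db = (2 / n_samples) * np.sum(y_pred - y)
            self.weights -= self.learning_rate * dw
            self.bias -= self.learning_rate * db
        return self

    def predict(self, X):
        return X @ self.weights + self.bias


# Usage on synthetic data generated from y = 3x + 2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.5, size=100)
model = LinearRegressionScratch(learning_rate=0.01, n_iters=2000).fit(X, y)
print(model.weights, model.bias)
```

On the synthetic data above, the learned weight and bias should land close to 3 and 2.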

Mathematical Implementation

| Title | Repository |
| --- | --- |
| CONFUSION MATRIX FOR YOUR MULTI-CLASS ML MODEL | 🔗 |
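
The linked article builds a confusion matrix for a multi-class model and reads per-class precision and recall off it. As a hedged sketch (the helper function and example labels are mine, not the article's), the matrix can be assembled with plain NumPy by counting (true, predicted) label pairs:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 2, 2, 2, 1, 0, 1]
cm = confusion_matrix(y_true, y_pred, n_classes=3)
print(cm)

# Per-class precision = diagonal / column sums, recall = diagonal / row sums
print(np.diag(cm) / cm.sum(axis=0), np.diag(cm) / cm.sum(axis=1))
```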

Machine Learning Interview Questions with Answers

| Title | Repository |
| --- | --- |
| 50 QUESTIONS ON STATISTICS & MACHINE LEARNING – CAN YOU ANSWER? | 🔗 |

Essential Machine Learning Formulas

| Title | Repository |
| --- | --- |
| MOSTLY USED MACHINE LEARNING FORMULAS | 🔗 |
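
To give a flavour of what the linked cheat sheet covers, a few of the most commonly used formulas (standard textbook definitions, not copied from the linked post) are:

```latex
% Mean squared error (regression)
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2

% Sigmoid function (logistic regression)
\sigma(z) = \frac{1}{1 + e^{-z}}

% Binary cross-entropy / log loss (classification)
\mathcal{L} = -\frac{1}{n}\sum_{i=1}^{n}\left[\,y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right)\right]
```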

Practice Guide for Data Science Learning

| Title | Repository |
| --- | --- |
| Research Guide for FYP | 🔗 |
| The Intermediate Guide to 180 Days Data Science Learning Plan | 🔗 |

Algorithm Pros and Cons

  • K Nearest Neighbors (a minimal sketch follows this list)
    ✔ Simple, No training phase, No assumptions about the data, Easy to implement, New data can be added seamlessly, Only one main hyperparameter
    ✖ Doesn't work well in high dimensions, Sensitive to noisy data, missing values, and outliers, Doesn't work well with large datasets because the cost of calculating distances is high, Needs feature scaling, Doesn't work well on imbalanced data, Doesn't deal well with missing values

  • Decision Tree
    ✔ Doesn't require standardization or normalization, Easy to implement, Can handle missing values, Automatic feature selection
    ✖ High variance, Higher training time, Can become complex, Can easily overfit

  • Random Forest
    ✔ Out-of-bag (left-out) data can be used for testing, High accuracy, Provides feature importance estimates, Can handle missing values, Doesn't require feature scaling, Good performance on imbalanced datasets, Can handle large datasets, Outliers have little impact, Less overfitting
    ✖ Less interpretable, More computational resources, Prediction time high

  • Linear Regression
    ✔ Simple, Interpretable, Easy to Implement
    ✖ Assumes linear relationship between features, Sensitive to outliers

  • Logistic Regression
    ✔ Doesn’t assume linear relationship between independent and dependent variables, Output can be interpreted as probability, Robust to noise
    ✖ Requires more data, Only effective when classes are linearly separable

  • Lasso Regression (L1)
    ✔ Prevents overfitting, Selects features by shrinking coefficients to zero
    ✖ Selected features will be biased, Prediction can be worse than Ridge

  • Ridge Regression (L2)
    ✔ Prevents overfitting
    ✖ Increases bias, Less interpretability

  • AdaBoost
    ✔ Fast, Reduced bias, Little need to tune
    ✖ Vulnerable to noise, Can overfit

  • Gradient Boosting
    ✔ Good performance
    ✖ Harder to tune hyperparameters

  • XGBoost
    ✔ Less feature engineering required, Outliers have little impact, Can output feature importance, Handles large datasets, Good model performance, Less prone to overfitting
    ✖ Difficult to interpret, Harder to tune as there are numerous hyperparameters

  • SVM
    ✔ Performs well in higher dimensions, Excellent when classes are separable, Outliers have less impact
    ✖ Slow, Poor performance with overlapping classes, Selecting appropriate kernel functions can be tricky

  • Naïve Bayes
    ✔ Fast, Simple, Requires less training data, Scalable, Insensitive to irrelevant features, Good performance with high-dimensional data
    ✖ Assumes independence of features

  • Deep Learning
    ✔ Superb performance with unstructured data (images, video, audio, text)
    ✖ (Very) long training time, Many hyperparameters, Prone to overfitting
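
As the K Nearest Neighbors entry above notes, the algorithm has no real training phase and essentially one hyperparameter, but it pays for that at prediction time by computing a distance to every stored sample. A minimal sketch, with class and parameter names of my own choosing:

```python
import numpy as np
from collections import Counter

class KNNClassifier:
    """K nearest neighbors: memorize the training data, vote among the k closest points."""

    def __init__(self, k=3):
        self.k = k  # the single main hyperparameter

    def fit(self, X, y):
        # "Training" is just storing the data
        self.X_train = np.asarray(X, dtype=float)
        self.y_train = np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            # Euclidean distance to every stored sample: the costly step on large data
            distances = np.linalg.norm(self.X_train - x, axis=1)
            nearest = np.argsort(distances)[: self.k]
            preds.append(Counter(self.y_train[nearest]).most_common(1)[0][0])
        return np.array(preds)


# Tiny usage example with two well-separated clusters
X_train = [[1, 1], [1, 2], [8, 8], [9, 8]]
y_train = [0, 0, 1, 1]
print(KNNClassifier(k=3).fit(X_train, y_train).predict([[0, 1], [9, 9]]))  # -> [0 1]
```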



AI/ML Datasets

| Source | Link |
| --- | --- |
| Google Dataset Search – A search engine for datasets | 🔗 |
| IBM’s collection of datasets for enterprise applications | 🔗 |
| Kaggle Datasets | 🔗 |
| Huggingface Datasets – A Python library for loading NLP datasets | 🔗 |
| A large list organized by application domain | 🔗 |
| Computer Vision Datasets (a really large list) | 🔗 |
| Datasetlist – Datasets by domain | 🔗 |
| OpenML – A search engine for curated datasets and workflows | 🔗 |
| Papers with Code – Datasets with benchmarks | 🔗 |
| Penn Machine Learning Benchmarks | 🔗 |
| VisualDataDiscovery (for Computer Vision) | 🔗 |
| UCI Machine Learning Repository | 🔗 |
| Roboflow Public Datasets for computer vision | 🔗 |
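
Several of the sources above can be pulled straight into Python. As a hedged example (the specific dataset names below are my choices and their availability can change), scikit-learn bundles the classic UCI Iris data, and Hugging Face's datasets library can download many public NLP corpora such as IMDB:

```python
from sklearn.datasets import load_iris   # classic UCI dataset bundled with scikit-learn

iris = load_iris(as_frame=True)
print(iris.frame.head())                 # features plus the target column

# Hugging Face Datasets (pip install datasets); "imdb" is a commonly used example corpus
from datasets import load_dataset
imdb = load_dataset("imdb", split="train[:1%]")
print(imdb[0]["text"][:100], imdb[0]["label"])
```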
