The Multi-Agent Resource Optimization (MARO) platform is an instance of Reinforcement Learning as a Service (RaaS) for real-world resource optimization. It can be applied to many important industrial domains, such as container inventory management in logistics, bike repositioning in transportation, virtual machine provisioning in data centers, and asset management in finance. Besides Reinforcement Learning (RL), it also supports other planning/decision mechanisms, such as Operations Research.

Key Components of MARO:

  • Simulation toolkit: provides predefined scenarios and reusable building blocks ("wheels") for constructing new scenarios.
  • RL toolkit: provides a full-stack abstraction for RL, including the agent manager, agent, RL algorithms, learner, actor, and various shapers.
  • Distributed toolkit: provides distributed communication components, an interface for user-defined message auto-handling functions, cluster provisioning, and job orchestration.


Contents

File/folder   Description
maro          MARO source code.
docs          MARO docs, hosted on readthedocs.
examples      Showcase of MARO.
notebooks     MARO quick-start notebooks.

Try the MARO playground for a quick hands-on experience.

Install MARO from PyPI

Note: The CLI commands (including the visualization tool) are not included in the pymaro package. To enable them, install from source.

  • Mac OS / Linux

    pip install pymaro
  • Windows

    # Install torch first, if you don't have it already.
    pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
    
    pip install pymaro

Install MARO from Source

Notes: Install from source if you want to use the CLI commands (including the visualization tool).

  • Prerequisites

  • Enable Virtual Environment

    • Mac OS / Linux

      # If your environment is not clean, create a virtual environment first.
      python -m venv maro_venv
      source ./maro_venv/bin/activate
    • Windows

      # If your environment is not clean, create a virtual environment first.
      python -m venv maro_venv
      
      # You may need this if PowerShell raises a SecurityError.
      Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
      
      # Activate the virtual environment.
      .\maro_venv\Scripts\activate
  • Install MARO

    # Clone the source code and enter the repository root.
    git clone https://github.com/microsoft/maro.git
    cd maro
    • Mac OS / Linux

      # Install MARO from source.
      bash scripts/install_maro.sh
      pip install -r ./requirements.dev.txt
    • Windows

      # Install MARO from source.
      .\scripts\install_maro.bat
      pip install -r ./requirements.dev.txt
  • Note: If the maro package is not found, remember to set your PYTHONPATH:

    • Mac OS / Linux
    export PYTHONPATH=PATH-TO-MARO
    • Windows
    $Env:PYTHONPATH=PATH-TO-MARO

Quick Example

from maro.simulator import Env

# Initialize a Container Inventory Management (CIM) environment with a toy topology.
env = Env(scenario="cim", topology="toy.5p_ssddd_l0.0", start_tick=0, durations=100)

# Take the first step with no action to obtain the initial decision event.
metrics, decision_event, is_done = env.step(None)

# Drive the environment to the end of the episode without taking any actions.
while not is_done:
    metrics, decision_event, is_done = env.step(None)

print(f"environment metrics: {env.metrics}")

Environment Visualization

# Enable the environment dump feature when initializing the environment instance.
env = Env(scenario="cim",
          topology="toy.5p_ssddd_l0.0",
          start_tick=0,
          durations=100,
          options={"enable-dump-snapshot": "./dump_data"})

# Inspect the environment with the dumped data (CLI command, run in a shell).
maro inspector dashboard --source_path ./dump_data/YOUR_SNAPSHOT_DUMP_FOLDER

Showcases

  • Case I - Container Inventory Management (CIM): inter-epoch and intra-epoch visualization dashboards.

  • Case II - Citi Bike: inter-epoch and intra-epoch visualization dashboards. A minimal sketch for spinning up the Citi Bike environment follows this list.
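
The showcased scenarios can be instantiated directly through the simulation toolkit. As a minimal sketch, the snippet below creates a Citi Bike environment and drives it without actions; the topology name toy.3s_4t and the durations value are assumptions for illustration, so check the Citi Bike scenario documentation for the topologies shipped with your MARO version.

from maro.simulator import Env

# Citi Bike scenario; the topology name below is illustrative and may not exist
# in your installation -- consult the Citi Bike scenario docs for valid names.
env = Env(scenario="citi_bike", topology="toy.3s_4t", start_tick=0, durations=1440)

# Same stepping interface as the CIM example above.
metrics, decision_event, is_done = env.step(None)
while not is_done:
    metrics, decision_event, is_done = env.step(None)

print(f"environment metrics: {env.metrics}")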

Run Playground

  • Pull from Docker Hub

    # Pull the Docker image from Docker Hub.
    docker pull maro2020/playground
    
    # Run playground container.
    # Redis commander (GUI for redis) -> http://127.0.0.1:40009
    # Jupyter lab with maro -> http://127.0.0.1:40010
    docker run -p 40009:40009 -p 40010:40010 maro2020/playground
  • Build from source

    • Mac OS / Linux

      # Build playground image.
      bash ./scripts/build_playground.sh
      
      # Run playground container.
      # Redis commander (GUI for redis) -> http://127.0.0.1:40009
      # Jupyter lab with maro -> http://127.0.0.1:40010
      docker run -p 40009:40009 -p 40010:40010 maro2020/playground
    • Windows

      # Build playground image.
      .\scripts\build_playground.bat
      
      # Run playground container.
      # Redis commander (GUI for redis) -> http://127.0.0.1:40009
      # Jupyter lab with maro -> http://127.0.0.1:40010
      docker run -p 40009:40009 -p 40010:40010 maro2020/playground

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Related Papers


Wenlei Shi, Xinran Wei, Jia Zhang, Xiaoyuan Ni, Arthur Jiang, Jiang Bian, Tie-Yan Liu. "Cooperative Policy Learning with Pre-trained Heterogeneous Observation Representations". AAMAS 2021

Xihan Li, Jia Zhang, Jiang Bian, Yunhai Tong, Tie-Yan Liu. "A Cooperative Multi-Agent Reinforcement Learning Framework for Resource Balancing in Complex Logistics Network". AAMAS 2019

Related News

MSRA Top-10 Hack-Techs in 2021

Open Source Platform MARO: Anywhere Door for Resource Optimization

AI from "Point" to "Surface"

License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.