
Initialize repository

Commit 9658719b43 by 王波, 3 weeks ago
100 changed files with 11885 additions and 0 deletions
  1. +2 -0  .coveragerc
  2. +1 -0  .flaskenv
  3. +36 -0  .github/PULL_REQUEST_TEMPLATE.md
  4. +2 -0  .github/issue-branch.yml
  5. +60 -0  .github/workflows/build.yml
  6. +33 -0  .github/workflows/deploy.yml
  7. +32 -0  .github/workflows/generate-dummy-price.sh
  8. +91 -0  .github/workflows/lint-and-test.yml
  9. +19 -0  .pre-commit-config.yaml
  10. +27 -0  .readthedocs.yaml
  11. +40 -0  Dockerfile
  12. +174 -0  LICENSE
  13. +147 -0  Makefile
  14. +3 -0  NOTICE
  15. +85 -0  README.md
  16. +10 -0  ci/DEPLOY.sh
  17. +17 -0  ci/Dockerfile.update
  18. +4 -0  ci/Readme.md
  19. +34 -0  ci/install-cbc-from-source.sh
  20. +2 -0  ci/load-psql-extensions.sql
  21. +13 -0  ci/run_mypy.sh
  22. +30 -0  ci/setup-postgres.sh
  23. +81 -0  ci/update-packages.sh
  24. +122 -0  docker-compose.yml
  25. +20 -0  documentation/Makefile
  26. +42 -0  documentation/_static/css/custom.css
  27. +67 -0  documentation/_templates/custom-module-template.rst
  28. +66 -0  documentation/api/aggregator.rst
  29. +607 -0  documentation/api/change_log.rst
  30. +22 -0  documentation/api/dev.rst
  31. +209 -0  documentation/api/introduction.rst
  32. +17 -0  documentation/api/mdc.rst
  33. +370 -0  documentation/api/notation.rst
  34. +71 -0  documentation/api/prosumer.rst
  35. +39 -0  documentation/api/supplier.rst
  36. +7 -0  documentation/api/v1.rst
  37. +7 -0  documentation/api/v1_1.rst
  38. +7 -0  documentation/api/v1_2.rst
  39. +7 -0  documentation/api/v1_3.rst
  40. +6 -0  documentation/api/v2_0.rst
  41. +20 -0  documentation/api/v3_0.rst
  42. +1158 -0  documentation/changelog.rst
  43. +172 -0  documentation/cli/change_log.rst
  44. +123 -0  documentation/cli/commands.rst
  45. +137 -0  documentation/concepts/algorithms.rst
  46. +82 -0  documentation/concepts/data-model.rst
  47. +195 -0  documentation/concepts/device_scheduler.rst
  48. +164 -0  documentation/concepts/flexibility.rst
  49. +97 -0  documentation/concepts/security_auth.rst
  50. +34 -0  documentation/concepts/users.rst
  51. +258 -0  documentation/conf.py
  52. +666 -0  documentation/configuration.rst
  53. +153 -0  documentation/dev/api.rst
  54. +75 -0  documentation/dev/auth.rst
  55. +111 -0  documentation/dev/ci.rst
  56. +43 -0  documentation/dev/dependency-management.rst
  57. +183 -0  documentation/dev/docker-compose.rst
  58. +86 -0  documentation/dev/note-on-datamodel-transition.rst
  59. +322 -0  documentation/dev/setup-and-guidelines.rst
  60. +47 -0  documentation/dev/why.rst
  61. +134 -0  documentation/features/forecasting.rst
  62. +117 -0  documentation/features/reporting.rst
  63. +338 -0  documentation/features/scheduling.rst
  64. +13 -0  documentation/get-in-touch.rst
  65. +71 -0  documentation/getting-started.rst
  66. +416 -0  documentation/host/data.rst
  67. +103 -0  documentation/host/deployment.rst
  68. +83 -0  documentation/host/docker.rst
  69. +64 -0  documentation/host/error-monitoring.rst
  70. +369 -0  documentation/host/installation.rst
  71. +26 -0  documentation/host/modes.rst
  72. +101 -0  documentation/host/queues.rst
  73. +285 -0  documentation/index.rst
  74. +43 -0  documentation/make.bat
  75. +4 -0  documentation/notes/macOS-docker-port-note.rst
  76. +10 -0  documentation/notes/macOS-port-note.rst
  77. +314 -0  documentation/plugin/customisation.rst
  78. +35 -0  documentation/plugin/introduction.rst
  79. +158 -0  documentation/plugin/showcase.rst
  80. +294 -0  documentation/tut/building_uis.rst
  81. +148 -0  documentation/tut/flex-model-v2g.rst
  82. +214 -0  documentation/tut/forecasting_scheduling.rst
  83. +298 -0  documentation/tut/posting_data.rst
  84. +24 -0  documentation/tut/scripts/Readme.md
  85. +42 -0  documentation/tut/scripts/run-tutorial-in-docker.sh
  86. +50 -0  documentation/tut/scripts/run-tutorial2-in-docker.sh
  87. +25 -0  documentation/tut/scripts/run-tutorial3-in-docker.sh
  88. +99 -0  documentation/tut/scripts/run-tutorial4-in-docker.sh
  89. +129 -0  documentation/tut/toy-example-expanded.rst
  90. +82 -0  documentation/tut/toy-example-from-scratch.rst
  91. +129 -0  documentation/tut/toy-example-process.rst
  92. +257 -0  documentation/tut/toy-example-reporter.rst
  93. +348 -0  documentation/tut/toy-example-setup.rst
  94. +26 -0  documentation/views/account.rst
  95. +34 -0  documentation/views/admin.rst
  96. +163 -0  documentation/views/asset-data.rst
  97. +45 -0  documentation/views/dashboard.rst
  98. +36 -0  documentation/views/sensors.rst
  99. +3 -0  flexmeasures/Readme.md
  100. +0 -0  flexmeasures/__init__.py

+ 2 - 0
.coveragerc

@@ -0,0 +1,2 @@
+[run]
+omit = */documentation/*, */tests/*, */scripts/*, **/*.jinja, **/*.html, **/*.txt, flexmeasures/data/migrations/versions/*
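The `omit` globs above decide which files coverage.py skips when measuring. A rough stdlib illustration of the matching, using `fnmatch` as an approximation (coverage.py's own glob handling differs slightly, and the example paths are hypothetical):

```python
from fnmatch import fnmatch

# Patterns copied from the .coveragerc above. fnmatch's "*" also crosses
# path separators, which is close enough for these patterns.
OMIT = [
    "*/documentation/*",
    "*/tests/*",
    "*/scripts/*",
    "**/*.jinja",
    "**/*.html",
    "**/*.txt",
    "flexmeasures/data/migrations/versions/*",
]


def is_omitted(path: str) -> bool:
    """Return True if coverage measurement would skip this file."""
    return any(fnmatch(path, pattern) for pattern in OMIT)
```

For example, `is_omitted("flexmeasures/data/tests/utils.py")` is `True`, while application code such as `flexmeasures/app.py` is still measured.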

+ 1 - 0
.flaskenv

@@ -0,0 +1 @@
+FLASK_APP=flexmeasures.app:create

+ 36 - 0
.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,36 @@
+## Description
+
+Summary of the changes introduced in this PR. Try to use bullet points as much as possible.
+
+- [ ] ...
+- [ ] Added changelog item in `documentation/changelog.rst`
+
+<!--
+Note regarding our changelog:
+- The 'New features' section targets API / CLI / UI users.
+- The 'Infrastructure / Support' section targets plugin developers and hosts.
+-->
+
+## Look & Feel
+
+This section can contain example pictures for UI changes, input/output for CLI commands, requests/responses for API endpoints, etc.
+
+## How to test
+
+Steps to test it, or the names of the test functions.
+
+The library [flexmeasures-client](https://github.com/FlexMeasures/flexmeasures-client/) can be useful to showcase new features. For example,
+it can be used to set some example data to be used in a new UI feature.
+
+## Further Improvements
+
+Potential improvements to be done in the same PR or follow-up Issues/Discussions/PRs.
+
+## Related Items
+
+Mention if this PR closes an Issue or Project.
+
+---
+
+- [ ] I agree to contribute to the project under the Apache 2.0 License.
+- [ ] To the best of my knowledge, the proposed patch is not based on code under the GPL or another license that is incompatible with FlexMeasures.

+ 2 - 0
.github/issue-branch.yml

@@ -0,0 +1,2 @@
+openDraftPR: true
+autoCloseIssue: true

+ 60 - 0
.github/workflows/build.yml

@@ -0,0 +1,60 @@
+---
+name: build-docker-image
+
+on:
+  pull_request:
+    types:
+      - opened
+      - synchronize
+  push:
+    branches:
+      - main
+
+jobs:
+  build:
+    name: Build Docker Image
+    runs-on: ubuntu-latest
+    services:
+      postgres:
+        env:
+          POSTGRES_DB: flexmeasures_test
+          POSTGRES_PASSWORD: flexmeasures_test
+          POSTGRES_USER: flexmeasures_test
+        image: postgres:latest
+        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s
+          --health-retries 5
+        ports:
+          - 5432:5432
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Build Docker Image
+        run: docker build -t flexmeasures:latest -f Dockerfile .
+      - name: Generate random secret key
+        run: echo "SECRET_KEY=$(python3 -c 'import secrets; print(secrets.token_hex(24))')"
+          >> .env
+      - name: Export SQLALCHEMY_DATABASE_URI
+        run: echo "SQLALCHEMY_DATABASE_URI=postgresql://flexmeasures_test:flexmeasures_test@127.0.0.1:5432/flexmeasures_test"
+          >> .env
+      - name: Keep running flexmeasures container in background
+        run: docker run -t -d --env-file .env --network=host --name fm-container flexmeasures:latest
+      - name: Execute database upgrade
+        run: docker exec --env-file .env fm-container flexmeasures
+          db upgrade
+      - name: Add toy user
+        run: docker exec --env-file .env fm-container flexmeasures
+          add toy-account
+      - name: Generate prices dummy data
+        run: .github/workflows/generate-dummy-price.sh
+      - name: Copy prices dummy data
+        run: docker cp prices-tomorrow.csv fm-container:/app/prices-tomorrow.csv
+      - name: Add beliefs
+        run: docker exec --env-file .env fm-container flexmeasures
+          add beliefs --sensor 1 --source toy-user prices-tomorrow.csv --timezone Europe/Amsterdam
+      - name: Export TOMORROW
+        run: echo "TOMORROW=$(date --date="next day" '+%Y-%m-%d')"
+          >> $GITHUB_ENV
+      - name: Add schedule
+        run: docker exec --env-file .env fm-container flexmeasures
+          add schedule for-storage --sensor 2 --start ${TOMORROW}T07:00+01:00 
+          --duration PT12H --soc-at-start 50% --roundtrip-efficiency 90%
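The smoke test above hinges on the `.env` file handed to the container. A sketch of replaying that environment preparation locally (assumes GNU `date` and `python3`; the Postgres URI matches the service container defined in the workflow):

```shell
# Build the same .env file the workflow feeds to the flexmeasures container
SECRET_KEY=$(python3 -c 'import secrets; print(secrets.token_hex(24))')
echo "SECRET_KEY=${SECRET_KEY}" > .env
echo "SQLALCHEMY_DATABASE_URI=postgresql://flexmeasures_test:flexmeasures_test@127.0.0.1:5432/flexmeasures_test" >> .env

# The scheduling step targets tomorrow, computed the same way (GNU date syntax)
TOMORROW=$(date --date="next day" '+%Y-%m-%d')
echo "Schedule window would start at ${TOMORROW}T07:00+01:00"
```

With `.env` in place, the remaining steps are plain `docker build` / `docker run` / `docker exec` invocations as listed in the workflow.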

+ 33 - 0
.github/workflows/deploy.yml

@@ -0,0 +1,33 @@
+name: deploy-to-staging
+
+on: 
+  push:
+    branches:
+      - main
+
+jobs:
+  deploy:
+    name: "Deploy (main to staging)"
+    runs-on: ubuntu-latest
+    steps:
+      - name: Wait for tests to pass
+        uses: lewagon/wait-on-check-action@v0.2
+        with:
+          ref: ${{ github.ref }}
+          # check-name: "Test (on Python3.8)" # name of the job we wait for (omit to wait for all checks)
+          running-workflow-name: "Deploy (main to staging)"  # name of the check that will wait for other checks
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          wait-interval: 20 # seconds
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: '0'
+          ref: 'main'
+      - name: Install SSH key
+        uses: shimataro/ssh-key-action@v2
+        with:
+          key: ${{ secrets.SSH_DEPLOYMENT_KEY }}  # private ssh key
+          known_hosts: ${{ secrets.KNOWN_DEPLOYMENT_HOSTS }}  # make via ssh-keyscan -t rsa <your host>
+      - run: ci/DEPLOY.sh
+    env:
+      BRANCH_NAME: main
+      STAGING_REMOTE_REPO: ${{ secrets.STAGING_REMOTE_REPO }}

+ 32 - 0
.github/workflows/generate-dummy-price.sh

@@ -0,0 +1,32 @@
+#!/bin/bash
+
+set -e
+set -x
+
+TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+
+echo "Hour,Price
+${TOMORROW}T00:00:00,10
+${TOMORROW}T01:00:00,11
+${TOMORROW}T02:00:00,12
+${TOMORROW}T03:00:00,15
+${TOMORROW}T04:00:00,18
+${TOMORROW}T05:00:00,17
+${TOMORROW}T06:00:00,10.5
+${TOMORROW}T07:00:00,9
+${TOMORROW}T08:00:00,9.5
+${TOMORROW}T09:00:00,9
+${TOMORROW}T10:00:00,8.5
+${TOMORROW}T11:00:00,10
+${TOMORROW}T12:00:00,8
+${TOMORROW}T13:00:00,5
+${TOMORROW}T14:00:00,4
+${TOMORROW}T15:00:00,4
+${TOMORROW}T16:00:00,5.5
+${TOMORROW}T17:00:00,8
+${TOMORROW}T18:00:00,12
+${TOMORROW}T19:00:00,13
+${TOMORROW}T20:00:00,14
+${TOMORROW}T21:00:00,12.5
+${TOMORROW}T22:00:00,10
+${TOMORROW}T23:00:00,7" > prices-tomorrow.csv
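The generator above relies on GNU `date` (`--date="next day"`), which is unavailable on macOS/BSD. A stdlib Python equivalent, mirroring the same file name and hourly prices, for illustration:

```python
import csv
from datetime import date, timedelta

# Same 24 hourly dummy prices as the shell script above
PRICES = [10, 11, 12, 15, 18, 17, 10.5, 9, 9.5, 9, 8.5, 10,
          8, 5, 4, 4, 5.5, 8, 12, 13, 14, 12.5, 10, 7]


def write_dummy_prices(path: str = "prices-tomorrow.csv") -> None:
    """Write tomorrow's hourly dummy prices as Hour,Price rows."""
    tomorrow = date.today() + timedelta(days=1)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Hour", "Price"])
        for hour, price in enumerate(PRICES):
            writer.writerow([f"{tomorrow.isoformat()}T{hour:02d}:00:00", price])
```

The output has one header row plus 24 data rows, matching what the workflow later imports via `flexmeasures add beliefs`.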

+ 91 - 0
.github/workflows/lint-and-test.yml

@@ -0,0 +1,91 @@
+name: lint-and-test
+
+on:
+  push:
+  pull_request:
+    types:
+      - opened
+jobs:
+  check:
+    runs-on: ubuntu-latest
+    name: Check (on Python 3.9)
+    steps:
+      - uses: actions/setup-python@v4
+        with:
+          python-version: 3.9
+      - uses: actions/checkout@v3
+      - uses: pre-commit/action@v3.0.0
+
+  test:
+    needs: check
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        py_version: [ "3.8", "3.9", "3.10", "3.11", "3.12" ]
+        include:
+          - py_version: "3.9"
+            coverage: "yes"
+    name: "Test (on Python ${{ matrix.py_version }})"
+    steps:
+      - uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.py_version }}
+      - name: Check out src from Git
+        uses: actions/checkout@v3
+      - name: Get history and tags for SCM versioning to work
+        run: |
+          git fetch --prune --unshallow
+          git fetch --depth=1 origin +refs/tags/*:refs/tags/*
+      - name: Upgrade pip
+        run: |
+          pip3 install --upgrade pip
+      - name: "Caching for dependencies (.txt) - restore existing or ensure new cache will be made"
+        uses: actions/cache@v4
+        id: cache
+        with:
+          path: ${{ env.pythonLocation }}
+          # manually disable a cache if needed by (re)setting CACHE_DATE
+          key: ${{ runner.os }}-pip-${{ env.pythonLocation }}-${{ secrets.CACHE_DATE }}-${{ hashFiles('**/requirements/**/*.txt') }}
+          restore-keys: |
+            ${{ runner.os }}-pip-
+      - run: |
+          ci/setup-postgres.sh
+          sudo apt-get -y install coinor-cbc
+      - name: Install FlexMeasures & exact dependencies for tests
+        run: make install-for-test
+        if: github.event_name == 'push' && steps.cache.outputs.cache-hit != 'true'
+      - name: Install FlexMeasures & latest dependencies for tests
+        run: make install-for-test pinned=no
+        if: github.event_name == 'pull_request'
+      - name: Run all doctests in the data/schemas subpackage
+        run: pytest flexmeasures/data/schemas --doctest-modules --ignore flexmeasures/data/schemas/tests
+      - name: Run all doctests in the utils subpackage
+        run: pytest flexmeasures/utils --doctest-modules --ignore flexmeasures/utils/tests
+      - name: Run all tests except those marked to be skipped by GitHub AND record coverage
+        run: pytest -v -m "not skip_github" --cov=flexmeasures --cov-branch --cov-report=lcov
+      - name: Coveralls
+        uses: coverallsapp/github-action@v2
+        with:
+          fail-on-error: false
+        if: ${{ matrix.coverage == 'yes' }}
+    env:
+      PGHOST: 127.0.0.1
+      PGPORT: 5432
+      PGUSER: flexmeasures_test
+      PGDB: flexmeasures_test
+      PGPASSWORD: flexmeasures_test
+
+    services:
+      # Label used to access the service container
+      postgres:
+        # Docker Hub image
+        image: postgres:12.5 
+        env:
+          POSTGRES_USER: flexmeasures_test
+          POSTGRES_PASSWORD: flexmeasures_test
+          POSTGRES_DB: flexmeasures_test
+        ports:
+          - 5432:5432
+        # needed because the postgres container does not provide a healthcheck
+        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5

+ 19 - 0
.pre-commit-config.yaml

@@ -0,0 +1,19 @@
+repos:
+-   repo: https://github.com/pycqa/flake8
+    rev: 7.1.1  # New version tags can be found here: https://github.com/pycqa/flake8/tags
+    hooks:
+    - id: flake8
+      name: flake8 (code linting)
+-   repo: https://github.com/psf/black
+    rev: 24.8.0  # New version tags can be found here: https://github.com/psf/black/tags
+    hooks:
+    - id: black
+      name: black (code formatting)
+-   repo: local
+    hooks:
+    - id: mypy
+      name: mypy (static typing)
+      pass_filenames: false
+      language: script
+      entry: ci/run_mypy.sh
+      verbose: true

+ 27 - 0
.readthedocs.yaml

@@ -0,0 +1,27 @@
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+sphinx:
+  configuration: documentation/conf.py
+  fail_on_warning: true
+
+# Optionally build your docs in additional formats such as PDF
+#formats:
+#  - pdf  # stopped working (e.g. https://readthedocs.org/projects/flexmeasures/builds/26604114/, also not required)
+
+build:
+  os: ubuntu-20.04
+  tools:
+    python: "3.9"
+  jobs:
+    post_create_environment:
+      - pip install . --no-deps  # as python install step, RTD installs deps eagerly
+
+python: 
+  install:
+    - requirements: requirements/3.9/app.txt
+    - requirements: requirements/3.9/docs.txt
+

+ 40 - 0
Dockerfile

@@ -0,0 +1,40 @@
+FROM amd64/ubuntu:22.04
+
+ENV DEBIAN_FRONTEND noninteractive
+ENV LC_ALL C.UTF-8
+ENV LANG C.UTF-8
+
+# pre-requisites
+RUN apt-get update && apt-get install --no-install-recommends -y --upgrade python3 python3-pip git curl gunicorn coinor-cbc postgresql-client && apt-get clean
+
+WORKDIR /app
+# requirements - doing this earlier, so we don't install them each time. Use --no-cache to refresh them.
+COPY requirements /app/requirements
+
+# py dev tooling
+RUN python3 -m pip install --no-cache-dir --upgrade pip && python3 --version && \
+    pip3 install --no-cache-dir --upgrade setuptools && pip3 install highspy && \
+    PYV=$(python3 -c "import sys;t='{v[0]}.{v[1]}'.format(v=list(sys.version_info[:2]));sys.stdout.write(t)") && \
+    pip3 install --no-cache-dir -r requirements/$PYV/app.txt -r requirements/$PYV/dev.txt -r requirements/$PYV/test.txt
+
+# Copy code and meta/config data
+COPY setup.* pyproject.toml .flaskenv wsgi.py /app/
+COPY flexmeasures/ /app/flexmeasures
+RUN find . | grep -E "(__pycache__|\.pyc|\.pyo$)" | xargs rm -rf
+COPY .git/ /app/.git
+
+RUN pip3 install --no-cache-dir .
+
+EXPOSE 5000
+
+CMD [ \
+    "gunicorn", \
+    "--bind", "0.0.0.0:5000", \
+    # This is set to /tmp by default, but this is part of the Docker overlay filesystem, and can cause stalls.
+    # http://docs.gunicorn.org/en/latest/faq.html#how-do-i-avoid-gunicorn-excessively-blocking-in-os-fchmod
+    "--worker-tmp-dir", "/dev/shm", \
+    # Ideally you'd want one worker per container, but we don't want to risk the health check timing out because
+    # another request is taking a long time to complete.
+    "--workers", "2", "--threads", "4", \
+    "wsgi:application" \
+    ]

+ 174 - 0
LICENSE

@@ -0,0 +1,174 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.

+ 147 - 0
Makefile

@@ -0,0 +1,147 @@
+# Check Python major and minor version
+# For more information, see https://stackoverflow.com/a/22105036
+PYV = $(shell python -c "import sys;t='{v[0]}.{v[1]}'.format(v=list(sys.version_info[:2]));sys.stdout.write(t)")
+HIGHS_DIR = "../HiGHS"
+
+# Note: use tabs
+# actions which are virtual, i.e. not a script
+.PHONY: install install-for-dev install-for-test install-deps install-flexmeasures run-local test freeze-deps upgrade-deps update-docs update-docs-pdf show-file-space show-data-model clean-db cli-autocomplete build-highs-macos install-highs-macos
+
+
+# ---- Development ---
+
+run-local:
+	python run-local.py
+
+test:
+	make install-for-test
+	pytest
+
+# ---- Documentation ---
+
+gen_code_docs := False # by default code documentation is not generated
+
+update-docs:
+	@echo "Creating docs environment ..."
+	make install-docs-dependencies
+	@echo "Creating documentation ..."
+	export GEN_CODE_DOCS=${gen_code_docs}; cd documentation; make clean; make html SPHINXOPTS="-W --keep-going -n"; cd ..
+
+update-docs-pdf:
+	@echo "NOTE: PDF documentation requires packages (on Debian: latexmk texlive-latex-recommended texlive-latex-extra texlive-fonts-recommended)"
+	make install-docs-dependencies
+
+	export GEN_CODE_DOCS=${gen_code_docs}; cd documentation; make clean; make latexpdf; make latexpdf; cd ..  # make latexpdf can require two passes
+
+# ---- Installation ---
+
+install: install-deps install-flexmeasures
+
+install-for-dev:
+	make freeze-deps
+	make ensure-deps-folder
+	pip-sync requirements/${PYV}/app.txt requirements/${PYV}/dev.txt requirements/${PYV}/test.txt
+	make install-flexmeasures
+# Locally install HiGHS on macOS
+	@if [ "$(shell uname)" = "Darwin" ]; then \
+		make install-highs-macos; \
+	fi
+
+install-for-test:
+	make install-pip-tools
+# Pass pinned=no if you want to test against latest stable packages, default is our pinned dependency set
+ifneq ($(pinned), no)
+	pip-sync requirements/${PYV}/app.txt requirements/${PYV}/test.txt
+else
+	pip install --upgrade -r requirements/app.in -r requirements/test.in
+endif
+	make install-flexmeasures
+# Locally install HiGHS on macOS
+	@if [ "$(shell uname)" = "Darwin" ]; then \
+		make install-highs-macos; \
+	fi
+
+$(HIGHS_DIR):
+	@if [ ! -d $(HIGHS_DIR) ]; then \
+		git clone https://github.com/ERGO-Code/HiGHS.git $(HIGHS_DIR); \
+	fi
+	brew install cmake;
+
+build-highs-macos: $(HIGHS_DIR)
+	cd $(HIGHS_DIR); \
+	git checkout latest; \
+	mkdir -p build; \
+	cd build; \
+	cmake ..; \
+	make; \
+	make install; \
+	cd ../../flexmeasures;
+
+install-highs-macos: build-highs-macos
+	pip install $(HIGHS_DIR)
+
+install-deps:
+	make install-pip-tools
+	make freeze-deps
+# Pass pinned=no if you want to test against latest stable packages, default is our pinned dependency set
+ifneq ($(pinned), no)
+	pip-sync requirements/${PYV}/app.txt
+else
+	pip install --upgrade -r requirements/app.in
+endif
+
+install-flexmeasures:
+	pip install -e .
+
+install-pip-tools:
+	pip3 install -q "pip-tools>=7.2"
+
+install-docs-dependencies:
+	pip install -r requirements/${PYV}/docs.txt
+
+freeze-deps:
+	make ensure-deps-folder
+	make install-pip-tools
+	pip-compile -o requirements/${PYV}/app.txt requirements/app.in
+	pip-compile -c requirements/${PYV}/app.txt -o requirements/${PYV}/test.txt requirements/test.in
+	pip-compile -c requirements/${PYV}/app.txt -c requirements/${PYV}/test.txt -o requirements/${PYV}/dev.txt requirements/dev.in
+	pip-compile -c requirements/${PYV}/app.txt -o requirements/${PYV}/docs.txt requirements/docs.in
+
+upgrade-deps:
+	make ensure-deps-folder
+	make install-pip-tools
+	pip-compile --upgrade -o requirements/${PYV}/app.txt requirements/app.in
+	pip-compile --upgrade -c requirements/${PYV}/app.txt -o requirements/${PYV}/test.txt requirements/test.in
+	pip-compile --upgrade -c requirements/${PYV}/app.txt -c requirements/${PYV}/test.txt -o requirements/${PYV}/dev.txt requirements/dev.in
+	pip-compile --upgrade -c requirements/${PYV}/app.txt -o requirements/${PYV}/docs.txt requirements/docs.in
+
+ifneq ($(skip-test), yes)
+	make test
+endif
+
+# ---- Data ----
+
+show-file-space:
+	# Where is our file space going?
+	du --summarize --human-readable --total ./* ./.[a-zA-Z]* | sort -h
+
+upgrade-db:
+	flask db current
+	flask db upgrade
+	flask db current
+
+show-data-model:
+	# This generates the data model, as currently written in code, as a PNG picture.
+	# Also try with --schema for the database model. 
+	# With --deprecated, you'll see the legacy models, and not their replacements.
+	# Use --help to learn more. 
+	./flexmeasures/data/scripts/visualize_data_model.py --uml
+
+ensure-deps-folder:
+	mkdir -p requirements/${PYV}
+
+clean-db:
+	./flexmeasures/data/scripts/clean_database.sh ${db_name} ${db_user}
+
+cli-autocomplete:
+	./flexmeasures/cli/scripts/add_scripts_path.sh ${extension}

+ 3 - 0
NOTICE

@@ -0,0 +1,3 @@
+FlexMeasures
+Copyright 2018-2025 Seita Energy Flexibility
+Apache 2.0 license

+ 85 - 0
README.md

@@ -0,0 +1,85 @@
+![FlexMeasures Logo Light](https://github.com/FlexMeasures/screenshots/blob/main/logo/flexmeasures-horizontal-color.svg#gh-light-mode-only)
+![FlexMeasures Logo Dark](https://github.com/FlexMeasures/screenshots/blob/main/logo/flexmeasures-horizontal-dark.svg#gh-dark-mode-only)
+
+[![License](https://img.shields.io/github/license/seitabv/flexmeasures?color=blue)](https://github.com/FlexMeasures/flexmeasures/blob/main/LICENSE)
+![lint-and-test](https://github.com/FlexMeasures/flexmeasures/workflows/lint-and-test/badge.svg)
+[![Pypi Version](https://img.shields.io/pypi/v/flexmeasures.svg)](https://pypi.python.org/pypi/flexmeasures)
+[![](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
+[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
+[![Documentation Status](https://readthedocs.org/projects/flexmeasures/badge/?version=latest)](https://flexmeasures.readthedocs.io/en/latest/?badge=latest)
+[![Coverage](https://coveralls.io/repos/github/FlexMeasures/flexmeasures/badge.svg)](https://coveralls.io/github/FlexMeasures/flexmeasures)
+[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/6095/badge)](https://bestpractices.coreinfrastructure.org/projects/6095)
+
+*FlexMeasures* is an intelligent EMS (energy management system) to optimize behind-the-meter energy flexibility.
+Build your smart energy apps & services with FlexMeasures as the backend for real-time orchestration!
+
+In a nutshell, FlexMeasures turns data into optimized schedules for flexible assets like batteries and heat pumps, or for flexible industry processes:
+
+![The most simple view of FlexMeasures, turning data into schedules](https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/simple-flexEMS.png)
+
+
+Here is why using FlexMeasures is a great idea:
+
+- Developing energy flexibility apps & services (e.g. to enable demand response) is crucial, but expensive.
+- FlexMeasures reduces development costs with real-time data intelligence & integrations, uncertainty models and developer support such as API/UI and plugins.
+
+![High-level overview of FlexMeasures as an EMS for energy flexibility apps, using plugins to fit a given use case](https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/overview-flexEMS.png)
+
+
+So why optimise the schedules of flexible assets? Because planning ahead allows flexible assets to serve the whole system with their flexibility, e.g. by shifting energy consumption to other times.
+For the asset owners, this creates CO₂ savings but also monetary value (e.g. through self-consumption, dynamic tariffs and grid incentives). FlexMeasures strives to be applicable in cases with multiple sources of value ("value stacking") and multiple types of assets (e.g. home/office/factory).
+
+As possible users, we see energy service companies (ESCOs) who want to build real-time apps & services around energy flexibility for their customers, or medium/large industrials who are looking for support in their internal digital tooling. However, even small companies and hobby projects might find FlexMeasures useful!
+
+## How does FlexMeasures enable rapid development of energy flexibility apps?
+
+FlexMeasures is designed to help with three basic needs of developers in the energy flexibility domain:
+
+### I need help with integrating real-time data and continuously computing new data
+
+FlexMeasures is designed to make decisions based on data in an automated way. Data pipelining and dedicated machine learning tooling are crucial.
+
+- API/CLI functionality to read in time series data
+- Extensions for integrating 3rd party data, e.g. from [ENTSO-E](https://github.com/SeitaBV/flexmeasures-entsoe) or [OpenWeatherMap](https://github.com/SeitaBV/flexmeasures-openweathermap)
+- Forecasting for the upcoming hours
+- Schedule optimization for flexible assets
+
+
+### It's hard to correctly model data with different sources, resolutions, horizons and even uncertainties
+
+Much developer time is spent cleaning data and handling it correctly, so that you know you are computing on the right knowledge.
+
+FlexMeasures is built on the [timely-beliefs framework](https://github.com/SeitaBV/timely-beliefs), so we model this real-world aspect accurately:
+
+- Expected data properties are explicit (e.g. unit, time resolution)
+- Incoming data is converted to fitting unit and time resolution automatically
+- FlexMeasures also stores who thought that something happened (or that it will happen), and when they thought so
+- Uncertainty can be modelled (useful for forecasting)
+
+
+### I want to build new features quickly, not spend days solving basic problems
+
+Building customer-facing apps & services is where developers make impact. We make their work easy.
+
+- FlexMeasures has well-documented API endpoints and CLI commands to interact with its model and data
+- You can extend it easily with your own logic by writing plugins
+- A backend UI shows you your assets in maps and your data in plots. Plots are also available via the API, for integration into your own frontend
+- Multi-tenancy ― model multiple accounts on one server. Data is only seen/editable by authorized users in the right account
+
+
+## Getting started
+
+Head over to our [documentation](https://flexmeasures.readthedocs.io), e.g. the [getting started guide](https://flexmeasures.readthedocs.io/en/latest/getting-started.html) or the [5-minute tutorial](https://flexmeasures.readthedocs.io/en/latest/tut/toy-example-from-scratch.html). Or find more information on [FlexMeasures.io](https://flexmeasures.io).
+
+See also [Seita's Github profile](https://github.com/SeitaBV), e.g. for FlexMeasures plugin examples.
+
+
+## Development & community
+
+FlexMeasures was initiated by [Seita BV](https://www.seita.nl) in The Netherlands in order to make sure that smart backend software is available to all parties working with energy flexibility, no matter where they are working on their local energy transition.
+
+We made FlexMeasures freely available under the Apache 2.0 license and it is now [an incubation project at LF Energy](https://www.lfenergy.org/projects/flexmeasures/).
+
+Within the FlexMeasures project, [we welcome contributions](https://github.com/FlexMeasures/tsc/blob/main/CONTRIBUTING.md). You can also [learn more about our governance](https://github.com/Flexmeasures/tsc/blob/main/GOVERNANCE.md).
+
+You can connect with the community here on GitHub (e.g. by creating an issue), on [the mailing list](https://lists.lfenergy.org/g/flexmeasures), on [the FlexMeasures channel within the LF Energy Slack](https://slack.lfenergy.org/) or [by contacting the current maintainers](https://seita.nl/who-we-are/#contact).

+ 10 - 0
ci/DEPLOY.sh

@@ -0,0 +1,10 @@
+#!/bin/bash -e
+
+# The purpose of this script is to deploy built and tested code to the staging server.
+# You can use a git post-receive hook to update your app afterwards (see ci/Readme.md)
+
+# Add a git remote (see developer docs on continuous integration for help)
+git remote add staging $STAGING_REMOTE_REPO
+
+# Push the branch being deployed to the git remote. Also push any annotated tags (with a -m message).
+git push --follow-tags --set-upstream staging $BRANCH_NAME

+ 17 - 0
ci/Dockerfile.update

@@ -0,0 +1,17 @@
+ARG PYTHON_VERSION
+
+FROM python:${PYTHON_VERSION}-slim-bookworm as update
+
+# Install dependencies
+
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    build-essential \
+    git \
+    && rm -rf /var/lib/apt/lists/*
+
+# Copy the source code
+
+COPY . /app
+WORKDIR /app
+
+CMD ["python", "--version"]

+ 4 - 0
ci/Readme.md

@@ -0,0 +1,4 @@
+# Continuous integration
+
+Here are some useful scripts for CI.
+

+ 34 - 0
ci/install-cbc-from-source.sh

@@ -0,0 +1,34 @@
+#!/bin/bash
+
+#################################################################################
+# This script installs the Cbc solver from source
+# (for cases where you can't install the coinor-cbc package via package managers)
+# Note: We use 2.9 here, but 2.10 has also been working well in our CI pipeline.
+#################################################################################
+
+# Install to this dir
+SOFTWARE_DIR=/home/seita/software
+if [ "$1" != "" ]; then
+  SOFTWARE_DIR=$1
+fi
+echo "Attempting to install Cbc-2.9 to $SOFTWARE_DIR ..."
+
+mkdir -p $SOFTWARE_DIR
+cd $SOFTWARE_DIR
+
+# Getting Cbc and its build tools
+git clone --branch=stable/2.9 https://github.com/coin-or/Cbc Cbc-2.9
+cd Cbc-2.9
+git clone --branch=stable/0.8 https://github.com/coin-or-tools/BuildTools/
+BuildTools/get.dependencies.sh fetch
+
+# Configuring, installing
+./configure
+make
+make install
+
+# adding new binaries to PATH
+# NOTE: This line might need to be added to your ~/.bashrc or the like
+export PATH=$PATH:$SOFTWARE_DIR/Cbc-2.9/bin
+
+echo "Done. The command 'cbc' should now work on this machine."

+ 2 - 0
ci/load-psql-extensions.sql

@@ -0,0 +1,2 @@
+CREATE EXTENSION IF NOT EXISTS cube;
+CREATE EXTENSION IF NOT EXISTS earthdistance;

+ 13 - 0
ci/run_mypy.sh

@@ -0,0 +1,13 @@
+#!/bin/bash
+set -e
+pip install --upgrade 'mypy>=0.902'
+pip install types-pytz types-requests types-Flask types-click types-redis types-tzlocal types-python-dateutil types-setuptools types-tabulate types-PyYAML
+# We are checking python files which have type hints, and leave out bigger issues we made issues for
+# * data/scripts: We'll remove legacy code: https://trello.com/c/1wEnHOkK/7-remove-custom-data-scripts
+# * data/models and data/services: https://trello.com/c/rGxZ9h2H/540-makequery-call-signature-is-incoherent
+files=$(find flexmeasures \
+    -not \( -path flexmeasures/data/scripts -prune \) \
+    -not \( -path flexmeasures/data/models -prune \) \
+    -not \( -path flexmeasures/data/services -prune \) \
+    -name \*.py | xargs grep -l "from typing import")
+mypy --follow-imports skip --ignore-missing-imports $files 

+ 30 - 0
ci/setup-postgres.sh

@@ -0,0 +1,30 @@
+#!/bin/bash
+
+######################################################################
+# This script sets up a new Postgres instance in a CI environment
+######################################################################
+
+
+# Install dependencies
+sudo apt-get update
+sudo apt-get -y install postgresql-client
+
+# Wait for the DB service to be up.
+
+statusFile=/tmp/postgres-status
+while true; do
+  telnet $PGHOST $PGPORT &> ${statusFile}
+  status=$(grep "Connection refused" ${statusFile} | wc -l)
+  echo "Status: $status"
+
+  if [[ "${status}" -eq 1 ]]; then
+    echo "Postgres not running, waiting."
+    sleep 1
+  else
+    rm ${statusFile}
+    echo "Postgres running, ready to proceed."
+    break;
+  fi
+done
+
+psql -h $PGHOST -p $PGPORT --file ci/load-psql-extensions.sql -U $PGUSER $PGDB;

+ 81 - 0
ci/update-packages.sh

@@ -0,0 +1,81 @@
+#! /bin/bash
+
+######################################################################
+# This script sets up docker environments for supported Python versions
+# for updating packages in each of them.
+#
+# To upgrade, add "upgrade" as parameter.
+#
+# To execute this script, cd into the `ci` directory, then call from there.
+######################################################################
+
+set -e
+set -x
+
+PYTHON_VERSIONS=(3.8 3.9 3.10 3.11 3.12)
+
+# check if we will upgrade or just freeze
+UPDATE_CMD=freeze-deps
+if [ "$1" == "upgrade" ]; then
+  UPDATE_CMD=upgrade-deps
+  echo "Going to upgrade dependencies with make $UPDATE_CMD ..."
+else
+  echo "Going to freeze dependencies with make $UPDATE_CMD..."
+fi
+
+# Check if docker is installed
+if ! [ -x "$(command -v docker)" ]; then
+  echo "Docker is not installed. Please install docker and try again."
+  exit 1
+fi
+
+# Check if we can run docker without sudo (check is not needed for Macos system)
+if ! docker ps > /dev/null 2>&1 && [[ "$(uname)" != "Darwin" ]]; then
+  echo "Docker is not running without sudo. Please add your user to the docker group and try again."
+  echo "You may use the following command to do so:"
+  echo "sudo usermod -aG docker $USER"
+  echo "You will need to log out and log back in for this to take effect."
+  exit 1
+fi
+
+SOURCE_DIR=$(pwd)/../
+
+TEMP_DIR=$(mktemp -d)
+
+# Copy the build files to the temp directory
+cp -r ../ci $TEMP_DIR/ci
+cp -r ../requirements $TEMP_DIR/requirements
+cp -r ../Makefile $TEMP_DIR
+
+cd $TEMP_DIR
+
+
+for PYTHON_VERSION in "${PYTHON_VERSIONS[@]}"
+do
+    echo "Working on dependencies for Python $PYTHON_VERSION ..."
+    # Check if container exists and remove it
+    docker container inspect flexmeasures-update-packages-$PYTHON_VERSION > /dev/null 2>&1 && docker rm --force flexmeasures-update-packages-$PYTHON_VERSION
+    # Build the docker image
+    docker build --build-arg=PYTHON_VERSION=$PYTHON_VERSION -t flexmeasures-update-packages:$PYTHON_VERSION . -f ci/Dockerfile.update
+    # Build flexmeasures
+    # We are disabling running tests here, because we only want to update the packages. Running tests would require us to setup a postgres database,
+    # which is not necessary for updating packages.
+    docker run --name flexmeasures-update-packages-$PYTHON_VERSION -it flexmeasures-update-packages:$PYTHON_VERSION make $UPDATE_CMD skip-test=yes
+    # Copy the requirements to the source directory
+    docker cp flexmeasures-update-packages-$PYTHON_VERSION:/app/requirements/$PYTHON_VERSION $SOURCE_DIR/requirements/
+    # Remove the container
+    docker rm flexmeasures-update-packages-$PYTHON_VERSION
+    # Remove the image
+    docker rmi flexmeasures-update-packages:$PYTHON_VERSION
+done
+
+# Clean up docker builder cache
+echo "You can clean up the docker builder cache with the following command:"
+echo "docker builder prune --all -f"
+
+# Remove the temp directory
+rm -rf $TEMP_DIR
+
+# Return to the ci directory (in case you want to rerun this script)
+cd $SOURCE_DIR
+cd ci

+ 122 - 0
docker-compose.yml

@@ -0,0 +1,122 @@
+# ------------------------------------------------------------------
+# This runs your local FlexMeasures code in a docker compose stack.
+# Two FlexMeasures instances are spun up, one for serving the web
+# UI & API, one to work on computation jobs.
+# The server is adding a toy account when it starts.
+# (user: toy-user@flexmeasures.io, password: toy-password)
+# 
+# Instead of local code (which is useful for development purposes),
+# you can also use the official (and stable) FlexMeasures docker image
+# (lfenergy/flexmeasures). Replace the two `build` directives with
+# an `image` directive.
+# ------------------------------------------------------------------
+
+services:
+  dev-db:
+    image: postgres
+    expose:
+      - 5432
+    restart: always
+    environment:
+      POSTGRES_DB: fm-dev-db
+      POSTGRES_USER: fm-dev-db-user
+      POSTGRES_PASSWORD: fm-dev-db-pass
+    volumes:
+      - ./ci/load-psql-extensions.sql:/docker-entrypoint-initdb.d/load-psql-extensions.sql
+  queue-db:
+    image: redis
+    restart: always
+    command: redis-server --loglevel warning --requirepass fm-redis-pass
+    expose:
+      - 6379
+    volumes:
+      - redis-cache:/data
+    environment:
+     - REDIS_REPLICATION_MODE=master
+  mailhog:
+    image: mailhog/mailhog
+    ports:
+      - "1025:1025"
+      - "8025:8025"
+    restart: always
+  server:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    ports:
+      - 5000:5000
+    depends_on:
+      - dev-db
+      - test-db  # use -e SQLALCHEMY_TEST_DATABASE_URI=... to exec pytest
+      - queue-db
+      - mailhog  
+    restart: on-failure
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:5000/api/v3_0/health/ready"]
+      start_period: 10s
+      interval: 20s
+      timeout: 10s
+      retries: 6
+    environment:
+      SQLALCHEMY_DATABASE_URI: "postgresql://fm-dev-db-user:fm-dev-db-pass@dev-db:5432/fm-dev-db"
+      SECRET_KEY: notsecret
+      FLEXMEASURES_ENV: development
+      FLEXMEASURES_REDIS_URL: queue-db
+      FLEXMEASURES_REDIS_PASSWORD: fm-redis-pass
+      MAIL_SERVER: mailhog   # MailHog mail server
+      MAIL_PORT: 1025        # MailHog mail port
+      LOGGING_LEVEL: INFO
+    volumes:
+      # a place for config and plugin code, and custom requirements.txt
+      # the 1st mount point is for running the FlexMeasures CLI, the 2nd for gunicorn
+      - ./flexmeasures-instance/:/usr/var/flexmeasures-instance/:ro
+      - ./flexmeasures-instance/:/app/instance/:ro
+    entrypoint: ["/bin/sh", "-c"]
+    command:
+    - |
+      pip install -r /usr/var/flexmeasures-instance/requirements.txt
+      flexmeasures db upgrade
+      flexmeasures add toy-account --name 'Docker Toy Account'
+      gunicorn --bind 0.0.0.0:5000 --worker-tmp-dir /dev/shm --workers 2 --threads 4 wsgi:application
+  worker:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    depends_on:
+      - dev-db
+      - queue-db
+      - mailhog  
+    restart: on-failure
+    environment:
+      SQLALCHEMY_DATABASE_URI: "postgresql://fm-dev-db-user:fm-dev-db-pass@dev-db:5432/fm-dev-db"
+      FLEXMEASURES_REDIS_URL: queue-db
+      FLEXMEASURES_REDIS_PASSWORD: fm-redis-pass
+      SECRET_KEY: notsecret
+      FLEXMEASURES_ENV: development
+      MAIL_SERVER: mailhog   # MailHog mail server
+      MAIL_PORT: 1025        # MailHog mail port
+      LOGGING_LEVEL: INFO 
+    volumes:
+      # a place for config and plugin code, and custom requirements.txt
+      - ./flexmeasures-instance/:/usr/var/flexmeasures-instance/:ro
+    entrypoint: ["/bin/sh", "-c"]
+    command: 
+    - |
+      pip install -r /usr/var/flexmeasures-instance/requirements.txt
+      flexmeasures jobs run-worker --name flexmeasures-worker --queue forecasting\|scheduling
+  test-db:
+    image: postgres
+    expose:
+      - 5432
+    restart: always
+    environment:
+      POSTGRES_DB: fm-test-db
+      POSTGRES_USER: fm-test-db-user
+      POSTGRES_PASSWORD: fm-test-db-pass
+    volumes:
+      - ./ci/load-psql-extensions.sql:/docker-entrypoint-initdb.d/load-psql-extensions.sql
+
+volumes:
+  redis-cache:
+    driver: local
+  flexmeasures-instance:

+ 20 - 0
documentation/Makefile

@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS    =
+SPHINXBUILD   = sphinx-build
+SPHINXPROJ    = FLEXMEASURES
+SOURCEDIR     = .
+BUILDDIR      = ../flexmeasures/ui/static/documentation
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

+ 42 - 0
documentation/_static/css/custom.css

@@ -0,0 +1,42 @@
+div section {
+    text-align: justify;
+}
+#table-of-contents {
+    text-align: left;
+}
+
+div .toctree-wrapper > ul {
+    column-count: 2;
+    margin: 0;
+}
+
+ul .toctree-l1 {
+    margin: 0;
+    -webkit-column-break-inside: avoid;
+    page-break-inside: avoid;
+    break-inside: avoid-column;
+}
+
+div .contents > ul {
+    column-count: 2;
+    margin: 0;
+}
+
+div .contents li {
+    margin: 0;
+    -webkit-column-break-inside: avoid;
+    page-break-inside: avoid;
+    break-inside: avoid-column;
+}
+
+div.admonition.info-icon > .admonition-title:before {
+  content: "\f05a"; /* the fa-circle-info icon */
+}
+
+/* Fix white-space wrapping in tables.
+ * See https://github.com/readthedocs/sphinx_rtd_theme/issues/1505
+ * This is included via html_static_path and html_style in conf.py
+ */
+.wy-table-responsive table td {
+    white-space: normal;
+}

+ 67 - 0
documentation/_templates/custom-module-template.rst

@@ -0,0 +1,67 @@
+.. Adapted from https://stackoverflow.com/a/62613202
+{{ fullname | escape | underline}}
+
+{% block modules %}
+{% if modules %}
+.. rubric:: Modules
+
+.. autosummary::
+   :toctree:
+   :template: custom-module-template.rst                
+   :recursive:
+{% for item in modules %}
+   {% if "test" not in item %}
+   {{ item }}
+   {% endif %}
+{%- endfor %}
+{% endif %}
+{% endblock %}
+
+.. automodule:: {{ fullname }}
+  
+   {% block attributes %}
+   {% if attributes %}
+   .. rubric:: Module Attributes
+
+
+   {% for item in attributes %}
+   .. autoattribute::
+      {{ item }}
+   {%- endfor %}
+   {% endif %}
+   {% endblock %}
+
+   {% block functions %}
+   {% if functions %}
+   .. rubric:: {{ _('Functions') }}
+
+   {% for item in functions %}
+   .. autofunction::
+      {{ item }}
+   {%- endfor %}
+   {% endif %}
+   {% endblock %}
+
+   {% block classes %}
+   {% if classes %}
+   .. rubric:: {{ _('Classes') }}
+
+   {% for item in classes %}     
+   .. autoclass:: {{ item }}
+      :members:
+      :special-members: __init__
+      :private-members:
+   {%- endfor %}
+   {% endif %}
+   {% endblock %}
+
+   {% block exceptions %}
+   {% if exceptions %}
+   .. rubric:: {{ _('Exceptions') }}
+      
+   {% for item in exceptions %}
+   .. autoexception::
+      {{ item }}
+   {%- endfor %}
+   {% endif %}
+   {% endblock %}

+ 66 - 0
documentation/api/aggregator.rst

@@ -0,0 +1,66 @@
+.. _aggregator:
+
+Aggregator
+==========
+
+The Aggregator organises the interaction between the Supplier and Prosumers/ESCos.
+
+An Aggregator can access the following services:
+
+- *postPrognosis* :ref:`(example) <post_prognosis_aggregator>`
+- *postPriceData* :ref:`(example) <post_price_data_aggregator>`
+- *getFlexRequest*
+- *postFlexOffer*
+- *getFlexOrder*
+- *getMeterData* :ref:`(example) <get_meter_data_aggregator>`
+- *getPrognosis* :ref:`(example) <get_prognosis_aggregator>`
+- *getUdiEvent*
+- *postDeviceMessage*
+
+.. _post_prognosis_aggregator:
+
+Post prognosis
+--------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_prognosis
+
+.. _post_price_data_aggregator:
+
+Post price data
+---------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_price_data
+
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_flex_request
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_flex_offer
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_flex_order
+
+.. _get_meter_data_aggregator:
+
+Get meter data
+--------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_meter_data
+
+.. _get_prognosis_aggregator:
+
+Get prognosis
+-------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_prognosis
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_udi_event
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_device_message

+ 607 - 0
documentation/api/change_log.rst

@@ -0,0 +1,607 @@
+.. _api_change_log:
+
+API change log
+===============
+
+.. note:: The FlexMeasures API follows its own versioning scheme. This is also reflected in the URL (e.g. `/api/v3_0`), allowing developers to upgrade at their own pace.
+
+v3.0-23 | 2025-04-08
+""""""""""""""""""""
+
+- Support saving the scheduled :abbr:`SoC (state of charge)` by referencing an appropriate sensor in the ``flex-model`` field ``state-of-charge``.
+- Introduce new price fields in the ``flex-context`` in order to relax device-level power constraints in the ``flex-model``:
+
+  - ``consumption-breach-price``: if set, the ``consumption-capacity`` is used as a soft constraint.
+  - ``production-breach-price``: if set, the ``production-capacity`` is used as a soft constraint.
+  - In both cases, the price is applied both to (the height of) the highest breach in the planning window (as a per-kW price) and to (the area of) each breach that occurs (as a per-kW price per hour).
+    That means both high breaches and long breaches are penalized.
+
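As an illustration of how these soft-constraint prices might appear in a scheduling trigger payload (the prices, capacity and timing below are made-up example values, not taken from this change log):

```python
# Hypothetical /sensors/<id>/schedules/trigger payload using the v3.0-23
# breach prices. All values (prices, capacity, timing) are illustrative.
flex_context = {
    "consumption-breach-price": "10 EUR/kW",  # turns consumption-capacity into a soft constraint
    "production-breach-price": "10 EUR/kW",   # turns production-capacity into a soft constraint
}
flex_model = {
    "consumption-capacity": "5 kW",  # may now be breached, at the price set above
}
message = {
    "start": "2025-04-08T00:00:00+02:00",
    "duration": "PT24H",
    "flex-model": flex_model,
    "flex-context": flex_context,
}
```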
+v3.0-22 | 2025-03-17
+""""""""""""""""""""
+
+- Introduce new price fields in the ``flex-context`` in order to relax SoC constraints in the ``flex-model``:
+
+  - ``soc-minima-breach-price``: if set, the ``soc-minima`` are used as a soft constraint.
+  - ``soc-maxima-breach-price``: if set, the ``soc-maxima`` are used as a soft constraint.
+  - In both cases, the price is applied both to (the height of) the highest breach in the planning window (as a per-kWh price) and to (the area of) each breach that occurs (as a per-kWh price per hour).
+    That means both high breaches and long breaches are penalized.
+
+- Fixed two alternatives for expressing a variable quantity as a time series; specifically, those involving the ``duration`` field.
+
+v3.0-22 | 2024-12-27
+""""""""""""""""""""
+
+- Allow using numeric values for ``flex-model`` fields accepting dimensionless quantities.
+
+v3.0-21 | 2024-12-16
+""""""""""""""""""""
+
+- Introduce new fields for defining capacity contracts and peak contracts in the ``flex-context``, used for scheduling against multiple contractual commitments simultaneously:
+
+  - ``site-consumption-breach-price``: if set, the ``site-consumption-capacity`` is used as a soft constraint.
+    The price is applied both to (the height of) the highest breach in the planning window (as a per-kW price) and to (the area of) each breach that occurs (as a per-kW price per hour).
+    That means both high breaches and long breaches are penalized.
+  - ``site-production-breach-price``: if set, the ``site-production-capacity`` is used as a soft constraint.
+    The price is applied both to (the height of) the highest breach in the planning window (as a per-kW price) and to (the area of) each breach that occurs (as a per-kW price per hour).
+    That means both high breaches and long breaches are penalized.
+  - ``site-peak-consumption-price``: consumption peaks above the ``site-peak-consumption`` are penalized against this per-kW price.
+  - ``site-peak-production-price``: production peaks above the ``site-peak-production`` are penalized against this per-kW price.
+  - ``site-peak-consumption``: current peak consumption; costs from peaks below it are considered sunk costs.
+  - ``site-peak-production``: current peak production; costs from peaks below it are considered sunk costs.
+
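A sketch of a ``flex-context`` combining a capacity contract and a peak contract from these fields (all quantities are made-up example values):

```python
# Hypothetical flex-context combining a capacity contract and a peak
# contract (v3.0-21). All quantities are illustrative.
flex_context = {
    "site-consumption-capacity": "100 kW",         # contracted capacity
    "site-consumption-breach-price": "25 EUR/kW",  # makes the capacity a soft constraint
    "site-peak-consumption": "80 kW",              # current peak; costs below it are sunk
    "site-peak-consumption-price": "30 EUR/kW",    # price for raising the peak further
}
```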
+v3.0-20 | 2024-09-18
+""""""""""""""""""""
+
+- Introduce (optional) pagination to the endpoint `/assets` (GET), also adding the `all_accessible` option to allow querying all accessible accounts in one go.
+
+
+v3.0-19 | 2024-08-13
+""""""""""""""""""""
+
+- Allow passing a fixed price in the ``flex-context`` using the new fields ``consumption-price`` and ``production-price``, which are meant to replace the ``consumption-price-sensor`` and ``production-price-sensor`` fields, respectively.
+- Allow posting a single instantaneous belief as a list of one element to `/sensors/data` (POST).
+- Allow setting a SoC unit directly in some fields (formerly ``Float`` fields, and now ``Quantity`` fields), while still falling back on the contents of the ``soc-unit`` field, for backwards compatibility:
+
+  - ``soc-at-start``
+  - ``soc-min``
+  - ``soc-max``
+
+- Allow setting a unit directly in fields that already supported passing a time series:
+
+  - ``soc-maxima``
+  - ``soc-minima``
+  - ``soc-targets``
+
+- Allow passing a time series in fields that formerly only accepted passing a fixed quantity or a sensor reference:
+
+  - ``power-capacity``
+  - ``consumption-capacity``
+  - ``production-capacity``
+  - ``charging-efficiency``
+  - ``discharging-efficiency``
+  - ``storage-efficiency``
+  - ``soc-gain``
+  - ``soc-usage``
+
+- Added API notation section on variable quantities.
+- Updated section on scheduling; specifically, most flex-context and flex-model fields are now variable quantity fields, so a footnote now explains the few fields that aren't (yet) a variable quantity field.
+- Removed section on singular vs plural keys, which is no longer valid for crucial endpoints.
+
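The three notations for such fields can be sketched as follows, using ``power-capacity`` as an example (the sensor ID and quantities are made up):

```python
# Three ways to pass a flex-model quantity as of v3.0-19: a fixed quantity,
# a sensor reference, or a time series. Values and sensor ID are illustrative.
as_fixed_quantity = {"power-capacity": "10 kW"}
as_sensor_reference = {"power-capacity": {"sensor": 42}}
as_time_series = {
    "power-capacity": [
        {"value": "10 kW", "start": "2024-08-13T00:00:00+02:00", "duration": "PT12H"},
        {"value": "8 kW", "start": "2024-08-13T12:00:00+02:00", "duration": "PT12H"},
    ]
}
```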
+v3.0-19 | 2024-08-09
+""""""""""""""""""""
+
+- Allow setting a SoC unit directly in some fields (formerly ``Float`` fields, and now ``Quantity`` fields), while still falling back on the contents of the ``soc-unit`` field, for backwards compatibility:
+
+  - ``soc-at-start``
+  - ``soc-min``
+  - ``soc-max``
+
+- Allow setting a unit directly in fields that already supported passing a time series:
+
+  - ``soc-maxima``
+  - ``soc-minima``
+  - ``soc-targets``
+
+- Allow passing a time series in fields that formerly only accepted passing a fixed quantity or a sensor reference:
+
+  - ``power-capacity``
+  - ``consumption-capacity``
+  - ``production-capacity``
+  - ``charging-efficiency``
+  - ``discharging-efficiency``
+  - ``storage-efficiency``
+  - ``soc-gain``
+  - ``soc-usage``
+
+
+v3.0-18 | 2024-03-07
+""""""""""""""""""""
+
+- Add support for providing a sensor definition to the ``soc-minima``, ``soc-maxima`` and ``soc-targets`` flex-model fields for `/sensors/<id>/schedules/trigger` (POST).
+
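For illustration, pointing an SoC constraint at a sensor could look like this (the sensor ID is made up):

```python
# Hypothetical flex-model reading SoC minima from a sensor (v3.0-18).
# The sensor ID is illustrative.
flex_model = {
    "soc-minima": {"sensor": 17},
}
```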
+v3.0-17 | 2024-02-26
+""""""""""""""""""""
+
+- Add support for providing a sensor definition to the ``site-power-capacity``, ``site-consumption-capacity`` and ``site-production-capacity`` flex-context fields for `/sensors/<id>/schedules/trigger` (POST).
+
+v3.0-16 | 2024-02-26
+""""""""""""""""""""
+
+- Fix support for providing a sensor definition to the ``power-capacity`` flex-model field for `/sensors/<id>/schedules/trigger` (POST).
+
+v3.0-15 | 2024-01-11
+""""""""""""""""""""
+
+- Support setting SoC constraints in the flex model for a given time period rather than a single datetime, using the new ``start``, ``end`` and/or ``duration`` fields of ``soc-maxima``, ``soc-minima`` and ``soc-targets``.
+
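For illustration, an SoC target held over a period rather than at a single datetime might be written as follows (the target value and times are made up):

```python
# Hypothetical soc-targets entry spanning a time period (v3.0-15).
# The value and times are illustrative; "end" could equivalently be
# replaced by "duration": "PT2H".
flex_model = {
    "soc-targets": [
        {
            "value": "25 kWh",
            "start": "2024-01-11T08:00:00+01:00",
            "end": "2024-01-11T10:00:00+01:00",
        }
    ]
}
```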
+v3.0-14 | 2023-12-07
+""""""""""""""""""""
+
+- Fix API version listing (GET /api/v3_0) for hosts running on Python 3.8.
+
+v3.0-13 | 2023-10-31
+""""""""""""""""""""
+
+- Read access to accounts, assets and sensors is given to external consultants (users with the *consultant* role who belong to a different organisation account) in case a consultancy relationship has been set up.
+- The `/accounts/<id>` (GET) endpoint includes the ID of the account's consultancy account.
+- Introduced the ``site-consumption-capacity`` and ``site-production-capacity`` fields to the ``flex-context`` for `/sensors/<id>/schedules/trigger` (POST).
+
+v3.0-12 | 2023-09-20
+""""""""""""""""""""
+
+- Introduced the ``power-capacity`` field under ``flex-model``, and the ``site-power-capacity`` field under ``flex-context``, for `/sensors/<id>/schedules/trigger` (POST).
+
+v3.0-11 | 2023-08-02
+""""""""""""""""""""
+
+- Added REST endpoint for fetching one sensor: `/sensors/<id>` (GET)
+- Added REST endpoint for adding a sensor: `/sensors` (POST)
+- Added REST endpoint for patching a sensor: `/sensors/<id>` (PATCH)
+- Added REST endpoint for deleting a sensor: `/sensors/<id>` (DELETE)
+
+v3.0-10 | 2023-06-12
+""""""""""""""""""""
+
+- Introduced new ``flex-model`` fields for `/sensors/<id>/schedules/trigger` (POST):
+
+  - ``storage-efficiency``
+  - ``soc-minima``
+  - ``soc-maxima``
+
+- Introduced the ``database_redis`` optional field to the response of the endpoint `/health/ready` (GET).
+
+v3.0-9 | 2023-04-26
+"""""""""""""""""""
+
+- Added missing documentation for the public endpoints for authentication and listing active API versions.
+- Added REST endpoint for listing available services for a specific API version: `/api/v3_0` (GET). This functionality is similar to the *getService* endpoint for older API versions, but now also returns the full URL for each available service.
+
+v3.0-8 | 2023-03-23
+"""""""""""""""""""
+
+- Added REST endpoint for listing accounts and their account roles: `/accounts` (GET)
+- Added REST endpoint for showing an account and its account roles: `/accounts/<id>` (GET)
+
+v3.0-7 | 2023-02-28
+"""""""""""""""""""
+
+- Fix premature deserialization of ``flex-context`` field for `/sensors/<id>/schedules/trigger` (POST).
+
+v3.0-6 | 2023-02-01
+"""""""""""""""""""
+
+- Sunset all fields that were moved to the ``flex-model`` and ``flex-context`` fields of `/sensors/<id>/schedules/trigger` (POST). See v3.0-5.
+
+v3.0-5 | 2023-01-04
+"""""""""""""""""""
+
+- Introduced ``flex-model`` and ``flex-context`` fields to `/sensors/<id>/schedules/trigger` (POST). They are dictionaries and group pre-existing fields:
+
+    - ``soc-at-start`` -> send in ``flex-model`` instead
+    - ``soc-min`` -> send in ``flex-model`` instead
+    - ``soc-max`` -> send in ``flex-model`` instead
+    - ``soc-targets`` -> send in ``flex-model`` instead
+    - ``soc-unit`` -> send in ``flex-model`` instead
+    - ``roundtrip-efficiency`` -> send in ``flex-model`` instead
+    - ``prefer-charging-sooner`` -> send in ``flex-model`` instead
+    - ``consumption-price-sensor`` -> send in ``flex-context`` instead
+    - ``production-price-sensor`` -> send in ``flex-context`` instead
+    - ``inflexible-device-sensors`` -> send in ``flex-context`` instead
+
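+  For illustration, a request body using the new grouping might look like this (all values are placeholders; ``consumption-price-sensor`` and ``production-price-sensor`` take sensor IDs):
+
+  .. code-block:: json
+
+     {
+         "start": "2023-01-04T10:00:00+01:00",
+         "flex-model": {
+             "soc-at-start": 12.1,
+             "soc-unit": "kWh",
+             "soc-min": 5,
+             "soc-max": 25,
+             "roundtrip-efficiency": 0.98
+         },
+         "flex-context": {
+             "consumption-price-sensor": 9,
+             "production-price-sensor": 10
+         }
+     }
+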
+- Introduced the ``duration`` field to `/sensors/<id>/schedules/trigger` (POST) for setting a planning horizon explicitly.
+- Allow posting ``soc-targets`` to `/sensors/<id>/schedules/trigger` (POST) that exceed the default planning horizon, and ignore posted targets that exceed the max planning horizon.
+- Added a subsection on deprecating and sunsetting to the Introduction section.
+- Added a subsection on describing flexibility to the Notation section.
+
+v3.0-4 | 2022-12-08
+"""""""""""""""""""
+
+- Allow posting ``null`` values to `/sensors/data` (POST) to correctly space time series that include missing values (the missing values are not stored).
+- Introduced the ``source`` field to `/sensors/data` (GET) to obtain data for a given source (ID).
+- Fixed the JSON wrapping of the return message for `/sensors/data` (GET).
+- Changed the Notation section:
+
+    - Rewrote the section on filtering by source (ID) with a deprecation notice on filtering by account role and user ID.
+
+v3.0-3 | 2022-08-28
+"""""""""""""""""""
+
+- Introduced ``consumption_price_sensor``, ``production_price_sensor`` and ``inflexible_device_sensors`` fields to `/sensors/<id>/schedules/trigger` (POST).
+
+v3.0-2 | 2022-07-08
+"""""""""""""""""""
+
+- Introduced the "resolution" field to `/sensors/data` (GET) to obtain data in a given resolution.
+
+v3.0-1 | 2022-05-08
+"""""""""""""""""""
+
+- Added REST endpoint for checking application health (readiness to accept requests): `/health/ready` (GET).
+
+v3.0-0 | 2022-03-25
+"""""""""""""""""""
+
+- Added REST endpoint for listing sensors: `/sensors` (GET).
+- Added REST endpoints for managing sensor data: `/sensors/data` (GET, POST).
+- Added REST endpoints for managing assets: `/assets` (GET, POST) and `/assets/<id>` (GET, PATCH, DELETE).
+- Added REST endpoints for triggering and getting schedules: `/sensors/<id>/schedules/<uuid>` (GET) and `/sensors/<id>/schedules/trigger` (POST).
+- [**Breaking change**] Switched to plural resource names for REST endpoints: `/users/<id>` (GET, PATCH) and `/users/<id>/password-reset` (PATCH).
+- [**Breaking change**] Deprecated the following endpoints (NB replacement endpoints mentioned below no longer require the message "type" field):
+
+    - *getConnection* -> use `/sensors` (GET) instead
+    - *getDeviceMessage* -> use `/sensors/<id>/schedules/<uuid>` (GET) instead, where <id> is the sensor id from the "event" field and <uuid> is the value of the "schedule" field returned by `/sensors/<id>/schedules/trigger` (POST)
+    - *getMeterData* -> use `/sensors/data` (GET) instead, replacing the "connection" field with "sensor"
+    - *getPrognosis* -> use `/sensors/data` (GET) instead, replacing the "connection" field with "sensor"
+    - *getService* -> use `/api/v3_0` (GET) instead (since v3.0-9), or consult the public API documentation instead, at https://flexmeasures.readthedocs.io
+    - *postMeterData* -> use `/sensors/data` (POST) instead, replacing the "connection" field with "sensor"
+    - *postPriceData* -> use `/sensors/data` (POST) instead, replacing the "market" field with "sensor"
+    - *postPrognosis* -> use `/sensors/data` (POST) instead, replacing the "connection" field with "sensor"
+    - *postUdiEvent* -> use `/sensors/<id>/schedules/trigger` (POST) instead, where <id> is the sensor id from the "event" field, and rename the following fields:
+
+        - "datetime" -> "start"
+        - "value -> "soc-at-start"
+        - "unit" -> "soc-unit"
+        - "targets" -> "soc-targets"
+        - "soc_min" -> soc-min"
+        - "soc_max" -> soc-max"
+        - "roundtrip_efficiency" -> "roundtrip-efficiency"
+
+    - *postWeatherData* -> use `/sensors/data` (POST) instead
+    - *restoreData*
+
+- Changed the Introduction section:
+
+    - Rewrote the section on service listing for API versions to refer to the public documentation.
+    - Rewrote the section on entity addresses to refer to *sensors* instead of *connections*.
+    - Rewrote the sections on roles and sources into a combined section that refers to account roles rather than USEF roles.
+    - Deprecated the section on group notation.
+
+v2.0-7 | 2022-05-05
+"""""""""""""""""""
+
+*API v2.0 is removed.*
+
+v2.0-6 | 2022-04-26
+"""""""""""""""""""
+
+*API v2.0 is sunset.*
+
+v2.0-5 | 2022-02-13
+"""""""""""""""""""
+
+*API v2.0 is deprecated.*
+
+v2.0-4 | 2022-01-04
+"""""""""""""""""""
+
+- Updated entity addresses in documentation, according to the fm1 scheme.
+- Changed the Introduction section:
+
+    - Rewrote the subsection on entity addresses to refer users to where they can find the entity addresses of their sensors.
+    - Rewrote the subsection on sensor identification (formerly known as asset identification) to place the fm1 scheme front and center.
+
+- Fixed the categorisation of the *postMeterData*, *postPrognosis*, *postPriceData* and *postWeatherData* endpoints from the User category to the Data category.
+
+v2.0-3 | 2021-06-07
+"""""""""""""""""""
+
+- Updated all entity addresses in documentation according to the fm0 scheme, preserving backwards compatibility.
+- Introduced the fm1 scheme for entity addresses for connections, markets, weather sensors and sensors.
+
+v2.0-2 | 2021-04-02
+"""""""""""""""""""
+
+- [**Breaking change**] Switched the interpretation of horizons to rolling horizons.
+- [**Breaking change**] Deprecated the use of ISO 8601 repeating time intervals to denote rolling horizons.
+- Introduced the "prior" field for *postMeterData*, *postPrognosis*, *postPriceData* and *postWeatherData* endpoints.
+- Changed the Introduction section:
+
+    - Rewrote the subsection on prognoses to explain the horizon and prior fields.
+
+- Changed the Simulation section:
+
+    - Rewrote relevant examples using horizon and prior fields.
+
+v2.0-1 | 2021-02-19
+"""""""""""""""""""
+
+- Added REST endpoints for managing users: `/users/` (GET), `/user/<id>` (GET, PATCH) and `/user/<id>/password-reset` (PATCH).
+
+v2.0-0 | 2020-11-14
+"""""""""""""""""""
+
+- Added REST endpoints for managing assets: `/assets/` (GET, POST) and `/asset/<id>` (GET, PATCH, DELETE).
+
+
+v1.3-14 | 2022-05-05
+""""""""""""""""""""
+
+*API v1.3 is removed.*
+
+v1.3-13 | 2022-04-26
+""""""""""""""""""""
+
+*API v1.3 is sunset.*
+
+v1.3-12 | 2022-02-13
+""""""""""""""""""""
+
+*API v1.3 is deprecated.*
+
+v1.3-11 | 2022-01-05
+""""""""""""""""""""
+
+*Affects all versions since v1.3*.
+
+- Changed and extended the *postUdiEvent* endpoint:
+
+    - The recording time of new schedules triggered by calling the endpoint is now the time at which the endpoint was called, rather than the datetime of the sent state of charge (SOC).
+    - Introduced the "prior" field for the purpose of communicating an alternative recording time, thereby keeping support for simulations.
+    - Introduced an optional "roundtrip_efficiency" field, for use in scheduling.
+
+v1.3-10 | 2021-11-08
+""""""""""""""""""""
+
+*Affects all versions since v1.3*.
+
+- Fixed the *getDeviceMessage* endpoint for cases in which there are multiple schedules available, by returning only the most recent one.
+
+v1.3-9 | 2021-04-21
+"""""""""""""""""""
+
+*Affects all versions since v1.0*.
+
+- Fixed regression by partially reverting the breaking change of v1.3-8: Re-instantiated automatic inference of horizons for Post requests for API versions below v2.0, but changed to inference policy: now inferring the data was recorded **right after each event** took place (leading to a zero horizon for each data point) rather than **after the last event** took place (which led to a different horizon for each data point); the latter had been the inference policy before v1.3-8.
+
+v1.3-8 | 2020-04-02
+"""""""""""""""""""
+
+*Affects all versions since v1.0*.
+
+- [**Breaking change**, partially reverted in v1.3-9] Deprecated the automatic inference of horizons for *postMeterData*, *postPrognosis*, *postPriceData* and *postWeatherData* endpoints for API versions below v2.0.
+
+v1.3-7 | 2020-12-16
+"""""""""""""""""""
+
+*Affects all versions since v1.0*.
+
+- Separated the dual purpose of the "horizon" field in the *getMeterData* and *getPrognosis* endpoints by introducing the "prior" field:
+
+    - The "horizon" field in GET endpoints is now always interpreted as a rolling horizon, regardless of whether it is stated as an ISO 8601 repeating time interval.
+    - The *getMeterData* and *getPrognosis* endpoints now accept an optional "prior" field to select only data recorded before a certain ISO 8601 timestamp (replacing the unintuitive usage of the horizon field for specifying a latest time of belief).
+
+v1.3-6 | 2020-12-11
+"""""""""""""""""""
+
+*Affects all versions since v1.0*.
+
+- The *getMeterData* and *getPrognosis* endpoints now return the INVALID_SOURCE status 400 response in case the optional "source" field is used and no relevant sources can be found.
+
+v1.3-5 | 2020-10-29
+"""""""""""""""""""
+
+*Affects all versions since v1.0*.
+
+- Endpoints to POST meter data will now check incoming data to see if the required asset's resolution is being used ― upsampling is done if possible.
+  These endpoints can now return the REQUIRED_INFO_MISSING status 400 response.
+- Endpoints to GET meter data will return data in the asset's resolution ― downsampling to the "resolution" field is done if possible.
+- As they need to determine the asset, all of the mentioned POST and GET endpoints can now return the UNRECOGNIZED_ASSET status 400 response.
+
+v1.3-4 | 2020-06-18
+"""""""""""""""""""
+
+- Improved support for use cases of the *getDeviceMessage* endpoint in which a longer duration, between posting UDI events and retrieving device messages based on those UDI events, is required; the default *time to live* of UDI event identifiers is prolonged from 500 seconds to 7 days, and can be set as a config variable (`FLEXMEASURES_PLANNING_TTL`)
+
+v1.3-3 | 2020-06-07
+"""""""""""""""""""
+
+- Changed backend support (API specifications unaffected) for scheduling charging stations to scheduling Electric Vehicle Supply Equipment (EVSE), in accordance with the Open Charge Point Interface (OCPI).
+
+v1.3-2 | 2020-03-11
+"""""""""""""""""""
+
+- Fixed example entity addresses in simulation section
+
+v1.3-1 | 2020-02-08
+"""""""""""""""""""
+
+- Backend change: the default planning horizon can now be set in FlexMeasures's configuration (`FLEXMEASURES_PLANNING_HORIZON`)
+
+v1.3-0 | 2020-01-28
+"""""""""""""""""""
+
+- Introduced new event type "soc-with-targets" to support scheduling charging stations (see extra example for the *postUdiEvent* endpoint)
+- The *postUdiEvent* endpoint now triggers scheduling jobs to be set up (rather than scheduling directly triggered by the *getDeviceMessage* endpoint)
+- The *getDeviceMessage* now queries the job queue and database for an available schedule
+
+v1.2-6 | 2022-05-05
+"""""""""""""""""""
+
+*API v1.2 is removed.*
+
+v1.2-5 | 2022-04-26
+"""""""""""""""""""
+
+*API v1.2 is sunset.*
+
+v1.2-4 | 2022-02-13
+"""""""""""""""""""
+
+*API v1.2 is deprecated.*
+
+v1.2-3 | 2020-01-28
+"""""""""""""""""""
+
+- Updated endpoint descriptions with additional possible status 400 responses:
+
+    - INVALID_DOMAIN for invalid entity addresses
+    - UNKNOWN_PRICES for infeasible schedules due to missing prices
+
+v1.2-2 | 2018-10-08
+"""""""""""""""""""
+
+- Added a list of registered types of weather sensors to the Simulation section and *postWeatherData* endpoint
+- Changed example for the *postPriceData* endpoint to reflect Korean situation
+
+v1.2-1 | 2018-09-24
+"""""""""""""""""""
+
+- Added a local table of contents to the Simulation section
+- Added a description of the *postPriceData* endpoint in the Simulation section
+- Added a description of the *postWeatherData* endpoint in the Simulation section
+- Revised the subsection about posting power data in the Simulation section
+- Revised the entity address for UDI events to include the type of the event
+
+i.e.
+
+.. code-block:: json
+
+    {
+        "type": "PostUdiEventRequest",
+        "event": "ea1.2021-01.io.flexmeasures.company:7:10:203:soc"
+    }
+
+rather than the erroneously double-keyed:
+
+.. code-block:: json
+
+    {
+        "type": "PostUdiEventRequest",
+        "event": "ea1.2021-01.io.flexmeasures.company:7:10:203",
+        "type": "soc"
+    }
+
+v1.2-0 | 2018-09-08
+"""""""""""""""""""
+
+- Added a description of the *postUdiEvent* endpoint in the Prosumer and Simulation sections
+- Added a description of the *getDeviceMessage* endpoint in the Prosumer and Simulation sections
+
+v1.1-8 | 2022-05-05
+"""""""""""""""""""
+
+*API v1.1 is removed.*
+
+v1.1-7 | 2022-04-26
+"""""""""""""""""""
+
+*API v1.1 is sunset.*
+
+v1.1-6 | 2022-02-13
+"""""""""""""""""""
+
+*API v1.1 is deprecated.*
+
+v1.1-5 | 2020-06-18
+"""""""""""""""""""
+
+- Fixed the *getConnection* endpoint where the returned list of connection names had been unnecessarily nested
+
+v1.1-4 | 2020-03-11
+"""""""""""""""""""
+
+- Added support for posting daily and weekly prices for the *postPriceData* endpoint
+
+v1.1-3 | 2018-09-08
+"""""""""""""""""""
+
+- Added the Simulation section:
+
+    - Added information about setting up a new simulation
+    - Added examples for calling the *postMeterData* endpoint
+    - Added example for calling the *getPrognosis* endpoint
+
+v1.1-2 | 2018-08-15
+"""""""""""""""""""
+
+- Added the *postPrognosis* endpoint
+- Added the *postPriceData* endpoint
+- Added a description of the *postPrognosis* endpoint in the Aggregator section
+- Added a description of the *postPriceData* endpoint in the Aggregator and Supplier sections
+- Added the *restoreData* endpoint for servers in play mode
+
+v1.1-1 | 2018-08-06
+"""""""""""""""""""
+
+- Added the *getConnection* endpoint
+- Added the *postWeatherData* endpoint
+- Changed the Introduction section:
+
+    - Added information about the sign of power values (production is negative)
+    - Updated information about horizons (now anchored to the end of each time interval rather than to the start)
+ 
+- Added an optional horizon to the *postMeterData* endpoint
+
+v1.1-0 | 2018-07-15
+"""""""""""""""""""
+
+- Added the *getPrognosis* endpoint
+- Changed the *getMeterData* endpoint to accept an optional resolution, source, and horizon
+- Changed the Introduction section:
+
+    - Added information about timeseries resolutions
+    - Added information about sources
+    - Updated information about horizons
+
+- Added a description of the *getPrognosis* endpoint in the Supplier section
+
+v1.0-4 | 2022-05-05
+"""""""""""""""""""
+
+*API v1.0 is removed.*
+
+v1.0-3 | 2022-04-26
+"""""""""""""""""""
+
+*API v1.0 is sunset.*
+
+v1.0-2 | 2022-02-13
+"""""""""""""""""""
+
+*API v1.0 is deprecated.*
+
+v1.0-1 | 2018-07-10
+"""""""""""""""""""
+
+- Moved specifications to be part of the platform's Sphinx documentation:
+
+    - Each API service is now documented in the docstring of its respective endpoint
+    - Added sections listing all endpoints per version
+    - Documentation includes specifications of **all** supported API versions (supported versions have a registered Flask blueprint)
+
+v1.0-0 | 2018-07-10
+"""""""""""""""""""
+
+- Started change log
+- Added Introduction section with notes regarding:
+
+    - Authentication
+    - Relevant roles for the API
+    - Key notation
+    - The addressing scheme for assets
+    - Connection group notation
+    - Timeseries notation
+    - Prognosis notation
+    - Units of timeseries data
+
+- Added a description of the *getService* endpoint in the Introduction section
+- Added a description of the *postMeterData* endpoint in the MDC section
+- Added a description of the *getMeterData* endpoint in the Prosumer section

+ 22 - 0
documentation/api/dev.rst

@@ -0,0 +1,22 @@
+.. _dev:
+
+Developer API
+=============
+
+These endpoints are still under development and are subject to change in new releases.
+
+Summary
+-------
+
+.. qrefflask:: flexmeasures.app:create(env="documentation")
+    :modules: flexmeasures.api.dev.assets, flexmeasures.api.dev.sensors
+    :order: path
+    :include-empty-docstring:
+
+API Details
+-----------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :modules: flexmeasures.api.dev.assets, flexmeasures.api.dev.sensors
+    :order: path
+    :include-empty-docstring:

+ 209 - 0
documentation/api/introduction.rst

@@ -0,0 +1,209 @@
+.. _api_introduction:
+
+API Introduction
+================
+
+This document details the Application Programming Interface (API) of the FlexMeasures web service. The API supports user automation for flexibility valorisation in the energy sector, both in a live setting and for the purpose of simulating scenarios. The web service adheres to the concepts and terminology used in the Universal Smart Energy Framework (USEF).
+
+All requests and responses to and from the web service should be valid JSON messages.
+For deeper explanations on how to construct messages, see :ref:`api_notation`.
+
+.. _api_versions:
+
+Main endpoint and API versions
+------------------------------
+
+All versions of the API are released on:
+
+.. code-block:: html
+
+    https://<flexmeasures-root-url>/api
+
+So if you are running FlexMeasures on your computer, it would be:
+
+.. code-block:: html
+
+    https://localhost:5000/api
+
+Let's assume we are running a server for a client at:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/api
+
+where `company` is a client of ours. All their accounts' data lives on that server.
+
+We assume in this document that the FlexMeasures instance you want to connect to is hosted at https://company.flexmeasures.io.
+
+Let's see what the ``/api`` endpoint returns:
+
+.. code-block:: python
+
+    >>> import requests
+    >>> res = requests.get("https://company.flexmeasures.io/api")
+    >>> res.json()
+    {'flexmeasures_version': '0.9.0',
+     'message': 'For these API versions endpoints are available. An authentication token can be requested at: /api/requestAuthToken. For a list of services, see https://flexmeasures.readthedocs.io',
+     'status': 200,
+     'versions': ['v3_0']
+    }
+
+So this tells us which API versions exist. For instance, we know that the latest API version is available at:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/api/v3_0
+
+
+Also, we can see that a list of endpoints is available on https://flexmeasures.readthedocs.io for each of these versions.
+
+.. note:: Sunset API versions are still documented there, simply select an older version.
+
+
+.. _api_auth:
+
+Authentication
+--------------
+
+Service usage is only possible with a user access token specified in the request header, for example:
+
+.. code-block:: json
+
+    {
+        "Authorization": "<token>"
+    }
+
+A fresh "<token>" can be generated on the user's profile after logging in:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/logged-in-user
+
+or through a POST request to the following endpoint:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/api/requestAuthToken
+
+using the following JSON message for the POST request data:
+
+.. code-block:: json
+
+    {
+        "email": "<user email>",
+        "password": "<user password>"
+    }
+
+which gives a response like this if the credentials are correct:
+
+.. code-block:: json
+
+    {
+        "auth_token": "<authentication token>",
+        "user_id": "<ID of the user>"
+    }
+
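+As a sketch using only the Python standard library (the host URL and credentials are placeholders):
+
+.. code-block:: python
+
+    import json
+    from urllib import request
+
+    def get_auth_token(base_url: str, email: str, password: str) -> str:
+        """POST credentials to /api/requestAuthToken and return a fresh token."""
+        req = request.Request(
+            f"{base_url}/api/requestAuthToken",
+            data=json.dumps({"email": email, "password": password}).encode(),
+            headers={"Content-Type": "application/json"},
+        )
+        with request.urlopen(req) as res:
+            return json.load(res)["auth_token"]
+
+    def auth_headers(token: str) -> dict:
+        """Request headers for authenticated FlexMeasures API calls."""
+        return {"Authorization": token}
+
+For example, ``auth_headers(get_auth_token("https://company.flexmeasures.io", email, password))`` yields the header to pass along with subsequent requests.
+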
+.. note:: Each access token has a limited lifetime, see :ref:`api_auth`.
+
+.. _api_deprecation:
+
+Deprecation and sunset
+----------------------
+
+When an API feature becomes obsolete, we deprecate it.
+Deprecation of major features doesn't happen a lot, but when it does, it happens in multiple stages, during which we support clients and hosts in adapting.
+For more information on our multi-stage deprecation approach and available options for FlexMeasures hosts, see :ref:`Deprecation and sunset for hosts<api_deprecation_hosts>`.
+
+.. _api_deprecation_clients:
+
+Clients
+^^^^^^^
+
+Professional API users should monitor API responses for the ``"Deprecation"`` and ``"Sunset"`` response headers [see `draft-ietf-httpapi-deprecation-header-02 <https://datatracker.ietf.org/doc/draft-ietf-httpapi-deprecation-header/>`_ and `RFC 8594 <https://www.rfc-editor.org/rfc/rfc8594>`_, respectively], so system administrators can be warned when using API endpoints that are flagged for deprecation and/or are likely to become unresponsive in the future.
+
+The deprecation header field shows an `IMF-fixdate <https://www.rfc-editor.org/rfc/rfc7231#section-7.1.1.1>`_ indicating when the API endpoint was deprecated.
+The sunset header field shows an `IMF-fixdate <https://www.rfc-editor.org/rfc/rfc7231#section-7.1.1.1>`_ indicating when the API endpoint is likely to become unresponsive.
+
+More information about a deprecation, sunset, and possibly recommended replacements, can be found under the ``"Link"`` response header. Relevant relations are:
+
+- ``"deprecation"``
+- ``"successor-version"``
+- ``"latest-version"``
+- ``"alternate"``
+- ``"sunset"``
+
+Here is a client-side code example in Python (this merely prints out the deprecation header, sunset header and relevant links, and should be revised to make use of the client's monitoring tools):
+
+.. code-block:: python
+
+    def check_deprecation_and_sunset(url, response):
+        """Print deprecation and sunset headers, along with info links.
+
+        Reference
+        ---------
+        https://flexmeasures.readthedocs.io/en/latest/api/introduction.html#deprecation-and-sunset
+        """
+        # Go through the response headers in their given order
+        for header, content in response.headers.items():
+            if header == "Deprecation":
+                print(f"Your request to {url} returned a deprecation warning. Deprecation: {content}")
+            elif header == "Sunset":
+                print(f"Your request to {url} returned a sunset warning. Sunset: {content}")
+            elif header == "Link" and ('rel="deprecation";' in content or 'rel="sunset";' in content):
+                print(f"Further info is available: {content}")
+
+.. _api_deprecation_hosts:
+
+Hosts
+^^^^^
+
+FlexMeasures versions go through the following stages for deprecating major features (such as API versions):
+
+- :ref:`api_deprecation_stage_1`: status 200 (OK) with :ref:`relevant headers<api_deprecation_clients>`, plus a toggle to 410 (Gone) for blackout tests
+- :ref:`api_deprecation_stage_2`: status 410 (Gone), plus a toggle to 200 (OK) for sunset rollbacks
+- :ref:`api_deprecation_stage_3`: status 410 (Gone)
+
+Let's go over these stages in more detail.
+
+.. _api_deprecation_stage_1:
+
+Stage 1: Deprecation
+""""""""""""""""""""
+
+When upgrading to a FlexMeasures version that deprecates an API version (e.g. ``flexmeasures==0.12`` deprecates API version 2), clients will receive ``"Deprecation"`` and ``"Sunset"`` response headers [see `draft-ietf-httpapi-deprecation-header-02 <https://datatracker.ietf.org/doc/draft-ietf-httpapi-deprecation-header/>`_ and `RFC 8594 <https://www.rfc-editor.org/rfc/rfc8594>`_, respectively].
+
+Hosts should not expect every client to monitor response headers and proactively upgrade to newer API versions.
+Please make sure that your users have upgraded before you upgrade to a FlexMeasures version that sunsets an API version.
+You can do this by checking your server logs for warnings about users who are still calling deprecated endpoints.
+
+In addition, we recommend running blackout tests during the deprecation notice phase.
+You (and your users) can learn which systems need attention and how to deal with them.
+Be sure to announce these beforehand.
+Here is an example of how to run a blackout test:
+If a sunset happens in version ``0.13``, and you are hosting a version which includes the deprecation notice (e.g. ``0.12``), FlexMeasures will simulate the sunset if you set the config setting ``FLEXMEASURES_API_SUNSET_ACTIVE = True`` (see :ref:`Sunset Configuration<sunset-config>`).
+During such a blackout test, clients will receive ``HTTP status 410 (Gone)`` responses when calling corresponding endpoints.
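+
+For example, in a standard FlexMeasures config file this toggle is a single line:
+
+.. code-block:: python
+
+    # Simulate the sunset during an announced blackout test window;
+    # affected endpoints respond with 410 (Gone) while this is set.
+    FLEXMEASURES_API_SUNSET_ACTIVE = True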
+
+.. admonition:: What is a blackout test
+   :class: info-icon
+
+   A blackout test is a planned, timeboxed event when a host will turn off a certain API or some of the API capabilities.
+   The test is meant to help developers understand the impact the retirement will have on the applications and users.
+   `Source: Platform of Trust <https://design.oftrust.net/api-migration-policies/blackout-testing>`_
+
+.. _api_deprecation_stage_2:
+
+Stage 2: Preliminary sunset
+"""""""""""""""""""""""""""
+
+When upgrading to a FlexMeasures version that sunsets an API version (e.g. ``flexmeasures==0.13`` sunsets API version 2), clients will receive ``HTTP status 410 (Gone)`` responses when calling corresponding endpoints.
+
+In case you have users that haven't upgraded yet, and would still like to upgrade FlexMeasures (to the version that officially sunsets the API version), you can.
+For a little while after sunset (usually one more minor version), we will continue to support a "sunset rollback".
+To enable this, just set the config setting ``FLEXMEASURES_API_SUNSET_ACTIVE = False`` and consider announcing some more blackout tests to your users, during which you can set this setting to ``True`` to reactivate the sunset.
+
+.. _api_deprecation_stage_3:
+
+Stage 3: Definitive sunset
+""""""""""""""""""""""""""
+
+After upgrading to one of the next FlexMeasures versions (e.g. ``flexmeasures==0.14``), clients that call sunset endpoints will receive ``HTTP status 410 (Gone)`` responses.

+ 17 - 0
documentation/api/mdc.rst

@@ -0,0 +1,17 @@
+.. _mdc:
+
+Meter Data Company
+==================
+
+The meter data company (MDC) represents a trusted party that shares the meter data of connections that are
+registered within FlexMeasures. In case the MDC cannot be queried to provide relevant meter data (e.g. because the role
+has not been taken up by a market party), the party taking up the Prosumer role will also take up the MDC role, and will
+bear the responsibility to post their own meter data with the *postMeterData* service.
+
+The granularity of the meter data and the time delay between the actual measurement and its posting should be
+specified in the service contract between Prosumer and Aggregator. In this example, the Prosumer decided to share
+the meter data in 15-minute intervals and only after 1.30am. It is desirable to send meter readings in 5-minute
+intervals (or with an even finer granularity), and as soon as possible after measurement.
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_meter_data

+ 370 - 0
documentation/api/notation.rst

@@ -0,0 +1,370 @@
+.. _api_notation:
+
+Notation
+--------
+
+This page helps you to construct messages to the FlexMeasures API. Please consult the endpoint documentation first. Here we dive into topics useful across endpoints.
+
+
+.. _variable_quantities:
+
+Variable quantities
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Many API fields deal with variable quantities, for example, :ref:`flex-model <flex_models_and_schedulers>` and :ref:`flex-context <flex_context>` fields.
+Unless stated otherwise, values of such fields can take one of the following forms:
+
+- A fixed quantity, to describe steady constraints such as a physical power capacity.
+  For example:
+
+  .. code-block:: json
+
+     {
+         "power-capacity": "15 kW"
+     }
+
+- A variable quantity defined at specific moments in time, to describe dynamic constraints/preferences such as target states of charge.
+
+  .. code-block:: json
+
+     {
+         "soc-targets": [
+             {"datetime": "2024-02-05T08:00:00+01:00", "value": "8.2 kWh"},
+             ...
+             {"datetime": "2024-02-05T13:00:00+01:00", "value": "2.2 kWh"}
+         ]
+     }
+
+- A variable quantity defined for specific time ranges, to describe dynamic constraints/preferences such as usage forecasts.
+
+  .. code-block:: json
+
+     {
+         "soc-usage": [
+             {"start": "2024-02-05T08:00:00+01:00", "duration": "PT2H", "value": "10.1 kW"},
+             ...
+             {"start": "2024-02-05T13:00:00+01:00", "end": "2024-02-05T13:15:00+01:00", "value": "10.3 kW"}
+         ]
+     }
+
+  Note the two distinct ways of specifying a time period (``"end"`` in combination with ``"duration"`` also works).
+
+  .. note:: In case a field defines partially overlapping time periods, FlexMeasures automatically resolves this.
+            By default, time periods that are defined earlier in the list take precedence.
+            Fields that deviate from this policy will note so explicitly.
+            (For example, for fields dealing with capacities, the minimum is selected instead.)
+
+- A reference to a sensor that records a variable quantity, which allows cross-referencing to dynamic contexts that are already recorded as sensor data in FlexMeasures. For instance, a site's contracted consumption capacity that changes over time.
+
+  .. code-block:: json
+
+     {
+         "site-consumption-capacity": {"sensor": 55}
+     }
+
+  The unit of the data is specified on the sensor.
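+
+To illustrate the default precedence rule for partially overlapping time periods (entries earlier in the list win), here is a minimal Python sketch; the ``value_at`` helper is hypothetical and not part of FlexMeasures:
+
+.. code-block:: python
+
+    from datetime import datetime
+
+    def value_at(segments, t):
+        """Return the value of the first listed segment whose [start, end) covers t."""
+        for seg in segments:
+            start = datetime.fromisoformat(seg["start"])
+            end = datetime.fromisoformat(seg["end"])
+            if start <= t < end:
+                return seg["value"]
+        return None
+
+    segments = [
+        {"start": "2024-02-05T08:00:00+01:00", "end": "2024-02-05T10:00:00+01:00", "value": "10.1 kW"},
+        {"start": "2024-02-05T09:00:00+01:00", "end": "2024-02-05T11:00:00+01:00", "value": "10.3 kW"},
+    ]
+    # Both segments cover 09:30; the earlier entry takes precedence
+    print(value_at(segments, datetime.fromisoformat("2024-02-05T09:30:00+01:00")))  # -> 10.1 kW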
+
+Sensors and entity addresses
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In many API endpoints, sensors are identified by their ID, e.g. ``/sensors/45``. However, all sensors can also be identified with an entity address following the EA1 addressing scheme prescribed by USEF [1],
+which is mostly taken from IETF RFC 3720 [2].
+
+This is the complete structure of an EA1 address:
+
+.. code-block:: json
+
+    {
+        "sensor": "ea1.{date code}.{reversed domain name}:{locally unique string}"
+    }
+
+Here is a full example for an entity address of a sensor in FlexMeasures:
+
+.. code-block:: json
+
+    {
+        "sensor": "ea1.2021-02.io.flexmeasures.company:fm1.73"
+    }
+
+where FlexMeasures runs at `company.flexmeasures.io` (which the current domain owner started using in February 2021), and the locally unique string uses the `fm1` scheme (see below) to identify sensor ID 73.
+
+Assets are listed at:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/assets
+
+The full entity addresses of all of the asset's sensors can be obtained on the asset's page, e.g. for asset 81:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/assets/81
+
+
+Entity address structure
+""""""""""""""""""""""""""
+Some deeper explanations about an entity address:
+
+- "ea1" is a constant, indicating this is a type 1 USEF entity address
+- The date code "must be a date during which the naming authority owned the domain name used in this format, and should be the first month in which the domain name was owned by this naming authority at 00:01 GMT of the first day of the month."
+- The reversed domain name is taken from the naming authority (person or organization) creating this entity address
+- The locally unique string can be used for local purposes, and FlexMeasures uses it to identify the resource.
+  Fields in the locally unique string are separated by colons; see IETF RFC 3721, page 6 [3] for other examples.
+  While [2] says it's possible to use dashes, dots or colons as separators, we might use dashes and dots in
+  latitude/longitude coordinates of sensors, so we settle on colons.
+
+
+[1] https://www.usef.energy/app/uploads/2020/01/USEF-Flex-Trading-Protocol-Specifications-1.01.pdf
+
+[2] https://tools.ietf.org/html/rfc3720
+
+[3] https://tools.ietf.org/html/rfc3721
+
+
+Types of sensor identification used in FlexMeasures
+""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+FlexMeasures expects the locally unique string to contain information in a certain structure.
+We distinguish type ``fm0`` and type ``fm1`` FlexMeasures entity addresses.
+
+The ``fm1`` scheme is the latest version.
+It uses the fact that all FlexMeasures sensors have unique IDs.
+
+.. code-block::
+
+    ea1.2021-01.io.flexmeasures:fm1.42
+    ea1.2021-01.io.flexmeasures:fm1.<sensor_id>
+
+The ``fm0`` scheme is the original scheme.
+It identified different types of sensors (such as grid connections, weather sensors and markets) in different ways.
+The ``fm0`` scheme has been sunset since API version 3.
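As an illustration, extracting the sensor ID from an ``fm1`` entity address could look like this (a sketch with a hypothetical helper function, not FlexMeasures code):

```python
import re

# Pattern for fm1-scheme entity addresses, e.g. "ea1.2021-01.io.flexmeasures:fm1.42".
FM1_PATTERN = re.compile(
    r"^ea1\."                        # constant: type 1 USEF entity address
    r"(?P<date_code>\d{4}-\d{2})\."  # date code, e.g. 2021-01
    r"(?P<domain>[^:]+)"             # reversed domain name
    r":fm1\.(?P<sensor_id>\d+)$"     # fm1 scheme: the sensor ID
)


def sensor_id_from_entity_address(ea: str) -> int:
    """Return the sensor ID encoded in an fm1-scheme entity address."""
    match = FM1_PATTERN.match(ea)
    if match is None:
        raise ValueError(f"not an fm1 entity address: {ea}")
    return int(match.group("sensor_id"))
```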
+
+
+Timeseries
+^^^^^^^^^^
+
+Timestamps and durations are consistent with the ISO 8601 standard.
+The frequency of the data is implicit (from duration and number of values), while the resolution of the data is explicit, see :ref:`frequency_and_resolution`.
+
+All timestamps in requests to the API must be timezone-aware. For instance, in the example below, the timezone indication "Z" indicates a zero offset from UTC.
+
+We use the following shorthand for sending sequential, equidistant values within a time interval:
+
+.. code-block:: json
+
+    {
+        "values": [
+            10,
+            5,
+            8
+        ],
+        "start": "2016-05-01T13:00:00Z",
+        "duration": "PT45M"
+    }
+
+Technically, this is equal to:
+
+.. code-block:: json
+
+    {
+        "timeseries": [
+            {
+                "value": 10,
+                "start": "2016-05-01T13:00:00Z",
+                "duration": "PT15M"
+            },
+            {
+                "value": 5,
+                "start": "2016-05-01T13:15:00Z",
+                "duration": "PT15M"
+            },
+            {
+                "value": 8,
+                "start": "2016-05-01T13:30:00Z",
+                "duration": "PT15M"
+            }
+        ]
+    }
+
+This intuitive convention allows us to reduce communication by sending univariate timeseries as arrays.
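The expansion from the array shorthand to the explicit notation can be sketched as follows (illustrative Python, not FlexMeasures code; ISO 8601 parsing is elided by passing ``start`` and ``duration`` as ``datetime``/``timedelta`` objects):

```python
from datetime import datetime, timedelta, timezone


def expand_shorthand(values: list, start: datetime, duration: timedelta) -> list:
    """Turn the array shorthand into one explicit entry per value."""
    step = duration / len(values)  # the implicit frequency (and resolution)
    return [
        {"value": value, "start": start + i * step, "duration": step}
        for i, value in enumerate(values)
    ]


entries = expand_shorthand(
    [10, 5, 8],
    start=datetime(2016, 5, 1, 13, 0, tzinfo=timezone.utc),
    duration=timedelta(minutes=45),
)
# entries[1] covers 13:15-13:30 UTC with value 5, matching the explicit notation above.
```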
+
+
+In all current versions of the FlexMeasures API, only equidistant timeseries data is expected to be communicated. Therefore:
+
+- only the array notation should be used (first notation from above),
+- "start" should be a timestamp on the hour or a multiple of the sensor resolution thereafter (e.g. "16:10" works if the resolution is 5 minutes), and
+- "duration" should also be a multiple of the sensor resolution.
+
+
+.. _beliefs:
+
+Tracking the recording time of beliefs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For all its time series data, FlexMeasures keeps track of the time they were recorded. Data can be defined and filtered accordingly, which allows you to get a snapshot of what was known at a certain point in time.
+
+.. note:: FlexMeasures uses the `timely-beliefs data model <https://github.com/SeitaBV/timely-beliefs/#the-data-model>`_ for modelling such facts about time series data, and accordingly we use the term "belief" in this documentation. In that model, the recording time is referred to as "belief time".
+
+
+Querying by recording time
+""""""""""""""""""""""""""""
+
+Some GET endpoints have two optional timing fields to allow such filtering.
+
+The ``prior`` field (a timestamp) can be used to select beliefs recorded before some moment in time.
+It can be used to "time-travel" to see the state of information at some moment in the past.
+
+In addition, the ``horizon`` field (a duration) can be used to select beliefs recorded before some moment in time, `relative to each event`.
+For example, to filter out meter readings communicated within a day (denoted by a negative horizon) or forecasts created at least a day beforehand (denoted by a positive horizon).
+
+The two timing fields follow the ISO 8601 standard and are interpreted as follows:
+
+- ``prior``: recorded prior to <timestamp>.
+- ``horizon``: recorded at least <duration> before the fact (indicated by a positive horizon), or at most <duration> after the fact (indicated by a negative horizon).
+
+For example (note that you can use both fields together):
+
+.. code-block:: json
+
+    {
+        "horizon": "PT6H",
+        "prior": "2020-08-01T17:00:00Z"
+    }
+
+These fields denote that the data should have been recorded at least 6 hours before the fact (i.e. forecasts) and prior to 5 PM on August 1st 2020 (UTC).
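The combined filter semantics can be sketched like this (illustrative Python, not FlexMeasures internals; following the timely-beliefs convention, the horizon is taken relative to the end of each event):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def keep_belief(
    belief_time: datetime,
    event_end: datetime,
    prior: Optional[datetime] = None,
    horizon: Optional[timedelta] = None,
) -> bool:
    """Decide whether a belief passes the 'prior' and 'horizon' filters."""
    if prior is not None and not belief_time < prior:
        return False  # not recorded prior to the given timestamp
    if horizon is not None and not belief_time <= event_end - horizon:
        return False  # not recorded at least `horizon` before the fact
    return True
```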
+
+.. note:: In addition to these two timing filters, beliefs can be filtered by their source (see :ref:`sources`).
+
+
+.. _prognoses:
+
+Setting the recording time
+""""""""""""""""""""""""""""
+
+Some POST endpoints have two optional fields to allow setting the time at which beliefs are recorded in an explicit manner.
+This is useful to keep an accurate history of what was known at what time, especially for prognoses.
+If not used, FlexMeasures will infer the belief time from the arrival time of the message.
+
+The "prior" field (a timestamp) can be used to set a single time at which the entire time series (e.g. a prognosed series) was recorded.
+Alternatively, the "horizon" field (a duration) can be used to set the recording times relative to each (prognosed) event.
+In case both fields are set, the earliest possible recording time is determined and recorded for each (prognosed) event.
+
+The two timing fields follow the ISO 8601 standard and are interpreted as follows:
+
+.. code-block:: json
+
+    {
+        "values": [
+            10,
+            5,
+            8
+        ],
+        "start": "2016-05-01T13:00:00Z",
+        "duration": "PT45M",
+        "prior": "2016-05-01T07:45:00Z"
+    }
+
+This message implies that the entire prognosis was recorded at 7:45 AM UTC, i.e. 6 hours before the end of the entire time interval.
+
+.. code-block:: json
+
+    {
+        "values": [
+            10,
+            5,
+            8
+        ],
+        "start": "2016-05-01T13:00:00Z",
+        "duration": "PT45M",
+        "horizon": "PT6H"
+    }
+
+This message implies that all prognosed values were recorded 6 hours in advance.
+That is, the value for 1:00-1:15 PM was made at 7:15 AM, the value for 1:15-1:30 PM was made at 7:30 AM, and the value for 1:30-1:45 PM was made at 7:45 AM.
+
+Negative horizons may also be stated (breaking with the ISO 8601 standard) to indicate a belief about something that has already happened (i.e. after the fact, or simply *ex post*).
+For example, the following message implies that all prognosed values were made 10 minutes after the fact:
+
+.. code-block:: json
+
+    {
+        "values": [
+            10,
+            5,
+            8
+        ],
+        "start": "2016-05-01T13:00:00Z",
+        "duration": "PT45M",
+        "horizon": "-PT10M"
+    }
+
+Note that, for a horizon indicating a belief 10 minutes after the *start* of each 15-minute interval, the "horizon" would have been "PT5M".
+This denotes that the prognosed interval has 5 minutes left to be concluded.
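The recording times implied by these fields can be sketched as follows (illustrative Python, not FlexMeasures code):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def belief_time_for_event(
    event_start: datetime,
    resolution: timedelta,
    horizon: Optional[timedelta] = None,
    prior: Optional[datetime] = None,
) -> datetime:
    """With a horizon, the belief time is the event's end minus the horizon;
    when both fields are set, the earliest candidate is recorded."""
    candidates = []
    if horizon is not None:
        candidates.append(event_start + resolution - horizon)
    if prior is not None:
        candidates.append(prior)
    return min(candidates)
```

With ``"horizon": "PT6H"``, the 1:00-1:15 PM event from the example above indeed gets a belief time of 7:15 AM, and a negative horizon of ``-PT10M`` yields a belief time 10 minutes after the event's end.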
+
+.. _frequency_and_resolution:
+
+Frequency and resolution
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+FlexMeasures handles two types of time series, which can be distinguished by defining the following timing properties for events recorded by sensors:
+
+- Frequency: how far apart events occur (a constant duration between event starts)
+- Resolution: how long an event lasts (a constant duration between the start and end of an event)
+
+.. note:: FlexMeasures runs on Pandas, and follows Pandas terminology accordingly.
+          The term frequency as used by Pandas is the reciprocal of the `SI quantity for frequency <https://en.wikipedia.org/wiki/SI_derived_unit>`_.
+
+1. The first type of time series describes non-instantaneous events such as average hourly wind speed.
+   For this case, it is commonly assumed that ``frequency == resolution``.
+   That is, events follow each other sequentially and without delay.
+
+2. The second type of time series describes instantaneous events (zero resolution) such as temperature at a given time.
+   For this case, we have ``frequency != resolution``.
+
+Specifying a frequency and resolution is redundant for POST requests that contain both "values" and a "duration": FlexMeasures computes the frequency by dividing the duration by the number of values, and, for sensors that record non-instantaneous events, assumes the resolution of the data is equal to the frequency.
+
+When POSTing data, FlexMeasures checks this inferred resolution against the required resolution of the sensors that are posted to.
+If these can't be matched (through upsampling), an error will occur.
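A sketch of that inference (illustrative Python, not FlexMeasures code):

```python
from datetime import timedelta

values = [10, 5, 8]
duration = timedelta(minutes=45)  # "PT45M"

# Frequency is inferred by dividing the duration by the number of values.
frequency = duration / len(values)
assert frequency == timedelta(minutes=15)

# For sensors recording non-instantaneous events, the resolution of the
# data is assumed equal to the inferred frequency.
resolution = frequency
```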
+
+GET requests (such as */sensors/data*) return data with a frequency either equal to the resolution that the sensor is configured for (for non-instantaneous sensors), or a default frequency befitting (in our opinion) the requested time interval.
+A "resolution" may be specified explicitly to obtain the data in downsampled form, which can be very beneficial for download speed.
+For non-instantaneous sensors, the specified resolution needs to be a multiple of the sensor's resolution, e.g. hourly or daily values if the sensor's resolution is 15 minutes.
+For instantaneous sensors, the specified resolution is interpreted as a request for data in a specific frequency.
+The resolution of the underlying data will remain zero (and the returned message will say so).
+
+
+.. _sources:
+
+Sources
+^^^^^^^
+
+Requests for data may filter by source. FlexMeasures keeps track of the data source (the data's author, for example, a user, forecaster or scheduler belonging to a given organisation) of time series data.
+For example, to obtain data originating from data source 42, include the following:
+
+.. code-block:: json
+
+    {
+        "source": 42
+    }
+
+Data source IDs can be found by hovering over data in charts.
+
+.. _units:
+
+Units
+^^^^^
+
+The FlexMeasures API is quite flexible with sent units.
+A valid unit for timeseries data is any unit that is convertible to the configured sensor unit registered in FlexMeasures.
+So, for example, you can send timeseries data with "W" unit to a "kW" sensor.
+And if you wish to do so, you can even send a timeseries with "kWh" unit to a "kW" sensor.
+In this case, FlexMeasures will convert the data using the resolution of the timeseries.
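For example, converting ``kWh`` values to the ``kW`` unit of a sensor uses each interval's duration (an illustrative sketch, not the actual FlexMeasures conversion code):

```python
from datetime import timedelta


def kwh_to_kw(values_kwh: list, resolution: timedelta) -> list:
    """Average power (kW) over each interval = energy (kWh) / interval length (h)."""
    hours = resolution.total_seconds() / 3600
    return [value / hours for value in values_kwh]


# 2.5 kWh measured over 15 minutes corresponds to an average power of 10 kW.
kwh_to_kw([2.5, 1.0], timedelta(minutes=15))
```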
+
+.. _signs:
+
+Signs of power values
+^^^^^^^^^^^^^^^^^^^^^
+
+USEF recommends using positive power values to indicate consumption and negative values to indicate production, i.e.
+to take the perspective of the Prosumer.
+If an asset has been configured as a pure producer or pure consumer, the web service will help avoid mistakes by checking the sign of posted power values.
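A sketch of such a sign check (illustrative Python; the function name and flags are hypothetical, not the FlexMeasures API):

```python
def check_power_sign(value: float, is_pure_producer: bool, is_pure_consumer: bool) -> None:
    """Positive = consumption, negative = production (Prosumer perspective)."""
    if is_pure_producer and value > 0:
        raise ValueError("a pure producer cannot post positive (consumption) values")
    if is_pure_consumer and value < 0:
        raise ValueError("a pure consumer cannot post negative (production) values")
```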

+ 71 - 0
documentation/api/prosumer.rst

@@ -0,0 +1,71 @@
+.. _prosumer:
+
+Prosumer
+========
+
+A Prosumer owns a number of energy consuming or producing assets behind a connection to the electricity grid.
+
+A Prosumer can access the following services:
+
+- *postMeterData* :ref:`(example) <post_meter_data_prosumer>`
+- *postPrognosis* :ref:`(example) <post_prognosis_prosumer>`
+- *getMeterData* :ref:`(example) <get_meter_data_prosumer>`
+- *getPrognosis* :ref:`(example) <get_prognosis_prosumer>`
+- *postUdiEvent* :ref:`(example) <post_udi_event_prosumer>`
+- *getDeviceMessage* :ref:`(example) <get_device_message_prosumer>`
+
+.. _post_meter_data_prosumer:
+
+Post meter data
+---------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_meter_data
+
+.. _post_prognosis_prosumer:
+
+Post prognosis
+--------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.post_prognosis
+
+.. _get_meter_data_prosumer:
+
+Get meter data
+--------------
+
+A Prosumer can query the FlexMeasures web service for its own meter data using the *getMeterData* service.
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_meter_data
+
+.. _get_prognosis_prosumer:
+
+Get prognosis
+-------------
+
+A Prosumer can query the FlexMeasures web service for prognoses of its own meter data using the *getPrognosis* service.
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_1.get_prognosis
+
+.. _post_udi_event_prosumer:
+
+Post UDI event
+--------------
+
+A Prosumer can post its flexibility constraints to the FlexMeasures web service as UDI events using the *postUdiEvent* service.
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_2.post_udi_event
+
+.. _get_device_message_prosumer:
+
+Get device message
+------------------
+
+A Prosumer can query the FlexMeasures web service for control signals using the *getDeviceMessage* service.
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures_api_v1_2.get_device_message

+ 39 - 0
documentation/api/supplier.rst

@@ -0,0 +1,39 @@
+.. _supplier:
+
+Supplier
+========
+
+For FlexMeasures, the Supplier represents the balance responsible party that requests flexibility from asset owners.
+
+A Supplier can access the following services:
+
+- *getPrognosis* :ref:`(example) <get_prognosis_supplier>`
+- *postPriceData* :ref:`(example) <post_price_data_supplier>`
+- *postFlexRequest*
+- *getFlexOffer*
+- *postFlexOrder*
+
+.. _get_prognosis_supplier:
+
+Get prognosis
+-------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures.api_v1_1.get_prognosis
+
+.. _post_price_data_supplier:
+
+Post price data
+---------------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures.api_v1_1.post_price_data
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures.api_v1_1.post_flex_request
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures.api_v1_1.get_flex_offer
+
+..  .. autoflask:: flexmeasures.app:create(env="documentation")
+    :endpoints: flexmeasures.api_v1_1.post_flex_order

+ 7 - 0
documentation/api/v1.rst

@@ -0,0 +1,7 @@
+.. _v1:
+
+Version 1.0
+===========
+
+
+.. note:: This API version has been sunset. Please update to :ref:`v3_0`. For more information about how FlexMeasures handles deprecation and sunsetting, see :ref:`api_deprecation`. In case your FlexMeasures host still runs ``flexmeasures<0.13``, a snapshot of documentation for API version 1.0 will stay available `here on readthedocs.org <https://flexmeasures.readthedocs.io/en/v0.12.3/api/v1.html>`_.

+ 7 - 0
documentation/api/v1_1.rst

@@ -0,0 +1,7 @@
+.. _v1_1:
+
+Version 1.1
+===========
+
+
+.. note:: This API version has been sunset. Please update to :ref:`v3_0`. For more information about how FlexMeasures handles deprecation and sunsetting, see :ref:`api_deprecation`. In case your FlexMeasures host still runs ``flexmeasures<0.13``, a snapshot of documentation for API version 1.1 will stay available `here on readthedocs.org <https://flexmeasures.readthedocs.io/en/v0.12.3/api/v1_1.html>`_.

+ 7 - 0
documentation/api/v1_2.rst

@@ -0,0 +1,7 @@
+.. _v1_2:
+
+Version 1.2
+===========
+
+
+.. note:: This API version has been sunset. Please update to :ref:`v3_0`. For more information about how FlexMeasures handles deprecation and sunsetting, see :ref:`api_deprecation`. In case your FlexMeasures host still runs ``flexmeasures<0.13``, a snapshot of documentation for API version 1.2 will stay available `here on readthedocs.org <https://flexmeasures.readthedocs.io/en/v0.12.3/api/v1_2.html>`_.

+ 7 - 0
documentation/api/v1_3.rst

@@ -0,0 +1,7 @@
+.. _v1_3:
+
+Version 1.3
+===========
+
+
+.. note:: This API version has been sunset. Please update to :ref:`v3_0`. For more information about how FlexMeasures handles deprecation and sunsetting, see :ref:`api_deprecation`. In case your FlexMeasures host still runs ``flexmeasures<0.13``, a snapshot of documentation for API version 1.3 will stay available `here on readthedocs.org <https://flexmeasures.readthedocs.io/en/v0.12.3/api/v1_3.html>`_.

+ 6 - 0
documentation/api/v2_0.rst

@@ -0,0 +1,6 @@
+.. _v2_0:
+
+Version 2.0
+===========
+
+.. note:: This API version has been sunset. Please update to :ref:`v3_0`. For more information about how FlexMeasures handles deprecation and sunsetting, see :ref:`api_deprecation`. In case your FlexMeasures host still runs ``flexmeasures<0.13``, a snapshot of documentation for API version 2.0 will stay available `here on readthedocs.org <https://flexmeasures.readthedocs.io/en/v0.12.3/api/v2_0.html>`_.

+ 20 - 0
documentation/api/v3_0.rst

@@ -0,0 +1,20 @@
+.. _v3_0:
+
+Version 3.0
+===========
+
+Summary
+-------
+
+.. qrefflask:: flexmeasures.app:create(env="documentation")
+    :modules: flexmeasures.api, flexmeasures.api.v3_0.assets, flexmeasures.api.v3_0.sensors, flexmeasures.api.v3_0.users, flexmeasures.api.v3_0.health, flexmeasures.api.v3_0.public
+    :order: path
+    :include-empty-docstring:
+
+API Details
+-----------
+
+.. autoflask:: flexmeasures.app:create(env="documentation")
+    :modules: flexmeasures.api, flexmeasures.api.v3_0.assets, flexmeasures.api.v3_0.sensors, flexmeasures.api.v3_0.users, flexmeasures.api.v3_0.health, flexmeasures.api.v3_0.public
+    :order: path
+    :include-empty-docstring:

File diff suppressed because it is too large
+ 1158 - 0
documentation/changelog.rst


+ 172 - 0
documentation/cli/change_log.rst

@@ -0,0 +1,172 @@
+.. _cli-changelog:
+
+**************************
+FlexMeasures CLI Changelog
+**************************
+
+since v0.26.0 | June 03, 2025
+=================================
+* Switch to ``flexmeasures jobs save-last`` CLI command for saving the last n canceled/deferred/failed/finished/scheduled/started jobs (from the scheduling queue, by default).
+
+since v0.25.0 | April 01, 2025
+=================================
+* Report parameters set using ``flexmeasures add report --parameters`` can use any argument supported by ``Sensor.search_beliefs`` to allow more control over input for the report.
+* Add ``flexmeasures jobs save-last-failed`` (since v0.26: ``flexmeasures jobs save-last``) CLI command for saving the last n failed jobs (from the scheduling queue, by default).
+* Add ``flexmeasures jobs delete-queue`` CLI command for deleting an obsolete queue.
+
+since v0.24.0 | January 6, 2025
+=================================
+
+* ``flexmeasures show beliefs`` shows datetime values on x-axis labels.
+* ``flexmeasures add sensor`` no longer requires the ``capacity_in_mw`` attribute to be set for power sensors.
+
+since v0.22.0 | June 29, 2024
+=================================
+
+* Add ``--resolution`` option to ``flexmeasures show chart`` to produce charts in different time resolutions.
+
+since v0.21.0 | April 16, 2024
+=================================
+
+* Include started, deferred and scheduled jobs in the overview printed by the CLI command ``flexmeasures jobs show-queues``.
+
+since v0.20.0 | March 26, 2024
+=================================
+
+* Add command ``flexmeasures edit transfer-ownership`` to transfer the ownership of an asset and its children.
+* Add ``--offspring`` option to ``flexmeasures delete beliefs`` command, allowing to delete beliefs of children, as well.
+* Add support for providing a sensor definition to the ``--site-power-capacity``, ``--site-consumption-capacity`` and ``--site-production-capacity`` options of the ``flexmeasures add schedule for-storage`` command.
+
+since v0.19.1 | February 26, 2024
+=======================================
+
+* Fix support for providing a sensor definition to the ``--storage-power-capacity`` option of the ``flexmeasures add schedule for-storage`` command.
+
+since v0.19.0 | February 18, 2024
+=======================================
+
+* Enable the use of QuantityOrSensor fields for the ``flexmeasures add schedule for-storage`` CLI command:
+
+    * ``charging-efficiency``
+    * ``discharging-efficiency``
+    * ``soc-gain``
+    * ``soc-usage``
+    * ``power-capacity``
+    * ``production-capacity``
+    * ``consumption-capacity``
+    * ``storage-efficiency``
+
+* Streamline CLI option naming by favoring ``--<entity>`` over ``--<entity>-id``. This affects the following options:
+
+    * ``--account-id`` -> ``--account``
+    * ``--asset-id`` -> ``--asset``
+    * ``--asset-type-id`` -> ``--asset-type``
+    * ``--sensor-id`` -> ``--sensor``
+    * ``--source-id`` -> ``--source``
+    * ``--user-id`` -> ``--user``
+
+since v0.18.1 | January 15, 2024
+=======================================
+
+* Fix the validation of the option ``--parent-asset`` of command ``flexmeasures add asset``.
+
+since v0.17.0 | November 8, 2023
+=======================================
+
+* Add ``--consultancy`` option to ``flexmeasures add account`` to create a consultancy relationship with another account.
+
+since v0.16.0 | September 29, 2023
+=======================================
+
+* Add command ``flexmeasures add sources`` to add the base `DataSources` for the `DataGenerators`.
+* Add command ``flexmeasures show chart`` to export sensor and asset charts in PNG or SVG formats.
+* Add ``--kind reporter`` option to ``flexmeasures add toy-account`` to create the asset and sensors for the reporter tutorial.
+* Add ``--id`` option to ``flexmeasures show data-sources`` to show just one ``DataSource``.
+* Add ``--show-attributes`` flag to ``flexmeasures show data-sources`` to select whether to show the attributes field or not.
+
+since v0.15.0 | August 9, 2023
+================================
+* Allow deleting multiple sensors with a single call to ``flexmeasures delete sensor`` by passing the ``--id`` option multiple times.
+* Add ``flexmeasures add schedule for-process`` to create a new process schedule for a given power sensor.
+* Add support for describing ``config`` and ``parameters`` in YAML for the command ``flexmeasures add report``, editable in user's code editor using the flags ``--edit-config`` or ``--edit-parameters``.
+* Add ``--kind process`` option to create the asset and sensors for the ``ProcessScheduler`` tutorial.
+
+since v0.14.1 | June 20, 2023
+=================================
+
+* Avoid saving any :abbr:`NaN (not a number)` values to the database, when calling ``flexmeasures add report``.
+* Fix defaults for the ``--start-offset`` and ``--end-offset`` options to ``flexmeasures add report``, which weren't being interpreted in the local timezone of the reporting sensor.
+
+since v0.14.0 | June 15, 2023
+=================================
+
+* Allow setting a storage efficiency using the new ``--storage-efficiency`` option to the ``flexmeasures add schedule for-storage`` CLI command.
+* Add CLI command ``flexmeasures add report`` to calculate a custom report from sensor data and save the results to the database, with the option to export them to a CSV or Excel file.
+* Add CLI command ``flexmeasures show reporters`` to list available reporters, including any defined in registered plugins.
+* Add CLI command ``flexmeasures show schedulers`` to list available schedulers, including any defined in registered plugins.
+* Make ``--account-id`` optional in ``flexmeasures add asset`` to support creating public assets, which are available to all users.
+
+since v0.13.0 | May 1, 2023
+=================================
+
+* Add ``flexmeasures add source`` CLI command for adding a new data source.
+* Add ``--inflexible-device-sensor`` option to ``flexmeasures add schedule``.
+
+since v0.12.0 | January 04, 2023
+=================================
+
+* Add ``--resolution``, ``--timezone`` and ``--to-file`` options to ``flexmeasures show beliefs``, to show beliefs data in a custom resolution and/or timezone, and also to save shown beliefs data to a CSV file.
+* Add options to ``flexmeasures add beliefs`` to 1) read CSV data with timezone naive datetimes (use ``--timezone`` to localize the data), 2) read CSV data with datetime/timedelta units (use ``--unit datetime`` or ``--unit timedelta``), 3) remove rows with NaN values, and 4) add filter to read-in data by matching values in specific columns (use ``--filter-column`` and ``--filter-value`` together).
+* Fix ``flexmeasures db-ops dump`` and ``flexmeasures db-ops restore`` incorrectly reporting a success when `pg_dump` and `pg_restore` are not installed.
+* Add ``flexmeasures monitor last-seen``. 
+* Rename ``flexmeasures monitor tasks`` to ``flexmeasures monitor last-run``. 
+* Rename ``flexmeasures add schedule`` to ``flexmeasures add schedule for-storage`` (in expectation of more scheduling commands, based on in-built flex models). 
+
+
+since v0.11.0 | August 28, 2022
+===============================
+
+* Add ``flexmeasures jobs show-queues`` to show contents of computation job queues.
+* ``--name`` parameter in ``flexmeasures jobs run-worker`` is now optional.
+* Add ``--custom-message`` param to ``flexmeasures monitor tasks``.
+* Rename ``-optimization-context-id`` to ``--consumption-price-sensor`` in ``flexmeasures add schedule``, and added ``--production-price-sensor``.
+
+
+since v0.9.0 | March 25, 2022
+==============================
+
+* Add CLI commands for showing data ``flexmeasures show accounts``, ``flexmeasures show account``, ``flexmeasures show roles``, ``flexmeasures show asset-types``, ``flexmeasures show asset``, ``flexmeasures show data-sources``, and ``flexmeasures show beliefs``.
+* Add ``flexmeasures db-ops resample-data`` CLI command to resample sensor data to a different resolution.
+* Add ``flexmeasures edit attribute`` CLI command to edit/add an attribute on an asset or sensor.
+* Add ``flexmeasures add toy-account`` for tutorials and trying things.
+* Add ``flexmeasures add schedule`` to create a new schedule for a given power sensor.
+* Add ``flexmeasures delete asset`` to delete an asset (including its sensors and data).
+* Rename ``flexmeasures add structure`` to ``flexmeasures add initial-structure``. 
+
+
+since v0.8.0 | January 26, 2022
+===============================
+
+* Add ``flexmeasures add sensor``, ``flexmeasures add asset-type``, ``flexmeasures add beliefs``. These were previously experimental features (under the ``dev-add`` command group).
+* ``flexmeasures add asset`` now directly creates an asset in the new data model.
+* Add ``flexmeasures delete sensor``, ``flexmeasures delete nan-beliefs`` and ``flexmeasures delete unchanged-beliefs``. 
+
+
+since v0.6.0 | April 2, 2021
+============================
+
+* Add ``flexmeasures add account``, ``flexmeasures delete account``, and the ``--account-id`` param to ``flexmeasures add user``.
+
+
+since v0.4.0 | April 2, 2021
+============================
+
+* Add the ``dev-add`` command group for experimental features around the upcoming data model refactoring.
+
+
+since v0.3.0 | April 2, 2021
+============================
+
+* Refactor CLI into the main groups ``add``, ``delete``, ``jobs`` and ``db-ops``
+* Add ``flexmeasures add asset``,  ``flexmeasures add user`` and ``flexmeasures add weather-sensor``
+* Split the ``populate-db`` command into ``flexmeasures add structure`` and ``flexmeasures add forecasts``

+ 123 - 0
documentation/cli/commands.rst

@@ -0,0 +1,123 @@
+.. _cli:
+
+CLI Commands
+=============================
+
+FlexMeasures comes with a command-line utility, which helps to manage data.
+Below, we list all available commands.
+
+Each command has more extensive documentation if you call it with ``--help``.
+
+We keep track of changes to these commands in :ref:`cli-changelog`.
+You can also get the current overview over the commands you have available by:
+
+.. code-block:: console
+
+    flexmeasures --help
+    flexmeasures [command] --help
+
+
+This also shows admin commands made available through Flask and installed extensions (such as `Flask-Security <https://flask-security-too.readthedocs.io>`_ and `Flask-Migrate <https://flask-migrate.readthedocs.io>`_),
+of which some are referred to in this documentation.
+
+
+``add`` - Add data
+------------------
+
+================================================= =======================================
+``flexmeasures add initial-structure``            Initialize structural data like users, roles and asset types. 
+``flexmeasures add account-role``                 Create a FlexMeasures tenant account role.
+``flexmeasures add account``                      Create a FlexMeasures tenant account.
+``flexmeasures add user``                         Create a FlexMeasures user.
+``flexmeasures add asset-type``                   Create a new asset type.
+``flexmeasures add asset``                        Create a new asset.
+``flexmeasures add sensor``                       Add a new sensor.
+``flexmeasures add beliefs``                      Load beliefs from file.
+``flexmeasures add source``                       Add a new data source.
+``flexmeasures add forecasts``                    Create forecasts.
+``flexmeasures add schedule for-storage``         Create a charging schedule for a storage asset.
+``flexmeasures add schedule for-process``         Create a schedule for a process asset.
+``flexmeasures add holidays``                     Add holiday annotations to accounts and/or assets.
+``flexmeasures add annotation``                   Add annotation to accounts, assets and/or sensors.
+``flexmeasures add toy-account``                  Create a toy account, for tutorials and trying things.
+``flexmeasures add report``                       Create a report.
+================================================= =======================================
+
+
+``show`` - Show data
+--------------------
+
+================================================= =======================================
+``flexmeasures show accounts``                    List accounts.
+``flexmeasures show account``                     Show an account, its users and assets.
+``flexmeasures show asset-types``                 List available asset types.
+``flexmeasures show asset``                       Show an asset and its sensors.
+``flexmeasures show roles``                       List available account- and user roles.
+``flexmeasures show data-sources``                List available data sources.
+``flexmeasures show beliefs``                     Plot time series data.
+``flexmeasures show reporters``                   List available reporters.
+``flexmeasures show schedulers``                  List available schedulers.
+================================================= =======================================
+
+
+
+``edit`` - Edit data
+--------------------
+
+================================================= =======================================
+``flexmeasures edit attribute``                   Edit (or add) an asset attribute or sensor attribute.
+``flexmeasures edit resample-data``               | Assign a new event resolution to an existing sensor
+                                                  | and resample its data accordingly.
+``flexmeasures edit transfer-ownership``          | Transfer the ownership of an asset and its children to
+                                                  | a different account.
+================================================= =======================================
+
+
+``delete`` - Delete data
+------------------------
+
+================================================= =======================================
+``flexmeasures delete structure``                 | Delete all structural (non time-series) data, 
+                                                  | like assets (types), roles and users.
+``flexmeasures delete account-role``              Delete a tenant account role.
+``flexmeasures delete account``                   | Delete a tenant account & also their users
+                                                  | (with assets and power measurements).
+``flexmeasures delete user``                      Delete a user & also their assets and power measurements.
+``flexmeasures delete asset``                     Delete an asset & also its sensors and data.
+``flexmeasures delete sensor``                    Delete a sensor and all beliefs about it.
+``flexmeasures delete measurements``              Delete measurements (with horizon <= 0).
+``flexmeasures delete prognoses``                 Delete forecasts and schedules (with horizon > 0).
+``flexmeasures delete unchanged-beliefs``         Delete unchanged beliefs.
+``flexmeasures delete nan-beliefs``               Delete NaN beliefs.
+================================================= =======================================
+
+
+``monitor`` - Monitoring
+------------------------
+
+================================================= =======================================
+``flexmeasures monitor latest-run``               Check if the given task's last successful execution happened less than the allowed time ago.
+``flexmeasures monitor last-seen``                Check if the given user's last contact (via a request) happened less than the allowed time ago.
+================================================= =======================================
+
+
+``jobs`` - Job queueing
+-----------------------
+
+================================================= =======================================
+``flexmeasures jobs run-worker``                  Start a worker process for forecasting and/or scheduling jobs.
+``flexmeasures jobs show queues``                 List job queues.
+``flexmeasures jobs clear-queue``                 Clear a job queue.
+================================================= =======================================
+
+
+``db-ops`` - Operations on the whole database
+---------------------------------------------
+
+================================================= =======================================
+``flexmeasures db-ops dump``                      Create a dump of all current data (using `pg_dump`).
+``flexmeasures db-ops load``                      Load backed-up contents (see `db-ops save`), run `reset` first.
+``flexmeasures db-ops reset``                     Reset database data and re-create tables from data model.
+``flexmeasures db-ops restore``                   Restore the dump file, see `db-ops dump` (run `reset` first).
+``flexmeasures db-ops save``                      Backup db content to files.
+================================================= =======================================

+ 137 - 0
documentation/concepts/algorithms.rst

@@ -0,0 +1,137 @@
+.. _algorithms:
+
+
+Algorithms
+==========================================
+
+.. contents::
+    :local:
+    :depth: 2
+
+
+.. _algorithms_forecasting:
+
+Forecasting
+-----------
+
+Forecasting algorithms are used by FlexMeasures to assess the likelihood of future consumption/production and prices.
+Weather forecasting is included in the platform, but is usually not the result of an internal algorithm (weather forecast services are being used by import scripts, e.g. with `this tool <https://github.com/SeitaBV/weatherforecaststorage>`_).
+
+FlexMeasures uses linear regression, and falls back to naive forecasting of the last known value if errors occur. 
+What might be even more important than the type of algorithm are the features handed to the model ― lagged values (e.g. the value at the same time yesterday) and regressors (e.g. a wind speed prediction to forecast wind power production).
+
+
+The performance of our algorithms is indicated by the mean absolute error (MAE) and the weighted absolute percentage error (WAPE).
+Power profiles on an asset level often include zero values, such that the mean absolute percentage error (MAPE), a common statistical measure of forecasting accuracy, is undefined.
+For such profiles, it is more useful to report the WAPE, which is also known as the volume weighted MAPE.
+The MAE of a power profile gives an indication of the size of the uncertainty in consumption and production.
+This allows the user to compare an asset's predictability to its flexibility, i.e. to the size of possible flexibility activations.
+
+Example benchmarks per asset type are listed in the table below for various assets and forecasting horizons.
+FlexMeasures updates the benchmarks automatically for the data currently selected by the user.
+Amongst other factors, accuracy is influenced by:
+
+- The chosen metric (see below)
+- Resolution of the forecast
+- Horizon of the forecast
+- Asset type
+- Location / Weather conditions
+- Level of aggregation
+
+Accuracies in the table are reported as 1 minus WAPE, which can be interpreted as follows:
+
+- 100% accuracy denotes that all values are correct.
+- 50% accuracy denotes that, on average, the values are wrong by half of the reference value.
+- 0% accuracy denotes that, on average, the values are wrong by exactly the reference value (e.g. forecasting zero or twice the reference value).
+- negative accuracy denotes that, on average, the values are off-the-chart wrong (by more than the reference value itself).
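
To make the metric concrete, here is a minimal sketch of how 1 - WAPE could be computed for a power profile. The ``wape`` helper and the profile values are made up for illustration; this is not FlexMeasures code:

```python
def wape(actual, forecast):
    """Weighted absolute percentage error: the sum of absolute errors
    divided by the sum of absolute reference (actual) values."""
    abs_error = sum(abs(a - f) for a, f in zip(actual, forecast))
    return abs_error / sum(abs(a) for a in actual)

# A power profile with zero values, for which the MAPE would be undefined
actual = [0.0, 2.0, 4.0, 2.0]
forecast = [0.5, 1.5, 4.0, 2.5]
accuracy = 1 - wape(actual, forecast)  # 1 - 1.5/8 = 0.8125, i.e. 81.25 %
print(accuracy)
```

Note how the zero value contributes to the error sum without making the metric blow up, which is exactly why WAPE is preferred over MAPE for such profiles.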
+
+
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| Asset                     | Building      | Charge Points | Solar         | Wind (offshore) | Day-ahead market|
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| Average power per asset   | 204 W         | 75 W          | 140 W         | 518 W           |                 |
++===========================+===============+===============+===============+=================+=================+
+| 1 - WAPE (1 hour ahead)   | 93.4 %        | 87.6 %        | 95.2 %        | 81.6 %          | 88.0 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| 1 - WAPE (6 hours ahead)  | 92.6 %        | 73.0 %        | 83.7 %        | 73.8 %          | 81.9 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| 1 - WAPE (24 hours ahead) | 92.4 %        | 65.2 %        | 46.1 %        | 60.1 %          | 81.4 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| 1 - WAPE (48 hours ahead) | 92.1 %        | 63.7 %        | 43.3 %        | 56.9 %          | 72.3 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+
+Defaults:
+
+- The application uses an ordinary least squares auto-regressive model with external variables.
+- Lagged outcome variables are selected based on the periodicity of the asset (e.g. daily and/or weekly).
+- Common external variables are weather forecasts of temperature, wind speed and irradiation.
+- Timeseries data with frequent zero values are transformed using a customised Box-Cox transformation.
+- To avoid over-fitting, cross-validation is used.
+- Before fitting, expert knowledge can be explicitly added to the model (like the definition of asset-specific seasonality and special time events).
+- The model is currently fit each day for each asset and for each horizon.
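
As a rough sketch of what such a fit looks like, here is a plain numpy least-squares solve on synthetic data ― an illustration of the idea (lagged outcome plus external regressor), not the actual FlexMeasures forecasting code:

```python
import numpy as np

# Synthetic hourly outcome that depends on its daily lag (periodicity)
# and on an external regressor (wind speed)
rng = np.random.default_rng(42)
n = 24 * 30
wind = rng.uniform(0, 10, n)
y = np.zeros(n)
for t in range(24, n):
    y[t] = 0.6 * y[t - 24] + 0.3 * wind[t] + rng.normal(0, 0.01)

# Design matrix: lagged outcome variable + external variable
X = np.column_stack([y[:-24], wind[24:]])
coefs, *_ = np.linalg.lstsq(X, y[24:], rcond=None)
print(coefs)  # close to the true coefficients [0.6, 0.3]
```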
+
+Improvements:
+
+- Most assets have yearly seasonality (e.g. wind, solar) and therefore forecasts would benefit from >= 2 years of history.
+
+
+.. _algorithms_scheduling:
+
+Scheduling 
+------------
+
+Given price conditions or other conditions of relevance, a scheduling algorithm is used by the Aggregator (in case of explicit DR) or by the Energy Service Company (in case of implicit DR) to form a recommended schedule for the Prosumer's flexible assets.
+
+
+Storage devices
+^^^^^^^^^^^^^^^
+
+So far, FlexMeasures provides algorithms for storage ― for batteries (e.g. home batteries or EVs) and car charging stations.
+We thus cover the asset types "battery", "one-way_evse" and "two-way_evse".
+
+These algorithms schedule the storage assets based directly on the latest beliefs regarding market prices, within the specified time window.
+They are mixed integer linear programs, which are configured in FlexMeasures and then handed to a dedicated solver.
+
+For all scheduling algorithms, a starting state of charge (SOC) as well as a set of SOC targets can be given. If no SOC is available, we set the starting SOC to 0. 
+
+Also, by default we incentivise the algorithms to prefer scheduling charging now rather than later, and discharging later rather than now.
+We achieve this by adding a tiny artificial price slope, penalising the future with at most 1 per thousand of the price spread. This behaviour can be turned off by setting the `prefer_charging_sooner` parameter to `False`.
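
A sketch of what such a tiny price slope could look like (the helper below mirrors the `prefer_charging_sooner` idea, but the exact FlexMeasures implementation may differ; the prices are made up):

```python
def add_tiny_price_slope(prices, max_slope_share=0.001):
    """Tilt consumption prices slightly upwards over time, so that among
    otherwise equally priced schedules, charging sooner is preferred.
    The total tilt stays within max_slope_share (1 per thousand)
    of the price spread. Assumes at least two time periods."""
    spread = max(prices) - min(prices)
    n = len(prices)
    return [p + spread * max_slope_share * j / (n - 1) for j, p in enumerate(prices)]

prices = [50.0, 50.0, 60.0, 50.0]
print(add_tiny_price_slope(prices))  # later periods get marginally more expensive
```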
+
+.. note:: For the resulting consumption schedule, consumption is defined as positive values.
+    
+
+Possible future work on algorithms
+-----------------------------------
+
+Enabling more algorithmic expression in FlexMeasures is crucial. Here are a few ideas for future work. Some of them are excellent topics for Bachelor or Master theses, so get in touch if that is of interest to you.
+
+More configurable forecasting
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+On the roadmap for FlexMeasures is to make features easier to configure, especially regressors.
+Furthermore, we plan to add more types of forecasting algorithms, like random forest or even LSTM.
+
+
+Other optimisation goals for scheduling
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Next to market prices, optimisation goals like reduced CO₂ emissions are sometimes required. There are multiple ways to measure this, e.g. against the CO₂ mix in the grid, or the use of fossil fuels.
+
+
+Scheduling of other flexible asset types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Next to storage, there are other interesting flexible assets which can require specific implementations.
+For shifting, there are heat pumps and other buffers. For curtailment, there are wind turbines and solar panels.
+
+.. note:: See :ref:`flexibility_types` for more info on shifting and curtailment.
+
+Broker algorithm
+^^^^^^^^^^^^^^^^^
+A broker algorithm is used by the Aggregator to analyse flexibility in the Supplier's portfolio of assets, and to suggest the most valuable flexibility activations to take for each time slot.
+The differences to single-asset scheduling are that these activations are based on a helicopter perspective (the Aggregator optimises a portfolio, not a single asset) and that the flexibility offers are presented to the Supplier in the form of an order book.
+
+
+Trading algorithm
+^^^^^^^^^^^^^^^^^^
+A trading algorithm is used to assist the Supplier with its decision-making across time slots, based on the order books made by the broker (see above).
+The algorithm suggests which offers should be accepted next, and the Supplier may automate its decision-making by letting the algorithm place orders on its behalf.
+
+A default approach would be a myopic greedy strategy ― order all flexibility opportunities with a positive expected value in the first available timeslot, then those in the second available timeslot, and so on.

+ 82 - 0
documentation/concepts/data-model.rst

@@ -0,0 +1,82 @@
+.. _datamodel:
+
+The FlexMeasures data model 
+=============================
+
+The data model being used in FlexMeasures is visualized here (click for larger version):
+
+.. image:: https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/FlexMeasures-NewDataModel.png
+    :target: https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/FlexMeasures-NewDataModel.png
+    :align: center
+..    :scale: 40%
+
+
+Let's dive into some of the more crucial model types:
+
+
+Assets
+---------
+
+Assets can represent physical objects (e.g. a car battery or an industrial machine) or "virtual" objects (e.g. a market).
+In essence, an asset is anything on which you collect data.
+
+Assets can also have a parent-child relationship with other assets.
+So, you could model a building that contains assets like solar panels, a heat pump and EV chargers.
+
+We model asset types explicitly. None are required for running FlexMeasures. Some asset types have support in the UI (for icons, like a sun for ``"solar"``), and in the toy tutorial and tests. Some are used to select the scheduler (e.g. using ``"battery"`` or ``"one-way_evse"`` leads to using the storage scheduler). You can add your own types, which is useful for plugin logic (an example is the ``"weather station"`` type for a plugin that reads in weather forecasts).
+
+
+Sensors
+---------
+
+A sensor depicts how data is collected in detail. Each sensor links to an asset.
+
+For instance, an asset might have both an energy meter and a temperature reading.
+You'd link two sensors to that asset and each sensor would have a unique **unit** (e.g. kWh and °C).
+
+You can also tell FlexMeasures in what **timezone** your data is expected to be set, and what the **resolution** should be.
+Then, FlexMeasures can try to convert incoming data to these specifications (e.g. if Fahrenheit readings come in, it converts them to Celsius).
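
For example, a simplified sketch of such an incoming-data conversion (FlexMeasures itself uses unit-aware tooling for this; the helper below is purely illustrative):

```python
def fahrenheit_to_celsius(reading: float) -> float:
    """Convert a temperature reading from °F to the sensor's °C unit."""
    return (reading - 32) * 5 / 9

incoming_fahrenheit = [32.0, 68.0, 212.0]
print([fahrenheit_to_celsius(r) for r in incoming_fahrenheit])  # [0.0, 20.0, 100.0]
```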
+
+A more intricate level of control is to describe when beliefs (see below) become known. You might get prices from a supplier, but the time you imported them is not the time they became known.
+A market might have a publication date you want to adhere to. More information can be found `in the timely-beliefs documentation <https://github.com/SeitaBV/timely-beliefs/blob/main/timely_beliefs/docs/timing.md/#beliefs-in-economics>`_.
+
+
+Data sources
+------------
+
+We keep track of where data comes from, for better reporting, graphing and the status page (this is also an aspect of the timely-beliefs package).
+A data source can be a FlexMeasures user, but also simply a named source from outside, e.g. a third-party API, where weather forecasts are collected from.
+
+In FlexMeasures, data sources have a type. It is just a string which you can freely choose (we do not model them explicitly in the data model, as we do asset types).
+We do support some types out of the box: "scheduler", "forecaster", "reporter", "demo script" and "user".
+
+
+Beliefs
+---------
+
+When we discussed sensors, we hinted at the care we took to model the event data well. We call each data point a "belief", as we not only store measurements ―
+we also store forecasts, schedules and the like, many of which do not have a 100% truth value.
+
+For instance, a horizon of 0 means the data point was known right after it happened. A positive horizon means the data point is a forecast.
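
In code, that horizon convention could be interpreted like this (a simplified sketch; the timely-beliefs package models knowledge timing in much more detail):

```python
from datetime import timedelta

def belief_kind(horizon: timedelta) -> str:
    """A positive horizon means the value was known before the event
    (a forecast or schedule); zero or negative means it was known
    at or after the event (a measurement)."""
    return "prognosis" if horizon > timedelta(0) else "measurement"

print(belief_kind(timedelta(hours=24)))  # prognosis
print(belief_kind(timedelta(0)))         # measurement
```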
+
+The `timely-beliefs package <https://github.com/SeitaBV/timely-beliefs>`_ helps us to model many aspects about data points, e.g. who claims to know that value,
+when they said so and how certain they were. 
+
+Each belief links to a sensor and a data source. Here are two examples:
+
+
+- The power sensor of a battery, where we store the schedules, can have two sources: (1) the schedule itself (a data source of type "scheduler", representing how FlexMeasures created this data) and (2) the realized schedule, i.e. the measurements of how the battery responded (or not) to the schedule. The latter might have a data source of type "user" (who sent the measurements to FlexMeasures).
+- A thermal demand sensor containing forecasts (data source of type "forecaster", e.g. a heating usage forecast sent to FlexMeasures or made by FlexMeasures) and measurements (sent into FlexMeasures, data source of type "user").
+
+
+
+Accounts & Users
+----------------
+
+FlexMeasures is a multi-tenant system. Each account usually models an organization and can have multiple users.
+
+Accounts "own" assets, and data of these assets are protected against anyone from a different account (unless a user has the ``admin`` role).
+
+Accounts can "consult" other accounts. This depicts the real situation that some organizations are the consultants or advisors to many others.
+They have certain rights, e.g. to read the data of their clients. That is useful for serving them.
+If you are hosting FlexMeasures, and the organizations you serve with it use this feature, you are effectively running a B2B2B setup :)

+ 195 - 0
documentation/concepts/device_scheduler.rst

@@ -0,0 +1,195 @@
+.. _storage_device_scheduler:
+
+Storage device scheduler: Linear model
+=======================================
+
+Introduction
+--------------
+This generic storage device scheduler is able to handle an EMS with multiple devices, with various types of constraints on the EMS level and on the device level,
+and with multiple market commitments on the EMS level.
+
+A typical example is a house with many devices. The commitments are assumed to be with regard to the flow of energy to the device (positive for consumption, negative for production). In practice, this generic scheduler is used in the **StorageScheduler** to schedule a storage device.
+    
+The solver minimizes the costs of deviating from the commitments.
+
+
+
+Notation
+---------
+
+Indexes
+^^^^^^^^
+================================ ================================================ ==============================================================================================================  
+Symbol                              Variable in the Code                           Description
+================================ ================================================ ==============================================================================================================  
+:math:`c`                             c                                                  Commitments, for example, day-ahead or intra-day market commitments.
+:math:`d`                             d                                                  Devices, for example, a battery or a load.
+:math:`j`                             j                                                  0-indexed time dimension. 
+================================ ================================================ ==============================================================================================================  
+
+.. note::
+  The time index :math:`j` has two interpretations: a time period or an instantaneous moment at the end of time period :math:`j`. 
+  For example, :math:`j` in flow constraints corresponds to time periods, whereas :math:`j` used in a stock constraint refers to the end of time period :math:`j`.
+
+Parameters
+^^^^^^^^^^
+================================ ================================================ ==============================================================================================================  
+Symbol                              Variable in the Code                           Description
+================================ ================================================ ==============================================================================================================  
+:math:`Price_{up}(c,j)`               up_price                                           Price of incurring an upwards deviation in commitment :math:`c` during time period :math:`j`.
+:math:`Price_{down}(c,j)`             down_price                                         Price of incurring a downwards deviation in commitment :math:`c` during time period :math:`j`.
+:math:`\eta_{up}(d,j)`                device_derivative_up_efficiency                    Upwards conversion efficiency.
+:math:`\eta_{down}(d,j)`              device_derivative_down_efficiency                  Downwards conversion efficiency.
+:math:`Stock_{min}(d,j)`              device_min                                         Minimum quantity for the stock of device :math:`d` at the end of time period :math:`j`.
+:math:`Stock_{max}(d,j)`              device_max                                         Maximum quantity for the stock of device :math:`d` at the end of time period :math:`j`.
+:math:`\epsilon(d,j)`                 efficiencies                                       Stock energy losses.
+:math:`P_{max}(d,j)`                  device_derivative_max                              Maximum flow of device :math:`d` during time period :math:`j`.
+:math:`P_{min}(d,j)`                  device_derivative_min                              Minimum flow of device :math:`d` during time period :math:`j`.
+:math:`P^{ems}_{min}(j)`              ems_derivative_min                                 Minimum flow of the EMS during time period :math:`j`.
+:math:`P^{ems}_{max}(j)`              ems_derivative_max                                 Maximum flow of the EMS during time period :math:`j`.
+:math:`Commitment(c,j)`               commitment_quantity                                Commitment :math:`c` (at EMS level) during time period :math:`j`.
+:math:`M`                             M                                                  Large constant number, upper bound of :math:`P_{up}(d,j)` and :math:`|P_{down}(d,j)|`.
+:math:`D(d,j)`                        stock_delta                                        Explicit energy gain or loss of device :math:`d` during time period :math:`j`.
+================================ ================================================ ==============================================================================================================  
+
+
+Variables
+^^^^^^^^^
+================================ ================================================ ==============================================================================================================  
+Symbol                              Variable in the Code                           Description
+================================ ================================================ ==============================================================================================================  
+:math:`\Delta_{up}(c,j)`              commitment_upwards_deviation                       Upwards deviation from the power commitment :math:`c` of the EMS during time period :math:`j`.
+:math:`\Delta_{down}(c,j)`            commitment_downwards_deviation                     Downwards deviation from the power commitment :math:`c` of the EMS during time period :math:`j`.
+:math:`\Delta Stock(d,j)`                           n/a                                  Change of stock of device :math:`d` at the end of time period :math:`j`.
+:math:`P_{up}(d,j)`                   device_power_up                                    Upwards power of device :math:`d` during time period :math:`j`.
+:math:`P_{down}(d,j)`                 device_power_down                                  Downwards power of device :math:`d` during time period :math:`j`.
+:math:`P^{ems}(j)`                    ems_power                                          Aggregated power of all the devices during time period :math:`j`.
+:math:`\sigma(d,j)`                   device_power_sign                                  Upwards power activation if :math:`\sigma(d,j)=1`, downwards power activation otherwise.
+================================ ================================================ ==============================================================================================================  
+
+Cost function
+--------------
+
+The cost function quantifies the total cost of upwards and downwards deviations from the different commitments.
+
+.. math:: 
+    :name: cost_function
+
+    \min [\sum_{c,j} \Delta_{up}(c,j) \cdot Price_{up}(c,j) +  \Delta_{down}(c,j) \cdot Price_{down}(c,j)]
+
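
Outside of a solver, the objective is just a price-weighted sum of deviations over all commitments and time periods. A small numeric sketch (all names and values made up for illustration):

```python
def deviation_cost(dev_up, dev_down, price_up, price_down):
    """Sum over commitments c and time periods j of upward and downward
    deviations times their respective prices. Each argument maps
    (c, j) -> value."""
    return sum(
        dev_up[cj] * price_up[cj] + dev_down[cj] * price_down[cj]
        for cj in dev_up
    )

# One commitment ("da" = day-ahead), two time periods
dev_up = {("da", 0): 1.0, ("da", 1): 0.0}
dev_down = {("da", 0): 0.0, ("da", 1): -2.0}
price_up = {("da", 0): 50.0, ("da", 1): 50.0}
price_down = {("da", 0): 30.0, ("da", 1): 30.0}
print(deviation_cost(dev_up, dev_down, price_up, price_down))  # 1*50 + (-2)*30 = -10.0
```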
+
+State dynamics
+---------------
+
+To simplify the description of the model, the auxiliary variable :math:`\Delta Stock(d,j)` is introduced in the documentation. It represents the
+change of :math:`Stock(d,j)`, taking into account conversion efficiencies but not considering the storage losses.
+
+.. math::
+  :name: stock
+
+    \Delta Stock(d,j) = \frac{P_{down}(d,j)}{\eta_{down}(d,j) } + P_{up}(d,j)  \cdot \eta_{up}(d,j) + D(d,j)
+
+
+.. math:: 
+  :name: device_bounds
+
+    Stock_{min}(d,j) \leq Stock(d,j) - Stock(d,-1) \leq Stock_{max}(d,j) 
+
+
+Perfect efficiency
+^^^^^^^^^^^^^^^^^^^
+
+.. math:: 
+  :name: efficiency_e1
+
+    Stock(d, j) = Stock(d, j-1) + \Delta Stock(d,j)
+
+Left efficiency
+^^^^^^^^^^^^^^^^^
+First apply the stock change, then apply the losses (i.e. the stock changes on the left side of the time interval in which the losses apply)
+
+
+.. math:: 
+  :name: efficiency_left
+
+    Stock(d, j)  = (Stock(d, j-1) + \Delta Stock(d,j)) \cdot \epsilon(d,j)
+
+
+Right efficiency
+^^^^^^^^^^^^^^^^^
+First apply the losses, then apply the stock change (i.e. the stock changes on the right side of the time interval in which the losses apply)
+
+.. math:: 
+  :name: efficiency_right
+
+    Stock(d, j)  = Stock(d, j-1) \cdot \epsilon(d,j) + \Delta Stock(d,j)
+
+Linear efficiency
+^^^^^^^^^^^^^^^^^
+Assume the change happens at a constant rate, leading to a linear stock change, and exponential decay, within the current interval
+
+.. math:: 
+  :name: efficiency_linear
+
+    Stock(d, j) = Stock(d, j-1) \cdot \epsilon(d,j) + \Delta Stock(d,j) \cdot \frac{\epsilon(d,j) - 1}{\log(\epsilon(d,j))}
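
The four variants above differ only in where within the time interval the loss factor is applied. A numeric sketch (loss factor and stock values are made up for illustration):

```python
import math

def next_stock(stock, delta, eff, variant):
    """One-step stock update under the different efficiency conventions."""
    if variant == "perfect":
        return stock + delta
    if variant == "left":  # stock change first, then losses
        return (stock + delta) * eff
    if variant == "right":  # losses first, then stock change
        return stock * eff + delta
    if variant == "linear":  # linear change with exponential decay
        return stock * eff + delta * (eff - 1) / math.log(eff)
    raise ValueError(variant)

for variant in ("perfect", "left", "right", "linear"):
    print(variant, next_stock(10.0, 2.0, 0.9, variant))
```

As expected, the linear variant lands between the left and right variants.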
+
+Constraints
+--------------
+
+Device bounds
+^^^^^^^^^^^^^
+
+.. math:: 
+  :name: device_derivative_bounds
+
+    P_{min}(d,j) \leq P_{up}(d,j) + P_{down}(d,j)\leq P_{max}(d,j)
+
+.. math:: 
+  :name: device_down_derivative_bounds
+
+    \min(P_{min}(d,j), 0) \leq P_{down}(d,j) \leq 0
+
+
+.. math:: 
+  :name: device_up_derivative_bounds
+
+    0 \leq P_{up}(d,j) \leq \max(P_{max}(d,j), 0)
+
+
+Upwards/Downwards activation selection
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Avoid simultaneous upwards and downwards activation during the same time period.
+
+.. math:: 
+  :name: device_up_derivative_sign
+
+    P_{up}(d,j) \leq M \cdot \sigma(d,j)
+
+.. math:: 
+  :name: device_down_derivative_sign
+
+    -P_{down}(d,j) \leq M \cdot (1-\sigma(d,j))
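
These two big-M constraints can be checked numerically; the helper below is a sketch for building intuition, not part of the actual model code:

```python
def activation_feasible(p_up, p_down, sigma, M=10**6):
    """Check the big-M constraints that forbid simultaneous upwards
    and downwards activation of a device in one time period."""
    return p_up <= M * sigma and -p_down <= M * (1 - sigma)

# Upwards only (sigma=1) or downwards only (sigma=0) is feasible
print(activation_feasible(5.0, 0.0, 1), activation_feasible(0.0, -3.0, 0))
# Simultaneous activation is infeasible for either value of sigma
print(activation_feasible(5.0, -3.0, 0), activation_feasible(5.0, -3.0, 1))
```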
+
+
+Grid constraints
+^^^^^^^^^^^^^^^^^
+
+.. math:: 
+    :name: device_derivative_equalities
+
+    P^{ems}(d,j) = P_{up}(d,j) + P_{down}(d,j)
+
+.. math:: 
+  :name: ems_derivative_bounds
+
+    P^{ems}_{min}(j) \leq \sum_d P^{ems}(d,j) \leq P^{ems}_{max}(j)
+
+Power coupling constraints
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. math:: 
+    :name: ems_flow_commitment_equalities
+
+    \sum_d P^{ems}(d,j) = \sum_c \left[ Commitment(c,j) + \Delta_{up}(c,j) + \Delta_{down}(c,j) \right]
+

File diff suppressed because it is too large
+ 164 - 0
documentation/concepts/flexibility.rst


File diff suppressed because it is too large
+ 97 - 0
documentation/concepts/security_auth.rst


File diff suppressed because it is too large
+ 34 - 0
documentation/concepts/users.rst


+ 258 - 0
documentation/conf.py

@@ -0,0 +1,258 @@
+# -*- coding: utf-8 -*-
+#
+# Configuration file for the Sphinx documentation builder.
+#
+# This file does only contain a selection of the most common options. For a
+# full list see the documentation:
+# http://www.sphinx-doc.org/en/stable/config
+
+import os
+import shutil
+
+from datetime import datetime
+from pkg_resources import get_distribution
+import sphinx_fontawesome
+
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+
+
+# -- Project information -----------------------------------------------------
+
+project = "FlexMeasures"
+copyright = f"{datetime.now().year}, Seita Energy Flexibility, developed in partnership with A1 Engineering, South Korea"
+author = "Seita B.V."
+
+# The full version, including alpha/beta/rc tags
+release = get_distribution("flexmeasures").version
+# The short X.Y.Z version
+version = ".".join(release.split(".")[:3])
+
+rst_prolog = sphinx_fontawesome.prolog
+
+# -- General configuration ---------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    "sphinx_rtd_theme",
+    "sphinx.ext.intersphinx",
+    "sphinx.ext.coverage",
+    "sphinx.ext.mathjax",
+    "sphinx.ext.ifconfig",
+    "sphinx.ext.todo",
+    "sphinx_copybutton",
+    "sphinx_tabs.tabs",
+    "sphinx_fontawesome",
+    "sphinxcontrib.autohttp.flask",
+    "sphinxcontrib.autohttp.flaskqref",
+]
+
+autodoc_default_options = {}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ["_templates"]
+
+# If GEN_CODE_DOCS is not set, gen_code_docs defaults to True
+gen_code_docs = os.environ.get("GEN_CODE_DOCS", "True").lower() not in (
+    "f",
+    "false",
+    "0",
+)
+
+
+# Generate code docs
+if gen_code_docs:
+
+    # Add dependencies
+    extensions.extend(
+        [
+            "sphinx.ext.autosummary",
+            "sphinx.ext.autodoc.typehints",
+            "sphinx.ext.autodoc",
+        ]
+    )
+else:
+    if os.path.exists("_autosummary"):
+        shutil.rmtree("_autosummary")
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = ".rst"
+
+# The master toctree document.
+master_doc = "index"
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = "en"
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path .
+exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "_templates"]
+
+# Todo: these are not mature enough yet for release, or should be removed
+exclude_patterns.append("int/*.rst")
+exclude_patterns.append("concepts/assets.rst")
+exclude_patterns.append("concepts/markets.rst")
+exclude_patterns.append("concepts/users.rst")
+exclude_patterns.append("api/aggregator.rst")
+exclude_patterns.append("api/mdc.rst")
+exclude_patterns.append("api/prosumer.rst")
+exclude_patterns.append("api/supplier.rst")
+
+# Whether to show todo notes in the documentation
+todo_include_todos = True
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = "sphinx"
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = "sphinx_rtd_theme"
+
+html_logo = "https://artwork.lfenergy.org/projects/flexmeasures/horizontal/white/flexmeasures-horizontal-white.png"
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+#
+html_theme_options = {
+    "logo_only": True,
+}
+
+# Add any paths that contain custom _static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin _static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ["_static"]
+html_css_files = ["css/custom.css"]
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# The default sidebars (for documents that don't match any pattern) are
+# defined by theme itself.  Builtin themes are using these templates by
+# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
+# 'searchbox.html']``.
+#
+# html_sidebars = {}
+
+
+# -- Options for HTMLHelp output ---------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = "FLEXMEASURESdoc"
+
+
+# -- Options for LaTeX output ------------------------------------------------
+
+latex_elements = {
+    # The paper size ('letterpaper' or 'a4paper').
+    #
+    # 'papersize': 'letterpaper',
+    # The font size ('10pt', '11pt' or '12pt').
+    #
+    # 'pointsize': '10pt',
+    # Additional stuff for the LaTeX preamble.
+    #
+    # 'preamble': '',
+    # Latex figure (float) alignment
+    #
+    # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+#  author, documentclass [howto, manual, or own class]).
+latex_documents = [
+    (
+        master_doc,
+        f"{project}.tex",
+        f"{project} Documentation",
+        author,
+        "manual",
+    )
+]
+
+
+# -- Options for manual page output ------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [(master_doc, project, f"{project} Documentation", [author], 1)]
+
+
+# -- Options for Texinfo output ----------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+#  dir menu entry, description, category)
+texinfo_documents = [
+    (
+        master_doc,
+        project,
+        f"{project} Documentation",
+        author,
+        project,
+        f"The {project} Platform is a tool for scheduling energy flexibility activations on behalf of the connected asset owners.",
+        "Miscellaneous",
+    )
+]
+
+
+# -- Extension configuration -------------------------------------------------
+
+# -- Options for intersphinx extension ---------------------------------------
+
+# Example configuration for intersphinx: refer to the Python standard library.
+intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
+
+# -- Options for copybutton extension ---------------------------------------
+copybutton_prompt_is_regexp = True
+copybutton_prompt_text = r">>> |\.\.\. |\$ "  # Python Repl + continuation + Bash
+copybutton_line_continuation_character = "\\"
+
+# -- Options for ifconfig extension ---------------------------------------
+
+
+def setup(sphinx_app):
+    """
+    Here you can set config variables for Sphinx or even pass config variables from FlexMeasures to Sphinx.
+    For example, to display content depending on FLEXMEASURES_MODE (specified in the FlexMeasures app's config.py),
+    place this in one of the rst files:
+
+    .. ifconfig:: FLEXMEASURES_MODE == "play"
+
+        We are in play mode.
+
+    """
+
+    # sphinx_app.add_config_value('RELEASE_LEVEL', 'alpha', 'env')
+    sphinx_app.add_config_value(
+        "FLEXMEASURES_MODE",
+        "live",
+        "env",  # hard-coded, documentation is not server-specific for the time being
+    )
+
+    if gen_code_docs:
+        from flexmeasures.app import create
+
+        create(
+            env="documentation"
+        )  # we need to create the app for when sphinx imports modules that use current_app

+ 666 - 0
documentation/configuration.rst

@@ -0,0 +1,666 @@
+.. _configuration:
+
+Configuration
+=============
+
+The following configurations are used by FlexMeasures.
+
+Required settings (e.g. postgres db) are marked with a double star (**).
+To enable easier quickstart tutorials, continuous integration use cases and basic usage of FlexMeasures within other projects, these required settings, as well as a few others, can be set by environment variables ― this is also noted per setting.
+Recommended settings (e.g. mail, redis) are marked by one star (*).
+
+.. note:: FlexMeasures is best configured via a config file. The config file for FlexMeasures can be placed in one of two locations: 
+
+
+* in the user's home directory (e.g. ``~/.flexmeasures.cfg`` on Unix). In this case, note the dot at the beginning of the filename!
+* in the app's instance directory (e.g. ``/path/to/your/flexmeasures/code/instance/flexmeasures.cfg``\ ). The path to that instance directory is shown to you by running flexmeasures (e.g. ``flexmeasures run``\ ) with required settings missing or otherwise by running ``flexmeasures shell``. Under :ref:`docker_configuration`, we explain how to load a config file into a FlexMeasures Docker container.
+
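+A config file is plain Python. As a minimal illustrative sketch (using settings described below, with placeholder values you should replace):
+
+.. code-block:: python
+
+   SQLALCHEMY_DATABASE_URI = "postgresql://user:password@localhost:5432/flexmeasures"
+   SECRET_KEY = "<generate-a-strong-secret>"
+   FLEXMEASURES_REDIS_URL = "localhost"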
+
+Basic functionality
+-------------------
+
+LOGGING_LEVEL
+^^^^^^^^^^^^^
+
+Level at or above which log messages are added to the log file. See the ``logging`` package in the Python standard library.
+
+Default: ``logging.WARNING``
+
+.. note:: This setting is also recognized as environment variable.
+
+
+.. _modes-config:
+
+FLEXMEASURES_MODE
+^^^^^^^^^^^^^^^^^
+
+The mode in which FlexMeasures is being run, e.g. "demo" or "play".
+This is used to turn on certain extra behaviours, see :ref:`modes-dev` for details.
+
+Default: ``""``
+
+
+.. _overwrite-config:
+
+FLEXMEASURES_ALLOW_DATA_OVERWRITE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Whether to allow overwriting existing data when saving data to the database.
+
+Default: ``False``
+
+
+.. _solver-config:
+
+FLEXMEASURES_LP_SOLVER
+^^^^^^^^^^^^^^^^^^^^^^
+
+The command to run the scheduling solver. This is the executable command which FlexMeasures calls via the `pyomo library <http://www.pyomo.org/>`_. Potential values might be ``cbc``, ``cplex``, ``glpk`` or ``appsi_highs``. Consult `their documentation <https://pyomo.readthedocs.io/en/stable/solving_pyomo_models.html#supported-solvers>`_ to learn more. 
+We have tested FlexMeasures with `HiGHS <https://highs.dev/>`_ and `Cbc <https://coin-or.github.io/Cbc/intro>`_.
+Note that you need to install the solver, read more at :ref:`installing-a-solver`.
+
+Default: ``"appsi_highs"``
+
+
+
+FLEXMEASURES_HOSTS_AND_AUTH_START
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Configuration used for entity addressing. This contains the domain on which FlexMeasures runs
+and the first month when the domain was under the current owner's administration.
+
+Default: ``{"flexmeasures.io": "2021-01"}``
+
+
+.. _plugin-config:
+
+FLEXMEASURES_PLUGINS
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A list of plugins you want FlexMeasures to load (e.g. for custom views or CLI functions). 
+This can be a Python list (e.g. ``["plugin1", "plugin2"]``) or a comma-separated string (e.g. ``"plugin1, plugin2"``).
+
+Two types of entries are possible here:
+
+* File paths (absolute or relative) to plugins. Each such path needs to point to a folder, which should contain an ``__init__.py`` file where the Blueprint is defined. 
+* Names of installed Python modules. 
+
+Added functionality in plugins needs to be based on Flask Blueprints. See :ref:`plugins` for more information and examples.
+
+Default: ``[]``
+
+.. note:: This setting is also recognized as environment variable (since v0.14, which is also the version required to pass this setting as a string).
+
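+For example, mixing both entry types (the path and module name here are purely illustrative):
+
+.. code-block:: python
+
+   FLEXMEASURES_PLUGINS = ["/path/to/my_plugin", "some_plugin_module"]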
+
+FLEXMEASURES_DB_BACKUP_PATH
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Relative path to the folder where database backups are stored if that feature is being used.
+
+Default: ``"migrations/dumps"``
+
+FLEXMEASURES_PROFILE_REQUESTS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If True, the processing time of requests is profiled.
+
+The overall time used by requests is logged to the console. In addition, if `pyinstrument` is installed, a profiling report is made (of time being spent in different function calls) for all Flask API endpoints.
+
+The profiling results are stored in the ``profile_reports`` folder in the instance directory.
+
+Note: Profile reports for API endpoints are overwritten on repetition of the same request.
+
+Interesting for developers.
+
+Default: ``False``
+
+
+UI
+--
+
+FLEXMEASURES_PLATFORM_NAME
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Name being used in headings and in the menu bar.
+
+For more fine-grained control, this can also be a list, where it's possible to set the platform name for certain account roles (as a tuple of view name and list of applicable account roles). In this case, the list is searched from left to right, and the first fitting name is used.
+
+For example, ``[("MyMDCApp", ["MDC"]), "MyApp"]`` would show the name "MyMDCApp" for users connected to accounts with the account role "MDC", while all others would see the name "MyApp".
+
+.. note:: This fine-grained control requires FlexMeasures version 0.6.0
+
+Default: ``"FlexMeasures"``
+
+
+FLEXMEASURES_MENU_LOGO_PATH
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A URL path to identify an image being used as logo in the upper left corner (replacing some generic text made from platform name and the page title).
+The path can be a complete URL or relative to the app root.
+
+Default: ``""``
+
+
+.. _extra-css-config:
+
+FLEXMEASURES_EXTRA_CSS_PATH
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A URL path to identify a CSS style-sheet to be added to the base template.
+The path can be a complete URL or relative to the app root.
+
+.. note:: You can also add extra styles for plugins with the usual Blueprint method. That is more elegant but only applies to the Blueprint's views.
+
+Default: ``""``
+
+
+FLEXMEASURES_ROOT_VIEW
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Root view (reachable at "/"). For example ``"/dashboard"``.
+
+For more fine-grained control, this can also be a list, where it's possible to set the root view for certain account roles (as a tuple of view name and list of applicable account roles). In this case, the list is searched from left to right, and the first fitting view is shown.
+
+For example, ``[("metering-dashboard", ["MDC", "Prosumer"]), "default-dashboard"]`` would route to "/metering-dashboard" for users connected to accounts with account roles "MDC" or "Prosumer", while all others would be routed to "/default-dashboard".
+
+If this setting is empty or not applicable for the current user, the "/" view will be shown (FlexMeasures' default dashboard or a plugin view which was registered at "/").
+
+Default: ``[]``
+
+.. note:: This setting was introduced in FlexMeasures version 0.6.0
+
+
+.. _menu-config:
+
+FLEXMEASURES_MENU_LISTED_VIEWS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A list of the view names which are listed in the menu.
+
+.. note:: This setting only lists the names of views, rather than making sure the views exist.
+
+For more fine-grained control, the entries can also be tuples of a view name and a list of applicable account roles. For example, the entry ``("details", ["MDC", "Prosumer"])`` would add the "/details" link to the menu only for users who are connected to accounts with roles "MDC" or "Prosumer". For clarity: the title of the menu item would read "Details", see also the FLEXMEASURES_MENU_LISTED_VIEW_TITLES setting below.
+
+.. note:: This fine-grained control requires FlexMeasures version 0.6.0
+
+Default: ``["dashboard"]``
+
+
+FLEXMEASURES_MENU_LISTED_VIEW_ICONS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A dictionary containing a Font Awesome icon name for each view name listed in the menu.
+For example, ``{"freezer-view": "snowflake-o"}`` puts a snowflake icon (|snowflake-o|) next to your freezer-view menu item.
+
+Default: ``{}``
+
+.. note:: This setting was introduced in FlexMeasures version 0.6.0
+
+
+FLEXMEASURES_MENU_LISTED_VIEW_TITLES
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A dictionary containing a string title for each view name listed in the menu.
+For example, ``{"freezer-view": "Your freezer"}`` lists the freezer-view in the menu as "Your freezer".
+
+Default: ``{}``
+
+.. note:: This setting was introduced in FlexMeasures version 0.6.0
+
+
+FLEXMEASURES_HIDE_NAN_IN_UI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Whether to hide the word "nan" if any value in metrics tables is ``NaN``.
+
+Default: ``False``
+
+RQ_DASHBOARD_POLL_INTERVAL
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Interval in which viewing the queues dashboard refreshes itself, in milliseconds.
+
+Default: ``3000`` (3 seconds) 
+
+
+FLEXMEASURES_ASSET_TYPE_GROUPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+How to group asset types together, e.g. in a dashboard.
+
+Default: ``{"renewables": ["solar", "wind"], "EVSE": ["one-way_evse", "two-way_evse"]}``
+
+FLEXMEASURES_JS_VERSIONS
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Versions of the JavaScript charting libraries used in the UI.
+
+Default: ``{"vega": "5.22.1", "vegaembed": "6.20.8", "vegalite": "5.2.0"}``
+
+
+Timing
+------
+
+FLEXMEASURES_TIMEZONE
+^^^^^^^^^^^^^^^^^^^^^
+
+Timezone in which the platform operates. This is useful when datetimes are being localized.
+
+Default: ``"Asia/Seoul"``
+
+
+FLEXMEASURES_JOB_TTL
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Time to live for jobs (e.g. forecasting, scheduling) in their respective queue.
+
+A job that is passed this time to live might get cleaned out by Redis' memory manager.
+
+Default: ``timedelta(days=1)``
+
+FLEXMEASURES_PLANNING_TTL
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Time to live for schedule UUIDs of successful scheduling jobs. Set a negative timedelta to persist forever.
+
+Default: ``timedelta(days=7)``
+
+FLEXMEASURES_JOB_CACHE_TTL
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Time to live for the job caching keys in seconds. The default value of 1 hour reflects the fact that, within an hour, little changes
+(other than the input arguments) that would justify recomputing the schedules.
+
+In an hour, we will have more accurate forecasts available and the situation of the power grid
+might have changed (imbalance prices, distribution level congestion, activation of FCR or aFRR reserves, ...).
+
+Set a negative value to persist forever.
+
+.. warning::
+    Keep in mind that unless a proper clean up mechanism is set up, the number of
+    caching keys will grow with time if the TTL is set to a negative value.
+
+Default: ``3600``
+
+.. _datasource_config:
+
+FLEXMEASURES_DEFAULT_DATASOURCE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The default DataSource of the resulting data from `DataGeneration` classes.
+
+Default: ``"FlexMeasures"``
+
+
+.. _planning_horizon_config:
+
+FLEXMEASURES_PLANNING_HORIZON
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The default horizon for making schedules.
+API users can set a custom duration if they need to.
+
+Default: ``timedelta(days=2)``
+
+
+FLEXMEASURES_MAX_PLANNING_HORIZON
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The maximum horizon for making schedules.
+API users are not able to request longer schedules.
+Can be set to a specific ``datetime.timedelta`` or to an integer number of planning steps, where the duration of a planning step is equal to the resolution of the applicable power sensor.
+Set to ``None`` to forgo this limitation altogether.
+
+Default: ``2520`` (e.g. 7 days for a 4-minute resolution sensor, 105 days for a 1-hour resolution sensor)
+
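+For illustration, here is how a step-based limit translates to a duration (plain Python arithmetic, not FlexMeasures code):
+
+.. code-block:: python
+
+   from datetime import timedelta
+
+   steps = 2520
+   assert steps * timedelta(minutes=4) == timedelta(days=7)  # 4-minute resolution sensor
+   assert steps * timedelta(hours=1) == timedelta(days=105)  # 1-hour resolution sensor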
+
+Access Tokens
+---------------
+
+.. _mapbox_access_token:
+
+MAPBOX_ACCESS_TOKEN
+^^^^^^^^^^^^^^^^^^^
+
+Token for accessing the MapBox API (for displaying maps on the dashboard and asset pages). You can learn how to obtain one `here <https://docs.mapbox.com/help/glossary/access-token/>`_
+
+Default: ``None``
+
+.. note:: This setting is also recognized as environment variable.
+
+
+SQLAlchemy
+----------
+
+This is only a selection of the most important settings.
+See `the Flask-SQLAlchemy Docs <https://flask-sqlalchemy.palletsprojects.com/en/master/config>`_ for all possibilities.
+
+SQLALCHEMY_DATABASE_URI (**)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Connection string to the postgres database, format: ``postgresql://<user>:<password>@<host-address>[:<port>]/<db>``
+
+Default: ``None``
+
+.. note:: This setting is also recognized as environment variable.
+
+
+SQLALCHEMY_ENGINE_OPTIONS
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Configuration of the SQLAlchemy engine.
+
+Default: 
+
+.. code-block:: python
+
+       {
+           "pool_recycle": 299,
+           "pool_pre_ping": True,
+           "connect_args": {"options": "-c timezone=utc"},
+       }
+
+
+SQLALCHEMY_TEST_DATABASE_URI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When running tests (``make test``, which runs ``pytest``), the default database URI is set in ``utils.config_defaults.TestingConfig``.
+You can use this setting to overwrite that URI and point the tests to an (empty) database of your choice. 
+
+.. note:: This setting is only supported as an environment variable, not in a config file, and only during testing.
+
+
+
+Security
+--------
+
+Settings to ensure secure handling of credentials and data.
+
+For Flask-Security and Flask-Cors (setting names start with "SECURITY" or "CORS"), this is only a selection of the most important settings.
+See `the Flask-Security Docs <https://flask-security-too.readthedocs.io/en/stable/configuration.html>`_ as well as the `Flask-CORS docs <https://flask-cors.readthedocs.io/en/latest/configuration.html>`_ for all possibilities.
+
+SECRET_KEY (**)
+^^^^^^^^^^^^^^^
+
+Used to sign user sessions and also as extra salt (a.k.a. pepper) for password salting if ``SECURITY_PASSWORD_SALT`` is not set.
+This is actually part of Flask - but is also used by Flask-Security to sign all tokens.
+
+It is critical that this is set to a strong value. In Python 3, consider using ``secrets.token_urlsafe()``.
+You can also set this in a file (as some Flask tutorials advise).
+
+.. note:: Leave this setting set to ``None`` to get more instructions when you attempt to run FlexMeasures.
+
+Default: ``None``
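+
+For example, to generate a suitable value with the Python standard library:
+
+.. code-block:: python
+
+   import secrets
+
+   print(secrets.token_urlsafe())  # a long, URL-safe random string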
+
+SECURITY_PASSWORD_SALT
+^^^^^^^^^^^^^^^^^^^^^^
+
+Extra password salt (a.k.a. pepper).
+
+Default: ``None`` (falls back to ``SECRET_KEY``\ )
+
+SECURITY_TOKEN_AUTHENTICATION_HEADER
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Name of the header which carries the auth bearer token in API requests.
+
+Default: ``Authorization``
+
+SECURITY_TOKEN_MAX_AGE
+^^^^^^^^^^^^^^^^^^^^^^
+
+Maximal age of security tokens in seconds.
+
+Default: ``60 * 60 * 6``  (six hours)
+
+SECURITY_TRACKABLE
+^^^^^^^^^^^^^^^^^^
+
+Whether to track user statistics. Turning this on requires certain user fields.
+We do not use this feature, but we do track the number of logins.
+
+Default: ``False``
+
+CORS_ORIGINS
+^^^^^^^^^^^^
+
+Allowed cross-origins. Set to "*" to allow all. For development (e.g. JavaScript on localhost) you might use "null" in this list.
+
+Default: ``[]``
+
+CORS_RESOURCES
+^^^^^^^^^^^^^^^
+
+FlexMeasures resources which get CORS protection. This can be a regex, a list of regexes, or a dictionary with all possible options.
+
+Default: ``[r"/api/*"]``
+
+CORS_SUPPORTS_CREDENTIALS
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Allows users to make authenticated requests. If true, injects the Access-Control-Allow-Credentials header in responses. This allows cookies and credentials to be submitted across domains.
+
+.. note::  This option cannot be used in conjunction with a “*” origin.
+
+Default: ``True``
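+
+Taken together, an illustrative CORS configuration for a single known web client might look like this (the origin is a placeholder):
+
+.. code-block:: python
+
+   CORS_ORIGINS = ["https://my-web-client.example.com"]
+   CORS_RESOURCES = [r"/api/*"]
+   CORS_SUPPORTS_CREDENTIALS = True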
+
+
+FLEXMEASURES_FORCE_HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Set to ``True`` if all requests should be forced to be HTTPS.
+
+Default: ``False``
+
+
+FLEXMEASURES_ENFORCE_SECURE_CONTENT_POLICY
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When ``FLEXMEASURES_ENFORCE_SECURE_CONTENT_POLICY`` is set to ``True``, the ``<meta>`` tag with the ``Content-Security-Policy`` directive, specifically ``upgrade-insecure-requests``, is included in the HTML head. This directive instructs the browser to upgrade insecure requests from ``http`` to ``https``. One example of a use case for this is if you have a load balancer in front of FlexMeasures, which is secured with a certificate and only accepts https.
+
+Default: ``False``
+
+
+.. _mail-config:
+
+Mail
+----
+
+For FlexMeasures to be able to send email to users (e.g. for resetting passwords), you need an email account which can do that (e.g. GMail).
+
+This is only a selection of the most important settings.
+See `the Flask-Mail Docs <https://flask-mail.readthedocs.io/en/latest/#configuring-flask-mail>`_ for others.
+
+.. note:: The mail settings are also recognized as environment variables.
+
+MAIL_SERVER (*)
+^^^^^^^^^^^^^^^
+
+Email name server domain.
+
+Default: ``"localhost"``
+
+MAIL_PORT (*)
+^^^^^^^^^^^^^
+
+SMTP port of the mail server.
+
+Default: ``25``
+
+MAIL_USE_TLS
+^^^^^^^^^^^^
+
+Whether to use TLS.
+
+Default: ``False``
+
+MAIL_USE_SSL
+^^^^^^^^^^^^
+
+Whether to use SSL.
+
+Default: ``False``
+
+MAIL_USERNAME (*)
+^^^^^^^^^^^^^^^^^
+
+Login name of the mail system user.
+
+Default: ``None``
+
+MAIL_DEFAULT_SENDER (*)
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Tuple of shown name of sender and their email address.
+
+.. note:: Some recipient mail servers will refuse emails for which the shown email address (set under ``MAIL_DEFAULT_SENDER``) differs from the sender's real email address (registered to ``MAIL_USERNAME``).
+         Match them to avoid ``SMTPRecipientsRefused`` errors.
+
+Default:
+
+.. code-block:: python
+
+   (
+       "FlexMeasures",
+       "no-reply@example.com",
+   )
+
+MAIL_PASSWORD
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Password of mail system user.
+
+Default: ``None``
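+
+Taken together, a mail configuration might look like this (values are illustrative):
+
+.. code-block:: python
+
+   MAIL_SERVER = "smtp.example.com"
+   MAIL_PORT = 587
+   MAIL_USE_TLS = True
+   MAIL_USERNAME = "no-reply@example.com"
+   MAIL_PASSWORD = "<your-mail-password>"
+   MAIL_DEFAULT_SENDER = ("FlexMeasures", "no-reply@example.com")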
+
+
+.. _monitoring:
+
+Monitoring
+-----------
+
+Monitoring potential problems in FlexMeasures' operations.
+
+
+SENTRY_DSN
+^^^^^^^^^^^^
+
+Set tokenized URL, so errors will be sent to Sentry when ``app.env`` is not in `debug` or `testing` mode.
+E.g.: ``https://<examplePublicKey>@o<something>.ingest.sentry.io/<project-Id>``
+
+Default: ``None``
+
+
+FLEXMEASURES_SENTRY_CONFIG
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A dictionary with values to configure reporting to Sentry. Some options are taken care of by FlexMeasures (e.g. environment and release), but not all.
+See `here <https://docs.sentry.io/platforms/python/configuration/options/>`_ for a complete list.
+
+Default: ``{}``
+
+
+FLEXMEASURES_TASK_CHECK_AUTH_TOKEN
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Token which external services can use to check on the status of recurring tasks within FlexMeasures.
+
+Default: ``None``
+
+
+.. _monitoring_mail_recipients:
+
+FLEXMEASURES_MONITORING_MAIL_RECIPIENTS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+E-mail addresses to send monitoring alerts to from the CLI task ``flexmeasures monitor tasks``. For example ``["fred@one.com", "wilma@two.com"]``
+
+Default: ``[]``
+
+
+.. _redis-config:
+
+Redis
+-----
+
+FlexMeasures uses the Redis database to support our forecasting and scheduling job queues.
+
+.. note:: The redis settings are also recognized as environment variables.
+
+
+FLEXMEASURES_REDIS_URL (*)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+URL of redis server.
+
+Default: ``"localhost"``
+
+FLEXMEASURES_REDIS_PORT (*)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Port of redis server.
+
+Default: ``6379``
+
+FLEXMEASURES_REDIS_DB_NR (*)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Number of the redis database to use (Redis by default has 16 databases, numbered 0-15).
+
+Default: ``0``
+
+FLEXMEASURES_REDIS_PASSWORD (*)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Password of the redis server.
+
+Default: ``None``
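+
+For example, an illustrative Redis configuration (matching a local Redis server with password authentication; the password is a placeholder):
+
+.. code-block:: python
+
+   FLEXMEASURES_REDIS_URL = "localhost"
+   FLEXMEASURES_REDIS_PORT = 6379
+   FLEXMEASURES_REDIS_DB_NR = 0
+   FLEXMEASURES_REDIS_PASSWORD = "<your-redis-password>"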
+
+Demonstrations
+--------------
+
+.. _demo-credentials-config:
+
+FLEXMEASURES_PUBLIC_DEMO_CREDENTIALS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When ``FLEXMEASURES_MODE=demo``\ , this can hold login credentials (demo user email and password, e.g. ``("demo at seita.nl", "flexdemo")``\ ), so anyone can log in and try out the platform.
+
+Default: ``None``
+
+.. _sunset-config:
+
+Sunset
+------
+
+FLEXMEASURES_API_SUNSET_ACTIVE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Allow control over the effect of sunsetting API versions.
+Specifically, if True, the endpoints of sunset API versions will return ``HTTP status 410 (Gone)`` status codes.
+If False, these endpoints will either return ``HTTP status 410 (Gone)`` status codes or work like before (including Deprecation and Sunset headers in their response), depending on whether the installed FlexMeasures version still contains the endpoint implementations.
+
+Default: ``False``
+
+FLEXMEASURES_API_SUNSET_DATE
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Allows overriding the default sunset date for your clients.
+
+Default: ``None`` (defaults are set internally for each sunset API version, e.g. ``"2023-05-01"`` for v2.0)
+
+FLEXMEASURES_API_SUNSET_LINK
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Allows overriding the default sunset link for your clients.
+
+Default: ``None`` (defaults are set internally for each sunset API version, e.g. ``"https://flexmeasures.readthedocs.io/en/v0.13.0/api/v2_0.html"`` for v2.0)

+ 153 - 0
documentation/dev/api.rst

@@ -0,0 +1,153 @@
+.. _api-dev:
+
+Developing on the API
+============================================
+
+The FlexMeasures API is the main way that third-parties can automate their interaction with FlexMeasures, so it's highly important.
+
+This is a small guide for creating new versions of the API and its docs.
+
+.. warning:: This guide was written for API versions below v3.0 and is currently out of date.
+
+.. todo:: A guide for endpoint design, e.g. using Marshmallow schemas and common validators.
+
+.. contents:: Table of contents
+    :local:
+    :depth: 2
+
+
+Introducing a new API version
+-----------------------------
+
+Larger changes to the API, other than fixes and refactoring, should be done by creating a new API version.
+
+There is no clear set of rules yet for deciding when a new API version is needed.
+Backward-incompatible changes certainly require one, but as you'll see, creating a new version also comes
+with a certain overhead, so a careful trade-off is advised.
+
+.. note:: For the rest of this guide we'll assume your new API version is ``v1_1``.
+
+
+Set up new module with routes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In ``flexmeasures/api`` create a new module (folder with ``__init__.py``\ ).
+Copy over the ``routes.py`` from the previous API version.
+By default we import all routes from the previous version:
+
+.. code-block:: python
+
+   from flexmeasures.api.v1 import routes as v1_routes, implementations as v1_implementations
+
+
+Set the service listing for this version (or overwrite completely if needed):
+
+.. code-block:: python
+
+   v1_1_service_listing = copy.deepcopy(v1_routes.v1_service_listing)
+   v1_1_service_listing["version"] = "1.1"
+
+
+Then update and redecorate each API endpoint as follows:
+
+.. code-block:: python
+
+   @flexmeasures_api.route("/getService", methods=["GET"])
+   @as_response_type("GetServiceResponse")
+   @append_doc_of(v1_routes.get_service)
+   def get_service():
+       return v1_implementations.get_service_response(v1_1_service_listing)
+
+
+Set up a new blueprint
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In the new module's ``flexmeasures/api/v1_1/__init__.py``\ , copy the contents of ``flexmeasures/api/v1/__init__.py`` (previous API version).
+Change all references to the version name in the new file (for example: ``flexmeasures_api_v1`` should become ``flexmeasures_api_v1_1``\ ).
+
+In ``flexmeasures/api/__init__.py`` update the version listing in ``get_versions()`` and register a blueprint for the new api version by adding:
+
+.. code-block:: python
+
+   from flexmeasures.api.v1_1 import register_at as v1_1_register_at
+   v1_1_register_at(app) 
+
+
+New or updated endpoint implementations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Write functionality of new or updated endpoints in:
+
+.. code-block::
+
+   flexmeasures/api/v1_1/implementations.py
+
+
+Utility functions that are commonly shared between endpoint implementations of different versions should go in:
+
+.. code-block::
+
+   flexmeasures/api/common/utils
+
+
+where we distinguish between response decorators, request validators and other utils.
+
+Testing
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you changed an endpoint in the new version, write a test for it.
+Usually, there is no need to copy the tests for unchanged endpoints, unless a major API version is being released.
+
+Test the entire API or just your new version:
+
+.. code-block:: bash
+
+   $ pytest -k api
+   $ pytest -k v1_1
+
+UI CRUD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In ``ui/crud``\ , we support FlexMeasures' in-built UI with Flask endpoints, which then talk to our internal API.
+The routes used there point to an API version. You should consider updating them to point to your new version.
+
+
+Documentation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In ``documentation/api`` start a new specification ``v1_1.rst`` with contents like this:
+
+.. code-block:: RST
+
+    .. _v1_1:
+
+    Version 1.1
+    ===========
+
+    Summary
+    -------
+
+    .. qrefflask:: flexmeasures.app:create()
+      :blueprints: flexmeasures_api, flexmeasures_api_v1_1
+      :order: path
+      :include-empty-docstring:
+
+    API Details
+    -----------
+
+    .. autoflask:: flexmeasures.app:create()
+      :blueprints: flexmeasures_api, flexmeasures_api_v1_1
+      :order: path
+      :include-empty-docstring:
+
+
+If you are ready to publish the new specifications, enter your changes in ``documentation/api/change_log.rst`` and update the api toctree in ``documentation/index.rst``
+to include the new version in the table of contents.
+
+You're not done. Several sections in the API documentation list endpoints as examples. If you want other developers to use your new API version, make sure those examples reference the latest endpoints. Remember that `Sphinx autoflask <https://sphinxcontrib-httpdomain.readthedocs.io/en/stable/#module-sphinxcontrib.autohttp.flask>`_ likes to prefix the names of endpoints with the blueprint’s name, for example:
+
+.. code-block:: RST
+
+    .. autoflask:: flexmeasures.app:create()
+       :endpoints: flexmeasures_api_v1_1.post_meter_data

File diff suppressed because it is too large
+ 75 - 0
documentation/dev/auth.rst


File diff suppressed because it is too large
+ 111 - 0
documentation/dev/ci.rst


+ 43 - 0
documentation/dev/dependency-management.rst

@@ -0,0 +1,43 @@
+Dependency Management
+=======================
+
+Requirements
+-------------
+
+FlexMeasures is built on the shoulders of giants, namely other open source libraries.
+Look into the `requirements` folder to see what is required to run FlexMeasures (`app.in`), to test it, or to build this documentation.
+
+The `.in` files specify our general demands, and in the `.txt` files we keep a set of pinned dependency versions, so we can all work against the same background (crucial for comparing the behavior of installations to each other).
+
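+For example (the package name and versions here are purely illustrative):
+
+.. code-block:: text
+
+   # app.in: the general demand
+   pandas>=2.0
+
+   # app.txt: the pinned result of compiling app.in
+   pandas==2.1.4
+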
+To install these pinned requirements, run:
+
+.. code-block:: bash
+
+    $ make install-for-dev
+
+Check out `Makefile` for other useful commands, but this should get you going.
+
+To upgrade the pinned versions, we can run:
+
+
+.. code-block:: bash
+
+    $ make upgrade-deps
+
+
+Python versions
+----------------
+
+In addition, we support a range of Python versions (as you can see in the `requirements` folder).
+
+Now ― you probably have only one Python version installed. Let's say you add a dependency, or update the minimum required version. How to update the pinned sets of requirements across all Python versions?
+
+.. code-block:: bash
+
+    $ cd ci; ./update-packages.sh; cd ../
+
+This script will use docker to do these upgrades per Python version.
+
+Still, we'd also like to be able to test FlexMeasures across all these versions.
+We've added that capability to our CI pipeline (GitHub Actions), so you can clone the repository and make a PR, in order to run the tests on all of them.
+

+ 183 - 0
documentation/dev/docker-compose.rst

@@ -0,0 +1,183 @@
+.. _docker-compose:
+
+Running a complete stack with docker-compose
+=============================================
+
+To install FlexMeasures, plus the libraries and databases it depends on, on your computer is some work, and can have unexpected hurdles, e.g. depending on the operating system. A nice alternative is to let that happen within Docker. The whole stack can be run via `Docker compose <https://docs.docker.com/compose/>`_, saving the developer much time.
+
+For this, we assume you are in the directory (in the `FlexMeasures git repository <https://github.com/FlexMeasures/flexmeasures>`_) housing ``docker-compose.yml``.
+
+
+.. note:: The minimum Docker version is 17.09 and for docker-compose we tested successfully at version 1.25. You can check your versions with ``docker[-compose] --version``.
+
+.. note:: The command might also be ``docker compose`` (no dash), for instance if you are using `Docker Desktop <https://docs.docker.com/desktop>`_.
+
+Build the compose stack
+------------------------
+
+Run this:
+
+.. code-block:: bash
+
+    $ docker-compose build
+
+This pulls the images you need, and re-builds the FlexMeasures ones from code. If you change code, re-running this will re-build that image.
+
+This compose script can also serve as an inspiration for using FlexMeasures in modern cloud environments (like Kubernetes). For instance, you might want to not build the FlexMeasures image from code, but simply pull the image from DockerHub.
+
+If you wanted, you could stop building from source, and directly use the official flexmeasures image for the server and worker container
+(set ``image: lfenergy/flexmeasures`` in the file ``docker-compose.yml``).
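+
+For example, the relevant part of ``docker-compose.yml`` would then look something like this (a sketch ― the other keys of the service stay as they are):
+
+.. code-block:: yaml
+
+    services:
+      server:
+        image: lfenergy/flexmeasures  # pull from DockerHub
+        # build: .                    # instead of building from source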
+
+
+Run the compose stack
+----------------------
+
+Start the stack like this:
+
+.. code-block:: bash
+
+    $ docker-compose up
+
+.. warning:: This might fail if ports 5000 (Flask) or 6379 (Redis) are in use on your system. Stop these processes before you continue.
+
+Check ``docker ps`` or ``docker-compose ps`` to see if your containers are running:
+
+
+.. code-block:: bash
+
+    $ docker ps
+    CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS                             PORTS                                            NAMES
+    beb9bf567303   flexmeasures_server   "bash -c 'flexmeasur…"   44 seconds ago   Up 38 seconds (health: starting)   0.0.0.0:5000->5000/tcp                           flexmeasures-server-1
+    e36cd54a7fd5   flexmeasures_worker   "flexmeasures jobs r…"   44 seconds ago   Up 5 seconds                       5000/tcp                                         flexmeasures-worker-1
+    c9985de27f68   postgres              "docker-entrypoint.s…"   45 seconds ago   Up 40 seconds                      5432/tcp                                         flexmeasures-test-db-1
+    03582d37230e   postgres              "docker-entrypoint.s…"   45 seconds ago   Up 40 seconds                      5432/tcp                                         flexmeasures-dev-db-1
+    25024ada1590   mailhog/mailhog       "MailHog"                45 seconds ago   Up 40 seconds                      0.0.0.0:1025->1025/tcp, 0.0.0.0:8025->8025/tcp   flexmeasures-mailhog-1
+    792ec3d86e71   redis                 "docker-entrypoint.s…"   45 seconds ago   Up 40 seconds                      0.0.0.0:6379->6379/tcp                           flexmeasures-queue-db-1
+
+
+The FlexMeasures server container has a health check implemented, which is reflected in this output, and you can see which ports are available on your machine to interact with.
+
+You can use the terminal or ``docker-compose logs`` to look at output. ``docker inspect <container>`` and ``docker exec -it <container> bash`` can be quite useful to dive into details. 
+We'll see the latter more in this tutorial.
+
+
+Configuration
+---------------
+
+You can pass in your own configuration (e.g. for a MapBox access token, or a db URI, see below) like we described in :ref:`docker_configuration` ― put a file ``flexmeasures.cfg`` into a local folder called ``flexmeasures-instance`` (the volume should already be mapped).
+
+In case your configuration loads FlexMeasures plugins that have additional dependencies, you can add a requirements.txt file to the same local folder. The dependencies listed in that file will be freshly installed each time you run ``docker-compose up``.
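+
+For example, the local folder could look like this (the plugin dependency is just an illustration):
+
+.. code-block:: text
+
+    flexmeasures-instance/
+    ├── flexmeasures.cfg    # your configuration
+    └── requirements.txt    # extra dependencies, e.g. a line saying "flexmeasures-entsoe"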
+
+
+Data
+-------
+
+The postgres database is a test database, which gets filled with toy data when the flexmeasures container starts.
+You could also connect it to some other database (on your PC, in the cloud), by setting a different ``SQLALCHEMY_DATABASE_URI`` in the config. 
+
+
+.. _docker-compose-tutorial:
+
+Seeing it work: Running the toy tutorial
+------------------------------------------
+
+A good way to see if these containers work well together, and maybe to inspire how to use them for your own purposes, is the :ref:`tut_toy_schedule`.
+
+The `flexmeasures-server` container already creates the toy account when it starts (see its initial command). We'll now walk through the rest of the toy tutorial, with one twist at the end, when we create the battery schedule.
+
+Let's go into the `flexmeasures-worker` container:
+
+.. code-block:: bash
+
+    $ docker exec -it flexmeasures-worker-1 bash
+
+There, we'll now add the price data, as described in :ref:`tut_toy_schedule_price_data`. Copy the commands from that section and run them in the container's bash session, to create the prices and add them to the FlexMeasures DB.
+
+Next, we put a scheduling job in the worker's queue. This only works because we have the Redis container running ― the toy tutorial doesn't have it. The difference is that we're adding ``--as-job``:
+
+.. code-block:: bash
+
+    $ flexmeasures add schedule for-storage --sensor 2 --consumption-price-sensor 1 \
+        --start ${TOMORROW}T07:00+01:00 --duration PT12H --soc-at-start 50% \
+        --roundtrip-efficiency 90% --as-job
+
+We should now see in the output of ``docker logs flexmeasures-worker-1`` something like the following:
+
+.. code-block:: bash
+
+    Running Scheduling Job d3e10f6d-31d2-46c6-8308-01ede48f8fdd: discharging, from 2022-07-06 07:00:00+01:00 to 2022-07-06 19:00:00+01:00
+
+So the job had been queued in Redis, was then picked up by the worker process, and the result should be in our SQL database container. Let's check!
+
+We'll not go into the server container this time, but simply send a command:
+
+.. code-block:: bash
+
+    $ TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+    $ docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H"
+
+The charging/discharging schedule should be there:
+
+.. code-block:: bash
+
+    ┌────────────────────────────────────────────────────────────┐
+    │   ▐            ▐▀▀▌                                     ▛▀▀│ 0.5MW
+    │   ▞▌           ▌  ▌                                     ▌  │
+    │   ▌▌           ▌  ▐                                    ▗▘  │
+    │   ▌▌           ▌  ▐                                    ▐   │
+    │  ▐ ▐          ▐   ▐                                    ▐   │
+    │  ▐ ▐          ▐   ▝▖                                   ▞   │
+    │  ▌ ▐          ▐    ▌                                   ▌   │
+    │ ▐  ▝▖         ▌    ▌                                   ▌   │
+    │▀▘───▀▀▀▀▖─────▌────▀▀▀▀▀▀▀▀▀▌─────▐▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▘───│ 0.0MW
+    │         ▌    ▐              ▚     ▌                        │
+    │         ▌    ▞              ▐    ▗▘                        │
+    │         ▌    ▌              ▐    ▞                         │
+    │         ▐   ▐               ▝▖   ▌                         │
+    │         ▐   ▐                ▌  ▗▘                         │
+    │         ▐   ▌                ▌  ▐                          │
+    │         ▝▖  ▌                ▌  ▞                          │
+    │          ▙▄▟                 ▐▄▄▌                          │ -0.5MW
+    └────────────────────────────────────────────────────────────┘
+               10           20           30          40
+                            ██ discharging
+
+Like in the original toy tutorial, we can also check in the server container's `web UI <http://localhost:5000/sensors/1/>`_ (username is "toy-user@flexmeasures.io", password is "toy-password"):
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-charging.png
+    :align: center
+
+
+Email Testing
+----------------------------------
+
+To test email functionality, MailHog is included in the Docker Compose stack. You can view the emails sent by the application by navigating to http://localhost:8025/ in your browser.
+
+To verify this setup, try changing a user's password in the application. This action will trigger an email, which you can then view in `MailHog <http://localhost:8025/>`_.
+
+
+Scripting with the Docker stack
+----------------------------------
+
+A very important aspect of this stack is whether it can be put to interesting use.
+For this, developers need to be able to script things ― like we just did with the toy tutorial.
+
+Note that instead of starting a console in the containers, we can also send commands to them right away.
+For instance, we sent the complete ``flexmeasures show beliefs`` command and then viewed the output on our own machine.
+Likewise, we send the ``pytest`` command to run the unit tests (see below).
+
+Used this way, and in combination with the powerful list of :ref:`cli`, this FlexMeasures Docker stack is scriptable for interesting applications and simulations!
+
+
+Running tests
+---------------
+
+You can run tests in the flexmeasures docker container, using the database service ``test-db`` in the compose file (by default, we are using the ``dev-db`` database service).
+
+After you've started the compose stack with ``docker-compose up``, run:
+
+.. code-block:: bash
+
+    $ docker exec -it -e SQLALCHEMY_TEST_DATABASE_URI="postgresql://fm-test-db-user:fm-test-db-pass@test-db:5432/fm-test-db" flexmeasures-server-1 pytest
+
+This rounds up the developer experience offered by running FlexMeasures in Docker. Now you can develop FlexMeasures and also run your tests. If you develop plugins, you could extend the command being used, e.g. ``bash -c "cd /path/to/my/plugin && pytest"``. 

File diff suppressed because it is too large
+ 86 - 0
documentation/dev/note-on-datamodel-transition.rst


+ 322 - 0
documentation/dev/setup-and-guidelines.rst

@@ -0,0 +1,322 @@
+.. _developing:
+
+
+
+Developing for FlexMeasures
+===========================
+
+This page instructs developers who work on FlexMeasures how to set up the development environment.
+Furthermore, we discuss several guidelines and best practices.
+
+.. contents:: Table of contents
+    :local:
+    :depth: 1
+
+|
+.. note:: If you are implementing code based on FlexMeasures, you're probably interested in :ref:`datamodel`.
+
+
+Getting started
+------------------
+
+Virtual environment
+^^^^^^^^^^^^^^^^^^^^
+
+Using a virtual environment is best practice for Python developers. We also strongly recommend using a dedicated one for your work on FlexMeasures, as our make target (see below) will use ``pip-sync`` to install dependencies, which could interfere with some libraries you already have installed.
+
+
+* Make a virtual environment: ``python3.10 -m venv flexmeasures-venv`` or use a different tool like ``mkvirtualenv`` or virtualenvwrapper. You can also use
+  an `Anaconda distribution <https://conda.io/docs/user-guide/tasks/manage-environments.html>`_ as base with ``conda create -n flexmeasures-venv python=3.10``.
+* Activate it, e.g.: ``source flexmeasures-venv/bin/activate``
+
+
+Download FlexMeasures
+^^^^^^^^^^^^^^^^^^^^^^^
+Clone the `FlexMeasures repository <https://github.com/FlexMeasures/flexmeasures.git>`_ from GitHub.
+
+.. code-block:: bash
+
+   $ git clone https://github.com/FlexMeasures/flexmeasures.git
+
+
+Dependencies
+^^^^^^^^^^^^^^^^^^^^
+
+Go into the ``flexmeasures`` folder and install all dependencies including the ones needed for development:
+
+.. code-block:: bash
+
+   $ cd flexmeasures
+   $ make install-for-dev
+
+:ref:`Install the LP solver <install-lp-solver>`. On Linux, the HiGHS solver can be installed with:
+
+.. code-block:: bash
+
+   $ pip install highspy
+
+On macOS, it will be installed locally by ``make install-for-test``, so no action is required on your part.
+
+Besides HiGHS, the CBC solver is required for tests as well:
+
+.. tabs::
+
+    .. tab:: Linux
+
+        .. code-block:: bash
+
+            $ apt-get install coinor-cbc
+
+    .. tab:: MacOS
+
+        .. code-block:: bash
+
+            $ brew install cbc
+
+
+Configuration
+^^^^^^^^^^^^^^^^^^^^
+
+Most configuration happens in a config file, see :ref:`configuration` on where it can live and all supported settings.
+
+For now, we let it live in your home directory and we add the first required setting: a secret key:
+
+.. code-block:: bash
+
+   echo "SECRET_KEY=\"`python3 -c 'import secrets; print(secrets.token_hex(24))'`\"" >> ~/.flexmeasures.cfg
+
+   
+Also, we add some env settings in an `.env` file. Create that file in the `flexmeasures` directory (from where you'll run flexmeasures) and enter:
+
+.. code-block:: bash
+
+    FLEXMEASURES_ENV="development"
+    LOGGING_LEVEL="INFO"
+
+The development mode makes sure we don't need SSL to connect, among other things. 
+
+
+Database
+^^^^^^^^^^^^^^^^
+
+See :ref:`host-data` for tips on how to install and upgrade databases (postgres and redis).
+
+
+Loading data
+^^^^^^^^^^^^^^^^^^^^
+
+If you have a SQL Dump file, you can load that:
+
+.. code-block:: bash
+
+    $ psql -U {user_name} -h {host_name} -d {database_name} -f {file_path}
+
+One other possibility is to add a toy account (which owns some assets and a battery):
+
+.. code-block:: bash
+
+    $ flexmeasures add toy-account
+
+
+
+Run locally
+^^^^^^^^^^^^^^^^^^^^
+
+Now, to start the web application, you can run:
+
+.. code-block:: bash
+
+    $ flexmeasures run
+
+
+Or:
+
+.. code-block:: bash
+
+    $ python run-local.py
+
+
+And access the server at http://localhost:5000
+
+If you added a toy account, you could log in with `toy-user@flexmeasures.io`, password `toy-password`.
+
+Otherwise, you need to add some other user first. Here is how we add an admin:
+
+.. code-block:: bash
+    
+    $ flexmeasures add account --name MyCompany
+    $ flexmeasures add user --username admin --account 1 --email admin@mycompany.io --roles admin
+
+(The account ID you need in the 2nd command is printed by the 1st.)
+
+
+.. include:: ../notes/macOS-port-note.rst
+
+.. note::
+
+    If you are on Windows, then running & developing FlexMeasures will not work 100%. For instance, the queueing only works if you install rq-win (https://github.com/michaelbrooks/rq-win) manually, and the make tooling is difficult to get to work as well.
+    We recommend using the Windows Subsystem for Linux (https://learn.microsoft.com/en-us/windows/wsl/install) or working via Docker-compose (https://flexmeasures.readthedocs.io/en/latest/dev/docker-compose.html).
+
+
+
+Logfile
+--------
+
+FlexMeasures logs to a file called ``flexmeasures.log``. You'll find this in the application's context folder, e.g. where you called ``flexmeasures run``.
+
+A rolling log file handler is used, so if ``flexmeasures.log`` gets to a few megabytes in size, it is copied to `flexmeasures.log.1` and the original file starts over empty again. 
+
+The default logging level is ``WARNING``. To see more, you can update this with the config setting ``LOGGING_LEVEL``, e.g. to ``INFO`` or ``DEBUG``.
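+
+For example, in your ``flexmeasures.cfg``:
+
+.. code-block:: python
+
+    LOGGING_LEVEL = "DEBUG"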
+
+
+Mocking an Email Server for Development
+-----------------------------------------
+
+To handle emails locally during development, you can use MailHog. Follow these steps to set it up:
+
+.. code-block:: bash
+
+   $ docker run -p 8025:8025 -p 1025:1025 --name mailhog mailhog/mailhog
+   $ export MAIL_PORT=1025  # You can also add this to your local flexmeasures.cfg
+
+Now, emails (e.g., password-reset) are being sent via this local server. Go to http://localhost:8025 to see all sent emails in a web UI.
+
+Tests
+-----
+
+You can run automated tests with:
+
+.. code-block:: bash
+
+    $ make test
+
+
+which behind the curtains installs dependencies and calls ``pytest``.
+
+However, a test database (postgres) is needed to run these tests. If you have postgres, here is the short version on how to add the test database:
+
+.. code-block:: bash
+
+    $ make clean-db db_name=flexmeasures_test db_user=flexmeasures_test
+    $ # the password for the db user is "flexmeasures_test"
+
+.. note:: The section :ref:`host-data` has more details on using postgres for FlexMeasures.
+
+Alternatively, if you don't feel like installing postgres for the time being, here is a docker command to provide a test database:
+
+.. code-block:: bash
+
+    $ docker run --rm --name flexmeasures-test-db -e POSTGRES_PASSWORD=flexmeasures_test -e POSTGRES_DB=flexmeasures_test -e POSTGRES_USER=flexmeasures_test -p 5432:5432 -v ./ci/load-psql-extensions.sql:/docker-entrypoint-initdb.d/load-psql-extensions.sql -d postgres:latest
+
+.. warning:: This assumes that the port 5432 is not being used on your machine (for instance by an existing postgres database service).
+
+If you want the tests to create a coverage report (printed on the terminal), you can run the ``pytest`` command like this:
+
+.. code-block:: bash
+
+   $ pytest --cov=flexmeasures --cov-config .coveragerc
+
+You can add `--cov-report=html`, after which a file called `htmlcov/index.html` is generated.
+Or, after a test run with coverage turned on as shown above, you can still generate it in another form:
+
+.. code-block:: bash
+
+    $ python3 -m coverage [html|lcov|json]
+
+
+
+Versioning
+----------
+
+We use `setuptools_scm <https://github.com/pypa/setuptools_scm/>`_ for versioning, which bases the FlexMeasures version on the latest git tag and the commits since then.
+
+So as a developer, it's crucial to use git tags for versions only.
+
+We use semantic versioning, and we always include the patch version (not only major and minor), so that setuptools_scm makes the correct guess about the next minor version. Thus, we should use ``2.0.0`` instead of ``2.0``.
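+
+For example, cutting a release could look like this (the tag name is illustrative ― check existing tags for the exact convention):
+
+.. code-block:: bash
+
+    $ git tag -a v2.0.0 -m "Release v2.0.0"
+    $ git push origin v2.0.0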
+
+See ``to_pypi.sh`` for more commentary on the development versions.
+
+Our API has its own version, which moves much slower. This is important to explicitly support outside apps that were coded against older versions. 
+
+
+Auto-applying formatting and code style suggestions
+-----------------------------------------------------
+
+We use `Black <https://github.com/ambv/black>`_ to format our Python code and `Flake8 <https://flake8.pycqa.org>`_ to enforce the PEP8 style guide and linting.
+We also run `mypy <http://mypy-lang.org/>`_ on many files to do some static type checking.
+
+We do this so real problems are found faster and the discussion about formatting is limited.
+All of these can be installed by using ``pip``, but we recommend using them as a pre-commit hook. To activate that behaviour, do:
+
+.. code-block:: bash
+
+   $ pip install pre-commit
+   $ pre-commit install
+
+
+in your virtual environment.
+
+Now each git commit will first run ``flake8``, then ``black`` and finally ``mypy`` over the files affected by the commit
+(\ ``pre-commit`` will install these tools into its own structure on the first run).
+
+This is also what happens automatically server-side when code is committed to a branch (via GitHub Actions), but having those tests locally as well will help you spot these issues faster.
+
+If ``flake8``, ``black`` or ``mypy`` propose changes to any file, the commit is aborted (saying that it "failed"). 
+The changes proposed by ``black`` are implemented automatically (you can review them with `git diff`). Some of them might even resolve the ``flake8`` warnings :)
+
+
+Using Visual Studio Code, including spell checking
+----------------------------------------------------
+
+Are you using Visual Studio Code? Then the code you just cloned also contains the editor configuration (part of) our team is using (see `.vscode`)!
+
+We recommend installing the flake8 and spellright extensions.
+
+For spellright, the FlexMeasures repository contains the project dictionary. Here are steps to link main dictionaries, which usually work on a Linux system:
+
+.. code-block:: bash
+
+   $ mkdir $HOME/.config/Code/Dictionaries
+   $ ln -s /usr/share/hunspell/* ~/.config/Code/Dictionaries
+
+Consult the extension's Readme for other systems.
+
+
+
+A hint about using notebooks
+------------------------------
+
+If you edit notebooks, make sure results do not end up in git:
+
+.. code-block:: bash
+
+   $ conda install -c conda-forge nbstripout
+   $ nbstripout --install
+
+
+(on Windows, maybe you need to look closer at https://github.com/kynan/nbstripout)
+
+
+
+A hint for Unix developers
+--------------------------------
+
+I added this to my ~/.bashrc, so I only need to type ``fm`` to get started and have the ssh agent set up, as well as up-to-date code and dependencies in place.
+
+.. code-block:: bash
+
+   addssh(){
+       eval `ssh-agent -s`
+       ssh-add ~/.ssh/id_github
+   }
+   fm(){
+       addssh
+       cd ~/workspace/flexmeasures  
+       git pull  # do not use if any production-like app runs from the git code                                                                                                                                                             
+       workon flexmeasures-venv  # this depends on how you created your virtual environment
+       make install-for-dev
+   }
+
+
+.. note:: All paths depend on your local environment, of course.
+

+ 47 - 0
documentation/dev/why.rst

@@ -0,0 +1,47 @@
+
+.. _dev_why:
+
+Why FlexMeasures adds value for software developers
+----------------------------------------------------
+
+FlexMeasures is designed to help with three basic needs of developers in the energy flexibility domain:
+
+
+I need help with integrating real-time data and continuously computing new data
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+FlexMeasures is designed to make decisions based on data in an automated way. Data pipelining and dedicated machine learning tooling are crucial.
+
+- API/CLI functionality to read in time series data
+- Extensions for integrating 3rd party data, e.g. from `ENTSO-E <https://github.com/SeitaBV/flexmeasures-entsoe>`_ or `OpenWeatherMap <https://github.com/SeitaBV/flexmeasures-openweathermap>`_
+- Forecasting for the upcoming hours
+- Schedule optimization for flexible assets
+- Reporters to combine time series data and create KPIs 
+
+
+It's hard to correctly model data with different sources, resolutions, horizons and even uncertainties
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Much developer time is spent correcting data and treating it correctly, so that you know you are computing on the right knowledge.
+
+FlexMeasures is built on the `timely-beliefs framework <https://github.com/SeitaBV/timely-beliefs>`_, so we model this real-world aspect accurately:
+
+- Expected data properties are explicit (e.g. unit, time resolution)
+- Incoming data is converted to fitting unit and time resolution automatically
+- FlexMeasures also stores who thought that something happened (or that it will happen), and when they thought so
+- Uncertainty can be modelled (useful for forecasting)
+
+
+I want to build new features quickly, not spend days solving basic problems
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Building customer-facing apps & services is where developers make impact. We make their work easy.
+
+- FlexMeasures has well-documented API endpoints and CLI commands to interact with its model and data
+- You can extend it easily with your own logic by writing plugins
+- A backend UI shows you your assets in maps and your data in plots. Plots are also available via the API, for integration in your own frontend
+- Multi-tenancy ― model multiple accounts on one server. Data is only seen/editable by authorized users in the right account
+
+
+For more on FlexMeasures, head right over to :ref:`getting_started`.
+

+ 134 - 0
documentation/features/forecasting.rst

@@ -0,0 +1,134 @@
+.. _forecasting:
+
+Forecasting
+============
+
+Scheduling is about the future, and you need some knowledge / expectations about the future to do it.
+
+Of course, the nicest forecasts are the ones you don't have to make yourself (it's not an easy field), so do use price or usage forecasts from third parties if available.
+There are even existing plugins for importing `weather forecasts <https://github.com/SeitaBV/flexmeasures-openweathermap>`_ or `market data <https://github.com/SeitaBV/flexmeasures-entsoe>`_.
+
+If you need to make your own predictions, forecasting algorithms can be used within FlexMeasures, for instance to assess the expected profile of future consumption/production.
+
+.. warning:: This feature is currently under development, we note future plans further below. Get in touch for latest updates or if you want to help.
+
+
+.. contents::
+    :local:
+    :depth: 2
+
+
+
+Technical specs
+-----------------
+
+In a nutshell, FlexMeasures uses linear regression and falls back to naive forecasting of the last known value if errors happen. 
+
+Note that what might be even more important than the type of algorithm is the features handed to the model ― lagged values (e.g. value of the same time yesterday) and regressors (e.g. wind speed prediction to forecast wind power production).
+Most assets have yearly seasonality (e.g. wind, solar) and therefore forecasts would benefit from >= 2 years of history.
+
+Here are more details:
+
+- The application uses an ordinary least squares auto-regressive model with external variables.
+- Lagged outcome variables are selected based on the periodicity of the asset (e.g. daily and/or weekly).
+- Common external variables are weather forecasts of temperature, wind speed and irradiation.
+- Timeseries data with frequent zero values are transformed using a customised Box-Cox transformation.
+- To avoid over-fitting, cross-validation is used.
+- Before fitting, explicit annotations of expert knowledge to the model (like the definition of asset-specific seasonality and special time events) are possible.
+- The model is currently fit each day for each asset and for each horizon.
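+
+To illustrate the core idea (a much-simplified sketch ― not FlexMeasures' actual implementation), fitting an auto-regressive model with one lagged outcome and one external regressor comes down to ordinary least squares:
+
+.. code-block:: python
+
+    import numpy as np
+
+    # Toy series: consumption follows yesterday's value plus a temperature effect.
+    rng = np.random.default_rng(42)
+    temperature = rng.normal(10, 5, size=100)  # external regressor
+    y = np.empty(100)
+    y[0] = 50.0
+    for t in range(1, 100):
+        y[t] = 0.8 * y[t - 1] + 2.0 * temperature[t] + rng.normal(0, 0.1)
+
+    # Design matrix: intercept, lag-1 of the outcome, and the regressor.
+    X = np.column_stack([np.ones(99), y[:-1], temperature[1:]])
+    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
+    print(coef)  # recovers roughly (0, 0.8, 2), i.e. the true dynamics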
+
+
+A use case: automating solar production prediction
+-----------------------------------------------------
+
+We'll consider an example that FlexMeasures supports ― forecasting an asset that represents solar panels.
+Here is how you can ask for forecasts to be made in the CLI:
+
+.. code-block:: bash
+
+    flexmeasures add forecasts --from-date 2024-02-02 --to-date 2024-02-02 --horizon 6 --sensor 12  --as-job
+
+Sensor 12 would represent the power readings of your solar panels, and here you ask for forecasts for one day (2 February 2024), with a forecast horizon of 6 hours.
+
+The ``--as-job`` parameter is optional. If given, the computation becomes a job which a worker needs to pick up. There is some more information at :ref:`how_queue_forecasting`.
+
+
+Rolling vs fixed-point
+-------------------------
+
+These forecasts are `rolling` forecasts ― which means they all have the same horizon. This is useful mostly for analytics and simulations.
+
+We plan to work on fixed-point forecasts, which would forecast all values from one point in time, with a growing horizon as the forecasted time is further away.
+This resembles the real-time situation better.
+
+
+Regressors
+-------------
+
+If you want to take regressors into account, in addition to past measurements (e.g. weather forecasts, see above),
+note that FlexMeasures currently supports only weather correlations.
+
+The attribute `sensor.weather_correlations` can be used for this, e.g. for the solar example above you might want to set this to ``["irradiance", "temperature"]``.
+FlexMeasures will then try to find an asset with asset type "weather_station" that has a location near the asset your forecasted sensor belongs to.
+That weather station should have sensors with the correlations you entered, and if they have data in a suitable range, the regressors can be used in your forecasting.
+
+In `this weather forecast plugin <https://github.com/SeitaBV/flexmeasures-openweathermap>`_, we enabled you to collect regressor data for ``["temperature", "wind speed", "cloud cover", "irradiance"]``, at a location you select.
+
+
+Performance benchmarks
+-----------------------
+
+Above, we focused on technical ways to achieve forecasting within FlexMeasures. As we mentioned, the results differ, based on what information you give to the model.
+
+However, let's discuss performance a little more ― how can we measure it and what have we seen?
+The performance of FlexMeasures' forecasting algorithms is indicated by the mean absolute error (MAE) and the weighted absolute percentage error (WAPE).
+Power profiles on an asset level often include zero values, such that the mean absolute percentage error (MAPE), a common statistical measure of forecasting accuracy, is undefined.
+For such profiles, it is more useful to report the WAPE, which is also known as the volume weighted MAPE.
+The MAE of a power profile gives an indication of the size of the uncertainty in consumption and production.
+This allows the user to compare an asset's predictability to its flexibility, i.e. to the size of possible flexibility activations.
+
+Example benchmarks per asset type are listed in the table below for various assets and forecasting horizons.
+Amongst other factors, accuracy is influenced by:
+
+- The chosen metric (see below)
+- Resolution of the forecast
+- Horizon of the forecast
+- Asset type
+- Location / Weather conditions
+- Level of aggregation
+
+Accuracies in the table are reported as 1 minus WAPE, which can be interpreted as follows:
+
+- 100% accuracy denotes that all values are correct.
+- 50% accuracy denotes that, on average, the values are wrong by half of the reference value.
+- 0% accuracy denotes that, on average, the values are wrong by exactly the reference value (i.e. zeros or twice the reference value).
+- negative accuracy denotes that, on average, the values are off-the-chart wrong (by more than the reference value itself).
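+
+For reference, the WAPE behind these numbers follows the standard definition:
+
+.. math::
+
+    \text{WAPE} = \frac{\sum_t |y_t - \hat{y}_t|}{\sum_t |y_t|}
+
+so an accuracy of :math:`1 - \text{WAPE} = 0\%` indeed means that the summed absolute errors equal the summed reference values.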
+
+
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| Asset                     | Building      | Charge Points | Solar         | Wind (offshore) | Day-ahead market|
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| Average power per asset   | 204 W         | 75 W          | 140 W         | 518 W           |                 |
++===========================+===============+===============+===============+=================+=================+
+| 1 - WAPE (1 hour ahead)   | 93.4 %        | 87.6 %        | 95.2 %        | 81.6 %          | 88.0 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| 1 - WAPE (6 hours ahead)  | 92.6 %        | 73.0 %        | 83.7 %        | 73.8 %          | 81.9 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| 1 - WAPE (24 hours ahead) | 92.4 %        | 65.2 %        | 46.1 %        | 60.1 %          | 81.4 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+| 1 - WAPE (48 hours ahead) | 92.1 %        | 63.7 %        | 43.3 %        | 56.9 %          | 72.3 %          |
++---------------------------+---------------+---------------+---------------+-----------------+-----------------+
+
+
+Future work
+---------------
+
+We have mentioned that forecasting within FlexMeasures can become more powerful.
+Here we summarize what is on the roadmap for forecasting:
+
+- Add fixed-point forecasting (see above)
+- Make features easier to configure, especially regressors
+- Add more types of forecasting algorithms, like random forest or even LSTM
+- Possibly integrate with existing powerful forecasting tooling, for instance `OpenStef <https://lfenergy.org/projects/openstef>`_ or `Quartz Solar OS <https://github.com/openclimatefix/Open-Source-Quartz-Solar-Forecast>`_. 
+
+

+ 117 - 0
documentation/features/reporting.rst

@@ -0,0 +1,117 @@
+.. _reporting:
+
+Reporting
+============
+
+FlexMeasures feeds upon raw measurement data (e.g. solar generation) and data from third parties (e.g. weather forecasts).
+
+However, there are use cases for enriching these raw data by combining them:
+
+- Pre-calculations: For example, from a tariff and some tax rules we compute the real financial impact of price data.
+- Post-calculations: To be able to show the customer value, we regularly want to compute things like money or CO₂ saved.
+
+These calculations can be done with custom code, but that quickly leads to a lot of repetition.
+
+We added an infrastructure that allows us to define computation pipelines and CLI commands for developers to list available reporters and trigger their computations regularly:
+
+- ``flexmeasures show reporters``
+- ``flexmeasures add report``
+
+The reporter classes use pandas under the hood and can be sub-classed, allowing us to build new reporters from stable, simpler ones, and even pipelines. Remember: re-use is developer power!
+
+We believe this infrastructure will become very powerful and enable FlexMeasures hosters and plugin developers to implement exciting new features.
+
+Below are two quick examples, but you can also dive deeper in :ref:`tut_toy_schedule_reporter`.
+
+
+Example: solar feed-in / self-consumption delta 
+------------------------------------------------
+
+So here is a glimpse into a reporter we made, based on the ``AggregatorReporter`` (which combines data from any two sensors).
+This simplified example reporter basically calculates ``pv - consumption`` at the grid connection point.
+This tells us how much solar power we fed back to the grid (positive values) and/or the amount of grid power within the overall consumption that did not come from local solar panels (negative values).
+
+This is the configuration of how the computation works:
+
+.. code-block:: json
+    
+    {
+        "method" : "sum",
+        "weights" : {
+            "pv" : 1.0,
+            "consumption" : -1.0
+        }
+    }
+
+This parameterizes the computation (which sensors the data comes from, which time range, and where the results go):
+
+.. code-block:: json
+    
+    {
+        "input": [
+            {
+                "name" : "pv",
+                "sensor": 1,
+                "source" : 1
+            },
+            {
+                "name" : "consumption",
+                "sensor": 1,
+                "source" : 2
+            }
+        ],
+        "output": [
+            {
+                "sensor": 3
+            }
+        ],
+        "start" : "2023-01-01T00:00:00+00:00",
+        "end" : "2023-01-03T00:00:00+00:00"
+    }
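
Under the hood, such a weighted aggregation boils down to a simple pandas operation. The following sketch mimics what a ``"sum"`` method with such weights computes (a hypothetical simplification, not the actual ``AggregatorReporter`` code):

```python
import pandas as pd

# Hypothetical readings from the two named inputs, per time step:
pv = pd.Series([5.0, 3.0, 0.0], name="pv")
consumption = pd.Series([2.0, 4.0, 1.0], name="consumption")

weights = {"pv": 1.0, "consumption": -1.0}

# The "sum" method with these weights amounts to pv - consumption:
result = sum(weights[s.name] * s for s in (pv, consumption))
print(result.tolist())  # positive: feed-in to the grid; negative: grid draw
```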
+
+
+
+Example: Profits & losses
+---------------------------
+
+A report that should cover a use case right off the shelf for almost everyone using FlexMeasures is the ``ProfitOrLossReporter`` ― a reporter to compute how profitable your operation has been.
+Showing the results of your optimization is a crucial feature, and now easier than ever.
+
+First, reporters can be stored as data sources, so they are easy to reuse, and the data they generate can reference them.
+Our data source has ``ProfitOrLossReporter`` as its model attribute, and the configuration stored in its ``attributes`` defines the reporter further (the least a ``ProfitOrLossReporter`` needs to know is a price):
+
+.. code-block:: json
+
+    {
+      "data_generator": {
+        "config": {
+          "consumption_price_sensor": 1
+        }
+      }
+    }
+
+And here are more excerpts from the tutorial mentioned above.
+Here we configure the input and output:
+
+.. code-block:: bash
+    
+    $ echo '
+      {
+          "input" : [{"sensor" : 4}],
+          "output" : [{"sensor" : 9}]
+      }' > profitorloss-parameters.json
+
+The input sensor stores the power/energy flow, and the output sensor will store the report. Recall that we already provided the price sensor to use in the reporter's data source.
+ 
+
+.. code-block:: bash
+
+    $ flexmeasures add report \
+      --source 6 \
+      --parameters profitorloss-parameters.json \
+      --start-offset DB,1D --end-offset DB,2D
+
+Here, the ``ProfitOrLossReporter`` used as the source (with ID 6) is the one we configured above.
+With the offsets, we control the timing ― we indicate that we want the new report to cover the day of tomorrow (see Pandas offset strings).
+
+The report sensor will now store all costs that, according to the schedule, will be incurred tomorrow.
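
Conceptually, what the ``ProfitOrLossReporter`` computes is the sum of energy flow times price per time step. A simplified sketch of that arithmetic, assuming a single consumption price (not the actual implementation):

```python
def consumption_cost(powers_mw, prices_eur_per_mwh, resolution_h=0.25):
    """Cost of a consumption profile: sum over time steps of
    power (MW) * price (EUR/MWh) * step duration (h).
    """
    return sum(
        power * price * resolution_h
        for power, price in zip(powers_mw, prices_eur_per_mwh)
    )


# Two 1-hour steps consuming 2 MW, priced at 50 and 60 EUR/MWh:
print(consumption_cost([2, 2], [50, 60], resolution_h=1))  # 220 EUR
```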

+ 338 - 0
documentation/features/scheduling.rst

@@ -0,0 +1,338 @@
+.. _scheduling:
+
+Scheduling 
+===========
+
+Scheduling is the main value driver of FlexMeasures. We have two major types of schedulers built in: for storage devices (usually batteries or hot water storage) and for processes (usually in industry).
+
+FlexMeasures computes schedules for energy systems that consist of multiple devices that consume and/or produce electricity.
+We model a device as an asset with a power sensor, and compute schedules only for flexible devices, while taking into account inflexible devices.
+
+.. contents::
+    :local:
+    :depth: 2
+
+
+.. _describing_flexibility:
+
+Describing flexibility
+----------------------
+
+To compute a schedule, FlexMeasures first needs to assess the flexibility state of the system.
+This is described by:
+
+- :ref:`The flex-context <flex_context>` ― information about the system as a whole, in order to assess the value of activating flexibility.
+- :ref:`Flex-models <flex_models_and_schedulers>`  ― information about the state and possible actions of the flexible device. We will discuss these per scheduled device type.
+
+This information goes beyond the usual time series recorded by an asset's sensors. It can be sent to FlexMeasures through the API when triggering schedule computation.
+Also, this information can be persisted in the FlexMeasures data model (in the db), and is editable through the UI (this is design work in progress; currently, the flex-context can be edited there).
+
+Let's dive into the details ― what can you tell FlexMeasures about your optimization problem?
+
+
+.. _flex_context:
+
+The flex-context
+-----------------
+
+The ``flex-context`` is independent of the type of flexible device that is optimized, or which scheduler is used.
+With the flexibility context, we aim to describe the system in which the flexible assets operate, such as its physical and contractual limitations.
+
+Fields can have fixed values, but some fields can also point to sensors, so they will always represent the dynamics of the asset's environment (as long as that sensor has current data).
+The full list of flex-context fields follows below.
+For more details on the possible formats for field values, see :ref:`variable_quantities`.
+
+Where should you set these fields?
+Within requests to the API or by editing the relevant asset in the UI.
+If they are not sent in via the API (the endpoint triggering schedule computation), the scheduler will look them up on the `flex-context` field of the asset.
+And if the asset belongs to a larger system (a hierarchy of assets), the scheduler will also check whether parent assets have them set.
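
The lookup order described here can be sketched as follows (hypothetical Python, not the actual FlexMeasures internals; assets are represented as plain dicts purely for illustration):

```python
def resolve_flex_context_field(field, api_flex_context, asset):
    """Resolve one flex-context field: values sent via the API win,
    then the asset's own flex-context, then that of its ancestors.
    """
    if field in api_flex_context:
        return api_flex_context[field]
    while asset is not None:
        value = asset.get("flex-context", {}).get(field)
        if value is not None:
            return value
        asset = asset.get("parent")  # walk up the asset hierarchy
    return None


site = {"flex-context": {"site-power-capacity": "45kVA"}, "parent": None}
battery = {"flex-context": {}, "parent": site}
print(resolve_flex_context_field("site-power-capacity", {}, battery))  # 45kVA
```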
+
+
+
+.. list-table::
+   :header-rows: 1
+   :widths: 20 25 90
+
+   * - Field
+     - Example value
+     - Description 
+   * - ``inflexible-device-sensors``
+     - ``[3,4]``
+     - Power sensors that are relevant, but not flexible, such as a sensor recording rooftop solar power connected behind the main meter, whose production falls under the same contract as the flexible device(s) being scheduled.
+       Their power demand cannot be adjusted but still matters for finding the best schedule for other devices. Must be a list of integers.
+   * - ``consumption-price``
+     - ``{"sensor": 5}``
+       or
+       ``"0.29 EUR/kWh"``
+     - The price of consuming energy. Can be (a sensor recording) market prices, but also CO₂ intensity - whatever fits your optimization problem. (This field replaced the ``consumption-price-sensor`` field. [#old_sensor_field]_)
+   * - ``production-price``
+     - ``{"sensor": 6}``
+       or
+       ``"0.12 EUR/kWh"``
+     - The price of producing energy.
+       Can be (a sensor recording) market prices, but also CO₂ intensity - whatever fits your optimization problem, as long as the unit matches the ``consumption-price`` unit. (This field replaced the ``production-price-sensor`` field. [#old_sensor_field]_)
+   * - ``site-power-capacity``
+     - ``"45kVA"``
+     - Maximum achievable power at the grid connection point, in either direction [#asymmetric]_.
+       Becomes a hard constraint in the optimization problem, which is especially suitable for physical limitations. [#minimum_capacity_overlap]_
+   * - ``site-consumption-capacity``
+     - ``"45kW"``
+     - Maximum consumption power at the grid connection point.
+       If ``site-power-capacity`` is defined, the minimum between the ``site-power-capacity`` and ``site-consumption-capacity`` will be used. [#consumption]_
+       If a ``site-consumption-breach-price`` is defined, the ``site-consumption-capacity`` becomes a soft constraint in the optimization problem.
+       Otherwise, it becomes a hard constraint. [#minimum_capacity_overlap]_
+   * - ``site-consumption-breach-price``
+     - ``"1000 EUR/kW"``
+     - The price of breaching the ``site-consumption-capacity``, useful to treat ``site-consumption-capacity`` as a soft constraint but still make the scheduler attempt to respect it.
+       Can be (a sensor recording) contractual penalties, but also a theoretical penalty just to allow the scheduler to breach the consumption capacity, while influencing how badly breaches should be avoided. [#penalty_field]_ [#breach_field]_
+   * - ``site-production-capacity``
+     - ``"0kW"``
+     - Maximum production power at the grid connection point.
+       If ``site-power-capacity`` is defined, the minimum between the ``site-power-capacity`` and ``site-production-capacity`` will be used. [#production]_
+       If a ``site-production-breach-price`` is defined, the ``site-production-capacity`` becomes a soft constraint in the optimization problem.
+       Otherwise, it becomes a hard constraint. [#minimum_capacity_overlap]_
+   * - ``site-production-breach-price``
+     - ``"1000 EUR/kW"``
+     - The price of breaching the ``site-production-capacity``, useful to treat ``site-production-capacity`` as a soft constraint but still make the scheduler attempt to respect it.
+       Can be (a sensor recording) contractual penalties, but also a theoretical penalty just to allow the scheduler to breach the production capacity, while influencing how badly breaches should be avoided. [#penalty_field]_ [#breach_field]_
+   * - ``site-peak-consumption``
+     - ``{"sensor": 7}``
+     - Current peak consumption.
+       Costs from peaks below it are considered sunk costs. Defaults to 0 kW.
+   * - ``site-peak-consumption-price``
+     - ``"260 EUR/MW"``
+     - Consumption peaks above the ``site-peak-consumption`` are penalized against this per-MW price. [#penalty_field]_
+   * - ``site-peak-production``
+     - ``{"sensor": 8}``
+     - Current peak production.
+       Costs from peaks below it are considered sunk costs. Defaults to 0 kW.
+   * - ``site-peak-production-price``
+     - ``"260 EUR/MW"``
+     - Production peaks above the ``site-peak-production`` are penalized against this per-MW price. [#penalty_field]_
+   * - ``soc-minima-breach-price``
+     - ``"120 EUR/kWh"``
+     - Penalty for not meeting ``soc-minima`` defined in the flex-model. [#penalty_field]_ [#breach_field]_
+   * - ``soc-maxima-breach-price``
+     - ``"120 EUR/kWh"``
+     - Penalty for not meeting ``soc-maxima`` defined in the flex-model. [#penalty_field]_ [#breach_field]_
+   * - ``consumption-breach-price``
+     - ``"10 EUR/kW"``
+     - The price of breaching the ``consumption-capacity`` in the flex-model, useful to treat ``consumption-capacity`` as a soft constraint but still make the scheduler attempt to respect it. [#penalty_field]_ [#breach_field]_
+   * - ``production-breach-price``
+     - ``"10 EUR/kW"``
+     - The price of breaching the ``production-capacity`` in the flex-model, useful to treat ``production-capacity`` as a soft constraint but still make the scheduler attempt to respect it. [#penalty_field]_ [#breach_field]_
+
+.. [#old_sensor_field] The old field only accepted an integer (sensor ID).
+
+.. [#asymmetric] ``site-consumption-capacity`` and ``site-production-capacity`` allow defining asymmetric contracted transport capacities for each direction (i.e. production and consumption).
+
+.. [#minimum_capacity_overlap] In case this capacity field defines partially overlapping time periods, the minimum value is selected. See :ref:`variable_quantities`.
+
+.. [#consumption] Example: with a connection capacity (``site-power-capacity``) of 1 MVA (apparent power) and a consumption capacity (``site-consumption-capacity``) of 800 kW (active power), the scheduler will make sure that the grid outflow doesn't exceed 800 kW.
+
+.. [#penalty_field] Prices must share the same currency. Negative prices are not allowed (penalties only).
+
+.. [#production] Example: with a connection capacity (``site-power-capacity``) of 1 MVA (apparent power) and a production capacity (``site-production-capacity``) of 400 kW (active power), the scheduler will make sure that the grid inflow doesn't exceed 400 kW.
+
+.. [#breach_field] Breach prices are applied both to (the height of) the highest breach in the planning window and to (the area of) each breach that occurs.
+                   That means both high breaches and long breaches are penalized.
+                   For example, a :abbr:`SoC (state of charge)` breach price of 120 EUR/kWh is applied as a breach price of 120 EUR/kWh on the height of the highest breach, and as a breach price of 120 EUR/kWh/h on the area (kWh*h) of each breach.
+                   For a 5-minute resolution sensor, this would amount to applying a SoC breach price of 10 EUR/kWh for breaches measured every 5 minutes (in addition to the 120 EUR/kWh applied to the highest breach only).
+
+.. note:: If no (symmetric, consumption and production) site capacity is defined (also not as defaults), the scheduler will not enforce any bound on the site power.
+          The flexible device can still have its own power limit defined in its flex-model.
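+
+Putting a few of these fields together, a flex-context could look like this (illustrative values only; the sensor IDs are hypothetical):
+
+.. code-block:: json
+
+    {
+        "inflexible-device-sensors": [3, 4],
+        "consumption-price": {"sensor": 5},
+        "production-price": "0.12 EUR/kWh",
+        "site-power-capacity": "45kVA",
+        "site-consumption-capacity": "40 kW",
+        "site-consumption-breach-price": "1000 EUR/kW"
+    }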
+
+
+.. _flex_models_and_schedulers:
+
+The flex-models & corresponding schedulers
+-------------------------------------------
+
+FlexMeasures comes with a storage scheduler and a process scheduler, which work with flex models for storages and loads, respectively.
+
+The storage scheduler is suitable for batteries and :abbr:`EV (electric vehicle)` chargers, and is automatically selected when scheduling an asset with one of the following asset types: ``"battery"``, ``"one-way_evse"`` and ``"two-way_evse"``.
+
+The process scheduler is suitable for shiftable, breakable and inflexible loads, and is automatically selected for asset types ``"process"`` and ``"load"``.
+
+
+We describe the respective flex models below.
+At the moment, they have to be sent through the API (the endpoint to trigger schedule computation, or using the FlexMeasures client) or through the CLI (the command to add schedules).
+We will soon work on the possibility to store (a subset of) these fields on the data model and edit them in the UI.
+
+
+Storage
+^^^^^^^^
+
+For *storage* devices, the FlexMeasures scheduler deals with the state of charge (SoC) for an optimal outcome.
+You can do a lot with this ― examples for storage devices are:
+
+- batteries
+- :abbr:`EV (electric vehicle)` batteries connected to charge points
+- hot water storage ("heat batteries", where the SoC relates to the water temperature)
+- pumped hydro storage (SoC is the water level)
+- water basins (here, SoC is supposed to be low, as water is being pumped out)
+- buffers of energy-intensive chemicals that are needed in other industry processes
+
+
+The ``flex-model`` for storage devices describes to the scheduler what the flexible asset's state is,
+and what constraints or preferences should be taken into account.
+
+The full list of flex-model fields for the storage scheduler follows below.
+For more details on the possible formats for field values, see :ref:`variable_quantities`.
+
+.. list-table::
+   :header-rows: 1
+   :widths: 20 40 80
+
+   * - Field
+     - Example value
+     - Description 
+   * - ``soc-at-start``
+     - ``"3.1 kWh"``
+     - The (estimated) state of charge at the beginning of the schedule (defaults to 0). [#quantity_field]_
+   * - ``soc-unit``
+     - ``"kWh"`` or ``"MWh"``
+     - The unit used to interpret any SoC related flex-model value that does not mention a unit itself (only applies to numeric values, so not to string values).
+       However, we advise mentioning the unit explicitly in each field (for instance, ``"3.1 kWh"`` rather than ``3.1``).
+       Enumerated option only.
+   * - ``soc-min``
+     - ``"2.5 kWh"``
+     - A constant lower boundary for all values in the schedule (defaults to 0). [#quantity_field]_
+   * - ``soc-max``
+     - ``"7 kWh"``
+     - A constant upper boundary for all values in the schedule (defaults to max soc target, if provided). [#quantity_field]_
+   * - ``soc-minima``
+     - ``[{"datetime": "2024-02-05T08:00:00+01:00", "value": "8.2 kWh"}]``
+     - Set points that form lower boundaries, e.g. to target a full car battery in the morning (defaults to NaN values). [#maximum_overlap]_
+   * - ``soc-maxima``
+     - ``{"value": "51 kWh", "start": "2024-02-05T12:00:00+01:00", "end": "2024-02-05T13:30:00+01:00"}``
+     - Set points that form upper boundaries at certain times (defaults to NaN values). [#minimum_overlap]_
+   * - ``soc-targets``
+     - ``[{"datetime": "2024-02-05T08:00:00+01:00", "value": "3.2 kWh"}]``
+     - Exact set point(s) that the scheduler needs to realize (defaults to NaN values).
+   * - ``soc-gain``
+     - ``[".1kWh"]``
+     - SoC gain per time step, e.g. from a secondary energy source (defaults to zero).
+   * - ``soc-usage``
+     - ``[{"sensor": 23}]``
+     - SoC reduction per time step, e.g. from a load or heat sink (defaults to zero).
+   * - ``roundtrip-efficiency``
+     - ``"90%"``
+     - Below 100%, this represents roundtrip losses (of charging & discharging), usually used for batteries. Can be percent or ratio ``[0,1]`` (defaults to 100%). [#quantity_field]_
+   * - ``charging-efficiency``
+     - ``".9"``
+     - Apply efficiency losses only at time of charging, not across roundtrip (defaults to 100%).
+   * - ``discharging-efficiency``
+     - ``"90%"``
+     - Apply efficiency losses only at time of discharging, not across roundtrip (defaults to 100%).
+   * - ``storage-efficiency``
+     - ``"99.9%"``
+     - This can encode losses over time, so each time step the energy is held longer leads to higher losses (defaults to 100%). Also read [#storage_efficiency]_ about applying this value per time step across longer time spans.
+   * - ``prefer-charging-sooner``
+     - ``True``
+     - Tie-breaking policy to apply if conditions are stable, which signals a preference to charge sooner rather than later (defaults to True). It also signals a preference to discharge later. Boolean option only.
+   * - ``prefer-curtailing-later``
+     - ``True``
+     - Tie-breaking policy to apply if conditions are stable, which signals a preference to curtail both consumption and production later, whichever is applicable (defaults to True). Boolean option only.
+   * - ``power-capacity``
+     - ``"50kW"``
+     - Device-level power constraint. How much power can be applied to this asset (defaults to the Sensor attribute ``capacity_in_mw``). [#minimum_overlap]_
+   * - ``consumption-capacity``
+     - ``{"sensor": 56}``
+     - Device-level power constraint on consumption. How much power can be drawn by this asset. [#minimum_overlap]_
+   * - ``production-capacity``
+     - ``"0kW"`` (only consumption)
+     - Device-level power constraint on production. How much power can be supplied by this asset. For :abbr:`PV (photovoltaic solar panels)` curtailment, set this to reference your sensor containing PV power forecasts. [#minimum_overlap]_
+
+.. [#quantity_field] Can only be set as a fixed quantity.
+
+.. [#maximum_overlap] In case this field defines partially overlapping time periods, the maximum value is selected. See :ref:`variable_quantities`.
+
+.. [#minimum_overlap] In case this field defines partially overlapping time periods, the minimum value is selected. See :ref:`variable_quantities`.
+
+.. [#storage_efficiency] The storage efficiency (e.g. 95% or 0.95) to use for the schedule is applied over each time step equal to the sensor resolution. For example, a storage efficiency of 95 percent per (absolute) day, for scheduling a 1-hour resolution sensor, should be passed as a storage efficiency of :math:`0.95^{1/24} = 0.997865`.
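
This per-time-step conversion can be sketched in a few lines (illustrative arithmetic, not FlexMeasures code):

```python
def per_step_efficiency(efficiency_per_day, steps_per_day):
    """Spread an efficiency defined over a whole day across equal time steps,
    such that applying it once per step compounds to the daily value."""
    return efficiency_per_day ** (1 / steps_per_day)


# 95% per day, scheduled at 1-hour resolution (24 steps per day):
print(round(per_step_efficiency(0.95, 24), 6))  # 0.997865
```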
+
+Usually, not the whole flexibility model is needed.
+FlexMeasures can infer missing values in the flex model, and even get them (as default) from the sensor's attributes.
+
+You can add new storage schedules with the CLI command ``flexmeasures add schedule for-storage``.
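+
+For instance, a minimal flex-model for a small battery could look like this (illustrative values only):
+
+.. code-block:: json
+
+    {
+        "soc-at-start": "3.1 kWh",
+        "soc-min": "0.5 kWh",
+        "soc-max": "7 kWh",
+        "roundtrip-efficiency": "90%",
+        "power-capacity": "5 kW"
+    }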
+
+If you model devices that *buffer* energy (e.g. thermal energy storage systems connected to heat pumps), we can use the same flexibility parameters described above for storage devices.
+However, here are some tips to model a buffer correctly:
+
+   - Describe the thermal energy content in kWh or MWh.
+   - Set ``soc-minima`` to the cumulative usage forecast.
+   - Set ``charging-efficiency`` to the sensor describing the :abbr:`COP (coefficient of performance)` values.
+   - Set ``storage-efficiency`` to a value below 100% to model (heat) loss.
+
+What happens if the flex model describes an infeasible problem for the storage scheduler? Excellent question!
+It is highly important for a robust operation that these situations still lead to a somewhat good outcome.
+From our practical experience, we derived a ``StorageFallbackScheduler``.
+It simplifies an infeasible situation by just starting to charge, discharge, or do neither,
+depending on the first target state of charge and the capabilities of the asset.
+
+Of course, we also log a failure in the scheduling job, so it's important to take note of these failures. Often, mis-configured flex models are the reason.
+
+For a hands-on tutorial on using some of the storage flex-model fields, head over to :ref:`tut_v2g` use case and `the API documentation for triggering schedules <../api/v3_0.html#post--api-v3_0-sensors-(id)-schedules-trigger>`_.
+
+Finally, are you interested in the linear programming details behind the storage scheduler?
+Then head over to :ref:`storage_device_scheduler`!
+You can also review the current flex-model for storage in the code, at ``flexmeasures.data.schemas.scheduling.storage.StorageFlexModelSchema``.
+
+
+Shiftable loads (processes)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For *processes* that can be shifted or interrupted, but have to happen at a constant rate (of consumption), FlexMeasures provides the ``ProcessScheduler``.
+Some examples from practice (usually industry) could be:
+
+- A centrifuge's daily work of combing through sludge water. The duration depends on the amount of sludge present.
+- Production processes with a target amount of output until the end of the current shift. The target usually comes out of production planning.
+- Application of coating at high temperature, with a fixed number of times it needs to happen before some deadline.
+   
+.. list-table::
+   :header-rows: 1
+   :widths: 20 25 90
+
+   * - Field
+     - Example value
+     - Description 
+   * - ``power``
+     - ``"15kW"``
+     - Nominal power of the load.
+   * - ``duration``
+     - ``"PT4H"``
+     - Time that the load needs to last.
+   * - ``optimization_direction``
+     - ``"MAX"``
+     - Objective of the scheduler, to maximize (``"MAX"``) or minimize (``"MIN"``).
+   * - ``time_restrictions``
+     - ``[{"start": "2015-01-02T08:00:00+01:00", "duration": "PT2H"}]`` 
+     - Time periods in which the load cannot be scheduled to run.
+   * - ``process_type``
+     - ``"INFLEXIBLE"``, ``"SHIFTABLE"`` or ``"BREAKABLE"``
+     - Is the load inflexible and should it run as soon as possible? Or can the process's start time be shifted? Or can it even be broken up into smaller segments?
+
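+Put together, a flex-model for a shiftable process could look like this (illustrative values only):
+
+.. code-block:: json
+
+    {
+        "power": "15kW",
+        "duration": "PT4H",
+        "optimization_direction": "MIN",
+        "process_type": "SHIFTABLE",
+        "time_restrictions": [
+            {"start": "2015-01-02T08:00:00+01:00", "duration": "PT2H"}
+        ]
+    }
+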
+You can review the current flex-model for processes in the code, at ``flexmeasures.data.schemas.scheduling.process.ProcessSchedulerFlexModelSchema``.
+
+You can add new shiftable-process schedules with the CLI command ``flexmeasures add schedule for-process``.
+
+.. note:: Currently, the ``ProcessScheduler`` uses only the ``consumption-price`` field of the flex-context, so it ignores any site capacities and inflexible devices.
+
+
+Work on other schedulers
+--------------------------
+
+We believe the two schedulers (and their flex-models) we describe here are covering a lot of use cases already.
+Here are some thoughts on further innovation:
+
+- Writing your own scheduler.
+  You can always write your own scheduler (see :ref:`plugin_customization`).
+  You then might want to add your own flex model, as well.
+  FlexMeasures will let the scheduler decide which flexibility model is relevant and how it should be validated.
+- We also aim to model situations with more than one flexible asset, and that have different types of flexibility (e.g. EV charging and smart heating in the same site).
+  This is ongoing architecture design work, and therefore happens in development settings, until we are happy with the outcomes.
+  Thoughts welcome :)
+- Aggregating flexibility of a group of assets (e.g. a neighborhood) and optimizing its aggregated usage (e.g. for grid congestion support) is also an exciting direction for expansion.

+ 13 - 0
documentation/get-in-touch.rst

@@ -0,0 +1,13 @@
+.. _get_in_touch:
+
+Get in touch
+=============
+
+We want you to succeed in using, hosting or extending FlexMeasures. For all your questions and ideas, you can join the FlexMeasures community in the following ways:
+
+- View the code and/or create a ticket on `GitHub <https://github.com/FlexMeasures/flexmeasures>`_
+- Join the ``#flexmeasures`` Slack channel over at `https://lfenergy.slack.com <https://lfenergy.slack.com>`_
+- Write to us at `flexmeasures@lists.lfenergy.org <mailto:flexmeasures@lists.lfenergy.org>`_ (you can join this mailing list `here <https://lists.lfenergy.org/g/flexmeasures>`_)
+- Follow `@flexmeasures <https://twitter.com/flexmeasures>`_ on Twitter
+
+We'd love to hear from you!

+ 71 - 0
documentation/getting-started.rst

@@ -0,0 +1,71 @@
+.. _getting_started:
+
+Getting started
+=================================
+
+For a direct intro on running FlexMeasures, go to :ref:`installation`. However, FlexMeasures is useful from different perspectives.
+Below, we added helpful pointers to start reading.
+
+.. contents::
+    :local:
+    :depth: 2
+
+
+.. _start_using_flexmeasures_in_your_organization:
+
+For organizations
+------------------
+
+We make FlexMeasures, so that your software developers are as productive with energy optimization as possible. Because we are developers ourselves, we know that it takes a couple of smaller steps to engage with new technology.
+
+Your journey, from dipping your toes in the water towards being a productive energy optimization company, could look like this:
+
+1. Quickstart ― Find an optimized schedule for your flexible asset, like a battery, with standard FlexMeasures tooling. This is basically what we show in :ref:`tut_toy_schedule`. All you need are 10 minutes and a CSV file with prices to optimize against.
+2. Automate ― get the prices from an open API, for instance `ENTSO-E <https://transparency.entsoe.eu/>`_ (using a plugin like `flexmeasures-entsoe <https://github.com/SeitaBV/flexmeasures-entsoe>`_), and run the scheduler regularly in a cron job.
+3. Integrate ― Load the schedules via FlexMeasures' API, so you can directly control your assets and/or show them within your own frontend.
+4. Customize ― Load other data (e.g. your solar production or weather forecasts via `flexmeasures-openweathermap <https://github.com/SeitaBV/flexmeasures-openweathermap/>`_). Adapt the algorithms, e.g. do your own forecasting or tweak the standard scheduling algorithm so it optimizes what you care about. Or write a plugin for accessing a new kind of market. The opportunities are endless!
+
+
+
+
+For Individuals
+----------------
+
+Using FlexMeasures
+^^^^^^^^^^^^^^^^^^^
+
+You are connecting to a running FlexMeasures server, e.g. for sending data, getting schedules, or administering users and assets.
+
+First, you'll need an account from the party running the server. Also, you probably want to:
+
+- Look at the UI, e.g. pages for :ref:`dashboard` and :ref:`admin`.
+- Read the :ref:`api_introduction`.
+- Learn how to interact with the API in :ref:`tut_posting_data`.
+
+
+Hosting FlexMeasures
+^^^^^^^^^^^^^^^^^^^^^^
+
+You want to run your own FlexMeasures instance, to offer services or for trying it out. You'll want to:
+
+- Have a first playful scheduling session, following :ref:`tut_toy_schedule`.
+- Get real with the tutorial on :ref:`installation`.
+- Discover the power of :ref:`cli`.
+- Understand the ins and outs of :ref:`deployment`.
+
+
+Plugin developers
+^^^^^^^^^^^^^^^^^^
+
+You want to extend the functionality of FlexMeasures, e.g. a custom integration or a custom algorithm:
+
+- Read the docs on :ref:`plugins`.
+- See how some existing plugins are made: `flexmeasures-entsoe <https://github.com/SeitaBV/flexmeasures-entsoe>`_ or `flexmeasures-openweathermap <https://github.com/SeitaBV/flexmeasures-openweathermap>`_.
+- Of course, some of the developer resources (see below) might be helpful to you as well.
+
+
+Core developers
+^^^^^^^^^^^^^^^^
+
+You want to help develop FlexMeasures, e.g. to fix a bug. We provide a getting-started guide to becoming a developer at :ref:`developing`.
+

+ 416 - 0
documentation/host/data.rst

@@ -0,0 +1,416 @@
+.. _host-data:
+
+Postgres database
+=====================
+
+This document describes how to get the postgres database ready to use and maintain it (do migrations / changes to the structure).
+
+.. note:: This is about a stable database, useful for longer development work or production. A super quick way to get a postgres database running with Docker is described in :ref:`tut_toy_schedule`. In :ref:`docker-compose` we use both postgres and redis.
+
+We also spend a few words on coding with database transactions in mind.
+
+
+.. contents:: Table of contents
+    :local:
+    :depth: 2
+
+
+Getting ready to use
+----------------------
+
+Notes: 
+
+* We assume ``flexmeasures`` as the name of both your database and its user here. You can use anything you like, of course.
+* Keep the name ``flexmeasures_test`` for the test database as it is, since the automated tests look for that database / user / password.
+
+Install
+^^^^^^^^^^^^^
+
+We believe FlexMeasures works with Postgres above version 9 and we ourselves have run it with versions up to 14.
+
+On Linux:
+
+.. code-block:: bash
+
+   $ # On Ubuntu and Debian, you can install postgres like this:
+   $ sudo apt-get install postgresql-12  # replace 12 with the version available in your packages
+   $ pip install psycopg2-binary
+
+   $ # On Fedora, you can install postgres like this:
+   $ sudo dnf install postgresql postgresql-server
+   $ sudo postgresql-setup --initdb --unit postgresql
+
+
+On Windows:
+
+
+* Download postgres here: https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
+* Install and remember your ``postgres`` user password
+* Add the lib and bin directories to your Windows path: http://bobbyong.com/blog/installing-postgresql-on-windoes/
+* ``conda install psycopg2``
+
+
+On macOS:
+
+.. code-block:: bash
+
+   $ brew update
+   $ brew doctor
+   $ # Need to specify postgres version, in this example we use 13
+   $ brew install postgresql@13
+   $ brew link postgresql@13 --force
+   $ # Start postgres (you can change /usr/local/var/postgres to any directory you like)
+   $ pg_ctl -D /usr/local/var/postgres -l logfile start
+
+
+Using Docker Compose:
+
+
+Alternatively, you can use Docker Compose to run a postgres database. You can use the following ``docker-compose.yml`` as a starting point:
+
+
+.. code-block:: yaml
+
+   version: '3.7'
+
+   services:
+     postgres:
+       image: postgres:latest
+       restart: always
+       environment:
+         POSTGRES_USER: flexmeasures
+         POSTGRES_PASSWORD: this-is-your-secret-choice
+         POSTGRES_DB: flexmeasures
+       ports:
+         - 5432:5432
+       volumes:
+         - ./postgres-data:/var/lib/postgresql/data
+
+To run this, simply type ``docker-compose up`` in the directory where you saved the ``docker-compose.yml`` file. Pass the ``-d`` flag to run it in the background.
+
+This will create a postgres database in a directory ``postgres-data`` in your current working directory. You can change the password and database name to your liking. You can also change the port mapping to e.g. ``5433:5432`` if you already have a postgres database running on your host machine.
+
+
+Make sure postgres represents datetimes in UTC timezone
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(Otherwise, pandas can get confused with daylight saving time.)
+
+Luckily, many web hosters already have ``timezone = 'UTC'`` set correctly by default,
+but local postgres installations often use ``timezone = 'localtime'``.
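Why this matters: in a timezone with daylight saving time, a single wall-clock time can correspond to two different instants. A quick standard-library sketch (the dates refer to the 2023 DST change in Europe/Amsterdam):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# On 2023-10-29, clocks in Europe/Amsterdam went back from 03:00 to 02:00,
# so the wall-clock time 02:30 happened twice:
ambiguous = datetime(2023, 10, 29, 2, 30, tzinfo=ZoneInfo("Europe/Amsterdam"))
first = ambiguous.replace(fold=0).utcoffset()   # earlier occurrence (summer time)
second = ambiguous.replace(fold=1).utcoffset()  # later occurrence (winter time)
assert first == timedelta(hours=2) and second == timedelta(hours=1)
# Stored in UTC, each instant is unambiguous - which is what pandas relies on.
```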
+
+In any case, check both your local installation and the server, like this:
+
+Find the ``postgres.conf`` file. Mine is at ``/etc/postgresql/9.6/main/postgresql.conf``.
+You can also type ``SHOW config_file;`` in a postgres console session (as superuser) to find the config file.
+
+Find the ``timezone`` setting and set it to 'UTC'.
+
+Then restart the postgres server.
+
+.. tabs::
+
+   .. tab:: Linux
+
+      .. code-block:: bash
+
+         $ sudo service postgresql restart
+
+   .. tab:: macOS
+
+      .. code-block:: bash
+
+         $ pg_ctl -D /usr/local/var/postgres -l logfile restart
+
+.. note:: If you are using Docker to run postgres, the ``timezone`` setting is already set to ``UTC`` by default.
+
+
+Create "flexmeasures" and "flexmeasures_test" databases and users
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+From the terminal:
+
+Open a console (on Windows: use your Windows key and type ``cmd``\ ).
+Proceed to create a database as the postgres superuser (using your postgres user password):
+
+.. code-block:: bash
+
+   $ sudo -i -u postgres
+   $ createdb -U postgres flexmeasures
+   $ createdb -U postgres flexmeasures_test
+   $ createuser --pwprompt -U postgres flexmeasures      # enter your password
+   $ createuser --pwprompt -U postgres flexmeasures_test  # enter "flexmeasures_test" as password
+   $ exit
+
+.. note:: In case you encounter "sudo: unknown user: postgres", you need to create a "postgres" OS user with sudo rights first - on macOS, this is best done via System Preferences -> Users & Groups.
+
+
+Or, from within Postgres console:
+
+.. code-block:: sql
+
+   CREATE USER flexmeasures WITH PASSWORD 'this-is-your-secret-choice';
+   CREATE DATABASE flexmeasures WITH OWNER = flexmeasures;
+   CREATE USER flexmeasures_test WITH PASSWORD 'flexmeasures_test';
+   CREATE DATABASE flexmeasures_test WITH OWNER = flexmeasures_test;
+
+
+Finally, test if you can log in as the flexmeasures user:
+
+.. code-block:: bash
+
+   $ psql -U flexmeasures --password -h 127.0.0.1 -d flexmeasures
+
+.. code-block:: sql
+
+   \q
+
+
+Add Postgres Extensions to your database(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To find the nearest sensors, FlexMeasures needs some extra Postgres support.
+Add the following extensions while logged in as the postgres superuser:
+
+.. code-block:: bash
+
+   $ sudo -u postgres psql
+
+.. code-block:: sql
+
+   \connect flexmeasures
+   CREATE EXTENSION cube;
+   CREATE EXTENSION earthdistance;
+
+.. note:: The lines above should be run separately.
+
+
+If you have it, connect to the ``flexmeasures_test`` database and repeat creating these extensions there. Then ``exit``.
+
+
+Configure FlexMeasures app for that database
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Write:
+
+.. code-block:: python
+
+   SQLALCHEMY_DATABASE_URI = "postgresql://flexmeasures:<password>@127.0.0.1/flexmeasures"
+
+
+into the config file you are using, e.g. ``~/.flexmeasures.cfg``.
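A small caveat, not specific to FlexMeasures: if your database password contains characters that are special in URIs, they must be URL-encoded in the connection string. A sketch (the password shown is made up):

```python
from urllib.parse import quote_plus

# Hypothetical password with characters that are special in URIs:
password = "p@ss:w/rd"
uri = f"postgresql://flexmeasures:{quote_plus(password)}@127.0.0.1/flexmeasures"
print(uri)
# postgresql://flexmeasures:p%40ss%3Aw%2Frd@127.0.0.1/flexmeasures
```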
+
+
+Get structure (and some data) into place
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You need data to enjoy the benefits of FlexMeasures or to develop features for it. This section describes some ways to get started.
+
+
+Import from another database
+""""""""""""""""""""""""""""""
+
+Here is a short recipe to import data from a FlexMeasures database (e.g. a demo database) into your local system.
+
+On the to-be-exported database:
+
+.. code-block:: bash
+
+   $ flexmeasures db-ops dump
+
+
+.. note:: Only the data gets dumped here.
+
+Then, we create the structure in our database anew, based on the data model given by the local codebase:
+
+.. code-block:: bash
+
+   $ flexmeasures db-ops reset
+
+
+Then we import the data dump we made earlier:
+
+.. code-block:: bash
+
+   $ flexmeasures db-ops restore <DATABASE DUMP FILENAME>
+
+
+A potential ``alembic_version`` error should not prevent other data tables from being restored.
+You can also choose to import a complete db dump into a freshly created database, of course.
+
+.. note:: To make sure passwords will be decrypted correctly when you authenticate, set the same SECURITY_PASSWORD_SALT value in your config as the one that was in use when the dumped passwords were encrypted! 
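To illustrate why the salt must match, here is a much-simplified sketch of salted hashing. Flask-Security's actual scheme is more involved; this only shows the principle:

```python
import hashlib

def salted_hash(password: str, salt: str) -> str:
    # Simplified for illustration - Flask-Security's actual scheme differs
    return hashlib.sha256((salt + password).encode()).hexdigest()

stored = salted_hash("secret", "salt-used-at-dump-time")
# With a different SECURITY_PASSWORD_SALT, the same password no longer matches:
assert salted_hash("secret", "some-other-salt") != stored
assert salted_hash("secret", "salt-used-at-dump-time") == stored
```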
+
+Create data manually
+"""""""""""""""""""""""
+
+First, you can get the database structure with:
+
+.. code-block:: bash
+
+   $ flexmeasures db upgrade
+
+
+.. note:: If you develop code (and might want to make changes to the data model), you should also check out the maintenance section about database migrations.
+
+You can create users with the ``add user`` command. Check it out:
+
+.. code-block:: bash
+
+   $ flexmeasures add account --help
+   $ flexmeasures add user --help
+
+
+You can create some pre-determined asset types and data sources with this command:
+
+.. code-block:: bash
+
+   $ flexmeasures add initial-structure
+
+You can also create assets in the FlexMeasures UI.
+
+On the command line, you can add many things. Check what data you can add yourself:
+
+.. code-block:: bash
+
+   $ flexmeasures add --help
+
+
+For instance, you can create forecasts for your existing metered data with this command:
+
+.. code-block:: bash
+
+   $ flexmeasures add forecasts --help
+
+
+Check out its ``--help`` content to learn more. You can set which assets and which time window you want to forecast. Of course, making forecasts takes a while for a larger dataset.
+You can also simply queue a job with this command (and run a worker to process the :ref:`redis-queue`).
+
+Just to note, there are also commands to get rid of data. Check:
+
+.. code-block:: bash
+
+   $ flexmeasures delete --help
+
+Check out the :ref:`cli` documentation for more details.
+
+
+
+Visualize the data model
+--------------------------
+
+You can visualise the data model like this:
+
+.. code-block:: bash
+
+   $ make show-data-model
+
+
+This will generate a picture based on the model code.
+You can also generate a picture based on the actual database; see inside the Makefile.
+
+.. note:: If you encounter "error: externally-managed-environment" when running ``make test`` inside a venv, try ``pip cache purge`` or use pipx.
+
+Maintenance
+----------------
+
+Maintenance is supported with the alembic tool. It reacts automatically
+to almost all changes in the SQLAlchemy code. With alembic, multiple databases,
+such as development, staging and production databases can be kept in sync.
+
+
+Make first migration
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Run these commands from the repository root directory (read below comments first):
+
+.. code-block:: bash
+
+   $ flexmeasures db init
+   $ flexmeasures db migrate
+   $ flexmeasures db upgrade
+
+
+The first command (\ ``flexmeasures db init``\ ) is only needed once, as it initialises the alembic migration tool.
+The second command generates a migration script from your current db model, and the third actually creates the db structure.
+
+With every migration, you get a new migration step in ``migrations/versions``. Be sure to add that to ``git``\ ,
+as future calls to ``flexmeasures db upgrade`` will need those steps, and they might happen on another computer.
+
+Hint: You can edit these migration steps, if you want.
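For orientation, a migration step in ``migrations/versions`` has roughly this shape (an illustrative sketch: the revision ids and operations here are made up, and in a real file ``op`` and ``sa`` are provided by Alembic and SQLAlchemy):

```python
"""Please explain what you did, it helps for later."""

# revision identifiers, used by Alembic (hypothetical ids):
revision = "abc123def456"
down_revision = "fedcba654321"  # the previous step in the chain


def upgrade():
    # e.g. op.add_column("sensor", sa.Column("unit", sa.String(32)))
    pass


def downgrade():
    # e.g. op.drop_column("sensor", "unit")  # undo what upgrade() did
    pass
```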
+
+Make another migration
+^^^^^^^^^^^^^^^^^^^^^^^
+
+To be clear: the ``db init`` command is needed only at the beginning. Usually (i.e. whenever your model changed), you do:
+
+.. code-block:: bash
+
+   $ flexmeasures db migrate --message "Please explain what you did, it helps for later"
+   $ flexmeasures db upgrade
+
+
+Get database structure updated
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The goal is that on any other computer, you can always execute
+
+.. code-block:: bash
+
+   $ flexmeasures db upgrade
+
+
+to have the database structure up-to-date with all migrations.
+
+Working with the migration history
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The history of migrations is at your fingertips:
+
+.. code-block:: bash
+
+   $ flexmeasures db current
+   $ flexmeasures db history
+
+
+You can move back and forth through the history:
+
+.. code-block:: bash
+
+   $ flexmeasures db downgrade
+   $ flexmeasures db upgrade
+
+
+Both of these accept a specific revision id parameter, as well.
+
+Check out database status
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Log into the database:
+
+.. code-block:: bash
+
+   $ psql -U flexmeasures --password -h 127.0.0.1 -d flexmeasures
+
+
+with the password from ``flexmeasures/development_config.py``. Check which tables are there:
+
+.. code-block:: sql
+
+   \dt
+
+
+To log out:
+
+.. code-block:: sql
+
+   \q
+
+
+Transaction management
+-----------------------
+
+It is really useful (and therefore an industry standard) to bundle certain database actions within a transaction. Transactions are atomic - either all the actions in them run, or the transaction gets rolled back. This keeps the database in a sane state and really helps to set expectations during debugging.
+
+Please see the package ``flexmeasures.data.transactional`` for details on how a FlexMeasures developer should make use of this concept.
+If you are writing a script or a view, you will find there the necessary structural help to bundle your work in a transaction.
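The atomicity idea can be sketched in a few lines. This toy helper is for illustration only; it is not the actual ``flexmeasures.data.transactional`` code:

```python
# A sketch of the transaction pattern (not the actual FlexMeasures helper):
class Session:
    """Stand-in for a database session."""
    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def commit(self):
        self.committed = True

    def rollback(self):
        self.rolled_back = True


def run_in_transaction(session, work):
    """Run `work`; commit if it succeeds, roll everything back if it raises."""
    try:
        work()
    except Exception:
        session.rollback()
        raise
    session.commit()


session = Session()
try:
    run_in_transaction(session, lambda: 1 / 0)  # the "work" fails halfway
except ZeroDivisionError:
    pass
assert session.rolled_back and not session.committed
```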

+ 103 - 0
documentation/host/deployment.rst

@@ -0,0 +1,103 @@
+.. _deployment:
+
+How to deploy FlexMeasures
+===========================
+
+Here you can learn how to get FlexMeasures onto a server.
+
+.. note:: FlexMeasures can be deployed via Docker, where the solver is already installed; that is also what you'd use on cloud infrastructure like Kubernetes. Read more at :ref:`docker-image`. You need other components (e.g. postgres and redis) which are not handled here. See :ref:`docker-compose` for inspiration.
+
+
+
+WSGI configuration
+------------------
+
+On your own computer, ``flexmeasures run`` is a nice way to start FlexMeasures. On a production web server, you want it done the :abbr:`WSGI (Web Server Gateway Interface)` way. 
+
+Here, you'd want to hand FlexMeasures' ``app`` object to a WSGI process, as your platform of choice describes.
+Often, that requires a WSGI script. Below is a minimal example. 
+
+
+.. code-block:: python
+   
+   import sys
+
+   # use this if you run from source, not needed if you pip-installed FlexMeasures
+   project_home = u'/path/to/your/code/flexmeasures'
+   if project_home not in sys.path:
+      sys.path = [project_home] + sys.path
+   
+   # create flask app - the name "application" has to be passed to the WSGI server
+   from flexmeasures.app import create as create_app
+   application = create_app()
+
+The web server is told about the WSGI script, but also about the object that represents the application.
+For instance, if this script is called ``wsgi.py``, then the relevant argument to the gunicorn server is ``wsgi:application``.
+
+A more nuanced one from our practice is this:
+
+.. code-block:: python
+
+   # This file contains the WSGI configuration required to serve up your
+   # web application.
+   # It works by setting the variable 'application' to a WSGI handler of some description.
+   # The crucial part are the last two lines. We add some ideas for possible other logic.
+
+   import os
+   import sys
+   project_home = u'/path/to/your/code/flexmeasures'
+   # use this if you want to load your own ``.env`` file.
+   from dotenv import load_dotenv
+   load_dotenv(os.path.join(project_home, '.env'))
+   # use this if you run from source
+   if project_home not in sys.path:
+      sys.path = [project_home] + sys.path
+   # adapt PATH to find our LP solver if it is installed from source
+   os.environ["PATH"] = os.environ.get("PATH") + ":/home/seita/Cbc-2.9/bin"
+
+   # create flask app - the name "application" has to be passed to the WSGI server
+   from flexmeasures.app import create as create_app
+   application = create_app()
+
+
+Keep in mind that FlexMeasures is based on `Flask <https://flask.palletsprojects.com/>`_, so almost all knowledge on the web on how to deploy a Flask app also helps with deploying FlexMeasures. 
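In case the ``application`` object feels opaque: a WSGI application is simply a callable with a fixed signature, which the server invokes once per request. A toy sketch of the protocol (not FlexMeasures code; the real ``application`` is the Flask app returned by ``create_app()``):

```python
# What a WSGI server expects: "application" is a callable taking the request
# environment and a start_response callback, returning an iterable of bytes.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the WSGI entry point"]

# Simulate one request, roughly the way gunicorn would:
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(application({}, start_response))
assert captured["status"] == "200 OK"
assert body.startswith(b"Hello")
```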
+
+
+.. _installing-a-solver:
+
+Install the linear solver on the server
+---------------------------------------
+
+To compute schedules, FlexMeasures uses the `HiGHS <https://highs.dev/>`_ mixed integer linear optimization solver (the FlexMeasures default) or `Cbc <https://github.com/coin-or/Cbc>`_.
+Solvers are used through `Pyomo <http://www.pyomo.org>`_\ , so in principle supporting a `different solver <https://pyomo.readthedocs.io/en/stable/solving_pyomo_models.html#supported-solvers>`_ would be possible.
+
+You tell FlexMeasures with the config setting :ref:`solver-config` which solver to use.
+
+However, the solver also needs to be installed - in addition to FlexMeasures (the Docker image already has it). Here is advice on how to install the two solvers we test internally:
+
+
+.. note:: We default to HiGHS, as it seems more powerful.
+
+
+HiGHS can be installed using pip:
+
+.. code-block:: bash
+
+   $ pip install highspy
+
+More information on `the HiGHS website <https://highs.dev/>`_.
+
+Cbc needs to be present on the server where FlexMeasures runs, under the ``cbc`` command.
+
+You can install it on Debian like this:
+
+.. code-block:: bash
+
+   $ apt-get install coinor-cbc
+
+(also available in different popular package managers).
+
+More information is on `the CBC website <https://projects.coin-or.org/Cbc>`_.
+
+If you can't use the package manager on your host, the solver has to be installed from source.
+We provide an example script in ``ci/install-cbc-from-source.sh`` to do that, where you can also
+pass a directory for the installation.
+
+In case you want to install a later version, adapt the version in the script. 

+ 83 - 0
documentation/host/docker.rst

@@ -0,0 +1,83 @@
+.. _docker-image:
+
+Running via Docker
+======================
+
+FlexMeasures can be run via `its docker image <https://hub.docker.com/repository/docker/lfenergy/flexmeasures>`_.
+
+`Docker <https://docs.docker.com/get-docker/>`_ is great to save developers from installation trouble, but also for running FlexMeasures inside modern cloud environments in a scalable manner.
+
+
+.. note:: We also support running all needed parts of a FlexMeasures web service setup via `docker-compose <https://docs.docker.com/compose/>`_, which is helpful for developers and might inform hosting efforts. See :ref:`docker-compose`. 
+
+
+Getting the `flexmeasures` image
+-----------------------------------
+
+You can use versions we host at Docker Hub, e.g.:
+
+.. code-block:: bash
+
+    $ docker pull lfenergy/flexmeasures:latest
+
+
+You can also build the FlexMeasures image yourself, from source:
+
+.. code-block:: bash
+
+    $ docker build -t flexmeasures/my-version . 
+
+The tag is your choice.
+
+
+Running
+-----------------------------------
+
+Running the image (as a container) might work like this (remember to get the image first, see above):
+
+.. code-block:: bash
+
+    $ docker run --env SQLALCHEMY_DATABASE_URI=postgresql://user:pass@localhost:5432/dbname --env SECRET_KEY=blabla --env FLEXMEASURES_ENV=development -p 5000:5000 -d --net=host lfenergy/flexmeasures
+
+.. note:: Don't know what your image is called (its "tag")? We used ``lfenergy/flexmeasures`` here, as that should be the name when pulling it from Docker Hub. You can run ``docker images`` to see which images you have.
+
+.. include:: ../notes/macOS-docker-port-note.rst
+
+The two minimal environment variables to run the container successfully are ``SQLALCHEMY_DATABASE_URI`` and the ``SECRET_KEY``, see :ref:`configuration`. ``FLEXMEASURES_ENV=development`` is needed if you do not have an SSL certificate set up (the default mode is ``production``, and in that mode FlexMeasures requires https for security reasons). If you see too much output, you can also set ``LOGGING_LEVEL=INFO``.
+
+In this example, we connect to a postgres database running on our local computer, so we use the host network. In the docker-compose section below, we use a Docker container for the database, as well.
+
+Browsing ``http://localhost:5000`` should work now and ask you to log in.
+Of course, you might not have created a user. You can use ``docker exec -it <flexmeasures-container-name> bash`` to go inside the container and use the :ref:`cli` to create everything you need.
+
+
+.. _docker_configuration:
+
+Configuration and customization
+-----------------------------------
+
+Using :ref:`configuration` by file is usually what you want to do. It's easier than adding environment variables to ``docker run``. Also, not all settings can be given via environment variables. A good example is the :ref:`mapbox_access_token`, so you can load maps on the dashboard.
+
+To load a configuration file into the container when starting up, we make use of the `instance folder <https://flask.palletsprojects.com/en/2.1.x/config/#instance-folders>`_. You can put a configuration file called ``flexmeasures.cfg`` into a local folder called ``flexmeasures-instance`` and then mount that folder into the container, like this:
+
+.. code-block:: bash
+
+    $ docker run -v $(pwd)/flexmeasures-instance:/app/instance:ro -d --net=host lfenergy/flexmeasures
+
+.. warning:: The location of the instance folder depends on how we serve FlexMeasures. The above works with gunicorn. See the compose file for an alternative (for the FlexMeasures CLI), and you can also read the above link about the instance folder.
+
+Installing plugins within the container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+At this point, the FlexMeasures container is up and running without including any plugins you might need to use. To integrate a plugin into the container, follow these steps:
+
+1. Copy the plugin into your active FlexMeasures container by executing the following command:
+
+.. code-block:: bash
+
+    docker cp </path/to/plugin-directory> <flexmeasures-container-name>:/app
+
+
+2. Once the plugin is successfully copied, proceed to install it, for instance using pip: ``docker exec -it <flexmeasures-container-name> bash -c "pip install <path/to-package>"``. If you just need to install its requirements, run ``docker exec -it <flexmeasures-container-name> bash -c "pip install -r <path/to-package>/requirements.txt"`` instead.
+3. After completing the installation, create a directory named ``instance`` in the container working directory and transfer the FlexMeasures configuration file, ``flexmeasures.cfg``, into it using the ``docker cp`` command. Additionally, ensure that you incorporate your plugin details into the ``flexmeasures.cfg`` file as outlined in the :ref:`plugin-config` section.
+4. Once these steps are finished, halt the container using the ``docker stop <flexmeasures-container-name>`` command, followed by restarting it using ``docker start <flexmeasures-container-name>``. This ensures that the changes take effect. Now, you can make use of the installed plugins within the FlexMeasures Docker container.

+ 64 - 0
documentation/host/error-monitoring.rst

@@ -0,0 +1,64 @@
+.. _host_error_monitoring:
+
+Error monitoring
+=================
+
+When you run a FlexMeasures server, you want to stay on top of things going wrong. We added two ways of doing that:
+
+- You can connect to Sentry, so that all errors will be sent to your Sentry account. Add the token you got from Sentry in the config setting :ref:`sentry_access_token` and you're up and running! 
+- Another source of crucial errors are things that did not even happen! For instance, a (bot) user who is supposed to send data regularly, fails to connect with FlexMeasures. Or, a task to import prices from a day-ahead market, which you depend on later for scheduling, fails silently.
+
+
+Let's look at how to monitor for things not happening in more detail:
+
+
+Monitoring the time users were last seen
+-----------------------------------------
+
+The CLI task ``flexmeasures monitor last-seen`` lets you be alerted if a user has contacted your FlexMeasures instance longer ago than you expect. This is most useful for bot users (a.k.a. scripts).
+
+Here is an example for illustration:
+
+.. code-block:: bash
+
+    $ flexmeasures monitor last-seen --account-role SubscriberToServiceXYZ --user-role bot --maximum-minutes-since-last-seen 100
+
+As you see, users are filtered by roles. You might need to add roles before this works as you want.
+
+.. todo:: Adding roles and assigning them to users and/or accounts is not supported by the CLI or UI yet (besides ``flexmeasures add account-role``). This is `work in progress <https://github.com/FlexMeasures/flexmeasures/projects/18>`_. Right now, it requires you to add roles on the database level. 
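The underlying check is conceptually simple. Here is a sketch with a hypothetical helper (not the actual FlexMeasures implementation):

```python
from datetime import datetime, timedelta, timezone

def is_overdue(last_seen: datetime, max_minutes: int, now: datetime) -> bool:
    """Hypothetical helper: has the user been silent longer than allowed?"""
    return now - last_seen > timedelta(minutes=max_minutes)

now = datetime(2023, 1, 1, 12, 0, tzinfo=timezone.utc)
assert is_overdue(now - timedelta(minutes=120), 100, now)      # would alert
assert not is_overdue(now - timedelta(minutes=30), 100, now)   # all fine
```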
+
+
+Monitoring task runs
+---------------------
+
+The CLI task ``flexmeasures monitor latest-run`` lets you be alerted when tasks have not successfully run at least so-and-so many minutes ago.
+The alerts will come in via Sentry, but you can also send them to email addresses with the config setting :ref:`monitoring_mail_recipients`.
+
+For illustration, here is one example of how we monitor the latest run times of tasks on a server ― the below is run in a cron script every hour and checks if every listed task ran 60, 6 or 1440 minutes ago, respectively:
+
+.. code-block:: bash
+
+    $ flexmeasures monitor latest-run --task get_weather_forecasts 60 --task get_recent_meter_data 6  --task import_epex_prices 1440
+
+The first task (get_weather_forecasts) is actually supported within FlexMeasures, while the other two sit in plugins we wrote.
+
+This task status monitoring is enabled by decorating the functions behind these tasks with:
+
+.. code-block:: python
+
+    @task_with_status_report
+    def my_function():
+        ...
+
+Then, FlexMeasures will log if this task ran, and if it succeeded or failed. The result is in the table ``latest_task_runs``, and that's where the ``flexmeasures monitor latest-run`` will look.
+
+.. note:: The decorator should be placed right before the function (after all other decorators).
+
+By default, the function name is used as the task name. If the tasks accumulate (e.g. by using multiple plugins that each define a task or two), it is useful to come up with more dedicated names. You can add a custom name as argument to the decorator:
+
+.. code-block:: python
+
+    @task_with_status_report("pluginA_myFunction")
+    def my_function():
+        ...
+
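Such a decorator, supporting both the bare and the named usage shown above, can be sketched like this (illustrative only; the real FlexMeasures decorator records runs in the ``latest_task_runs`` table instead of a dict):

```python
import functools

task_runs = {}  # stand-in for the latest_task_runs table

def task_with_status_report(arg):
    """Sketch: record whether a task ran and if it succeeded or failed."""
    def make_wrapper(func, task_name):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                result = func(*args, **kwargs)
                task_runs[task_name] = "succeeded"
                return result
            except Exception:
                task_runs[task_name] = "failed"
                raise
        return wrapper

    if callable(arg):  # bare usage: @task_with_status_report
        return make_wrapper(arg, arg.__name__)
    return lambda func: make_wrapper(func, arg)  # custom name given

@task_with_status_report("pluginA_myFunction")
def my_function():
    return 42

my_function()
assert task_runs["pluginA_myFunction"] == "succeeded"
```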

+ 369 - 0
documentation/host/installation.rst

@@ -0,0 +1,369 @@
+.. _installation:
+
+Installation & First steps
+=================================
+
+
+This section walks you through the basics of installing FlexMeasures on a computer and running it continuously.
+
+We'll cover the most crucial settings you need to run FlexMeasures step-by-step, both for `pip`-based installation, as well as running via Docker.
+In addition, we'll explain some basics that you'll need:
+
+.. contents:: Table of contents
+    :local:
+    :depth: 1
+
+
+Installing and running FlexMeasures 
+------------------------------------
+
+In a nutshell, what does installation and running look like?
+Well, there are two major ways:
+
+.. tabs::
+
+    .. tab:: via `pip`
+
+        .. code-block:: bash
+
+           $ pip install flexmeasures
+           $ flexmeasures run  # this won't work just yet
+      
+        .. note:: Installation might cause some issues with latest Python versions and Windows, for some pip-dependencies (e.g. ``rq-win``). You might overcome this with a little research, e.g. by `installing from the repo <https://github.com/michaelbrooks/rq-win#installation-and-use>`_.
+
+
+    .. tab:: via `docker`
+      
+        .. code-block:: bash
+    
+           $ docker pull lfenergy/flexmeasures
+           $ docker run -d lfenergy/flexmeasures  # this won't work just yet
+
+        The ``-d`` option keeps FlexMeasures running in the background ("detached"), as it should.
+
+        .. note::  For more information, see :ref:`docker-image` and :ref:`docker-compose`.
+      
+However, FlexMeasures is not a simple tool - it's a web-app, with bells and whistles, like user access and databases.
+We'll need to add a few minimal preparations before running will work, see below. 
+
+
+Make a secret key for sessions and password salts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Set a secret key, which is used to sign user sessions and re-salt their passwords.
+The quickest way is with an environment variable, like this:
+
+.. tabs::
+
+    .. tab:: via `pip`
+
+        .. code-block:: bash
+
+            $ export SECRET_KEY=something-secret
+
+        (on Windows, use ``set`` instead of ``export``\ )
+    
+    .. tab:: via `docker`
+
+        Add the `SECRET_KEY` as an environment variable:
+
+        .. code-block:: bash
+        
+            $ docker run -d --env SECRET_KEY=something-secret lfenergy/flexmeasures
+
+This suffices for a quick start. For an actually secure secret, here is a Pythonic way to generate a good secret key:
+
+.. code-block:: bash
+
+   $ python -c "import secrets; print(secrets.token_urlsafe())"
+
+
+
+Choose the environment
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Set an environment variable to indicate in which environment you are operating (one out of `development|testing|documentation|production`).
+We'll go with ``development`` here:
+
+.. tabs::
+
+    .. tab:: via `pip`
+
+         .. code-block:: bash
+
+            $ export FLEXMEASURES_ENV=development
+
+         (on Windows, use ``set`` instead of ``export``\ )
+
+    .. tab:: via `docker`
+         
+         .. code-block:: bash
+            
+            $ docker run -d --env FLEXMEASURES_ENV=development lfenergy/flexmeasures
+         
+
+The default environment setting is ``production``\ , which will probably not work well on your localhost, as FlexMeasures then expects SSL-encrypted communication. 
+
+
+Tell FlexMeasures where the time series database is
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+* Make sure you have a Postgres (Version 9+) database for FlexMeasures to use. See :ref:`host-data` (section "Getting ready to use") for deeper instructions on this.
+* 
+  Tell ``flexmeasures`` about it:
+
+  .. tabs::
+
+    .. tab:: via `pip`
+
+      .. code-block:: bash
+
+        $ export SQLALCHEMY_DATABASE_URI="postgresql://<user>:<password>@<host-address>[:<port>]/<db-name>"
+
+      (on Windows, use ``set`` instead of ``export``\ )
+      
+    .. tab:: via `docker`
+
+      .. code-block:: bash
+          
+        $ docker run -d --env SQLALCHEMY_DATABASE_URI=postgresql://<user>:<password>@<host-address>:<port>/<db-name> lfenergy/flexmeasures
+      
+  If you install this on localhost, ``host-address`` is ``127.0.0.1`` and the port can be left out.
+
+* 
+  On a fresh database, you can create the data structure for FlexMeasures like this:
+
+  .. tabs::
+
+   .. tab:: via `pip`
+   
+     .. code-block:: bash
+
+       $ flexmeasures db upgrade
+
+   .. tab:: via `docker`
+
+     Go into the container to create the structure:
+
+     .. code-block:: bash
+
+       $ docker exec -it <your-container-id> bash -c "flexmeasures db upgrade"
+
+
+Use a config file
+^^^^^^^^^^^^^^^^^^^
+
+If you want to consistently use FlexMeasures, we recommend you add the settings we introduced above into a FlexMeasures config file.
+See :ref:`configuration` for a full explanation where that file can live and all the settings.
+
+So far, our config file would look like this:
+
+.. code-block:: python
+
+   SECRET_KEY = "something-secret"
+   FLEXMEASURES_ENV = "development"
+   SQLALCHEMY_DATABASE_URI = "postgresql://<user>:<password>@<host-address>[:<port>]/<db>"
+
+  
+.. tabs::
+
+    .. tab:: via `pip`
+ 
+      Place the file at ``~/.flexmeasures.cfg``. FlexMeasures will look for it there.
+
+    .. tab:: via `docker`
+
+      Save the file as ``flexmeasures-instance/flexmeasures.cfg`` and load it into the container like this (more at :ref:`docker_configuration`):
+
+      .. code-block:: bash
+
+         $ docker run -v $(pwd)/flexmeasures-instance:/app/instance:ro lfenergy/flexmeasures
+
+
+
+Adding data
+---------------
+
+Let's add some data.
+
+From here on, we will not differentiate between `pip` and `docker` installation. When using docker, here are two ways to run these commands:
+
+   .. code-block:: bash
+
+      $ docker exec -it <your-container-name> bash -c "<command>"
+      $ docker exec -it <your-container-name> bash  # then issue the data-generating commands in the container
+
+
+Add an account & user
+^^^^^^^^^^^^^^^^^^^^^^^
+
+FlexMeasures is a tenant-based platform ― multiple clients can enjoy its services on one server. Let's create a tenant account first: 
+
+.. code-block:: bash
+
+   $ flexmeasures add account --name  "Some company"
+
+This command will tell us the ID of this account. Let's assume it was ``2``.
+
+FlexMeasures is also a web-based platform, so we need to create a user to authenticate:
+
+.. code-block:: bash
+
+   $ flexmeasures add user --username <your-username> --email <your-email-address> --account-id 2 --roles=admin
+
+
+* This will ask you to set a password for the user.
+* Giving the first user the ``admin`` role is probably what you want.
+
+
+Add initial structure
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Populate the database with some standard asset types, user roles etc.: 
+
+.. code-block:: bash
+
+   $ flexmeasures add initial-structure
+
+
+Add your first asset
+^^^^^^^^^^^^^^^^^^^^^^^
+
+There are three ways to add assets:
+
+First, you can use the ``flexmeasures`` :ref:`cli`:
+
+.. code-block:: bash
+
+    $ flexmeasures add asset --name "my basement battery pack" --asset-type-id 3 --latitude 65 --longitude 123.76 --account-id 2
+
+For the asset type ID, I consulted ``flexmeasures show asset-types``.
+
+For the account ID, I looked at the output of ``flexmeasures add account`` (the command we issued above) ― I could also have consulted ``flexmeasures show accounts``.
+
+The second way to add an asset is the UI ― head over to ``http://localhost:5000/assets`` (after you have started FlexMeasures, see "Seeing it work and next steps" further down) and add a new asset there in a web form.
+
+Finally, you can also use the `POST /api/v3_0/assets <../api/v3_0.html#post--api-v3_0-assets>`_ endpoint in the FlexMeasures API to create an asset.
+
+
+Add your first sensor
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Usually, we are here because we want to measure something with respect to our assets. Each asset can have sensors for that, so let's add a power sensor to our new battery asset, using the ``flexmeasures`` :ref:`cli`:
+
+.. code-block:: bash
+
+   $ flexmeasures add sensor --name power --unit MW --event-resolution 5 --timezone Europe/Amsterdam --asset-id 1 --attributes '{"capacity_in_mw": 7}'
+
+I got the asset ID from the previous CLI command's output; alternatively, I could consult ``flexmeasures show account --account-id <my-account-id>``.
+
+.. note:: The event resolution is given in minutes. Capacity is something unique to power sensors, so it is added as an attribute.
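The ``--attributes`` option expects a JSON string; for instance, the value passed above parses like this:

```python
import json

# The attributes passed on the command line above, parsed as JSON
attributes = json.loads('{"capacity_in_mw": 7}')
print(attributes["capacity_in_mw"])
```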
+
+
+
+Seeing it work and next steps
+--------------------------------------
+
+It's finally time to start running FlexMeasures. Here is the most direct way to see it working:
+
+.. tabs::
+
+    .. tab:: via `pip`
+
+        .. code-block:: bash
+
+           $ flexmeasures run
+
+    .. tab:: via `docker`
+      
+        .. code-block:: bash
+    
+           # assuming you loaded flexmeasures.cfg (see above)
+           $ docker run lfenergy/flexmeasures
+        
+        .. code-block:: bash
+
+           # or everything on the terminal 
+           $ docker run -d --env FLEXMEASURES_ENV=development --env SECRET_KEY=something-secret --env SQLALCHEMY_DATABASE_URI=postgresql://<user>:<password>@<host-address>:<port>/<db-name> lfenergy/flexmeasures 
+
+
+This might print some warnings; see the next section, where we go into more detail. For instance, when you see the dashboard, the map will not work. For that, you'll need to get your :ref:`mapbox_access_token` and add it to your config file.
+
+You can visit ``http://localhost:5000`` now to see if the app's UI works. You should be asked to log in (here you can use the admin user created above) and then see the dashboard.
+
+
+We achieved the main goal of this page, to get FlexMeasures to run.
+Below are some additional steps you might consider.
+
+
+Add time series data (beliefs)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are three ways to add data:
+
+First, you can load in data from a file (CSV or Excel) via the ``flexmeasures`` :ref:`cli`:
+
+.. code-block:: bash
+   
+   $ flexmeasures add beliefs --file my-data.csv --skiprows 2 --delimiter ";" --source OurLegacyDatabase --sensor-id 1
+
+This assumes you have a file ``my-data.csv`` with measurements, which was exported from some legacy database, and that the data is about our sensor with ID 1. This command has many options, so do use its ``--help`` function.
+For instance, to add data as forecasts, use the ``--beliefcol`` parameter to say precisely when these forecasts were made, or add ``--horizon`` for rolling forecasts if they all share the same horizon.
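For illustration, such a file could look like this (made-up values; the first two rows are skipped via ``--skiprows 2``, and columns are separated by the ``;`` delimiter):

```
# exported from OurLegacyDatabase
datetime;value
2020-03-08T00:00:00+01:00;0.05
2020-03-08T00:05:00+01:00;0.06
2020-03-08T00:10:00+01:00;0.04
```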
+
+Second, you can use the `POST /api/v3_0/sensors/data <../api/v3_0.html#post--api-v3_0-sensors-data>`_ endpoint in the FlexMeasures API to send meter data.
+
+You can also use the API to send forecast data. Similar to the ``add beliefs`` commands, you would use here the fields ``prior`` (to denote time of knowledge of data) or ``horizon`` (for rolling forecast data with equal horizon). Consult the documentation at :ref:`posting_sensor_data`.
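To make this concrete, here is a sketch of what such a request body could look like (the sensor entity address and all values are made up for illustration; field names follow the endpoint's documentation):

```python
import json

# Hypothetical body for POST /api/v3_0/sensors/data:
# three 15-minute average power values, posted as measurements.
payload = {
    "sensor": "ea1.2021-01.io.flexmeasures:fm1.1",  # made-up entity address
    "values": [0.1, 0.2, 0.3],
    "unit": "MW",
    "start": "2024-06-01T00:00:00+02:00",
    "duration": "PT45M",  # 3 values x 15 minutes
    # for forecasts, you would add e.g. "prior" or "horizon" here
}
print(json.dumps(payload, indent=2))
```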
+
+Finally, you can tell FlexMeasures to compute forecasts based on existing meter data with the ``flexmeasures add forecasts`` command; here is an example:
+
+.. code-block:: bash
+
+   $ flexmeasures add forecasts --from-date 2020-03-08 --to-date 2020-04-08 --asset-type Asset --asset my-solar-panel
+
+This obviously depends on some conditions (like the right underlying data) being met; consult :ref:`tut_forecasting_scheduling`.
+
+
+
+Set mail settings
+^^^^^^^^^^^^^^^^^
+
+For FlexMeasures to be able to send email to users (e.g. for resetting passwords), you need an email service that can do that (e.g. GMail). Set the ``MAIL_*`` settings in your configuration, see :ref:`mail-config`.
+
+.. _install-lp-solver:
+
+Install an LP solver
+^^^^^^^^^^^^^^^^^^^^
+
+For computing schedules, the FlexMeasures platform uses a linear program solver. Currently, the HiGHS and CBC solvers are supported.
+
+It's already installed in the Docker image. For a ``pip`` installation, you can simply install it like this:
+
+.. code-block:: bash
+
+   $ pip install highspy
+
+Read more on solvers (e.g. how to install a different one) at :ref:`installing-a-solver`.
+
+
+
+Install and configure Redis
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To let FlexMeasures queue forecasting and scheduling jobs, install a `Redis <https://redis.io/>`_ server (or rent one) and configure access to it within FlexMeasures' config file (see above). You can find the necessary settings in :ref:`redis-config`.
+
+Then, start workers in a console (or use some other method to keep these long-running processes going):
+
+.. code-block:: bash
+
+   $ flexmeasures jobs run-worker --queue forecasting
+   $ flexmeasures jobs run-worker --queue scheduling
+
+
+Where to go from here?
+------------------------
+
+If your data structure is good, you should think about (continually) adding measurement data. This tutorial mentioned how to add data, but :ref:`tut_posting_data` goes deeper with examples and terms & definitions.
+
+Then, you probably want to use FlexMeasures to generate forecasts and schedules! For this, read further in :ref:`tut_forecasting_scheduling`.
+
+One more consideration is to run FlexMeasures in a more professional way, as a web service. Head on to :ref:`deployment`.

+ 26 - 0
documentation/host/modes.rst

@@ -0,0 +1,26 @@
+.. _modes-dev:
+
+Modes
+============
+
+FlexMeasures can be run in specific modes (see the :ref:`modes-config` config setting).
+This is useful for certain special situations. Two are supported out of the box and we document here 
+how FlexMeasures behaves differently in these modes.
+
+Demo
+-------
+
+In this mode, the server is assumed to be used as a demonstration tool. The following adaptations therefore happen in the UI:
+
+- [UI] Logged-in users can view queues on the demo server (usually only admins can do that)
+- [UI] Demo servers often display login credentials, so visitors can try out functionality. Use the :ref:`demo-credentials-config` config setting to do this.
+
+Play
+------
+
+In this mode, the server is assumed to be used to run simulations.
+
+- [API] The ``restoreData`` endpoint is registered, enabling database resets through the API.
+- [UI] On the asset page, the ``sensors_to_show`` attribute can be used to show any sensor from any account, rather than only sensors from assets owned by the user's organization.
+
+.. note:: A former feature of play mode is now a separate config setting. To allow overwriting existing data when saving data to the database, use :ref:`overwrite-config`.

+ 101 - 0
documentation/host/queues.rst

@@ -0,0 +1,101 @@
+
+.. _redis-queue:
+
+Redis Queues
+=============================
+
+Requirements
+-------------
+
+The hard computation work (e.g. forecasting, scheduling) should happen outside of web requests (asynchronously), in job queues accessed by worker processes.
+
+This queueing relies on a Redis server, which has to be installed locally, or used on a separate host. In the latter case, configure :ref:`redis-config` details in your FlexMeasures config file.
+
+Here we assume you have access to a Redis server and configured it (see :ref:`redis-config`).
+The FlexMeasures unit tests use fakeredis to simulate this task queueing, with no configuration required.
+
+.. note:: See also :ref:`docker-compose` for usage of Redis via Docker and a more hands-on tutorial on the queues.
+
+
+Run workers
+-------------
+
+Here is how to run a single worker that handles both kinds of jobs (the ``--queue`` option accepts multiple queue names separated by ``|``; quote the value so your shell does not interpret the pipe):
+
+.. code-block:: bash
+
+   $ flexmeasures jobs run-worker --name our-only-worker --queue "forecasting|scheduling"
+
+Running multiple workers in parallel might be a great idea ― for instance, one worker per queue (in separate terminals):
+
+.. code-block:: bash
+
+   $ flexmeasures jobs run-worker --name forecaster --queue forecasting
+   $ flexmeasures jobs run-worker --name scheduler --queue scheduling
+
+You can also clear the job queues:
+
+.. code-block:: bash
+
+   $ flexmeasures jobs clear-queue --queue forecasting
+   $ flexmeasures jobs clear-queue --queue scheduling
+
+
+When the main FlexMeasures process runs (e.g. by ``flexmeasures run``\ ), the queues of forecasting and scheduling jobs can be visited at ``http://localhost:5000/tasks/forecasting`` and ``http://localhost:5000/tasks/schedules``\ , respectively (by admins).
+
+
+
+Inspect the queue and jobs
+------------------------------
+
+The first option to inspect the state of the ``forecasting`` queue should be via the formidable `RQ dashboard <https://github.com/Parallels/rq-dashboard>`_. If you have admin rights, you can access it at ``your-flexmeasures-url/rq/``\ , so for instance ``http://localhost:5000/rq/``. You can also start RQ dashboard yourself (but you need to know the redis server credentials):
+
+.. code-block:: bash
+
+   $ pip install rq-dashboard
+   $ rq-dashboard --redis-host my.ip.addr.ess --redis-password secret --redis-database 0
+
+
+RQ dashboard shows you ongoing and failed jobs, and you can see the error messages of the latter, which is very useful.
+
+Finally, you can also inspect the queue and jobs via a console (\ `see the nice RQ documentation <http://python-rq.org/docs/>`_\ ), which is more powerful. Here is an example of inspecting the finished jobs and their results:
+
+.. code-block:: python
+
+   from redis import Redis
+   from rq import Queue
+   from rq.job import Job
+   from rq.registry import FinishedJobRegistry
+
+   r = Redis("my.ip.addr.ess", port=6379, password="secret", db=2)
+   q = Queue("forecasting", connection=r)
+   finished = FinishedJobRegistry(queue=q)
+
+   finished_job_ids = finished.get_job_ids()
+   print("%d jobs finished successfully." % len(finished_job_ids))
+
+   job1 = Job.fetch(finished_job_ids[0], connection=r)
+   print("Result of job %s: %s" % (job1.id, job1.result))
+
+
+Redis queues on Windows
+---------------------------
+
+On Unix, the rq system is automatically set up as part of FlexMeasures's main setup (the ``rq`` dependency).
+
+However, rq is `not functional on Windows <http://python-rq.org/docs>`_ without the Windows Subsystem for Linux.
+
+On Windows, FlexMeasures's queueing system therefore uses an extension of Redis Queue called ``rq-win``.
+This is also an automatically installed dependency of FlexMeasures.
+
+However, the Redis server needs to be set up separately. Redis itself does not work on Windows, so it might be easiest to commission a Redis server in the cloud (e.g. on kamatera.com).
+
+If you want to install Redis on Windows itself, it can be set up on a virtual machine as follows:
+
+
+* `Install Vagrant on Windows <https://www.vagrantup.com/intro/getting-started/>`_ and `VirtualBox <https://www.virtualbox.org/>`_
+* Download the `vagrant-redis <https://raw.github.com/ServiceStack/redis-windows/master/downloads/vagrant-redis.zip>`_ vagrant configuration
+* Extract ``vagrant-redis.zip`` in any folder, e.g. in ``c:\vagrant-redis``
+* Set ``config.vm.box = "hashicorp/precise64"`` in the Vagrantfile, and remove the line with ``config.vm.box_url``
+* Run ``vagrant up`` in Command Prompt
+* In case ``vagrant up`` fails because VT-x is not available, `enable it <https://www.howali.com/2017/05/enable-disable-intel-virtualization-technology-in-bios-uefi.html>`_ in your bios `if you can <https://www.intel.com/content/www/us/en/support/articles/000005486/processors.html>`_ (more debugging tips `here <https://forums.virtualbox.org/viewtopic.php?t=92111>`_ if needed)

File diff suppressed because it is too large
+ 285 - 0
documentation/index.rst


+ 43 - 0
documentation/make.bat

@@ -0,0 +1,43 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Touch api docs to always trigger a build (otherwise updating a docstring wouldn't be enough to trigger a build)
+
+cd api
+copy * /B+ ,,/Y
+cd ..
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=.
+REM set BUILDDIR=_build
+set BUILDDIR=../flexmeasures/ui/static/documentation/
+set SPHINXPROJ=FLEXMEASURES
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+	echo.If you don't have Sphinx installed, grab it from
+	echo.http://sphinx-doc.org/
+	exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+
+:end
+popd

+ 4 - 0
documentation/notes/macOS-docker-port-note.rst

@@ -0,0 +1,4 @@
+.. note:: For newer versions of macOS, port 5000 is in use by default by Control Center. You can turn this off by going to System Preferences > Sharing and untick the "Airplay Receiver" box.
+          If you don't want to do this for some reason, you can change the host port in the ``docker run`` command to some other port.
+          For example, to set it to port 5001, change ``-p 5000:5000`` in the command to ``-p 5001:5000``.
+          If you do this, remember that you will have to go to http://localhost:5001 in your browser when you want to inspect the FlexMeasures UI.

+ 10 - 0
documentation/notes/macOS-port-note.rst

@@ -0,0 +1,10 @@
+.. note:: For newer versions of macOS, port 5000 is in use by default by Control Center.
+          You can turn this off by going to System Preferences > Sharing and untick the "Airplay Receiver" box.
+          If you don't want to do this for some reason, you can change the port for locally running FlexMeasures by setting the ``FLASK_RUN_PORT`` environment variable.
+          For example, to set it to port 5001:
+
+          .. code-block:: bash
+
+              $ export FLASK_RUN_PORT=5001  # You can also add this to your local .env
+
+          If you do this, remember that you will have to go to http://localhost:5001 in your browser when you want to inspect the FlexMeasures UI.

+ 314 - 0
documentation/plugin/customisation.rst

@@ -0,0 +1,314 @@
+.. _plugin_customization:
+
+
+Plugin Customization
+=======================
+
+
+Adding your own scheduling algorithm
+-------------------------------------
+
+FlexMeasures comes with in-built scheduling algorithms for often-used use cases. However, you can use your own algorithm, as well.
+
+The idea is that you'd still use FlexMeasures' API to post flexibility states and trigger new schedules to be computed (see :ref:`posting_flex_states`),
+but in the background your custom scheduling algorithm is being used.
+
+Let's walk through an example!
+
+First, we need to write a class (inheriting from the base ``Scheduler``) with a ``compute`` function which accepts arguments just like the in-built schedulers (their code is `here <https://github.com/FlexMeasures/flexmeasures/tree/main/flexmeasures/data/models/planning>`_).
+The following minimal example gives you an idea of some meta information you can add for labeling your data, as well as the inputs and outputs of such a scheduling function:
+
+.. code-block:: python
+
+    from datetime import datetime, timedelta
+    import pandas as pd
+    from pandas.tseries.frequencies import to_offset
+    from flexmeasures import Scheduler, Sensor
+
+
+    class DummyScheduler(Scheduler):
+
+        __author__ = "My Company"
+        __version__ = "2"
+
+        def compute(
+            self,
+            *args,
+            **kwargs
+        ):
+            """
+            Just a dummy scheduler that always plans to consume at maximum capacity.
+            (Schedulers return positive values for consumption, and negative values for production)
+            """
+            return pd.Series(
+                self.sensor.get_attribute("capacity_in_mw"),
+                index=pd.date_range(self.start, self.end, freq=self.resolution, inclusive="left"),
+            )
+    
+        def deserialize_config(self):
+            """Do not care about any flex config sent in."""
+            self.config_deserialized = True
+
+
+.. note:: It's possible to add arguments that describe the asset flexibility model and the flexibility (EMS) context in more detail.
+          For example, for storage assets we support various state-of-charge parameters. For details on flexibility model and context,
+          see :ref:`describing_flexibility` and the `[POST] /sensors/(id)/schedules/trigger <../api/v3_0.html#post--api-v3_0-sensors-(id)-schedules-trigger>`_ endpoint.
+        
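For a feel of what ``compute`` returns, here is a sketch mimicking the dummy scheduler's output for a one-hour window at 15-minute resolution, with a ``capacity_in_mw`` attribute of 7 (all inputs made up for illustration):

```python
import pandas as pd

# Mimic the dummy scheduler's output: consume at capacity (7 MW)
# for every 15-minute slot in a 1-hour window.
start = pd.Timestamp("2024-06-01T00:00:00+02:00")
end = start + pd.Timedelta(hours=1)
schedule = pd.Series(
    7,
    index=pd.date_range(start, end, freq="15min", inclusive="left"),
)
print(schedule)  # four slots, all planning to consume 7 MW
```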
+
+Finally, make your scheduler be the one that FlexMeasures will use for certain sensors:
+
+
+.. code-block:: python
+
+    from flexmeasures import Sensor
+
+    scheduler_specs = {
+        "module": "flexmeasures.data.tests.dummy_scheduler",  # or a file path, see note below
+        "class": "DummyScheduler",
+    }
+    
+    my_sensor = Sensor.query.filter(Sensor.name == "My power sensor on a flexible asset").one_or_none()
+    my_sensor.attributes["custom-scheduler"] = scheduler_specs
+
+
+From now on, all schedules (see :ref:`tut_forecasting_scheduling`) which are requested for this sensor should
+get computed by your custom function! For later lookup, the data will be linked to a new data source with the name "My Company" (taken from the scheduler's ``__author__`` attribute).
+
+.. note:: To describe the module, we used an importable module here (actually a custom scheduling function we use to test this).
+          You can also provide a full file path to the module, e.g. "/path/to/my_file.py".
+
+
+.. todo:: We're planning to use a similar approach to allow for custom forecasting algorithms, as well.
+
+
+Deploying your plugin via Docker
+----------------------------------
+
+You can extend the FlexMeasures Docker image with your plugin's logic.
+
+Imagine your plugin package (with an ``__init__.py`` file, one of the setups we discussed in :ref:`plugin_showcase`) is called ``flexmeasures_testplugin``.
+Then, this is a minimal possible Dockerfile ― containers based on this will serve FlexMeasures (see the original Dockerfile in the FlexMeasures repository) with the plugin logic, like endpoints:
+
+.. code-block:: docker
+
+    FROM lfenergy/flexmeasures
+
+    COPY flexmeasures_testplugin/ /app/flexmeasures_testplugin
+    ENV FLEXMEASURES_PLUGINS="/app/flexmeasures_testplugin"
+
+You can of course also add multiple plugins this way.
+
+If you also want to install your requirements, you could for instance add these layers:
+
+.. code-block:: docker
+
+    COPY requirements/app.in /app/requirements/flexmeasures_testplugin.txt
+    RUN pip3 install --no-cache-dir -r requirements/flexmeasures_testplugin.txt
+
+.. note:: No need to install flexmeasures here, as the Docker image we are based on already installed FlexMeasures from code. If you pip3-install your plugin here (assuming it's on Pypi), check if it recognizes that FlexMeasures installation as it should.
+
+
+
+Adding your own style sheets
+----------------------------
+
+You can style your plugin's pages in a distinct way by adding your own style sheet. This happens by overwriting FlexMeasures' ``styles`` block. Add this to your plugin's base template (see above):
+
+.. code-block:: html 
+
+    {% block styles %}
+        {{ super() }}
+        <!-- Our client styles -->
+        <link rel="stylesheet" href="{{ url_for('our_client_bp.static', filename='css/style.css')}}">
+    {% endblock %}
+
+This will find ``css/style.css`` if you add that folder and file to your Blueprint's static folder.
+
+.. note:: This styling will only apply to the pages defined in your plugin (to pages based on your own base template). To apply a styling to all other pages which are served by FlexMeasures, consider using the config setting :ref:`extra-css-config`. 
+
+
+Adding config settings
+----------------------------
+
+FlexMeasures can automatically check for you whether any custom config settings your plugin uses are present.
+This can be very useful in maintaining installations of FlexMeasures with plugins.
+Config settings can be registered by setting the (optional) ``__settings__`` attribute on your plugin module:
+
+.. code-block:: python
+
+    __settings__ = {
+        "MY_PLUGIN_URL": {
+            "description": "URL used by my plugin for x.",
+            "level": "error",
+        },
+        "MY_PLUGIN_TOKEN": {
+            "description": "Token used by my plugin for y.",
+            "level": "warning",
+            "message_if_missing": "Without this token, my plugin will not do y.",
+            "parse_as": str,
+        },
+        "MY_PLUGIN_COLOR": {
+            "description": "Color used to override the default plugin color.",
+            "level": "info",
+        },
+    }
+
+Alternatively, use ``from my_plugin import __settings__`` in your plugin module, and create ``__settings__.py`` with:
+
+.. code-block:: python
+
+    MY_PLUGIN_URL = {
+        "description": "URL used by my plugin for x.",
+        "level": "error",
+    }
+    MY_PLUGIN_TOKEN = {
+        "description": "Token used by my plugin for y.",
+        "level": "warning",
+        "message_if_missing": "Without this token, my plugin will not do y.",
+        "parse_as": str,
+    }
+    MY_PLUGIN_COLOR = {
+        "description": "Color used to override the default plugin color.",
+        "level": "info",
+    }
+
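Conceptually, the presence check FlexMeasures performs with these registrations is straightforward ― a rough sketch (not the actual implementation; function name and logic are illustrative only):

```python
import logging

def check_plugin_settings(app_config: dict, registered: dict) -> list:
    """Return names of registered settings missing from the app config,
    logging each at the level the plugin declared for it."""
    missing = []
    for name, spec in registered.items():
        if name not in app_config:
            msg = spec.get(
                "message_if_missing",
                f"Missing setting {name}: {spec['description']}",
            )
            level = getattr(logging, spec.get("level", "warning").upper())
            logging.log(level, msg)
            missing.append(name)
    return missing

registered = {
    "MY_PLUGIN_URL": {"description": "URL used by my plugin for x.", "level": "error"},
    "MY_PLUGIN_TOKEN": {
        "description": "Token used by my plugin for y.",
        "level": "warning",
        "message_if_missing": "Without this token, my plugin will not do y.",
    },
}
print(check_plugin_settings({"MY_PLUGIN_URL": "https://example.com"}, registered))
```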
+Finally, you might want to override some FlexMeasures configuration settings from within your plugin.
+Some examples for possible settings are named on this page, e.g. the custom style (see above) or custom logo (see below).
+There is a `record_once` function on Blueprints which can help with this. An example:
+
+.. code-block:: python
+
+    @our_client_bp.record_once
+    def record_logo_path(setup_state):
+        setup_state.app.config[
+            "FLEXMEASURES_MENU_LOGO_PATH"
+        ] = "/path/to/my/logo.svg"
+    
+
+
+Using a custom favicon icon
+----------------------------
+
+The favicon might be an important part of your customisation. You probably want your logo to be used.
+
+First, your blueprint needs to know about a folder with static content (this is fairly common ― it's also where you'd put your own CSS or JavaScript files):
+
+.. code-block:: python
+
+    our_client_bp = Blueprint(
+        "our_client",
+        "our_client",
+        static_folder="our_client/ui/static",
+    )
+
+Put your icon file in that folder. The exact path may depend on how you set your plugin directories up, but this is how a blueprint living in its own directory could work.
+
+Then, overwrite the ``/favicon.ico`` route which FlexMeasures uses to get the favicon from:
+
+.. code-block:: python
+
+    from flask import send_from_directory
+    from flexmeasures.ui import flexmeasures_ui
+
+    @flexmeasures_ui.route("/favicon.ico")
+    def favicon():
+        return send_from_directory(
+            our_client_bp.static_folder,
+            "img/favicon.png",
+            mimetype="image/png",
+        )
+
+Here we assume your favicon is a PNG file. You can also use a classic `.ico` file, then your mime type probably works best as ``image/x-icon``.
+
+
+Customizing the breadcrumbs
+---------------------------------
+
+On asset and sensor pages, we show breadcrumbs on top (e.g. Account -> Asset -> ChildAsset -> Sensor).
+Say you want to adapt this, so that some asset has a unique breadcrumb path.
+
+Add an attribute to an asset or sensor named "breadcrumb_ancestry", e.g.:
+
+.. code-block:: python
+
+    my_asset.attributes["breadcrumb_ancestry"] = [
+        {"url": my_url, "name": "Top-level", "type": "Asset"},
+        {"url": another_url, "name": "2nd-level", "type": "Asset"},
+    ]
+
+Then the page will show these two breadcrumbs.
+
+.. note:: Child assets without their own custom attribute will show these custom breadcrumbs on the left as well, followed by their own breadcrumbs as usual.
+
+
+In the same way, you can customize the siblings that are shown as drop-down for the current (right-most) breadcrumb.
+For this, the attribute is named "breadcrumb_siblings" and follows the same syntax. One use case might be to set it to empty (``[]``).
+
+
+Validating arguments in your CLI commands with marshmallow
+-----------------------------------------------------------
+
+Arguments to CLI commands can be validated using `marshmallow <https://marshmallow.readthedocs.io/>`_.
+FlexMeasures is using this functionality (via the ``MarshmallowClickMixin`` class) and also defines some custom field schemas.
+We demonstrate this here, and also show how you can add your own custom field schema:
+
+.. code-block:: python
+
+    from datetime import datetime
+
+    import click
+    from flexmeasures.data.schemas import AwareDateTimeField
+    from flexmeasures.data.schemas.utils import MarshmallowClickMixin
+    from marshmallow import fields
+
+    class CLIStrField(fields.Str, MarshmallowClickMixin):
+        """
+        String field validator, made usable for CLI functions.
+        You could also define your own validations here.
+        """
+
+    @click.command("meet")
+    @click.option(
+        "--where",
+        required=True,
+        type=CLIStrField(),
+        help="(Required) Where we meet",
+    )
+    @click.option(
+        "--when",
+        required=False,
+        type=AwareDateTimeField(format="iso"),  # FlexMeasures already made this field suitable for CLI functions
+        help="[Optional] When we meet (expects timezone-aware ISO 8601 datetime format)",
+    )
+    def schedule_meeting(
+        where: str,
+        when: datetime | None = None,
+    ):
+        print(f"Okay, see you {where} on {when}.")
+
+
+Customising the login page teaser
+----------------------------------
+
+FlexMeasures shows an image carousel next to its login form (see ``ui/templates/admin/login_user.html``).
+
+You can overwrite this content by adding your own login template and defining the ``teaser`` block yourself, e.g.:
+
+.. code-block:: html
+
+    {% extends "admin/login_user.html" %}
+
+    {% block teaser %}
+
+        <h1>Welcome to my plugin!</h1>
+
+    {% endblock %}
+
+Place this template file in the template folder of your plugin blueprint (see above). Your template must have a different filename than ``login_user.html``, so FlexMeasures will find it properly!
+
+Finally, add this config setting to your FlexMeasures config file (using the template filename you chose, obviously):
+
+.. code-block:: python
+
+    SECURITY_LOGIN_USER_TEMPLATE = "my_user_login.html"

+ 35 - 0
documentation/plugin/introduction.rst

@@ -0,0 +1,35 @@
+.. _plugins:
+
+Writing Plugins
+====================
+
+You can extend FlexMeasures with functionality like UI pages, API endpoints, CLI functions and custom scheduling algorithms.
+This is eventually how energy flexibility services are built on top of FlexMeasures!
+
+In a nutshell, a FlexMeasures plugin adds functionality via one or more `Flask Blueprints <https://flask.palletsprojects.com/en/1.1.x/tutorial/views/>`_.
+
+
+How to make FlexMeasures load your plugin
+------------------------------------------
+
+Use the config setting :ref:`plugin-config` to list your plugin(s).
+
+A setting in this list can:
+
+1. point to a plugin folder containing an ``__init__.py`` file
+2. be the name of an installed module (i.e. in a Python console ``import <module_name>`` would work)
+
+Each plugin defines at least one Blueprint object. These will be registered with the Flask app,
+so their functionality (e.g. routes) becomes available.
+
+We'll discuss an example below.
+
+In that example, we use the first option from above to tell FlexMeasures about the plugin. It is the simplest way to start playing around.
+
+The second option (the plugin being an importable Python package) allows for more professional software development. For instance, it is more straightforward in that case to add code hygiene, version management and dependencies (your plugin can depend on a specific FlexMeasures version and other plugins can depend on yours).
+
+To hit the ground running with that approach, we provide a `CookieCutter template <https://github.com/FlexMeasures/flexmeasures-plugin-template>`_.
+It also includes a few Blueprint examples and best practices.
+
+
+Continue reading the :ref:`plugin_showcase`, or explore the possibilities of :ref:`plugin_customization`.

+ 158 - 0
documentation/plugin/showcase.rst

@@ -0,0 +1,158 @@
+.. _plugin_showcase:
+
+
+Plugin showcase
+==================
+
+Here is a showcase file which constitutes a FlexMeasures plugin called ``our_client``.
+
+* We demonstrate adding a view, which can be rendered using the FlexMeasures base templates.
+* We also showcase a CLI function which has access to the FlexMeasures `app` object. It can be called via ``flexmeasures our-client test``. 
+
+We first create the file ``<some_folder>/our_client/__init__.py``. This means that ``our_client`` is the plugin folder and becomes the plugin name.
+
+With the ``__init__.py`` below, plus the custom Jinja2 template, ``our_client`` is a complete plugin.
+
+.. code-block:: python
+
+    __version__ = "2.0"
+
+    from flask import Blueprint, render_template, abort
+
+    from flask_security import login_required
+    from flexmeasures.ui.utils.view_utils import render_flexmeasures_template
+
+
+    our_client_bp = Blueprint('our-client', __name__,
+                              template_folder='templates')
+
+    # Showcase: Adding a view
+
+    @our_client_bp.route('/')
+    @our_client_bp.route('/my-page')
+    @login_required
+    def my_page():
+        msg = "I am a FlexMeasures plugin!"
+        # Note that we render via the in-built FlexMeasures way
+        return render_flexmeasures_template(
+            "my_page.html",
+            message=msg,
+        )
+
+
+    # Showcase: Adding a CLI command
+
+    import click
+    from flask import current_app
+    from flask.cli import with_appcontext
+
+
+    our_client_bp.cli.help = "Our client commands"
+
+    @our_client_bp.cli.command("test")
+    @with_appcontext
+    def our_client_test():
+        print(f"I am a CLI command, part of FlexMeasures: {current_app}")
+
+
+.. note:: You can overwrite FlexMeasures routing in your plugin. In our example above, we are using the root route ``/``. FlexMeasures registers plugin routes before its own, so in this case visiting the root URL of your app will display this plugged-in view (the same you'd see at `/my-page`).
+
+.. note:: The ``__version__`` attribute on our module is being displayed in the standard FlexMeasures UI footer, where we show loaded plugins. Of course, it can also be useful for your own maintenance.
+
+
+The template would live at ``<some_folder>/our_client/templates/my_page.html``, which works just as other FlexMeasures templates (they are Jinja2 templates):
+
+.. code-block:: html
+
+    {% extends "base.html" %}
+
+    {% set active_page = "my-page" %}
+
+    {% block title %} Our client dashboard {% endblock %}
+
+    {% block divs %}
+    
+        <!-- This is where your custom content goes... -->
+
+        {{ message }}
+
+    {% endblock %}
+
+
+.. note:: Plugin views can also be added to the FlexMeasures UI menu ― just name them in the config setting :ref:`menu-config`. In this example, add ``my-page``. This will also make the ``active_page`` setting in the above template useful (it highlights the current page in the menu).
+
+Starting the template with ``{% extends "base.html" %}`` integrates your page content into the FlexMeasures UI structure. You can also extend a different base template. For instance, we find it handy to extend ``base.html`` with a custom base template, to extend the footer, as shown below:
+
+.. code-block:: html
+
+    {% extends "base.html" %}
+
+    {% block copyright_notice %}
+
+    Created by <a href="https://seita.nl/">Seita Energy Flexibility</a>,
+    in cooperation with <a href="https://ourclient.nl/">Our Client</a>
+    &copy;
+    <script>var CurrentYear = new Date().getFullYear(); document.write(CurrentYear)</script>.
+    
+    {% endblock copyright_notice %}
+
+We'd name this file ``our_client_base.html``. Then, we'd extend our page template from ``our_client_base.html``, instead of ``base.html``.
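
To illustrate (keeping the hypothetical file names from this example), ``my_page.html`` would then start from the client-specific base template instead:

```html
{# my_page.html: extend the client-specific base template instead of base.html #}
{% extends "our_client_base.html" %}

{% set active_page = "my-page" %}

{% block title %} Our client dashboard {% endblock %}

{% block divs %}
    {{ message }}
{% endblock %}
```

All blocks not overridden here (like the customized ``copyright_notice``) are inherited from ``our_client_base.html``.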
+
+
+Using other code files in your non-package plugin
+--------------------------------------------------
+
+Say you want to include other Python files in your plugin, importing them in your ``__init__.py`` file.
+With this file-only version of loading the plugin (if your plugin isn't imported as a package),
+this is a bit tricky.
+
+But it can be achieved if you put the plugin path on the import path. Do it like this in your ``__init__.py``:
+
+.. code-block:: python
+
+    import os
+    import sys
+
+    HERE = os.path.dirname(os.path.abspath(__file__))
+    sys.path.insert(0, HERE)
+
+    from my_other_file import my_function
+
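
This trick is easy to try outside of FlexMeasures. Here is a self-contained sketch, using a temporary folder as a stand-in for the plugin folder:

```python
import os
import sys
import tempfile

# Create a stand-in "plugin folder" containing one extra code file.
plugin_dir = tempfile.mkdtemp()
with open(os.path.join(plugin_dir, "my_other_file.py"), "w") as f:
    f.write("def my_function():\n    return 'hello from the plugin'\n")

# Put the plugin path on the import path, just like in __init__.py above.
sys.path.insert(0, plugin_dir)

from my_other_file import my_function

print(my_function())  # prints: hello from the plugin
```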
+
+
+Notes on writing tests for your plugin
+----------------------------------------
+
+Good software practice is to write automatable tests. We encourage you to also do this in your plugin.
+We do, and our CookieCutter template for plugins (see above) contains simple examples of how that can work for the different use cases
+(i.e. UI, API, CLI).
+
+However, there are two caveats to look into:
+
+* Your tests need a FlexMeasures app context. FlexMeasures' app creation function provides a way to inject a list of plugins directly. The following could be used, for instance, in your ``app`` fixture within the top-level ``conftest.py`` if you are using pytest:
+
+.. code-block:: python
+
+    from flexmeasures.app import create as create_flexmeasures_app
+    from .. import __name__
+
+    test_app = create_flexmeasures_app(env="testing", plugins=[f"../{__name__}"])
+
+* Test frameworks collect tests from your code and therefore might import your modules. This can interfere with the registration of routes on your Blueprint objects during plugin registration. Therefore, we recommend reloading your route modules, after the Blueprint is defined and before you import them. For example:
+
+.. code-block:: python
+
+    import importlib
+    import sys
+
+    from flask import Blueprint
+
+    my_plugin_ui_bp: Blueprint = Blueprint(
+        "MyPlugin-UI",
+        __name__,
+        template_folder="my_plugin/ui/templates",
+        static_folder="my_plugin/ui/static",
+        url_prefix="/MyPlugin",
+    )
+    # The dashboard module attaches the "/dashboard" route to my_plugin_ui_bp.
+    # Reload it here, so the route registration happens *after* the Blueprint's creation.
+    importlib.reload(sys.modules["my_plugin.my_plugin.ui.views.dashboard"])
+    from my_plugin.ui.views import dashboard
+
+The packaging path depends on your plugin's package setup, of course.
+

+ 294 - 0
documentation/tut/building_uis.rst

@@ -0,0 +1,294 @@
+.. _tut_building_uis:
+
+Building custom UIs
+========================
+
+FlexMeasures provides its own UI (see :ref:`dashboard`), but it is a back office platform first.
+Most energy service companies already have their own user-facing system.
+We therefore made it possible to incorporate information from FlexMeasures in custom UIs.
+
+This tutorial will show how the FlexMeasures API can be used from JavaScript to extract information and display it in a browser (using HTML). We'll extract information about users, assets and even whole plots!
+
+.. contents:: Table of contents
+    :local:
+    :depth: 1
+
+
+.. note:: We'll use standard JavaScript for this tutorial, in particular the `fetch <https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch>`_ functionality, which many browsers support out-of-the-box these days. You might want to use more high-level frameworks like jQuery, Angular, React or VueJS for your frontend, of course.
+
+
+Get an authentication token
+----------------------------
+
+FlexMeasures provides the `[POST] /api/requestAuthToken <../api/v2_0.html#post--api-v2_0-requestAuthToken>`_ endpoint, as discussed in :ref:`api_auth`.
+Here is a JavaScript function to call it:
+
+.. code-block:: JavaScript
+
+    var flexmeasures_domain = "http://localhost:5000";
+    
+    function getAuthToken(){
+        return fetch(flexmeasures_domain + '/api/requestAuthToken',
+            {
+                method: "POST",
+                mode: "cors", 
+                headers:
+                {
+                    "Content-Type": "application/json",
+                },
+                body: JSON.stringify({"email": email, "password": password})  
+            }
+            )
+            .then(function(response) { return response.json(); })
+            .then(function(data) {
+                console.log("Got auth token from FlexMeasures server ...");
+                return data;
+            });
+    }
+
+It only expects you to set ``email`` and ``password`` somewhere (you could also pass them to the function, your call). In addition, we expect here that ``flexmeasures_domain`` is set to the FlexMeasures server you interact with, for example "https://company.flexmeasures.io". 
+
+We'll see how to make use of the ``getAuthToken`` function right away, keep on reading.
+
+
+
+
+Load user information
+-----------------------
+
+Let's say we are interested in a particular user's metadata. For instance, which email address do they have and which timezone are they operating in?
+
+Given we have set a variable called ``userId``, here is some code to find out and display that information in a simple HTML table:
+
+
+.. code-block:: html
+
+    <h1>User info</h1>
+    <p>
+        Email address: <span id="user_email"></span>
+    </p>
+    <p>
+        Time zone: <span id="user_timezone"></span>
+    </p>
+
+.. code-block:: JavaScript
+
+    function loadUserInfo(userId, authToken) {
+        fetch(flexmeasures_domain + '/api/v2_0/user/' + userId,
+            {
+                method: "GET",
+                mode: "cors",
+                headers:
+                    {
+                    "Content-Type": "application/json",
+                    "Authorization": authToken
+                    },
+            }
+        )
+        .then(function(response) { return response.json(); })
+        .then(function(userInfo) {
+            console.log("Got user data from FlexMeasures server ...");
+            document.querySelector('#user_email').innerHTML = userInfo.email;
+            document.querySelector('#user_timezone').innerHTML = userInfo.timezone;
+        });
+    }
+
+    document.onreadystatechange = () => {
+        if (document.readyState === 'complete') {
+            getAuthToken()
+            .then(function(response) {
+                var authToken = response.auth_token;
+                loadUserInfo(userId, authToken);
+            })
+        }
+    }
+           
+The result looks like this in your browser:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/user_info.png
+    :align: center
+..    :scale: 40%
+
+
+From FlexMeasures, we are using the `[GET] /user <../api/v3_0.html#get--api-v3_0-user-(id)>`_ endpoint, which loads information about one user.
+Browse its documentation to learn about other information you could get.
+
+
+Load asset information
+-----------------------
+
+Similarly, we can load asset information. Say we have a variable ``accountId`` and we want to show which assets FlexMeasures administrates for that account.
+
+For the example below, we've used the ID of the account from our toy tutorial, see :ref:`toy tutorial<tut_toy_schedule>`.
+
+
+.. code-block:: html
+
+    <style>
+        #assetTable th, #assetTable td {
+            border-right: 1px solid gray;
+            padding-left: 5px;
+            padding-right: 5px;
+        }
+    </style>
+
+.. code-block:: html
+
+    <table id="assetTable">
+        <thead>
+          <tr>
+            <th>Asset name</th>
+            <th>ID</th>
+            <th>Latitude</th>
+            <th>Longitude</th>
+          </tr>
+        </thead>
+        <tbody></tbody>
+    </table>
+
+
+.. code-block:: JavaScript
+    
+    function loadAssets(accountId, authToken) {
+        var params = new URLSearchParams();
+        params.append("account_id", accountId);
+        fetch(flexmeasures_domain + '/api/v3_0/assets?' + params.toString(),
+            {
+                method: "GET",
+                mode: "cors",
+                headers:
+                    {
+                    "Content-Type": "application/json",
+                    "Authorization": authToken
+                    },
+            }
+        )
+        .then(function(response) { return response.json(); })
+        .then(function(rows) {
+            console.log("Got asset data from FlexMeasures server ...");
+            const tbody = document.querySelector('#assetTable tbody');
+            rows.forEach(row => {
+                const tr = document.createElement('tr');
+                tr.innerHTML = `<td>${row.name}</td><td>${row.id}</td><td>${row.latitude}</td><td>${row.longitude}</td>`;
+                tbody.appendChild(tr);
+            });
+        });
+    }
+
+    document.onreadystatechange = () => {
+        if (document.readyState === 'complete') {
+            getAuthToken()
+            .then(function(response) {
+                var authToken = response.auth_token;
+                loadAssets(accountId, authToken);
+            })
+        }
+    }
+
+           
+The result looks like this in your browser:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/asset_info.png
+    :align: center
+..    :scale: 40%
+
+
+ 
+From FlexMeasures, we are using the `[GET] /assets <../api/v3_0.html#get--api-v3_0-assets>`_ endpoint, which loads a list of assets.
+Note how, unlike the user endpoint above, we are passing a query parameter to the API (``account_id``).
+We are only displaying a subset of the information which is available about assets.
+Browse the endpoint documentation to learn other information you could get.
+
+For a listing of public assets, replace ``/api/v3_0/assets`` with ``/api/v3_0/assets/public``.
+
+
+Embedding charts
+------------------------
+
+Creating charts from data can consume lots of development time.
+FlexMeasures can help here by delivering ready-made charts.
+In this tutorial, we'll embed a chart with electricity prices.
+
+First, we define a div tag for the chart and a basic layout (full width). We also load the visualization libraries we need (more about that below), and set up a custom formatter we use in FlexMeasures charts.
+
+.. code-block:: html
+
+    <script src="https://d3js.org/d3.v6.min.js"></script>
+    <script src="https://cdn.jsdelivr.net/npm/vega@5.22.1"></script>
+    <script src="https://cdn.jsdelivr.net/npm/vega-lite@5.2.0"></script>
+    <script src="https://cdn.jsdelivr.net/npm/vega-embed@6.20.8"></script>
+    <script>
+        vega.expressionFunction('quantityWithUnitFormat', function(datum, params) {
+            return d3.format(params[0])(datum) + " " + params[1];
+        });
+    </script>
+
+    <div id="sensor-chart" style="width: 100%;"></div>
+
+Now we define a JavaScript function to ask the FlexMeasures API for a chart and then embed it:
+
+.. code-block:: JavaScript
+
+    function embedChart(params, authToken, sensorId, divId){
+        fetch(
+            flexmeasures_domain + '/api/dev/sensor/' + sensorId + '/chart?include_data=true&' + params.toString(),
+            {
+                method: "GET",
+                mode: "cors",
+                headers:
+                    {
+                    "Content-Type": "application/json",
+                    "Authorization": authToken
+                    }
+            }
+        )
+        .then(function(response) {return response.json();})
+        .then(function(data) {vegaEmbed(divId, data)})
+    }
+
+This function allows us to request a chart (actually, a JSON specification of a chart that can be interpreted by vega-lite), and then embed it within a ``div`` tag of our choice.
+
+From FlexMeasures, we are using the `GET /api/dev/sensor/(id)/chart/ <../api/dev.html#get--api-dev-sensor-(id)-chart->`_ endpoint.
+Browse the endpoint documentation to learn more about it.
+
+.. note:: Endpoints in the developer API are still under development and are subject to change in new releases.
+
+Here are some common parameter choices for our JavaScript function:
+
+.. code-block:: JavaScript
+
+    var params = new URLSearchParams();
+    params.append("width", 400); // an integer number of pixels; without it, the chart will be scaled to the full width of the container (note that we set the div width to 100%)
+    params.append("height", 400); // an integer number of pixels; without it, a FlexMeasures default is used
+    params.append("event_starts_after", '2022-10-01T00:00+01'); // only fetch events from midnight October 1st
+    params.append("event_ends_before", '2022-10-08T00:00+01'); // only fetch events until midnight October 8th
+    params.append("beliefs_before", '2022-10-03T00:00+01'); // only fetch beliefs prior to October 3rd (time travel)
+
+
+As FlexMeasures uses `the Vega-Lite Grammar of Interactive Graphics <https://vega.github.io/vega-lite/>`_ internally, we also need to import this library to render the chart (see the ``script`` tags above). It's crucial to note that FlexMeasures is not transferring images across HTTP here, just information needed to render them.
+
+.. note:: It's best to match the visualization library versions you use in your frontend to those used by FlexMeasures. These are set by the FLEXMEASURES_JS_VERSIONS config (see :ref:`configuration`) with defaults kept in ``flexmeasures/utils/config_defaults``.
+
+Now let's call this function when the HTML page is opened, to embed our chart:
+
+.. code-block:: JavaScript
+
+    document.onreadystatechange = () => {
+        if (document.readyState === 'complete') {
+            getAuthToken()
+            .then(function(response) {
+                var authToken = response.auth_token;
+
+                var params = new URLSearchParams();
+                params.append("event_starts_after", '2022-01-01T00:00+01');
+                embedChart(params, authToken, 1, '#sensor-chart');
+            })
+        }
+    }
+
+The parameters we pass in describe what we want to see: all data for sensor 1 since 2022.
+If you followed our :ref:`toy tutorial<tut_toy_schedule>` on a fresh FlexMeasures installation, sensor 1 contains market prices (authenticate with the toy-user to gain access).
+
+           
+The result looks like this in your browser:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/plotting-prices.png
+    :align: center
+..    :scale: 40%

+ 148 - 0
documentation/tut/flex-model-v2g.rst

@@ -0,0 +1,148 @@
+.. _tut_v2g:
+
+A flex-modeling tutorial for storage: Vehicle-to-grid
+------------------------------------------------------
+
+The most powerful concept of FlexMeasures is the flex-model. We feel it is time to pay more attention to it and illustrate its effects.
+
+As a demonstration of how to construct a suitable flex model for a given use case, let us for a moment consider a use case where FlexMeasures is asked (through API calls) to compute :abbr:`V2G (vehicle-to-grid)` schedules.
+(For a more general introduction to flex modeling, see :ref:`describing_flexibility`.)
+
+In this example, the client is interested in the following:
+
+1. :ref:`battery_protection`: Protect the battery from degradation by constraining any cycling between 25% and 85% of its available storage capacity.
+2. :ref:`car_reservations`: Ensure a minimum :abbr:`SoC (state of charge)` of 95% based on a reservation calendar for the car.
+3. :ref:`earning_by_cycling`: Use the car battery to earn money (given some dynamic tariff) so long as the above constraints are met.
+
+The following chart visualizes how constraints 1 and 2 can be formulated within a flex model, such that the resulting scheduling problem becomes feasible. A solid line shows a feasible solution, and a dashed line shows an infeasible solution.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/v2g_minima_maxima.png
+    :align: center
+|
+
+
+.. _battery_protection:
+
+Battery protection
+==================
+
+Let's consider a car battery with a storage capacity of 60 kWh, to be scheduled in 5-minute intervals.
+Constraining the cycling to occur within a static 25-85% SoC range can be modelled through the following ``soc-min`` and ``soc-max`` fields of the flex model:
+
+.. code-block:: json
+
+    {
+        "flex-model": {
+            "soc-min": "15 kWh",
+            "soc-max": "51 kWh"
+        }
+    }
+
+A starting SoC below 15 kWh (25%) will lead to immediate charging to get within limits (as shown above).
+Likewise, a starting SoC above 51 kWh (85%) would lead to immediate discharging.
+Setting a SoC target outside of the static range leads to an infeasible problem and will be rejected by the FlexMeasures API.
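
As a quick sanity check, the kWh bounds in this flex model follow directly from the percentages (a sketch, not FlexMeasures code):

```python
# A 60 kWh battery, with cycling constrained to a static 25-85% SoC range.
storage_capacity_kwh = 60
soc_min_kwh = 0.25 * storage_capacity_kwh  # value for the "soc-min" field
soc_max_kwh = 0.85 * storage_capacity_kwh  # value for the "soc-max" field

print(f"soc-min: {soc_min_kwh:g} kWh, soc-max: {soc_max_kwh:g} kWh")
```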
+
+The soc-min and soc-max settings are constant constraints.
+To enable a temporary target SoC of more than 85% (for car reservations, see the next section), it is necessary to relax the ``soc-max`` field to 60 kWh (100%), and to instead use the ``soc-maxima`` field to convey the desired upper limit for regular cycling:
+
+.. code-block:: json
+
+    {
+        "flex-model": {
+            "soc-min": "15 kWh",
+            "soc-max": "60 kWh",
+            "soc-maxima": [
+                {
+                    "value": "51 kWh",
+                    "start": "2024-02-04T10:35:00+01:00",
+                    "end": "2024-02-05T04:25:00+01:00"
+                }
+            ]
+        }
+    }
+
+The maxima constraints should be relaxed—or withheld entirely—within some time window before any SoC target (as shown above).
+This time window should be at least wide enough to allow the target to be reached in time, and can be made wider to allow the scheduler to take advantage of favourable market prices along the way.
+
+
+.. _car_reservations:
+
+Car reservations
+================
+
+Given a reservation for 8 AM on February 5th, constraint 2 can be modelled through the following (additional) ``soc-minima`` constraint:
+
+.. code-block:: json
+
+    {
+        "flex-model": {
+            "soc-minima": [
+                {
+                    "value": "57 kWh",
+                    "datetime": "2024-02-05T08:00:00+01:00"
+                }
+            ]
+        }
+    }
+
+This constraint also signals that if the car is not plugged out of the Charge Point at 8 AM, the scheduler is in principle allowed to start discharging immediately afterwards.
+To make sure the car remains at or above 95% SoC for some time, additional soc-minima constraints should be set accordingly, taking into account the scheduling resolution (here, 5 minutes). For example, to keep it charged (nearly) fully until 8.15 AM:
+
+.. code-block:: json
+
+    {
+        "flex-model": {
+            "soc-minima": [
+                {
+                    "value": "57 kWh",
+                    "start": "2024-02-05T08:00:00+01:00",
+                    "end": "2024-02-05T08:15:00+01:00"
+                }
+            ]
+        }
+    }
+
+The car may still charge and discharge within those 15 minutes, but it won't go below 95%.
+Alternatively, to keep the car from discharging altogether during that time, limit the ``production-capacity`` (likewise, use the ``consumption-capacity`` to prevent any charging):
+
+.. code-block:: json
+
+    {
+        "flex-model": {
+            "soc-minima": [
+                {
+                    "value": "57 kWh",
+                    "datetime": "2024-02-05T08:00:00+01:00"
+                }
+            ],
+            "production-capacity": [
+                {
+                    "value": "0 kW",
+                    "start": "2024-02-05T08:00:00+01:00",
+                    "end": "2024-02-05T08:15:00+01:00"
+                }
+            ]
+        }
+    }
+
+.. note:: In case the ``soc-minima`` field defines partially overlapping time periods, FlexMeasures automatically resolves this by selecting the maximum. Likewise, the minimum is selected for partially overlapping time periods in the ``soc-maxima``, ``power-capacity``, ``production-capacity`` and ``consumption-capacity`` flex-model fields, and also in the ``site-power-capacity``, ``site-production-capacity`` and ``site-consumption-capacity`` flex-context fields.
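
The resolution rule in this note can be sketched in a few lines of Python (illustrative only; the kWh values and timestamps here are made up):

```python
# Two partially overlapping soc-minima series; per moment, the maximum wins.
minima_a = {"10:00": 40, "10:05": 40}
minima_b = {"10:05": 57, "10:10": 57}

resolved = {
    t: max(minima_a.get(t, float("-inf")), minima_b.get(t, float("-inf")))
    for t in sorted(set(minima_a) | set(minima_b))
}
print(resolved)  # {'10:00': 40, '10:05': 57, '10:10': 57}
```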
+
+.. _earning_by_cycling:
+
+Earning by cycling
+==================
+
+To provide an incentive for cycling the battery in response to market prices, the ``consumption-price`` and ``production-price`` fields of the flex context may be used, which define the sensor IDs under which the price data is stored that is relevant to the given site:
+
+.. code-block:: json
+
+    {
+        "flex-context": {
+            "consumption-price": {"sensor": 41},
+            "production-price": {"sensor": 42}
+        }
+    }
+
+
+We hope this demonstration helped to illustrate the flex-model of the storage scheduler. Until now, optimizing storage (like batteries) has been the sole focus of this tutorial series.
+In :ref:`tut_toy_schedule_process`, we'll turn to something different: the optimal timing of processes with fixed energy work and duration.

+ 214 - 0
documentation/tut/forecasting_scheduling.rst

@@ -0,0 +1,214 @@
+.. _tut_forecasting_scheduling:
+
+Forecasting & scheduling
+========================
+
+Once FlexMeasures contains data (see :ref:`tut_posting_data`), you can enjoy its forecasting and scheduling services.
+Let's take a look at how FlexMeasures users can access information from these services, and how you (if you are hosting FlexMeasures yourself) can set up the data science queues for this.
+
+.. contents:: Table of contents
+    :local:
+    :depth: 1
+
+If you want to learn more about the actual algorithms used in the background, head over to :ref:`scheduling` and :ref:`forecasting`.
+
+.. note:: FlexMeasures comes with in-built scheduling algorithms. You can use your own algorithm, as well, see :ref:`plugin-customization`.
+
+
+Maintaining the queues
+------------------------------------
+
+.. note:: If you are not hosting FlexMeasures yourself, skip right ahead to :ref:`how_queue_forecasting` or :ref:`getting_prognoses`.
+
+Here we assume you have access to a Redis server and configured it (see :ref:`redis-config`).
+
+Start to run one worker for each kind of job (in a separate terminal):
+
+.. code-block:: bash
+
+   $ flexmeasures jobs run-worker --queue forecasting
+   $ flexmeasures jobs run-worker --queue scheduling
+
+
+You can also clear the job queues:
+
+.. code-block:: bash
+
+   $ flexmeasures jobs clear-queue --queue forecasting
+   $ flexmeasures jobs clear-queue --queue scheduling
+
+
+When the main FlexMeasures process runs (e.g. by ``flexmeasures run``\ ), the queues of forecasting and scheduling jobs can be visited at ``http://localhost:5000/tasks/forecasting`` and ``http://localhost:5000/tasks/schedules``\ , respectively (by admins).
+
+When forecasts and schedules have been generated, they should be visible at ``http://localhost:5000/assets/<id>``.
+
+
+.. note:: You can run workers who process jobs on different computers than the main server process. This can be a great architectural choice. Just keep in mind to use the same databases (postgres/redis) and to stick to the same FlexMeasures version on both.
+
+
+.. _how_queue_forecasting:
+
+How forecasting jobs are queued
+--------------------------------
+
+A forecasting job is an order to create forecasts based on measurements.
+A job can be about forecasting one point in time or about forecasting a range of points.
+
+In FlexMeasures, the usual way of creating forecasting jobs would be right in the moment when new power, weather or price data arrives through the API (see :ref:`tut_posting_data`).
+So technically, you don't have to do anything to keep fresh forecasts.
+
+The decision which horizons to forecast is currently also taken by FlexMeasures. For power data, FlexMeasures makes this decision depending on the asset resolution. For instance, a resolution of 15 minutes leads to forecast horizons of 1, 6, 24 and 48 hours. For price data, FlexMeasures chooses to forecast prices 24 and 48 hours ahead.
+These are decent defaults, and fixing them has the advantage that schedulers (see below) will know what to expect. However, horizons will probably become more configurable in the near future of FlexMeasures.
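
For reference, the defaults described above could be written down as follows (an illustrative sketch, not the actual FlexMeasures code):

```python
# Default forecast horizons, keyed by data type (power horizons shown for a
# 15-minute asset resolution; other resolutions lead to other horizons).
DEFAULT_FORECAST_HORIZONS = {
    "power (PT15M resolution)": ["PT1H", "PT6H", "PT24H", "PT48H"],
    "price": ["PT24H", "PT48H"],
}
for data_type, horizons in DEFAULT_FORECAST_HORIZONS.items():
    print(data_type, "->", ", ".join(horizons))
```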
+
+You can also add forecasting jobs directly via the CLI. We explain this practice in the next section. 
+
+
+
+Historical forecasts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There might be reasons to add forecasts of past time ranges. For instance, for visualization of past system behavior and to check how well the forecasting models have been doing on a longer stretch of data.
+
+If you host FlexMeasures yourself, we provide a CLI task for adding forecasts for whole historic periods.
+Here is an example call, in which we request 6-hour forecasts to be made for two sensors, covering several months:
+
+.. code-block:: bash
+
+    $ flexmeasures add forecasts --sensor 2 --sensor 3 \
+        --from-date 2015-02-01 --to-date 2015-08-31 \
+        --horizon 6 --as-job
+
+This is half a year of data, so it will take a while.
+
+It can be good advice to dispatch this work in smaller chunks.
+Alternatively, note the ``--as-job`` parameter.
+If you use it, the forecasting jobs will be queued and picked up by worker processes (see above). You could run several workers (e.g. one per CPU) to get this work load done faster.
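
Chunking could look like this: a dry-run sketch that prints one CLI call per month (remove the ``echo`` to actually dispatch; we naively use day 28 as each month's end here):

```shell
# Print a month-sized "flexmeasures add forecasts" call for each month in the range.
for month in 02 03 04 05 06 07 08; do
  echo flexmeasures add forecasts --sensor 2 --sensor 3 \
    --from-date "2015-${month}-01" --to-date "2015-${month}-28" \
    --horizon 6 --as-job
done
```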
+
+Run ``flexmeasures add forecasts --help`` for more information.
+
+
+.. _how_queue_scheduling:
+
+How scheduling jobs are queued
+-------------------------------
+
+In FlexMeasures, a scheduling job is an order to plan optimised actions for flexible devices.
+It usually involves a linear program that combines a state of energy flexibility with forecasted data to draw up a consumption or production plan ahead of time.
+
+There are two ways to queue a scheduling job:
+
+First, we can add a scheduling job to the queue via the API.
+We already learned about the `[POST] /schedules/trigger <../api/v3_0.html#post--api-v3_0-sensors-(id)-schedules-trigger>`_ endpoint in :ref:`posting_flex_states`, where we saw how to post a flexibility state (in this case, the state of charge of a battery at a certain point in time).
+
+Here, we extend that (storage) example with an additional target value, representing a desired future state of charge.
+
+.. code-block:: json
+
+    {
+        "start": "2015-06-02T10:00:00+00:00",
+        "flex-model": {
+            "soc-at-start": "12.1 kWh",
+            "soc-targets": [
+                {
+                    "value": "25 kWh",
+                    "datetime": "2015-06-02T16:00:00+00:00"
+                }
+            ]
+        }
+    }
+
+
+We now have described the state of charge at 10am to be ``"12.1 kWh"``. In addition, we requested that it should be ``"25 kWh"`` at 4pm.
+For instance, this could mean that a car should be charged at 90% at that time.
+
+If FlexMeasures receives this message, a scheduling job will be made and put into the queue. In turn, the scheduling job creates a proposed schedule. We'll look a bit deeper into those further down in :ref:`getting_schedules`.
+
+.. note:: Even without a target state of charge, FlexMeasures will create a scheduling job. The flexible device can then be used with more freedom to reach the system objective (e.g. buy power when it is cheap, store it, and sell back when it's expensive).
+
+
+A second way to add scheduling jobs is via the CLI, so this is available for people who host FlexMeasures themselves:
+
+.. code-block:: bash
+
+    $ flexmeasures add schedule for-storage --sensor 1 --consumption-price-sensor 2 \
+        --start 2022-07-05T07:00+01:00 --duration PT12H \
+        --soc-at-start 50% --roundtrip-efficiency 90% --as-job
+
+Here, the ``--as-job`` parameter makes the difference for queueing ― without it, the schedule is computed right away.
+
+Run ``flexmeasures add schedule for-storage --help`` for more information.
+
+
+.. _getting_prognoses:
+
+Getting power forecasts (prognoses)
+------------------------------------
+
+Prognoses (the USEF term used for power forecasts) are used by FlexMeasures to determine the best control signals to valorise on balancing opportunities.
+
+You can access forecasts via the FlexMeasures API at `[GET] /sensors/data <../api/v3_0.html#get--api-v3_0-sensors-data>`_.
+Getting them might be useful if you want to use prognoses in your own system, or to check their accuracy against meter data, i.e. the realised power measurements.
+The FlexMeasures UI also visualizes prognoses and meter data next to each other.
+
+A prognosis can be requested at a URL looking like this:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/api/<version>/sensors/data
+
+This example requests a prognosis for 24 hours, with a rolling horizon of 6 hours before realisation.
+
+.. code-block:: json
+
+    {
+        "type": "GetPrognosisRequest",
+        "sensor": "ea1.2021-01.io.flexmeasures.company:fm1.1",
+        "start": "2015-01-01T00:00:00+00:00",
+        "duration": "PT24H",
+        "horizon": "PT6H",
+        "resolution": "PT15M",
+        "unit": "MW"
+    }
+
+
+.. _getting_schedules:
+
+Getting schedules (control signals)
+------------------------------------
+
+We saw above how FlexMeasures can create optimised schedules with control signals for flexible devices (see :ref:`posting_flex_states`). You can access the schedules via the `[GET] /schedules/<uuid> <../api/v3_0.html#get--api-v3_0-sensors-(id)-schedules-(uuid)>`_ endpoint. The URL then looks like this:
+
+.. code-block:: html
+
+    https://company.flexmeasures.io/api/<version>/sensors/<id>/schedules/<uuid>
+
+Here, fill in the schedule's Universally Unique Identifier (UUID), which is returned in the `[POST] /schedules/trigger <../api/v3_0.html#post--api-v3_0-sensors-(id)-schedules-trigger>`_ response.
+Schedules can be queried by their UUID for up to 1 week after they were triggered (ask your host if you need to keep them around longer).
+Afterwards, the exact schedule can still be retrieved through the `[GET] /sensors/data <../api/v3_0.html#get--api-v3_0-sensors-data>`_ endpoint, using precise filter values for ``start``, ``prior`` and ``source``.
+
+The following example response indicates that FlexMeasures planned ahead 45 minutes for the requested battery power sensor.
+The list of consecutive power values represents the target consumption of the battery (negative values for production).
+Each value represents the average power over a 15 minute time interval.
+
+.. sourcecode:: json
+
+        {
+            "values": [
+                2.15,
+                3,
+                2
+            ],
+            "start": "2015-06-02T10:00:00+00:00",
+            "duration": "PT45M",
+            "unit": "MW"
+        }
+
+How to interpret these control signals?
+
+One way of reaching the target consumption in this example is to let the battery start to consume with 2.15 MW at 10am,
+increase its consumption to 3 MW at 10.15am and decrease its consumption to 2 MW at 10.30am.
+
+However, because the target values represent averages over 15-minute time intervals, the battery still has some degrees of freedom.
+For example, the battery might start to consume with 2.1 MW at 10.00am, increase its consumption to 2.25 MW at 10.10am,
+then to 5 MW at 10.15am and decrease it to 2 MW at 10.20am.
+That should result in the same average values for each quarter-hour.
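Because only the 15-minute averages are fixed, any intra-interval profile with the same mean is acceptable. A quick sanity check of the two example profiles (a sketch using ``awk``; the minutes and power levels are taken from the text above):

```shell
# Average power over a 15-minute interval, given MW*minute products.
avg() { awk "BEGIN { printf \"%.2f\", ($1) / 15 }"; }
q1=$(avg "2.15*15"); q1b=$(avg "2.1*10 + 2.25*5")  # 10:00-10:15
q2=$(avg "3*15");    q2b=$(avg "5*5 + 2*10")       # 10:15-10:30
echo "$q1=$q1b $q2=$q2b"   # 2.15=2.15 3.00=3.00
```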

File diff suppressed because it is too large
+ 298 - 0
documentation/tut/posting_data.rst


+ 24 - 0
documentation/tut/scripts/Readme.md

@@ -0,0 +1,24 @@
+# Scripts to run tutorials
+
+The tutorials in the docs are for you to run step by step, command by command,
+so that every step clarifies more of what FlexMeasures is for, and what it can do for you.
+
+However, sometimes one might want to run through them all.
+We scripted the tutorials, so they can be automated. They don't come with a guarantee.
+
+For us, they are actually a step in [our release checklist](https://github.com/FlexMeasures/tsc/blob/main/RELEASE.md) before we upload a new version to PyPI.
+
+We run these tests in the docker compose stack:
+
+    docker compose build
+    docker compose up
+    ./documentation/tut/scripts/run-tutorial-in-docker.sh
+    ./documentation/tut/scripts/run-tutorial2-in-docker.sh
+    ./documentation/tut/scripts/run-tutorial3-in-docker.sh
+    ./documentation/tut/scripts/run-tutorial4-in-docker.sh
+
+- One still needs to check the output (no errors?) and plotted data (plots like we expect?)
+- These need to be run in order so the sensor IDs match (just like when you run them from the docs)
+- Need to start over? `docker rm --force flexmeasures-dev-db-1`, then `down` and `up` with your compose stack.
+- We try to keep these scripts in sync with the tutorials. But as you can imagine, this is hard, as is keeping docs up to date in general.
+- At least, these scripts might see some regular use by us. The tutorial in the docs sees more usage by new users, who sometimes tell us what they found.

+ 42 - 0
documentation/tut/scripts/run-tutorial-in-docker.sh

@@ -0,0 +1,42 @@
+#!/bin/bash
+
+echo "[TUTORIAL-RUNNER] loading prices..."
+TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+echo "Hour,Price
+${TOMORROW}T00:00:00,10
+${TOMORROW}T01:00:00,11
+${TOMORROW}T02:00:00,12
+${TOMORROW}T03:00:00,15
+${TOMORROW}T04:00:00,18
+${TOMORROW}T05:00:00,17
+${TOMORROW}T06:00:00,10.5
+${TOMORROW}T07:00:00,9
+${TOMORROW}T08:00:00,9.5
+${TOMORROW}T09:00:00,9
+${TOMORROW}T10:00:00,8.5
+${TOMORROW}T11:00:00,10
+${TOMORROW}T12:00:00,8
+${TOMORROW}T13:00:00,5
+${TOMORROW}T14:00:00,4
+${TOMORROW}T15:00:00,4
+${TOMORROW}T16:00:00,5.5
+${TOMORROW}T17:00:00,8
+${TOMORROW}T18:00:00,12
+${TOMORROW}T19:00:00,13
+${TOMORROW}T20:00:00,14
+${TOMORROW}T21:00:00,12.5
+${TOMORROW}T22:00:00,10
+${TOMORROW}T23:00:00,7" > prices-tomorrow.csv
+
+docker cp prices-tomorrow.csv flexmeasures-server-1:/app
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures add beliefs --sensor 1 --source toy-user prices-tomorrow.csv --timezone Europe/Amsterdam"
+
+echo "[TUTORIAL-RUNNER] creating schedule ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures add schedule for-storage --sensor 2 --consumption-price-sensor 1 \
+    --start ${TOMORROW}T07:00+01:00 --duration PT12H --soc-at-start 50% \
+    --roundtrip-efficiency 90%"
+# We also want to use --as-job here (testing the queuing), but for some reason, when using exec with -c and a command, the container can't see the redis port.
+# You can also exec into the container in a bash session, then define TOMORROW (and maybe add prices if not done yet) and run this command with --as-job.
+
+echo "[TUTORIAL-RUNNER] displaying schedule..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H"

+ 50 - 0
documentation/tut/scripts/run-tutorial2-in-docker.sh

@@ -0,0 +1,50 @@
+#!/bin/bash
+
+echo "[TUTORIAL-RUNNER] loading solar production data..."
+
+TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+echo "Hour,Price
+${TOMORROW}T00:00:00,0.0
+${TOMORROW}T01:00:00,0.0
+${TOMORROW}T02:00:00,0.0
+${TOMORROW}T03:00:00,0.0
+${TOMORROW}T04:00:00,0.01
+${TOMORROW}T05:00:00,0.03
+${TOMORROW}T06:00:00,0.06
+${TOMORROW}T07:00:00,0.1
+${TOMORROW}T08:00:00,0.14
+${TOMORROW}T09:00:00,0.17
+${TOMORROW}T10:00:00,0.19
+${TOMORROW}T11:00:00,0.21
+${TOMORROW}T12:00:00,0.22
+${TOMORROW}T13:00:00,0.21
+${TOMORROW}T14:00:00,0.19
+${TOMORROW}T15:00:00,0.17
+${TOMORROW}T16:00:00,0.14
+${TOMORROW}T17:00:00,0.1
+${TOMORROW}T18:00:00,0.06
+${TOMORROW}T19:00:00,0.03
+${TOMORROW}T20:00:00,0.01
+${TOMORROW}T21:00:00,0.0
+${TOMORROW}T22:00:00,0.0
+${TOMORROW}T23:00:00,0.0" > solar-tomorrow.csv
+
+docker cp solar-tomorrow.csv flexmeasures-server-1:/app
+
+echo "[TUTORIAL-RUNNER] adding source ..."
+docker exec -it flexmeasures-server-1 flexmeasures add source --name "toy-forecaster" --type forecaster
+echo "[TUTORIAL-RUNNER] adding beliefs ..."
+docker exec -it flexmeasures-server-1 flexmeasures add beliefs --sensor 3 --source 4 solar-tomorrow.csv --timezone Europe/Amsterdam
+
+echo "[TUTORIAL-RUNNER] showing beliefs ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 3 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H"
+
+echo "[TUTORIAL-RUNNER] update schedule taking solar into account ..."
+docker exec -it flexmeasures-server-1 flexmeasures add schedule for-storage --sensor 2 --consumption-price-sensor 1 \
+    --inflexible-device-sensor 3 \
+    --start ${TOMORROW}T07:00+01:00 --duration PT12H \
+    --soc-at-start 50% --roundtrip-efficiency 90%
+
+
+echo "[TUTORIAL-RUNNER] showing schedule ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H"

+ 25 - 0
documentation/tut/scripts/run-tutorial3-in-docker.sh

@@ -0,0 +1,25 @@
+#!/bin/bash
+
+TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+
+echo "[TUTORIAL-RUNNER] Setting up toy account with reporters..."
+docker exec -it flexmeasures-server-1  flexmeasures add toy-account --kind process
+
+
+echo "[TUTORIAL-RUNNER] Creating three process schedules ..."
+docker exec -it flexmeasures-server-1 flexmeasures add schedule for-process --sensor 4 --consumption-price-sensor 1\
+  --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --process-duration PT4H \
+  --process-power 0.2MW --process-type INFLEXIBLE \
+  --forbid "{\"start\" : \"${TOMORROW}T15:00:00+02:00\", \"duration\" : \"PT1H\"}"
+
+docker exec -it flexmeasures-server-1 flexmeasures add schedule for-process --sensor 5 --consumption-price-sensor 1\
+  --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --process-duration PT4H \
+  --process-power 0.2MW --process-type BREAKABLE \
+  --forbid "{\"start\" : \"${TOMORROW}T15:00:00+02:00\", \"duration\" : \"PT1H\"}"
+
+docker exec -it flexmeasures-server-1 flexmeasures add schedule for-process --sensor 6 --consumption-price-sensor 1\
+  --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --process-duration PT4H \
+  --process-power 0.2MW --process-type SHIFTABLE \
+  --forbid "{\"start\" : \"${TOMORROW}T15:00:00+02:00\", \"duration\" : \"PT1H\"}"
+
+echo "Now visit http://localhost:5000/assets/5/graphs to see all three schedules."

+ 99 - 0
documentation/tut/scripts/run-tutorial4-in-docker.sh

@@ -0,0 +1,99 @@
+#!/bin/bash
+
+TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+
+echo "[TUTORIAL-RUNNER] Setting up toy account with reporters..."
+docker exec -it flexmeasures-server-1  flexmeasures add toy-account --kind reporter
+
+
+echo "[TUTORIAL-RUNNER] Show grid connection capacity (sensor 7)..."
+docker exec -it flexmeasures-server-1 flexmeasures show beliefs --sensor 7 --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --resolution PT1H
+
+docker exec -it flexmeasures-server-1 flexmeasures show data-sources --show-attributes --id 6
+
+echo "[TUTORIAL-RUNNER] Configure reporter ..."
+
+echo "
+{
+   \"weights\" : {
+       \"grid connection capacity\" : 1.0,
+       \"PV\" : -1.0
+   }
+}" > headroom-config.json
+docker cp headroom-config.json flexmeasures-server-1:/app
+
+echo "
+{
+    \"input\" : [{\"name\" : \"grid connection capacity\", \"sensor\" : 7},
+               {\"name\" : \"PV\", \"sensor\" : 3}],
+    \"output\" : [{\"sensor\" : 8}]
+}" > headroom-parameters.json
+docker cp headroom-parameters.json flexmeasures-server-1:/app
+
+
+echo "[TUTORIAL-RUNNER] add report ..."
+
+docker exec -it flexmeasures-server-1 flexmeasures add report --reporter AggregatorReporter \
+   --parameters headroom-parameters.json --config headroom-config.json \
+   --start-offset DB,1D --end-offset DB,2D \
+   --resolution PT15M
+
+
+echo "[TUTORIAL-RUNNER] showing reported data ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 8 --start ${TOMORROW}T00:00:00+01:00 --duration PT24H"
+
+
+echo "[TUTORIAL-RUNNER] now the inflexible process ..."
+
+echo "
+{
+    \"input\" : [{\"sensor\" : 4}],
+    \"output\" : [{\"sensor\" : 9}]
+}" > inflexible-parameters.json
+
+docker cp inflexible-parameters.json flexmeasures-server-1:/app
+
+docker exec -it flexmeasures-server-1 flexmeasures add report --source 6 \
+   --parameters inflexible-parameters.json \
+   --start-offset DB,1D --end-offset DB,2D
+
+echo "[TUTORIAL-RUNNER] showing reported data ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 9 --start ${TOMORROW}T00:00:00+01:00 --duration PT24H"
+
+
+echo "[TUTORIAL-RUNNER] now the breakable process ..."
+
+echo "
+{
+    \"input\" : [{\"sensor\" : 5}],
+    \"output\" : [{\"sensor\" : 10}]
+}" > breakable-parameters.json
+
+docker cp breakable-parameters.json flexmeasures-server-1:/app
+
+docker exec -it flexmeasures-server-1 flexmeasures add report --source 6 \
+   --parameters breakable-parameters.json \
+   --start-offset DB,1D --end-offset DB,2D
+
+echo "[TUTORIAL-RUNNER] showing reported data ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 10 --start ${TOMORROW}T00:00:00+01:00 --duration PT24H"
+
+
+
+echo "[TUTORIAL-RUNNER] now the shiftable process ..."
+
+echo "
+{
+    \"input\" : [{\"sensor\" : 6}],
+    \"output\" : [{\"sensor\" : 11}]
+}" > shiftable-parameters.json
+
+docker cp shiftable-parameters.json flexmeasures-server-1:/app
+
+docker exec -it flexmeasures-server-1 flexmeasures add report --source 6 \
+   --parameters shiftable-parameters.json \
+   --start-offset DB,1D --end-offset DB,2D
+
+echo "[TUTORIAL-RUNNER] showing reported data ..."
+docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor 11 --start ${TOMORROW}T00:00:00+01:00 --duration PT24H"
+

+ 129 - 0
documentation/tut/toy-example-expanded.rst

@@ -0,0 +1,129 @@
+.. _tut_toy_schedule_expanded:
+
+
+
+Toy example II: Adding solar production and limited grid connection
+====================================================================
+
+
+So far we haven't taken into account any other devices that consume or produce electricity. The battery was free to use all available capacity towards the grid. 
+
+What if other devices will be using some of that capacity? Our schedules need to reflect that, so we stay within given limits.
+
+.. note:: The capacity is given by ``site-power-capacity``, an attribute we placed on the battery asset earlier (see :ref:`tut_toy_schedule`). We will tell FlexMeasures to take the solar production into account (using ``--inflexible-device-sensor``) for this capacity limit.
+
+We'll now add solar production forecast data and then ask for a new schedule, to see the effect of solar on the available headroom for the battery.
+
+
+Adding PV production forecasts
+------------------------------
+
+First, we'll create a new CSV file with solar forecasts (MW, see the setup for sensor 3 in part I of this tutorial) for tomorrow.
+
+.. code-block:: bash
+
+    $ TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+    $ echo "Hour,Price
+    $ ${TOMORROW}T00:00:00,0.0
+    $ ${TOMORROW}T01:00:00,0.0
+    $ ${TOMORROW}T02:00:00,0.0
+    $ ${TOMORROW}T03:00:00,0.0
+    $ ${TOMORROW}T04:00:00,0.01
+    $ ${TOMORROW}T05:00:00,0.03
+    $ ${TOMORROW}T06:00:00,0.06
+    $ ${TOMORROW}T07:00:00,0.1
+    $ ${TOMORROW}T08:00:00,0.14
+    $ ${TOMORROW}T09:00:00,0.17
+    $ ${TOMORROW}T10:00:00,0.19
+    $ ${TOMORROW}T11:00:00,0.21
+    $ ${TOMORROW}T12:00:00,0.22
+    $ ${TOMORROW}T13:00:00,0.21
+    $ ${TOMORROW}T14:00:00,0.19
+    $ ${TOMORROW}T15:00:00,0.17
+    $ ${TOMORROW}T16:00:00,0.14
+    $ ${TOMORROW}T17:00:00,0.1
+    $ ${TOMORROW}T18:00:00,0.06
+    $ ${TOMORROW}T19:00:00,0.03
+    $ ${TOMORROW}T20:00:00,0.01
+    $ ${TOMORROW}T21:00:00,0.0
+    $ ${TOMORROW}T22:00:00,0.0
+    $ ${TOMORROW}T23:00:00,0.0" > solar-tomorrow.csv
+
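As an aside, since this solar curve is symmetric around noon, you could also generate the same CSV from its first half (a sketch; the values hardcoded below mirror the block above):

```shell
# Generate the same bell-shaped solar curve from its symmetric half.
TOMORROW=$(date --date="next day" '+%Y-%m-%d')
half=(0.0 0.0 0.0 0.0 0.01 0.03 0.06 0.1 0.14 0.17 0.19 0.21 0.22)
echo "Hour,Price" > solar-tomorrow.csv
for h in $(seq 0 23); do
  i=$(( h <= 12 ? h : 24 - h ))   # mirror the hours after noon
  printf '%sT%02d:00:00,%s\n' "$TOMORROW" "$h" "${half[$i]}" >> solar-tomorrow.csv
done
```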
+Then, we read in the created CSV file as beliefs data.
+This time, different to above, we want to use a new data source (not the user) ― it represents whoever is making these solar production forecasts.
+We create that data source first, so we can tell `flexmeasures add beliefs` to use it.
+Setting the data source type to "forecaster" helps FlexMeasures to visually distinguish its data from e.g. schedules and measurements.
+
+.. note:: The ``flexmeasures add source`` command also allows to set a model and version, so sources can be distinguished in more detail. But that is not the point of this tutorial. See ``flexmeasures add source --help``.
+
+.. code-block:: bash
+
+    $ flexmeasures add source --name "toy-forecaster" --type forecaster
+    Added source <Data source 4 (toy-forecaster)>
+    $ flexmeasures add beliefs --sensor 3 --source 4 solar-tomorrow.csv --timezone Europe/Amsterdam
+    Successfully created beliefs
+
+The one-hour CSV data is automatically resampled to the 15-minute resolution of the sensor that is recording solar production. We can see solar production in the `FlexMeasures UI <http://localhost:5000/sensors/3>`_ :
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-production.png
+    :align: center
+|
+
+.. note:: The ``flexmeasures add beliefs`` command has many options to make sure the read-in data is correctly interpreted (unit, timezone, delimiter, etc). But that is not the point of this tutorial. See ``flexmeasures add beliefs --help``.
+
+
+Trigger an updated schedule
+----------------------------
+
+Now, we'll reschedule the battery while taking into account the solar production. This will have an effect on the available headroom for the battery, given the ``site-power-capacity`` limit discussed earlier.
+
+.. code-block:: bash
+
+    $ flexmeasures add schedule for-storage --sensor 2 --consumption-price-sensor 1 \
+        --inflexible-device-sensor 3 \
+        --start ${TOMORROW}T07:00+02:00 --duration PT12H \
+        --soc-at-start 50% --roundtrip-efficiency 90%
+    New schedule is stored.
+
+We can see the updated scheduling in the `FlexMeasures UI <http://localhost:5000/sensors/2>`_ :
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-charging-with-solar.png
+    :align: center
+|
+
+The `graphs page for the battery <http://localhost:5000/assets/3/graphs>`_ now shows the solar data, too:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/asset-view-with-solar.png
+    :align: center
+|
+
+Though this schedule is quite similar, we can see that it has changed from `the one we computed earlier <https://raw.githubusercontent.com/FlexMeasures/screenshots/main/tut/toy-schedule/asset-view-without-solar.png>`_ (when we did not take solar into account).
+
+First, during the sunny hours of the day, when solar power is being sent to the grid, the battery's output (at around 9am and 11am) is now lower, as the battery shares the ``site-power-capacity`` with the solar production. In the evening (around 7pm), when solar power is basically not present anymore, battery discharging to the grid is still at its previous levels.
+
+Second, charging of the battery is also changed a bit (around 10am), as less can be discharged later.
+
+Moreover, we can use reporters to compute the capacity headroom (see :ref:`tut_toy_schedule_reporter` for more details). The image below shows that the scheduler is respecting the capacity limits.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-headroom-pv.png
+    :align: center
+|
+
+In the case of the scheduler that we ran in the previous tutorial, which did not yet consider the PV, the discharge power would have exceeded the headroom:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-headroom-nopv.png
+    :align: center
+|
+
+.. note:: You can add arbitrary sensors to a chart using the asset UI or the attribute ``sensors_to_show``. See :ref:`view_asset-data` for more.
+
+A nice feature is that you can check the data connectivity status of your building asset. Now that we have made the schedule, both lamps are green. You can also view it in the `FlexMeasures UI <http://localhost:5000/assets/2/status>`_ :
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/screenshot_building_status.png
+    :align: center
+|
+
+We hope this part of the tutorial shows how to incorporate a limited grid connection rather easily with FlexMeasures. There are more ways to model such settings, but this is a straightforward one.
+
+This tutorial showed a quick way to add an inflexible load (like solar power) and a grid connection.
+In :ref:`tut_v2g`, we will temporarily pause giving you tutorials you can follow step-by-step. We feel it is time to pay more attention to the power of the flex-model, and illustrate its effects.

+ 82 - 0
documentation/tut/toy-example-from-scratch.rst

@@ -0,0 +1,82 @@
+.. _tut_toy_schedule:
+
+Toy example I: Scheduling a battery, from scratch
+=================================================
+
+Let's walk through an example from scratch! We'll optimize a 12h-schedule for a battery that is half full.
+
+Okay, let's get started!
+
+.. note:: You can copy the commands by hovering on the top right corner of code examples. You'll copy only the commands, not the output!
+
+.. note:: If you haven't run through :ref:`tut_install_load_data` yet, do that first. There, we added power prices for a 24h window.
+
+
+
+
+Make a schedule
+---------------------------------------
+
+After going through the setup, we can finally create the schedule, which is the main benefit of FlexMeasures (smart real-time control).
+
+We'll ask FlexMeasures for a schedule for our (dis)charging sensor (ID 2). We also need to specify what to optimize against. Here we pass the ID of our market price sensor (ID 1).
+To keep it short, we'll only ask for a 12-hour window starting at 7am. Finally, the scheduler should know what the state of charge of the battery is when the schedule starts (50%) and what its roundtrip efficiency is (90%).
+
+.. code-block:: bash
+
+    $ flexmeasures add schedule for-storage --sensor 2 --consumption-price-sensor 1 \
+        --start ${TOMORROW}T07:00+01:00 --duration PT12H \
+        --soc-at-start 50% --roundtrip-efficiency 90%
+    New schedule is stored.
+
+Great. Let's see what we made:
+
+.. code-block:: bash
+
+    $ flexmeasures show beliefs --sensor 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H
+    Beliefs for Sensor 'discharging' (ID 2).
+    Data spans 12 hours and starts at 2022-03-04 07:00:00+01:00.
+    The time resolution (x-axis) is 15 minutes.
+    ┌────────────────────────────────────────────────────────────┐
+    │   ▐            ▐▀▀▌                                     ▛▀▀│ 0.5MW
+    │   ▞▌           ▌  ▌                                     ▌  │
+    │   ▌▌           ▌  ▐                                    ▗▘  │
+    │   ▌▌           ▌  ▐                                    ▐   │
+    │  ▐ ▐          ▐   ▐                                    ▐   │
+    │  ▐ ▐          ▐   ▝▖                                   ▞   │
+    │  ▌ ▐          ▐    ▌                                   ▌   │
+    │ ▐  ▝▖         ▌    ▌                                   ▌   │
+    │▀▘───▀▀▀▀▖─────▌────▀▀▀▀▀▀▀▀▀▌─────▐▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▘───│ 0.0MW
+    │         ▌    ▐              ▚     ▌                        │
+    │         ▌    ▞              ▐    ▗▘                        │
+    │         ▌    ▌              ▐    ▞                         │
+    │         ▐   ▐               ▝▖   ▌                         │
+    │         ▐   ▐                ▌  ▗▘                         │
+    │         ▐   ▌                ▌  ▐                          │
+    │         ▝▖  ▌                ▌  ▞                          │
+    │          ▙▄▟                 ▐▄▄▌                          │ -0.5MW
+    └────────────────────────────────────────────────────────────┘
+               10           20           30          40
+                            ██ discharging
+
+
+Here, negative values denote output from the grid, so that's when the battery gets charged.
+
+We can also look at the charging schedule in the `FlexMeasures UI <http://localhost:5000/sensors/2>`_ (reachable via the asset page for the battery):
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-charging.png
+    :align: center
+|
+
+Recall that we only asked for a 12 hour schedule here. We started our schedule *after* the high price peak (at 4am) and it also had to end *before* the second price peak fully realized (at 8pm). Our scheduler didn't have many opportunities to optimize, but it found some. For instance, it does buy at the lowest price (at 2pm) and sells it off at the highest price within the given 12 hours (at 6pm).
+
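To see why this pays off, a back-of-the-envelope calculation: charging 1 MWh at the 2pm price and discharging 0.9 MWh (due to the 90% roundtrip efficiency) at the 6pm price yields:

```shell
# Arbitrage value per MWh charged, using prices from the tutorial's CSV
# (4 EUR/MWh at 2pm, 12 EUR/MWh at 6pm) and 90% roundtrip efficiency.
profit=$(awk 'BEGIN { printf "%.2f", 12 * 0.9 - 4 }')
echo "profit = $profit EUR per MWh charged"   # 6.80
```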
+The `battery's graph dashboard <http://localhost:5000/assets/3/graphs>`_ shows both prices and the schedule.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/asset-view-without-solar.png
+    :align: center
+|
+
+.. note:: The ``flexmeasures add schedule for-storage`` command also accepts state-of-charge targets, so the schedule can be more sophisticated.
+   And even more control over schedules is possible through the ``flex-model`` in our API. But that is not the point of this tutorial.
+   See ``flexmeasures add schedule for-storage --help`` for available CLI options, :ref:`describing_flexibility` for all flex-model fields or check out the :ref:`tut_v2g` for a tangible example of modelling storage constraints.
+
+This tutorial showed the fastest way to a schedule. In :ref:`tut_toy_schedule_expanded`, we'll go further into settings with more realistic ingredients: solar panels and a limited grid connection.

+ 129 - 0
documentation/tut/toy-example-process.rst

@@ -0,0 +1,129 @@
+.. _tut_toy_schedule_process:
+
+Toy example III: Computing schedules for processes
+====================================================
+
+Until this point we've been using a static battery, one of the most flexible energy assets, to reduce electricity bills. A battery can modulate rather freely, and both charge and discharge.
+
+
+However, in some settings, we can reduce electricity bills by **just** smartly timing the necessary work that we know we have to do. We call this work a "process". In other words, if the process can be displaced (by breaking it into smaller consumption periods, or by shifting its start time), its run can better match the hours with lower prices.
+
+For example, we could have a load that consumes energy at a constant rate (e.g. 200kW) for a fixed duration (e.g. 4h), but there's some flexibility in the start time. In that case, we could find the optimal start time in order to minimize the energy cost.
+
+Examples of flexible processes are: 
+    - Water irrigation in agriculture
+    - Mechanical pulping in the paper industry
+    - Water pumping in waste water management
+    - Cooling for the food industry
+
+
+For consumers under :abbr:`ToU (Time of Use)` tariffs, FlexMeasures `ProcessScheduler` can plan the start time of the process to minimize the overall cost of energy.
+Alternatively, it can create a consumption plan to minimize the CO₂ emissions.
+
+
+In this tutorial, you'll learn how to schedule processes using three different policies: INFLEXIBLE, BREAKABLE and SHIFTABLE. 
+
+Moreover, we'll touch upon the use of time restrictions to avoid scheduling a process in certain times of the day.
+
+
+Setup
+.....
+
+
+Before moving forward, we'll add the `process` asset and three sensors to store the schedules that result from following three different policies.
+
+.. code-block:: bash
+
+    $ flexmeasures add toy-account --kind process
+    
+        User with email toy-user@flexmeasures.io already exists in account Docker Toy Account.
+        The sensor recording day-ahead prices is day-ahead prices (ID: 1).
+        Created <GenericAsset 5: 'toy-process' (process)>
+        Created <Sensor 4: Power (Inflexible), unit: MW res.: 0:15:00>
+        Created <Sensor 5: Power (Breakable), unit: MW res.: 0:15:00>
+        Created <Sensor 6: Power (Shiftable), unit: MW res.: 0:15:00>
+        The sensor recording the power of the inflexible load is Power (Inflexible) (ID: 4).
+        The sensor recording the power of the breakable load is Power (Breakable) (ID: 5).
+        The sensor recording the power of the shiftable load is Power (Shiftable) (ID: 6).
+
+
+Trigger an updated schedule
+----------------------------
+
+In this example, we are planning to consume at a constant power of 200kW for a period of 4h.
+
+This load is to be scheduled for tomorrow, except for the period from 3pm to 4pm (imposed using the ``--forbid`` flag).
+
+
+Now we are ready to schedule a process. Let's start with the INFLEXIBLE policy, the simplest.
+
+.. code-block:: bash
+
+    flexmeasures add schedule for-process --sensor 4 --consumption-price-sensor 1\
+      --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --process-duration PT4H \
+      --process-power 0.2MW --process-type INFLEXIBLE \ 
+      --forbid "{\"start\" : \"${TOMORROW}T15:00:00+02:00\", \"duration\" : \"PT1H\"}"
+
+Under the INFLEXIBLE policy, the process starts as soon as possible, in this case, coinciding with the start of the planning window.
+
+After the INFLEXIBLE policy, we'll schedule the same 4h block using the BREAKABLE policy.
+
+.. code-block:: bash
+
+    flexmeasures add schedule for-process --sensor 5 --consumption-price-sensor 1\
+      --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --process-duration PT4H \
+      --process-power 0.2MW --process-type BREAKABLE \ 
+      --forbid "{\"start\" : \"${TOMORROW}T15:00:00+02:00\", \"duration\" : \"PT1H\"}"
+ 
+The BREAKABLE policy splits or breaks the process into blocks that can be scheduled discontinuously. The smallest possible unit is (currently) determined by the sensor's resolution. 
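
For a sense of the granularity involved: at the 15-minute sensor resolution used here, the 4-hour process can be broken into as many as 16 blocks (a quick check):

```shell
# Number of 15-minute blocks in a 4-hour process
blocks=$(( 4 * 60 / 15 ))
echo "$blocks"   # 16
```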
+
+Finally, we'll schedule the process using the SHIFTABLE policy.
+
+.. code-block:: bash
+
+    flexmeasures add schedule for-process --sensor 6 --consumption-price-sensor 1\
+      --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --process-duration PT4H \
+      --process-power 0.2MW --process-type SHIFTABLE \ 
+      --forbid "{\"start\" : \"${TOMORROW}T15:00:00+02:00\", \"duration\" : \"PT1H\"}"
+ 
+
+Results
+---------
+
+The image below shows the resulting schedules following each of the three policies.
+You will see similar results in your `FlexMeasures UI <http://localhost:5000/assets/5/graphs>`_. 
+
+ 
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/asset-view-process.png
+    :align: center
+|
+
+
+In the first policy, there's no flexibility and the process is scheduled as soon as possible.
+Meanwhile, in the BREAKABLE policy, the consumption blocks surround the time restriction so as to consume in the cheapest hours. Among the three policies, the BREAKABLE policy achieves the best prices, as it is the most flexible.
+Finally, in the SHIFTABLE policy, the process is shifted to capture the best prices, avoiding the time restrictions.
+
+
+Let's list the power price the policies achieved for each of the four blocks they scheduled:
+
+.. _table-process:
+
++-------------------------+------------+-----------+-----------+
+|          Block          | INFLEXIBLE | BREAKABLE | SHIFTABLE |
++=========================+============+===========+===========+
+|            1            |   10.00    |   5.00    |   10.00   |
++-------------------------+------------+-----------+-----------+
+|            2            |   11.00    |   4.00    |   8.00    |
++-------------------------+------------+-----------+-----------+
+|            3            |   12.00    |   5.50    |   5.00    |
++-------------------------+------------+-----------+-----------+
+|            4            |   15.00    |   7.00    |   4.00    |
++-------------------------+------------+-----------+-----------+
+| Average Price (EUR/MWh) |   12.00    |   5.37    |   6.75    |
++-------------------------+------------+-----------+-----------+
+|    Total Cost (EUR)     |    9.60    |   4.29    |   5.40    |
++-------------------------+------------+-----------+-----------+
+
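As a rough sanity check on the table, we can recompute the INFLEXIBLE row by hand (assumption: four one-hour blocks of 0.2 MW, i.e. 0.2 MWh each, at the listed block prices):

```shell
# Recompute average price and total cost for the INFLEXIBLE policy.
line=$(awk 'BEGIN {
  split("10 11 12 15", p, " ")
  for (i = 1; i <= 4; i++) sum += p[i]
  printf "average = %.2f EUR/MWh, total = %.2f EUR", sum / 4, 0.2 * sum
}')
echo "$line"   # average = 12.00 EUR/MWh, total = 9.60 EUR
```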
+Quantitatively, comparing the total cost of running the process under each policy, the BREAKABLE policy achieves the best results. This is because it can fit many more consumption blocks in the cheapest hours.
+
+This tutorial showed a quick way to optimize the activation of processes. In :ref:`tut_toy_schedule_reporter`, we'll turn away from scheduling, and towards another important FlexMeasures feature: using *reporters* to apply transformations to sensor data.

+ 257 - 0
documentation/tut/toy-example-reporter.rst

@@ -0,0 +1,257 @@
+.. _tut_toy_schedule_reporter:
+
+Toy example IV: Computing reports
+=====================================
+
+So far, we have worked on scheduling batteries and processes. Now, we are moving to one of the other three pillars of FlexMeasures: reporting. 
+
+In essence, reporters apply arbitrary transformations to data coming from some sensors (multiple inputs) and save the results to other sensors (multiple outputs). In practice, this makes it possible to compute KPIs (such as profit or total daily energy production), to apply operations to beliefs (e.g. changing the sign of a power sensor for some time period), and more.
+
+.. note:: 
+    Currently, FlexMeasures comes with the following reporters:
+        - `PandasReporter`: applies arbitrary `Pandas <https://pandas.pydata.org>`_ methods to sensor data. 
+        - `AggregatorReporter`: combines data from multiple sensors into one using any of the methods supported by the Pandas `aggregate` function (e.g. sum, average, max, min...).
+        - `ProfitOrLossReporter`: computes the profit/loss due to an energy flow under a specific tariff.
+
+    Moreover, it's possible to implement your own custom reporters in plugins. Instructions for this will follow.
+
+Now, coming back to the tutorial, we are going to use the `AggregatorReporter` and the `ProfitOrLossReporter`.
+In the first part, we'll use the `AggregatorReporter` to compute the (discharge) headroom of the battery in :ref:`tut_toy_schedule_expanded`. That way, we can verify the maximum power at which the battery can discharge at any point of time.
+In the second part, we'll use the `ProfitOrLossReporter` to compute the costs of operating the process of Tut. Part III under the different policies.
+
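In the first part, the headroom report is conceptually just a weighted sum over two sensors. A minimal sketch of that arithmetic (the capacity and PV numbers here are illustrative assumptions, not tutorial data):

```shell
# What the AggregatorReporter computes for the headroom report: a weighted
# sum of its inputs (+1 for grid capacity, -1 for PV production).
# The two numbers below are illustrative only.
headroom=$(awk 'BEGIN { printf "%.2f", 1.0 * 0.5 + (-1.0) * 0.22 }')
echo "headroom = $headroom MW"   # 0.28
```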
+Before getting to the meat of the tutorial, we need to set up all the entities. Instead of having to do that manually (e.g. using commands such as ``flexmeasures add sensor``), we have prepared a command that does that automatically.
+
+Setup
+.....
+
+Just as in previous sections, we need to run the command ``flexmeasures add toy-account``, but this time with a different value for *kind*:
+
+.. code-block:: bash
+
+    $ flexmeasures add toy-account --kind reporter
+
+Under the hood, this command is adding the following entities:
+    - A sensor that stores the capacity of the grid connection (with a resolution of one year, so it stores just one value).
+    - A power sensor, `headroom`, to store the remaining capacity for the battery. This is where we'll store the report.
+    - A `ProfitOrLossReporter` configured to use the prices that we set up in Tut. Part II.
+    - Three sensors to register the profits/losses from running the three different processes of Tut. Part III.
+
+.. note:: The above command should also print out the IDs of these sensors. We will use these IDs verbatim in this tutorial.
+
+Let's check it out! 
+
+Run the command below to show the values for our newly-created `grid connection capacity`:
+
+.. code-block:: bash
+
+    $ TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+    $ flexmeasures show beliefs --sensor 7 --start ${TOMORROW}T00:00:00+02:00 --duration PT24H --resolution PT1H
+      
+      Beliefs for Sensor 'grid connection capacity' (ID 7).
+        Data spans a day and starts at 2023-08-14 00:00:00+02:00.
+        The time resolution (x-axis) is an hour.
+        ┌────────────────────────────────────────────────────────────┐
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ 1.0MW
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ 
+        │▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀│ 0.5MW
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ 
+        │▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁│ 0.0MW
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ 
+        │                                                            │ -0.5MW
+        └────────────────────────────────────────────────────────────┘
+                5            10            15           20
+                        ██ grid connection capacity
+
+
+Moreover, we can check the freshly created source `<Source id=6>`, which defines the `ProfitOrLossReporter` with the required configuration.
+You'll notice that the `config` is under the `data_generator` field.
+That's because reporters belong to a bigger category of classes that also contains the `Schedulers` and `Forecasters`.
+
+.. code-block:: bash
+
+    $ flexmeasures show data-sources --show-attributes --id 6
+
+        type: reporter
+        ========
+
+         ID  Name          User ID    Model                 Version    Attributes
+       ----  ------------  ---------  --------------------  ---------  ------------------------------------------
+          6  FlexMeasures             ProfitOrLossReporter             {
+                                                                           "data_generator": {
+                                                                               "config": {
+                                                                                   "consumption_price_sensor": 1,
+                                                                                   "loss_is_positive": true
+                                                                               }
+                                                                           }
+                                                                       }
+
+
+Compute headroom
+-------------------
+
+In this case, the discharge headroom is nothing but the difference between the grid connection capacity and the PV power.
+To compute that quantity, we can use the `AggregatorReporter` with weights that subtract the PV power from the grid connection capacity.
+
+In practice, we need to create the `config` and `parameters`:
+
+.. code-block:: bash
+
+    $ echo "
+    $ {
+    $    'weights' : {
+    $        'grid connection capacity' : 1.0,
+    $        'PV' : -1.0,
+    $    }
+    $ }" > headroom-config.json
+
+
+.. code-block:: bash
+
+    $ echo "
+    $ {
+    $     'input' : [{'name' : 'grid connection capacity','sensor' : 7},
+    $                {'name' : 'PV', 'sensor' : 3}],
+    $     'output' : [{'sensor' : 8}]
+    $ }" > headroom-parameters.json
+
+The output sensor (ID: 8) is the one created precisely to store that information: the headroom our battery has when considering solar production.
+
+Finally, we can create the report with the following command:
+
+.. code-block:: bash
+
+    $ flexmeasures add report --reporter AggregatorReporter \
+       --parameters headroom-parameters.json --config headroom-config.json \
+       --start-offset DB,1D --end-offset DB,2D \
+       --resolution PT15M
+
+Now we can visualize the diminished headroom at `this link <http://localhost:5000/sensors/8/graphs>`_; the result should resemble the following image:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-headroom.png
+    :align: center
+|
+
+The graph shows that the capacity of the grid is at full disposal for the battery when there's no sun (thus no PV generation), while at noon the battery can only discharge at 280kW max.
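+
+Under the hood, the `AggregatorReporter` computes a weighted sum of its input series, so with the weights above the headroom boils down to capacity minus PV. An illustrative pandas sketch (made-up values, not the actual implementation):

```python
import pandas as pd

idx = pd.date_range("2023-08-14", periods=4, freq="1h", tz="Europe/Amsterdam")

# Made-up values: constant grid connection capacity and a midday PV bump (MW)
capacity = pd.Series([0.5, 0.5, 0.5, 0.5], index=idx)
pv = pd.Series([0.0, 0.125, 0.25, 0.125], index=idx)

# The aggregation here is a weighted sum of the two inputs:
weights = {"grid connection capacity": 1.0, "PV": -1.0}
headroom = weights["grid connection capacity"] * capacity + weights["PV"] * pv

print(headroom.tolist())  # [0.5, 0.375, 0.25, 0.375]
```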
+
+Process scheduler profit
+-------------------------
+
+For the second part of this tutorial, we are going to use the `ProfitOrLossReporter` to compute the losses (defined as `cost - revenue`) of operating the process from Tut.
+Part III, under the three different policies: INFLEXIBLE, BREAKABLE and SHIFTABLE.
+
+In addition, we'll explore another way to invoke reporters: data generators.
+Without going too much into detail, data generators create new data.
+The three main types are: `Reporters`, `Schedulers` and `Forecasters`.
+This will come in handy, as the three reports that we are going to create share the same `config`.
+The `config` defines the price sensor to use and sets the reporter to work in **losses** mode, which means that it will return costs as positive values and revenue as negative values.
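+
+In **losses** mode, the arithmetic amounts to multiplying each consumed energy amount by its price. A simplified sketch with made-up numbers (the real reporter also handles units, resolutions and production):

```python
# Three 15-minute periods: energy consumed (MWh) and prices (EUR/MWh)
energy_mwh = [0.2, 0.2, 0.2]
prices_eur_per_mwh = [10.0, 5.0, 8.0]

# losses mode: consumption costs come out positive;
# production (negative energy) would yield negative values (revenue)
loss = sum(e * p for e, p in zip(energy_mwh, prices_eur_per_mwh))
print(round(loss, 2))  # 4.6
```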
+
+Still, we need to define the parameters.
+The three reports share the same structure for the parameters with the following fields:
+
+    - `input`: sensor that stores the power/energy flow. The number of sensors is limited to 1.
+    - `output`: sensor to store the report. We can provide sensors with different resolutions to store the same results at different time scales.
+
+.. note::
+    It's possible to define the `config` and `parameters` in JSON or YAML formats.
+
+After setting up `config` and `parameters`, we can invoke the reporter using the command ``flexmeasures add report``.
+The command takes the data source id, the files containing the parameters and the timing parameters (start and end).
+For this particular case, we make use of the offsets to indicate that we want the report to encompass the day of tomorrow.
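+
+Here, `DB` stands for "day begin": the reference time is floored to midnight before the day offset is added. A hedged sketch of how we read ``--start-offset DB,1D --end-offset DB,2D`` (our interpretation in plain Python, not the actual CLI parser):

```python
from datetime import datetime, timedelta

def resolve_offsets(reference: datetime, spec: str) -> datetime:
    # Minimal reading of an offset spec like "DB,1D":
    # "DB" floors to day begin, "ND" adds N days.
    t = reference
    for off in spec.split(","):
        if off == "DB":
            t = t.replace(hour=0, minute=0, second=0, microsecond=0)
        elif off.endswith("D"):
            t += timedelta(days=int(off[:-1]))
    return t

now = datetime(2023, 8, 14, 15, 30)
print(resolve_offsets(now, "DB,1D"))  # 2023-08-15 00:00:00 (tomorrow, day begin)
print(resolve_offsets(now, "DB,2D"))  # 2023-08-16 00:00:00
```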
+
+Inflexible process
+^^^^^^^^^^^^^^^^^^^
+
+Define parameters in a JSON file:
+
+.. code-block:: bash
+
+    $ echo "
+    $ {
+    $     'input' : [{'sensor' : 4}],
+    $     'output' : [{'sensor' : 9}]
+    $ }" > inflexible-parameters.json
+
+Create report:
+
+.. code-block:: bash
+
+    $ flexmeasures add report --source 6 \
+       --parameters inflexible-parameters.json \
+       --start-offset DB,1D --end-offset DB,2D
+
+
+Check the results `here <http://localhost:5000/sensors/9>`_. The image should be similar to the one below.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-inflexible.png
+    :align: center
+|
+
+
+Breakable process
+^^^^^^^^^^^^^^^^^^^
+Define parameters in a JSON file:
+
+.. code-block:: bash
+
+    $ echo "
+    $ {
+    $     'input' : [{'sensor' : 5}],
+    $     'output' : [{'sensor' : 10}]
+    $ }" > breakable-parameters.json
+
+Create report:
+
+.. code-block:: bash
+
+    $ flexmeasures add report --source 6 \
+       --parameters breakable-parameters.json \
+       --start-offset DB,1D --end-offset DB,2D
+
+Check the results `here <http://localhost:5000/sensors/10>`_. The image should be similar to the one below.
+
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-breakable.png
+    :align: center
+|
+
+Shiftable process
+^^^^^^^^^^^^^^^^^^^
+
+Define parameters in a JSON file:
+
+.. code-block:: bash
+
+    $ echo "
+    $ {
+    $     'input' : [{'sensor' : 6}],
+    $     'output' : [{'sensor' : 11}]
+    $ }" > shiftable-parameters.json
+
+Create report:
+
+.. code-block:: bash
+
+    $ flexmeasures add report --source 6 \
+       --parameters shiftable-parameters.json \
+       --start-offset DB,1D --end-offset DB,2D
+
+Check the results `here <http://localhost:5000/sensors/11>`_. The image should be similar to the one below.
+
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-shiftable.png
+    :align: center
+|
+
+
+Now, we can compare the results of the reports to the ones we computed manually in :ref:`this table <table-process>`. Keep in mind that the
+report shows the profit/loss of each 15-minute period; summing them all shows that it matches our previous results.

+ 348 - 0
documentation/tut/toy-example-setup.rst

@@ -0,0 +1,348 @@
+.. _tut_install_load_data:
+
+Toy example: Introduction and setup
+===================================
+
+This page is a starting point of a series of tutorials that will help you get practical experience with FlexMeasures.
+
+Let's walk through an example from scratch! We'll ... 
+
+- install FlexMeasures
+- create an account
+- load hourly prices
+
+What do you need? Your own computer, in one of two situations: either you have `Docker <https://www.docker.com/>`_, or your computer supports Python 3.8+, pip and PostgreSQL. The former might be easier; see the installation step below. But you choose.
+
+Below are the ``flexmeasures`` CLI commands we'll run, and which we'll explain step by step. There are some other crucial steps for installation and setup, so this becomes a complete example from scratch, but this is the meat:
+
+.. code-block:: bash
+
+    # setup an account with a user, assets for battery & solar and an energy market (ID 1)
+    $ flexmeasures add toy-account
+    # load prices to optimise the schedule against
+    $ flexmeasures add beliefs --sensor 1 --source toy-user prices-tomorrow.csv --timezone Europe/Amsterdam
+
+
+Okay, let's get started!
+
+
+.. note:: You can copy the commands by hovering on the top right corner of code examples. You'll copy only the commands, not the output!
+
+Install FlexMeasures and the database
+---------------------------------------
+
+.. tabs::
+
+  .. tab:: Docker
+
+        If `docker <https://www.docker.com/>`_ is running on your system, you're good to go. Otherwise, see `here <https://docs.docker.com/get-docker/>`_.
+
+        We start by installing the FlexMeasures platform, and then use Docker to run a postgres database and tell FlexMeasures to create all tables.
+
+        .. code-block:: bash
+
+            $ docker pull lfenergy/flexmeasures:latest
+            $ docker pull postgres
+            $ docker network create flexmeasures_network
+
+        .. note:: A tip on Linux/macOS ― You might have the ``docker`` command, but need `sudo` rights to execute it.
+                  ``alias docker='sudo docker'`` enables you to still run this tutorial.
+
+        After running these commands, we can start the Postgres database and the FlexMeasures app with the following commands:
+
+        .. code-block:: bash
+
+            $ docker run --rm --name flexmeasures-tutorial-db -e POSTGRES_PASSWORD=fm-db-passwd -e POSTGRES_DB=flexmeasures-db -d --network=flexmeasures_network postgres:latest
+            $ docker run --rm --name flexmeasures-tutorial-fm --env SQLALCHEMY_DATABASE_URI=postgresql://postgres:fm-db-passwd@flexmeasures-tutorial-db:5432/flexmeasures-db --env SECRET_KEY=notsecret --env FLEXMEASURES_ENV=development --env LOGGING_LEVEL=INFO -d --network=flexmeasures_network -p 5000:5000 lfenergy/flexmeasures
+
+        When the app has started, the FlexMeasures UI should be available at http://localhost:5000 in your browser.
+
+        .. include:: ../notes/macOS-docker-port-note.rst
+
+        To establish the FlexMeasures database structure, execute:
+
+        .. code-block:: bash
+
+            $ docker exec flexmeasures-tutorial-fm bash -c "flexmeasures db upgrade"
+
+        Now - what's *very important* to remember is this: The rest of this tutorial will happen *inside* the ``flexmeasures-tutorial-fm`` container! This is how you hop inside the container and run a terminal there:
+
+        .. code-block:: bash
+
+            $ docker exec -it flexmeasures-tutorial-fm bash
+
+        To leave the container session, hold CTRL-D or type "exit".
+
+        To stop the containers, you can type
+
+        .. code-block:: bash
+
+            $ docker stop flexmeasures-tutorial-db
+            $ docker stop flexmeasures-tutorial-fm
+
+        To start the containers again, do this (note that re-running the `docker run` commands above *deletes and re-creates* all data!):
+
+        .. code-block:: bash
+
+            $ docker start flexmeasures-tutorial-db
+            $ docker start flexmeasures-tutorial-fm
+
+        .. note:: Got docker-compose? You could run this tutorial with 5 containers :) ― Go to :ref:`docker-compose-tutorial`.
+
+  .. tab:: On your PC
+
+        This example is from scratch, so we'll assume you have nothing prepared but a (Unix) computer with Python (3.8+) and two well-known developer tools, `pip <https://pip.pypa.io>`_ and `postgres <https://www.postgresql.org/download/>`_.
+
+        We'll create a database for FlexMeasures:
+
+        .. code-block:: bash
+
+            $ sudo -i -u postgres
+            $ createdb -U postgres flexmeasures-db
+            $ createuser --pwprompt -U postgres flexmeasures-user      # enter your password, we'll use "fm-db-passwd"
+            $ exit
+
+        Then, we can install FlexMeasures itself, set some variables and tell FlexMeasures to create all tables:
+
+        .. code-block:: bash
+
+            $ pip install flexmeasures
+            $ pip install highspy
+            $ export SQLALCHEMY_DATABASE_URI="postgresql://flexmeasures-user:fm-db-passwd@localhost:5432/flexmeasures-db" SECRET_KEY=notsecret LOGGING_LEVEL="INFO" DEBUG=0
+            $ export FLEXMEASURES_ENV="development"
+            $ flexmeasures db upgrade
+
+        .. note:: When installing with ``pip``, on some platforms problems might come up (e.g. macOS, Windows). One reason is that FlexMeasures requires some libraries with lots of C code support (e.g. Numpy). One way out is to use Docker, which uses a prepared Linux image, so it'll definitely work.
+
+In case you want to re-run the tutorial, it's recommended to delete the old database and create a fresh one. Run the following command to create a clean database; providing a user is optional. If you don't provide one, the default `postgres` user will be used to create the database.
+
+        .. code-block:: bash
+
+            $ make clean-db db_name=flexmeasures-db [db_user=flexmeasures]
+
+        To start the web application, you can run:
+
+        .. code-block:: bash
+
+            $ flexmeasures run
+
+        When started, the FlexMeasures UI should be available at http://localhost:5000 in your browser.
+
+        .. include:: ../notes/macOS-port-note.rst
+
+
+Add some structural data
+---------------------------------------
+
+The data we need for our example is both structural (e.g. a company account, a user, an asset) and numeric (we want market prices to optimize against).
+
+Let's create the structural data first.
+
+FlexMeasures offers a command to create a toy account with a battery:
+
+.. code-block:: bash
+
+    $ flexmeasures add toy-account --kind battery
+
+    Generic asset type `solar` created successfully.
+    Generic asset type `wind` created successfully.
+    Generic asset type `one-way_evse` created successfully.
+    Generic asset type `two-way_evse` created successfully.
+    Generic asset type `battery` created successfully.
+    Generic asset type `building` created successfully.
+    Generic asset type `process` created successfully.
+    Creating account Toy Account ...
+    Toy account Toy Account with user toy-user@flexmeasures.io created successfully. You might want to run `flexmeasures show account --id 1`
+    Adding transmission zone type ...
+    Adding NL transmission zone ...
+    Created day-ahead prices
+    The sensor recording day-ahead prices is day-ahead prices (ID: 1).
+    Created <GenericAsset None: 'toy-battery' (battery)>
+    Created discharging
+    Created <GenericAsset None: 'toy-solar' (solar)>
+    Created production
+    The sensor recording battery discharging is discharging (ID: 2).
+    The sensor recording solar forecasts is production (ID: 3).
+
+
+
+And with that, we're done with the structural data for this tutorial!
+
+If you want, you can inspect what you created:
+
+.. code-block:: bash
+
+    $ flexmeasures show account --id 1
+
+    ===========================
+    Account Toy Account (ID: 1)
+    ===========================
+
+    Account has no roles.
+
+    All users:
+    
+    ID  Name      Email                     Last Login    Last Seen    Roles
+    ----  --------  ------------------------  ------------  -----------  -------------
+    1  toy-user  toy-user@flexmeasures.io  None          None         account-admin
+
+    All assets:
+    
+    ID  Name           Type     Location
+    ----  -----------  -------  -----------------
+    2  toy-building   building  (52.374, 4.88969)
+    3  toy-battery    battery   (52.374, 4.88969)
+    4  toy-solar      solar     (52.374, 4.88969)
+
+.. code-block:: bash
+
+    $ flexmeasures show asset --id 2
+
+    =========================
+    Asset toy-building (ID: 2)
+    =========================
+
+    Type      Location           Attributes
+    -------   -----------------  ----------------------------
+    building  (52.374, 4.88969)
+
+    ====================================
+    Child assets of toy-building (ID: 2)
+    ====================================
+
+    Id       Name               Type
+    -------  -----------------  ----------------------------
+    3        toy-battery        battery
+    4        toy-solar          solar
+
+    No sensors in asset ...
+
+    $ flexmeasures show asset --id 3
+
+    ===================================
+    Asset toy-battery (ID: 3)
+    Child of asset toy-building (ID: 2)
+    ===================================
+    Type     Location           Flex-Context                      Sensors to show      Attributes
+    -------  -----------------  --------------------------------  -------------------  -----------------------
+    battery  (52.374, 4.88969)  consumption-price: {'sensor': 1}  Prices: [1]          capacity_in_mw: 500 kVA
+                                                                  Power flows: [3, 2]  min_soc_in_mwh: 0.05
+                                                                                       max_soc_in_mwh: 0.45
+
+    ====================================
+    Child assets of toy-battery (ID: 3)
+    ====================================
+
+    No children assets ...
+
+    All sensors in asset:
+    
+    ID  Name         Unit    Resolution    Timezone          Attributes
+    ----  -----------  ------  ------------  ----------------  ------------
+    2  discharging  MW      15 minutes    Europe/Amsterdam
+    
+
+Yes, that is quite a large battery :) 
+You can also see that the asset has some meta information about its scheduling. Within :ref:`flex_context`, we noted where to find the relevant optimization signal for electricity consumption (Sensor 1, which stores day-ahead prices). 
+
+.. note:: Obviously, you can use the ``flexmeasures`` command to create your own, custom account and assets. See :ref:`cli`. And to create, edit or read asset data via the API, see :ref:`v3_0`.
+
+We can also look at the battery asset in the UI of FlexMeasures (in Docker, the FlexMeasures web server already runs, on your PC you can start it with ``flexmeasures run``).
+Visit `http://localhost:5000/ <http://localhost:5000/>`_ (username is "toy-user@flexmeasures.io", password is "toy-password"):
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/asset-view-dashboard.png
+    :align: center
+|
+
+.. note:: You won't see the map tiles, as we have not configured the :ref:`MAPBOX_ACCESS_TOKEN`. If you have one, you can configure it via ``flexmeasures.cfg`` (for Docker, see :ref:`docker_configuration`).
+
+
+.. _tut_toy_schedule_price_data:
+
+Add some price data
+---------------------------------------
+
+Now to add price data. First, we'll create the CSV file with prices (EUR/MWh, see the setup for sensor 1 above) for tomorrow.
+
+.. code-block:: bash
+
+    $ TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+    $ echo "Hour,Price
+    $ ${TOMORROW}T00:00:00,10
+    $ ${TOMORROW}T01:00:00,11
+    $ ${TOMORROW}T02:00:00,12
+    $ ${TOMORROW}T03:00:00,15
+    $ ${TOMORROW}T04:00:00,18
+    $ ${TOMORROW}T05:00:00,17
+    $ ${TOMORROW}T06:00:00,10.5
+    $ ${TOMORROW}T07:00:00,9
+    $ ${TOMORROW}T08:00:00,9.5
+    $ ${TOMORROW}T09:00:00,9
+    $ ${TOMORROW}T10:00:00,8.5
+    $ ${TOMORROW}T11:00:00,10
+    $ ${TOMORROW}T12:00:00,8
+    $ ${TOMORROW}T13:00:00,5
+    $ ${TOMORROW}T14:00:00,4
+    $ ${TOMORROW}T15:00:00,4
+    $ ${TOMORROW}T16:00:00,5.5
+    $ ${TOMORROW}T17:00:00,8
+    $ ${TOMORROW}T18:00:00,12
+    $ ${TOMORROW}T19:00:00,13
+    $ ${TOMORROW}T20:00:00,14
+    $ ${TOMORROW}T21:00:00,12.5
+    $ ${TOMORROW}T22:00:00,10
+    $ ${TOMORROW}T23:00:00,7" > prices-tomorrow.csv
+
+This is time series data, which in FlexMeasures we call *"beliefs"*. Beliefs can also be sent to FlexMeasures via API or imported from open data hubs like `ENTSO-E <https://github.com/SeitaBV/flexmeasures-entsoe>`_ or `OpenWeatherMap <https://github.com/SeitaBV/flexmeasures-openweathermap>`_. However, in this tutorial we'll show how you can read data in from a CSV file. Sometimes that's just what you need :)
+
+.. code-block:: bash
+
+    $ flexmeasures add beliefs --sensor 1 --source toy-user prices-tomorrow.csv --timezone Europe/Amsterdam
+    Successfully created beliefs
+
+In FlexMeasures, all beliefs have a data source. Here, we use the username of the user we created earlier. We could also pass a user ID, or the name of a new data source we want to use for CLI scripts.
+
+.. note:: Attention: We created and imported prices where the times have no time zone component! That happens a lot. FlexMeasures can localize them for you to a given timezone. Here, we localized the data to the timezone of the price sensor - ``Europe/Amsterdam`` - so the start time for the first price is `2022-03-03 00:00:00+01:00` (midnight in Amsterdam).
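+
+What this localization does can be shown in two lines of pandas (illustrative):

```python
import pandas as pd

# A naive timestamp, as read from the CSV (no timezone component)
naive = pd.Timestamp("2022-03-03 00:00:00")

# FlexMeasures localizes it to the sensor's timezone
localized = naive.tz_localize("Europe/Amsterdam")
print(localized.isoformat())  # 2022-03-03T00:00:00+01:00
```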
+
+Let's look at the price data we just loaded:
+
+.. code-block:: bash
+
+    $ flexmeasures show beliefs --sensor 1 --start ${TOMORROW}T00:00:00+01:00 --duration PT24H
+    
+    Beliefs for Sensor 'day-ahead prices' (ID 1).
+    Data spans a day and starts at 2022-03-03 00:00:00+01:00.
+    The time resolution (x-axis) is an hour.
+    ┌────────────────────────────────────────────────────────────┐
+    │       ▗▀▚▖                                                 │
+    │      ▗▘  ▝▖                                                │
+    │      ▞    ▌                                                │
+    │     ▟     ▐                                                │ 15EUR/MWh
+    │    ▗▘     ▝▖                                      ▗        │
+    │   ▗▘       ▚                                    ▄▞▘▚▖      │
+    │   ▞        ▐                                  ▄▀▘   ▝▄     │
+    │ ▄▞          ▌                                ▛        ▖    │
+    │▀            ▚                               ▐         ▝▖   │
+    │             ▝▚            ▖                ▗▘          ▝▖  │ 10EUR/MWh
+    │               ▀▄▄▞▀▄▄   ▗▀▝▖               ▞            ▐  │
+    │                      ▀▀▜▘  ▝▚             ▗▘             ▚ │
+    │                              ▌            ▞               ▌│
+    │                              ▝▖          ▞                ▝│
+    │                               ▐         ▞                  │
+    │                                ▚      ▗▞                   │ 5EUR/MWh
+    │                                 ▀▚▄▄▄▄▘                    │
+    └────────────────────────────────────────────────────────────┘
+               5            10            15           20
+                         ██ day-ahead prices
+
+
+
+Again, we can also view these prices in the `FlexMeasures UI <http://localhost:5000/sensors/1>`_:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-prices.png
+    :align: center
+|
+
+.. note:: Technically, these prices for tomorrow may be forecasts (depending on whether you are running through this tutorial before or after the day-ahead market's gate closure). You can also use FlexMeasures to compute forecasts yourself. See :ref:`tut_forecasting_scheduling`.
+
+

+ 26 - 0
documentation/views/account.rst

@@ -0,0 +1,26 @@
+Account overview
+==================
+
+This is the account overview page:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_account.png
+    :align: center
+..    :scale: 40%
+
+|
+|
+
+This is the current User overview page:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot-user-overview.png
+    :align: center
+..    :scale: 40%
+
+|
+|
+
+This is the account audit log page:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot-account-auditlog.PNG
+    :align: center
+..    :scale: 40%

+ 34 - 0
documentation/views/admin.rst

@@ -0,0 +1,34 @@
+.. _admin:
+
+**************
+Administration
+**************
+
+The administrator can see assets and users here.
+
+Assets
+------
+
+Listing all assets:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_assets.png
+    :align: center
+..    :scale: 40%
+
+
+
+Users
+-----
+
+Listing all users:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_users.png
+    :align: center
+..    :scale: 40%
+
+
+Viewing one user:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_user.png
+    :align: center
+..    :scale: 40%

+ 163 - 0
documentation/views/asset-data.rst

@@ -0,0 +1,163 @@
+.. _view_asset-data:
+
+*********************
+Assets  
+*********************
+
+The asset page is divided into different views. The default selection is the "Context" view. The views are:
+
+.. contents::
+    :local:
+    :depth: 1
+|
+
+
+.. _view_asset_context:
+
+Context page
+-------------------
+
+
+On the context page, you see the asset in its structure (with its parent and children, if they exist), or its location on a map.
+In addition, you can do the following:
+
+- Click the "Show sensors" button to view the list of the sensors associated with the asset.
+- Click "Edit flex-context" to edit the flex-context of the asset.
+- Click the "Add child asset" button to add a child to the current asset.
+- Set a given page as default by clicking the checkbox on the top right of the page.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_asset_context.png
+    :align: center
+..    :scale: 40%
+
+|
+
+
+Show sensors
+^^^^^^^^^^^^
+The sensors associated with the asset are shown in a list. 
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_asset_sensors.png
+    :align: center
+..   :scale: 40%
+
+|
+
+
+Editing an asset's flex-context
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+Per asset, you can set fields in :ref:`the flex-context <flex_context>`, which will influence how scheduling works on this asset. The flex context dialogue allows you to define either fixed values or sensors (for dynamic values / time series). Initially, no fields are set.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot-asset-editflexcontext.png
+    :align: center
+..    :scale: 40%
+
+|
+
+Flex context overview
+"""""""""""""""""""""""
+
+* **Left Panel:** Displays a list of currently configured fields.
+* **Right Panel:** Shows details of the selected field and provides a form to modify its value.
+
+Adding a field
+"""""""""""""""
+1.  **Select Field:** Choose the desired field from the dropdown menu in the top right corner of the modal.
+2.  **Add Field:** Click the "Add Field" button next to the dropdown.
+3.  The field will be added to the list in the left panel.
+
+Setting a field value
+"""""""""""""""""""""
+
+1.  **Select Field (if it is not selected yet):** Click on the field in the left panel.
+2.  **Set Value:** In the right panel, use the provided form to set the field's value.
+
+    * Some fields may only accept a sensor value.
+    * Other fields may accept either a sensor or a fixed value.
+
+|
+
+.. _view_asset_graphs:
+
+Graphs page
+-----------
+
+The graph page is a separate page that shows data (measurements/forecasts) which are relevant to the asset.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_asset_graphs.png
+    :align: center
+..    :scale: 40%
+
+|
+
+Editing the graphs dashboard
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Click the "Edit Graph" button to open the graph editor.
+
+Use the "Add Graph" button to create graphs. For each graph, you can select one or more of the available sensors associated with the asset (including public sensors) and add them to your plot.
+
+You can overlay data from multiple sensors on a single graph. To do this, click on an existing plot and add more sensors from the available options on the right. 
+
+Finally, it is possible to set custom titles for any sensor graph by clicking on the "edit" button right next to the default or current title.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot-asset-editgraph.png
+    :align: center
+..    :scale: 40%
+
+|
+
+Internally, the asset has a ``sensors_to_show`` field, which controls which sensor data appears in the plot. This can also be set by a script. The accepted format is a list of dictionaries, each with a graph title and one or more sensor IDs (e.g. `[{"title": "Power", "sensor": 2}, {"title": "Costs", "sensors": [5, 6]}]`).
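+
+If you set the field from a script, a small check of that format could look like this (a hypothetical helper for illustration, not part of FlexMeasures):

```python
def valid_sensors_to_show(spec: list) -> bool:
    """Check the accepted format: each entry needs a title plus either
    a single 'sensor' ID or a list of 'sensors' IDs (hypothetical helper)."""
    for item in spec:
        if not isinstance(item, dict) or "title" not in item:
            return False
        if "sensor" in item:
            if not isinstance(item["sensor"], int):
                return False
        elif not all(isinstance(s, int) for s in item.get("sensors", [None])):
            return False
    return True

print(valid_sensors_to_show(
    [{"title": "Power", "sensor": 2}, {"title": "Costs", "sensors": [5, 6]}]
))  # True
```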
+
+
+.. _view_asset_properties:
+
+Properties page
+---------------
+
+The properties page allows you to view and edit the properties of the asset.
+
+You can also delete the asset by clicking on the "Delete this asset" button.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_asset_properties.png
+    :align: center
+..    :scale: 40%
+
+|
+
+.. _view_asset_status:
+
+Status page
+-----------
+
+For each asset, you can also visit a status page to see if your data connectivity and recent jobs are okay.
+
+For data connectivity, all sensors on the asset's graph page and from its flex context are tracked.
+
+Below is a fictitious example, where the toy battery (from our tutorial) has scheduled discharging data, but also some added by a user, and wind production data is part of the battery's flex context. There have been three successful scheduling jobs.
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_status_page.png
+    :align: center
+..    :scale: 40%
+
+|
+   
+Hovering over a traffic light tells you how long ago the most recent entry was recorded and why the light is red, yellow or green. For jobs, you can also get more information (e.g. an error message).
+
+
+.. _view_asset_auditlog:
+
+Audit log 
+---------
+
+The audit log lets you see who made what changes to the asset over time. 
+This is how the audit log looks for the history of actions taken on an asset:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot-auditlog.PNG
+    :align: center
+..    :scale: 40%
+
+|
+

+ 45 - 0
documentation/views/dashboard.rst

@@ -0,0 +1,45 @@
+.. _dashboard:
+
+*********
+Dashboard
+*********
+
+The dashboard shows where the user's assets are located and how many different asset types are connected to the platform.
+The view serves to quickly identify the status of assets, such as whether there are upcoming opportunities to valorise flexibility activations.
+In particular, the page contains:
+
+.. contents::
+    :local:
+    :depth: 1
+
+|
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_dashboard.png
+    :align: center
+..    :scale: 40%
+
+
+.. _dashboard_map:
+
+Interactive map of assets
+=========================
+
+The map shows all of the user's assets with icons for each asset type.
+Hovering over an asset shows its name and ownership, and clicking on it navigates to the asset's page, where more details are shown, for instance forecasts.
+
+
+.. _dashboard_summary:
+
+Summary of asset types
+======================
+
+The summary below the map lists all asset types that the user has hooked up to the platform and how many of each there are.
+Clicking on the asset type name leads to the asset's page, where its data is shown.
+
+
+Grouping by accounts
+=====================
+
+.. note:: This is a feature for users with the role ``admin`` or ``admin-reader``.
+
+By default, the map is layered by asset type. However, on the bottom right admins can also switch to grouping by accounts.
+Then, map layers will contain the assets owned by accounts, and you can easily see who you're serving with what.

+ 36 - 0
documentation/views/sensors.rst

@@ -0,0 +1,36 @@
+.. _view_sensors:
+
+*********************
+Sensors
+*********************
+
+
+Each sensor also has its own page:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_sensor.png
+    :align: center
+..    :scale: 40%
+
+|
+|
+
+Besides line plots, data can sometimes be displayed more usefully as a heatmap.
+Heatmaps are a great way to spot hotspots of activity. Usually, heatmaps are geographical maps. In our context, the most interesting background is time ― so we'd like to see activity hotspots on a map of time intervals.
+
+We chose a "time map" of weekdays. From our experience, this is where you see the most interesting activity hotspots at a glance. For instance, that mornings often experience peaks. Or that Tuesday afternoons have low energy use, for some reason.
+
+Here is what it looks like for one week of temperature data:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/heatmap-week-temperature.png
+    :align: center
+    
+It's easy to see which days had milder temperatures.
+
+And here are 4 days of (dis)-charging patterns in Seita's V2GLiberty project:
+
+.. image:: https://github.com/FlexMeasures/screenshots/raw/main/heatmap-week-charging.png
+    :align: center
+    
+Charging (blue) mostly happens during sunshine hours, while discharging happens during high-price hours (morning & evening).
+
+On a technical level, the daily heatmap is essentially a heatmap of the sensor's values, with dates on the y-axis and time of day on the x-axis. For individual devices, it gives an insight into the device's running times. A button lets users switch between the line plot and the heatmap.
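The date-by-time-of-day layout described above can be sketched in a few lines. This is an illustrative example (not FlexMeasures internals, and the readings are made up): it buckets a week of quarter-hourly sensor readings into a grid with one row per date and one column per time of day, which is exactly the shape a daily heatmap renders:

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Fake data: one week of quarter-hourly readings (4 * 24 * 7 = 672 points).
start = datetime(2024, 6, 3)
readings = {start + timedelta(minutes=15 * i): 18.0 for i in range(4 * 24 * 7)}

# Bucket into a date x time-of-day grid: {date: {time_of_day: value}}.
grid = defaultdict(dict)
for ts, value in readings.items():
    grid[ts.date()][ts.time()] = value

print(len(grid), len(next(iter(grid.values()))))  # 7 dates x 96 quarter-hours
```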

+ 3 - 0
flexmeasures/Readme.md

@@ -0,0 +1,3 @@
+# The FlexMeasures package
+
+This directory packages the code which gets installed as `flexmeasures`.

+ 0 - 0
flexmeasures/__init__.py


Some files were not shown because too many files changed in this diff