eta_utility.eta_x.envs.base_env_live module

class eta_utility.eta_x.envs.base_env_live.BaseEnvLive(env_id: int, config_run: ConfigOptRun, verbose: int = 2, callback: Callable | None = None, *, scenario_time_begin: datetime | str, scenario_time_end: datetime | str, episode_duration: TimeStep | str, sampling_time: TimeStep | str, max_errors: int = 10, render_mode: str | None = None, **kwargs: Any)[source]

Bases: BaseEnv, ABC

Base class for Live Connector environments. The class prepares the initialization of the LiveConnect class and provides facilities to automatically read step results and reset the connection.

Parameters:
  • env_id – Identification for the environment, useful when creating multiple environments.

  • config_run – Configuration of the optimization run.

  • verbose – Verbosity to use for logging.

  • callback – Callback which should be called after each episode.

  • scenario_time_begin – Beginning time of the scenario.

  • scenario_time_end – Ending time of the scenario.

  • episode_duration – Duration of the episode in seconds.

  • sampling_time – Duration of a single time sample / time step in seconds.

  • max_errors – Maximum number of connection errors before interrupting the optimization process.

  • render_mode – Rendering mode used to visualise what the agent sees; example modes are “human”, “rgb_array” and “ansi” (for text).

  • kwargs – Other keyword arguments (for subclasses).
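
As an illustration, a minimal subclass might look like the following sketch. It assumes that BaseEnvLive is importable from eta_utility.eta_x.envs; the class name MyLiveEnv, the configuration name and the file path are placeholders, and a concrete environment would additionally define its state configuration as well as its action and observation spaces.

    from __future__ import annotations

    from pathlib import Path
    from typing import Any

    from eta_utility.eta_x.envs import BaseEnvLive


    class MyLiveEnv(BaseEnvLive):
        """Hypothetical environment connecting to a live system."""

        @property
        def config_name(self) -> str:
            # Name of the live_connect configuration (abstract property, see below).
            return "my_live_connection"  # placeholder name

        def __init__(self, env_id: int, config_run: Any, verbose: int = 2, callback: Any = None, **kwargs: Any) -> None:
            super().__init__(env_id, config_run, verbose, callback, **kwargs)

            # live_connect_config may be a path, a sequence of paths or a dictionary
            # (placeholder path, depends on the project layout).
            self.live_connect_config = Path("config/my_live_connection.json")

            # Definition of the state configuration and of the action and observation
            # spaces is omitted in this sketch.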

abstract property config_name: str

Name of the live_connect configuration.

live_connector: LiveConnect

Instance of the Live Connector.

live_connect_config: Path | Sequence[Path] | dict[str, Any] | None

Path, sequence of paths or dictionary used to initialize the live connector.

max_error_count: int

Maximum number of connection errors before connections in the live connector are aborted.

step(action: np.ndarray) StepResult[source]

Perform one time step and return its results. This is called for every event or for every time step during the simulation/optimization run. It should utilize the actions supplied by the agent to determine the new state of the environment. The method must return a five-tuple of observations, reward, terminated, truncated and info.

This also updates self.state and self.state_log to store current state information.

Note

This function always returns 0 reward. Therefore, it must be extended if it is to be used with reinforcement learning agents. If you need to manipulate actions (discretization, policy shaping, …), do this before calling this function. If you need to manipulate observations and rewards, do this after calling this function.

Parameters:

action – Actions to perform in the environment.

Returns:

The return value represents the state of the environment after the step was performed.

  • observations: A numpy array with the new observation values as defined by the observation space, containing floating point or integer values.

  • reward: The value of the reward function. This is just one floating point value.

  • terminated: Boolean value specifying whether an episode has been completed. If this is set to true, the reset function will automatically be called by the agent or by eta_x.

  • truncated: Boolean value specifying whether a truncation condition outside the scope of the environment is satisfied. Typically this is a time limit, but it could also indicate an agent physically going out of bounds. It can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call the reset function.

  • info: A dictionary providing additional information about the state of the environment. Its contents may be used for logging purposes in the future but typically do not currently serve a purpose.
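
Since the base implementation always returns 0 reward (see the note above), a subclass will typically extend step() by computing a reward after calling the base method. The following sketch continues the hypothetical MyLiveEnv class from above; the reward formula is a placeholder only.

    from __future__ import annotations

    import numpy as np

    from eta_utility.eta_x.envs import BaseEnvLive


    class MyLiveEnv(BaseEnvLive):
        # config_name, __init__, ... as sketched above.

        def step(self, action: np.ndarray):
            # The base class sends the actions to the live system, reads the new state
            # and returns (observations, 0, terminated, truncated, info).
            observations, _, terminated, truncated, info = super().step(action)

            # Compute the reward after the base call, as recommended in the note above
            # (placeholder formula penalizing the magnitude of the first observation).
            reward = -float(abs(observations[0]))

            return observations, reward, terminated, truncated, info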

reset(*, seed: int | None = None, options: dict[str, Any] | None = None) tuple[ObservationType, dict[str, Any]][source]

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset. When using the environment in conjunction with stable_baselines3, the vectorized environment takes care of seeding your custom environment automatically.

For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.

Note

Don’t forget to store and reset the episode_timer.

Parameters:
  • seed – The seed that is used to initialize the environment’s PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. (default: None)

  • options – Additional information to specify how the environment is reset (optional, depending on the specific environment) (default: None)

Returns:

Tuple of observation and info. The observation of the initial state will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step(). Info is a dictionary containing auxiliary information complementing observation. It should be analogous to the info returned by step().
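
When overriding reset() in a subclass, the first line should be the call to the base implementation as described above. The sketch below continues the hypothetical MyLiveEnv class; the placeholder comment marks where subclass-specific re-initialization would go.

    from __future__ import annotations

    from typing import Any

    from eta_utility.eta_x.envs import BaseEnvLive


    class MyLiveEnv(BaseEnvLive):
        # Other members as sketched above.

        def reset(self, *, seed: int | None = None, options: dict[str, Any] | None = None):
            # Call the base implementation first; it handles seeding and resets the
            # connection to the live system.
            observations, info = super().reset(seed=seed, options=options)

            # Subclass-specific re-initialization would go here (placeholder).

            return observations, info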

close() None[source]

Close the environment. This should always be called when an entire run is finished. It should be used to close any resources (e.g. simulation models) used by the environment.

Default behavior for the Live Connector environment is to do nothing.
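
As an illustration of where close() fits in, a manual interaction loop could look like the following sketch. Here env stands for an already constructed environment instance, and random action sampling is only a stand-in for an agent.

    # "env" is an already constructed (hypothetical) environment instance.
    try:
        observations, info = env.reset()
        terminated = truncated = False
        while not (terminated or truncated):
            action = env.action_space.sample()  # stand-in for an agent's action
            observations, reward, terminated, truncated, info = env.step(action)
    finally:
        # Close the environment once the entire run is finished.
        env.close()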