eta_utility.eta_x.envs.julia_env module
- class eta_utility.eta_x.envs.julia_env.JuliaEnv(env_id: int, config_run: ConfigOptRun, verbose: int = 2, callback: Callable | None = None, *, scenario_time_begin: datetime | str, scenario_time_end: datetime | str, episode_duration: TimeStep | str, sampling_time: TimeStep | str, julia_env_file: pathlib.Path | str, render_mode: str | None = None, **kwargs: Any)[source]
Bases:
BaseEnv
Abstract environment definition, providing some basic functionality for concrete environments to use. The class implements and adapts functions from gymnasium.Env. It provides additional functionality as required by the ETA-X framework and should be used as the starting point for new environments.
The initialization of this superclass performs many of the tasks required to specify a concrete environment. Read the documentation carefully to understand how new environments can be developed, building on this starting point.
There are some attributes that must be set and some methods that must be implemented to satisfy the interface. This is required to create concrete environments. The required attributes are:
version: Version number of the environment.
description: Short description string of the environment.
action_space: The action space of the environment (see also gymnasium.spaces for options).
observation_space: The observation space of the environment (see also gymnasium.spaces for options).
The gymnasium interface requires the following methods for the environment to work correctly within the framework. Consult the documentation of each method for more detail.
step()
reset()
close()
- Parameters:
env_id – Identification for the environment, useful when creating multiple environments.
config_run – Configuration of the optimization run.
verbose – Verbosity to use for logging.
callback – Callback to call after each episode.
scenario_time_begin – Beginning time of the scenario.
scenario_time_end – Ending time of the scenario.
episode_duration – Duration of the episode in seconds.
sampling_time – Duration of a single time sample / time step in seconds.
render_mode – Renders the environment to help visualise what the agent sees. Example modes are “human”, “rgb_array”, and “ansi” for text.
kwargs – Other keyword arguments (for subclasses).
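The required attributes and methods listed above can be sketched as a minimal, self-contained stand-in. The real class derives from eta_utility.eta_x.envs.BaseEnv and uses numpy arrays and gymnasium.spaces; plain Python types are used here so the sketch runs on its own, and the class name and toy dynamics are invented for illustration.

```python
class MinimalEnvSketch:
    # Required class attributes (see the list above).
    version = "0.1"
    description = "Hypothetical minimal environment sketch."
    # In a real environment these would be gymnasium.spaces instances.
    action_space = None
    observation_space = None

    def __init__(self, episode_duration: int, sampling_time: int) -> None:
        self.episode_duration = episode_duration
        self.sampling_time = sampling_time
        self.n_steps = 0

    def step(self, action):
        """Advance one time step; must return a five-tuple."""
        self.n_steps += 1
        observations = [0.0]  # stand-in for a numpy array
        reward = 0.0
        terminated = self.n_steps * self.sampling_time >= self.episode_duration
        truncated = False
        info = {}
        return observations, reward, terminated, truncated, info

    def reset(self, *, seed=None, options=None):
        self.n_steps = 0
        return [0.0], {}

    def close(self):
        pass


env = MinimalEnvSketch(episode_duration=10, sampling_time=5)
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(None)
```

With a 10-second episode and 5-second sampling time, the episode terminates after the second step.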
- version = '1.0'
- description = 'This environment uses a julia file to perform its functions.'
- julia_env_path: pathlib.Path
Root path to the Julia file.
- first_update(observations: ndarray) ndarray [source]
Perform the first update and set values in simulation model to the observed values.
- Parameters:
observations – Observations of another environment.
- Returns:
Full array of observations.
- update(observations: ndarray) ndarray [source]
Update the optimization model with observations from another environment.
- Parameters:
observations – Observations from another environment.
- Returns:
Full array of current observations.
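The interplay of first_update() and update() when pairing this environment with another one can be sketched as follows. The method names follow the docs above; the class name, observation handling, and blending rule are toy stand-ins invented for the example.

```python
class ModelSketch:
    """Hypothetical model that mirrors the first_update()/update() contract."""

    def __init__(self):
        self.values = None

    def first_update(self, observations):
        # First update: set values in the model to the observed values.
        self.values = list(observations)
        return self.values

    def update(self, observations):
        # Later updates: fold new observations into the model state
        # (toy averaging rule) and return the full array of values.
        self.values = [0.5 * v + 0.5 * o for v, o in zip(self.values, observations)]
        return self.values


model = ModelSketch()
first = model.first_update([1.0, 2.0])  # model adopts the observations
later = model.update([3.0, 4.0])        # model blends in new observations
```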
- step(action: np.ndarray) StepResult [source]
Perform one time step and return its results. This is called for every event or for every time step during the simulation/optimization run. It should utilize the actions as supplied by the agent to determine the new state of the environment. The method must return a five-tuple of observations, rewards, terminated, truncated, info.
Note
Do not forget to increment n_steps and n_steps_longtime.
- Parameters:
action – Actions taken by the agent.
- Returns:
The return value represents the state of the environment after the step was performed.
observations: A numpy array with the new observation values as defined by the observation space, containing floating point or integer values.
reward: The value of the reward function. This is just one floating point value.
terminated: Boolean value specifying whether an episode has been completed. If this is set to true, the reset function will automatically be called by the agent or by eta_i.
truncated: Boolean indicating whether a truncation condition outside the scope of the MDP is satisfied. Typically this is a time limit, but it could also indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call the reset function.
info: Provide some additional info about the state of the environment. The contents of this may be used for logging purposes in the future but typically do not currently serve a purpose.
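The five-tuple contract and the distinction between terminated and truncated can be illustrated with a toy, self-contained step function. The dynamics, reward, and termination condition are invented for the example; a real step() would be a method on the environment and operate on numpy arrays.

```python
def step(state, action, n_steps, time_limit):
    """Toy step() returning the gymnasium-style five-tuple."""
    n_steps += 1  # remember to increment n_steps (and n_steps_longtime)
    new_state = [s + a for s, a in zip(state, action)]   # toy dynamics
    reward = -sum(abs(s) for s in new_state)             # one float
    terminated = all(abs(s) < 1e-3 for s in new_state)   # reached a terminal state
    truncated = n_steps >= time_limit                    # condition outside the MDP
    info = {"n_steps": n_steps}
    return new_state, reward, terminated, truncated, info


obs, reward, terminated, truncated, info = step([1.0], [-1.0], n_steps=0, time_limit=5)
```

Here the action drives the state to zero, so the episode ends in a terminal state (terminated) rather than by hitting the time limit (truncated).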
- reset(*, seed: int | None = None, options: dict[str, Any] | None = None) tuple[np.ndarray, dict[str, Any]] [source]
Resets the environment to an initial internal state, returning an initial observation and info.
This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; otherwise, if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset. When using the environment in conjunction with stable_baselines3, the vectorized environment will take care of seeding your custom environment automatically. For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.
Note
Don’t forget to store and reset the episode_timer.
- Parameters:
seed – The seed that is used to initialize the environment’s PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. (default: None)
options – Additional information to specify how the environment is reset (optional, depending on the specific environment) (default: None)
- Returns:
Tuple of observation and info. The observation of the initial state will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step(). Info is a dictionary containing auxiliary information complementing observation. It should be analogous to the info returned by step().
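The seeding rule described above (an explicit integer seed always re-seeds; seed=None keeps an existing PRNG) can be sketched with Python's stdlib random module standing in for numpy's np_random. The class name and return value are invented for the example.

```python
import random


class SeedingSketch:
    """Mimics the reset() seeding semantics described above."""

    def __init__(self):
        self._np_random = None  # no PRNG yet

    def reset(self, *, seed=None):
        if seed is not None or self._np_random is None:
            # Re-seed on an explicit seed, or create a PRNG on first reset.
            self._np_random = random.Random(seed)
        # seed=None with an existing PRNG: leave the RNG untouched.
        return self._np_random.random()


env = SeedingSketch()
a = env.reset(seed=42)    # seeds the PRNG
b = env.reset(seed=42)    # same integer seed -> same first draw
c = env.reset(seed=None)  # existing PRNG kept; the sequence continues
```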
- close() None [source]
Close the environment. This should always be called when an entire run is finished. It should be used to close any resources (i.e. simulation models) used by the environment.
- render(**kwargs: Any) None [source]
Render the environment.
The set of supported modes varies per environment. Some environments do not support rendering at all. By convention in Farama gymnasium, if mode is:
human: render to the current display or terminal and return nothing. Usually for human consumption.
rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).
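The mode conventions above can be sketched as a toy dispatch function. The frame contents are invented, nested lists stand in for a numpy.ndarray, and a real environment may support only a subset of these modes.

```python
def render(mode, width=2, height=2):
    """Toy render() dispatching on the conventional gymnasium modes."""
    if mode == "human":
        print("rendering to display")  # draw for a human; return nothing
        return None
    if mode == "rgb_array":
        # (height, width, 3) nested lists standing in for a numpy.ndarray
        return [[[0, 0, 0] for _ in range(width)] for _ in range(height)]
    if mode == "ansi":
        return "step 1\n\x1b[32mok\x1b[0m"  # text with ANSI escape sequences
    raise NotImplementedError(mode)


frame = render("rgb_array")
```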