Version: 3.x
rasa.utils.tensorflow.temp_keras_modules
TmpKerasModel Objects
class TmpKerasModel(Model)
Temporary solution: a Keras model that uses a custom data adapter inside `fit`.
fit
@traceback_utils.filter_traceback
def fit(x: Optional[Union[np.ndarray, tf.Tensor, tf.data.Dataset, tf.keras.utils.Sequence]] = None,
        y: Optional[Union[np.ndarray, tf.Tensor, tf.data.Dataset, tf.keras.utils.Sequence]] = None,
        batch_size: Optional[int] = None,
        epochs: int = 1,
        verbose: int = 1,
        callbacks: Optional[List[Callback]] = None,
        validation_split: float = 0.0,
        validation_data: Optional[Any] = None,
        shuffle: bool = True,
        class_weight: Optional[Dict[int, float]] = None,
        sample_weight: Optional[np.ndarray] = None,
        initial_epoch: int = 0,
        steps_per_epoch: Optional[int] = None,
        validation_steps: Optional[int] = None,
        validation_batch_size: Optional[int] = None,
        validation_freq: int = 1,
        max_queue_size: int = 10,
        workers: int = 1,
        use_multiprocessing: bool = False) -> History
Trains the model for a fixed number of epochs (iterations on a dataset).
Arguments:
- `x` - Input data. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  - A dict mapping input names to the corresponding arrays/tensors, if the model has named inputs.
  - A `tf.data` dataset. Should return a tuple of either `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  - A generator or `keras.utils.Sequence` returning `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  - A `tf.keras.utils.experimental.DatasetCreator`, which wraps a callable that takes a single argument of type `tf.distribute.InputContext` and returns a `tf.data.Dataset`. `DatasetCreator` should be used when users prefer to specify the per-replica batching and sharding logic for the `Dataset`. See the `tf.keras.utils.experimental.DatasetCreator` doc for more information.
  A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If using `tf.distribute.experimental.ParameterServerStrategy`, only the `DatasetCreator` type is supported for `x`.
- `y` - Target data. Like the input data `x`, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with `x` (you cannot have Numpy inputs and tensor targets, or inversely). If `x` is a dataset, generator, or `keras.utils.Sequence` instance, `y` should not be specified (since targets will be obtained from `x`).
- `batch_size` - Integer or `None`. Number of samples per gradient update. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
- `epochs` - Integer. Number of epochs to train the model. An epoch is an iteration over the entire `x` and `y` data provided (unless the `steps_per_epoch` flag is set to something other than `None`). Note that in conjunction with `initial_epoch`, `epochs` is to be understood as "final epoch": the model is not trained for a number of iterations given by `epochs`, but merely until the epoch of index `epochs` is reached.
- `verbose` - 'auto', 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' defaults to 1 for most cases, but 2 when used with `ParameterServerStrategy`. Note that the progress bar is not particularly useful when logged to a file, so `verbose=2` is recommended when not running interactively (e.g. in a production environment).
- `callbacks` - List of `keras.callbacks.Callback` instances to apply during training. See `tf.keras.callbacks`. Note that the `tf.keras.callbacks.ProgbarLogger` and `tf.keras.callbacks.History` callbacks are created automatically and need not be passed into `model.fit`. Whether `tf.keras.callbacks.ProgbarLogger` is created depends on the `verbose` argument to `model.fit`. Callbacks with batch-level calls are currently unsupported with `tf.distribute.experimental.ParameterServerStrategy`, and users are advised to implement epoch-level calls instead with an appropriate `steps_per_epoch` value.
- `validation_split` - Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the `x` and `y` data provided, before shuffling. This argument is not supported when `x` is a dataset, generator, or `keras.utils.Sequence` instance. `validation_split` is not yet supported with `tf.distribute.experimental.ParameterServerStrategy`.
- `validation_data` - Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Note that the validation loss of data provided using `validation_split` or `validation_data` is not affected by regularization layers like noise and dropout. `validation_data` will override `validation_split`. `validation_data` could be:
  - A tuple `(x_val, y_val)` of Numpy arrays or tensors.
  - A tuple `(x_val, y_val, val_sample_weights)` of NumPy arrays.
  - A `tf.data.Dataset`.
  - A Python generator or `keras.utils.Sequence` returning `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  `validation_data` is not yet supported with `tf.distribute.experimental.ParameterServerStrategy`.
- `shuffle` - Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when `x` is a generator or a `tf.data.Dataset` object. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when `steps_per_epoch` is not `None`.
- `class_weight` - Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class. (A usage sketch combining `class_weight` with the other arguments follows the Raises section below.)
- `sample_weight` - Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape `(samples, sequence_length)` to apply a different weight to every timestep of every sample. This argument is not supported when `x` is a dataset, generator, or `keras.utils.Sequence` instance; instead, provide the sample weights as the third element of `x`.
- `initial_epoch` - Integer. Epoch at which to start training (useful for resuming a previous training run).
- `steps_per_epoch` - Integer or `None`. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default `None` is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If `x` is a `tf.data` dataset and `steps_per_epoch` is `None`, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the `steps_per_epoch` argument. If `steps_per_epoch=-1`, the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using `tf.distribute.experimental.ParameterServerStrategy`, `steps_per_epoch=None` is not supported.
- `validation_steps` - Only relevant if `validation_data` is provided and is a `tf.data` dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If `validation_steps` is `None`, validation will run until the `validation_data` dataset is exhausted. In the case of an infinitely repeating dataset, it will run into an infinite loop. If `validation_steps` is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
- `validation_batch_size` - Integer or `None`. Number of samples per validation batch. If unspecified, will default to `batch_size`. Do not specify the `validation_batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
- `validation_freq` - Only relevant if validation data is provided. Integer or `collections.abc.Container` instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. `validation_freq=2` runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. `validation_freq=[1, 2, 10]` runs validation at the end of the 1st, 2nd, and 10th epochs.
- `max_queue_size` - Integer. Used for generator or `keras.utils.Sequence` input only. Maximum size for the generator queue. If unspecified, `max_queue_size` will default to 10.
- `workers` - Integer. Used for generator or `keras.utils.Sequence` input only. Maximum number of processes to spin up when using process-based threading. If unspecified, `workers` will default to 1.
- `use_multiprocessing` - Boolean. Used for generator or `keras.utils.Sequence` input only. If `True`, use process-based threading. If unspecified, `use_multiprocessing` will default to `False`. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to child processes.

Unpacking behavior for iterator-like inputs: A common pattern is to pass a `tf.data.Dataset`, generator, or `tf.keras.utils.Sequence` to the `x` argument of `fit`, which will in fact yield not only features (`x`) but optionally targets (`y`) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for `y` and `sample_weight` respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as `x`. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. `({"x0": x0, "x1": x1}, y)`; Keras will not attempt to separate features, targets, and weights from the keys of a single dict. A notable unsupported data type is the namedtuple, because it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). Given a namedtuple of the form `namedtuple("example_tuple", ["y", "x"])`, it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form `namedtuple("other_tuple", ["x", "y", "z"])`, where it is unclear if the tuple was intended to be unpacked into `x`, `y`, and `sample_weight` or passed through as a single element to `x`. As a result, the data processing code will simply raise a `ValueError` if it encounters a namedtuple (along with instructions to remedy the issue).
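To make the tuple contract concrete, here is a minimal runnable sketch; the toy data, model, and batch sizes are illustrative assumptions, not part of the Rasa API:

```python
import numpy as np
import tensorflow as tf

# Toy data, for illustration only.
features = np.random.rand(96, 8).astype("float32")
labels = np.random.randint(0, 2, size=(96, 1)).astype("float32")
weights = np.linspace(0.5, 1.5, 96).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

# A 2-tuple dataset: the second element is unpacked as y, so y must
# not be passed to fit() separately.
two_tuple_ds = tf.data.Dataset.from_tensor_slices((features, labels)).batch(16)
model.fit(two_tuple_ds, epochs=2)

# A 3-tuple dataset additionally supplies per-sample weights.
three_tuple_ds = tf.data.Dataset.from_tensor_slices(
    (features, labels, weights)
).batch(16)
model.fit(three_tuple_ds, epochs=2)

# A generator must follow the same 1/2/3-tuple contract. Dicts are fine
# as tuple elements, e.g. ({"x0": x0, "x1": x1}, y), but namedtuples
# raise a ValueError.
def batch_generator():
    for start in range(0, len(features), 16):
        yield features[start:start + 16], labels[start:start + 16]

model.fit(batch_generator(), epochs=1)
```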
Returns:
A `History` object. Its `History.history` attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
Raises:
- `RuntimeError` - 1. If the model was never compiled, or 2. if `model.fit` is wrapped in `tf.function`.
- `ValueError` - In case of mismatch between the provided input data and what the model expects, or when the input data is empty.
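To show how the arguments above combine in practice, here is a minimal sketch using a plain Keras `Sequential` model and made-up data; nothing here is specific to `TmpKerasModel`, which exposes the same signature:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 1000 samples, 20 features, binary labels.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(
    x_train,
    y_train,
    batch_size=32,                   # defaults to 32 when unspecified
    epochs=10,
    validation_split=0.2,            # last 20% of the arrays, taken before shuffling
    class_weight={0: 1.0, 1: 3.0},   # up-weight the under-represented class
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=3)],
    verbose=2,                       # one line per epoch; suited to log files
)

# History.history maps metric names to per-epoch lists of values.
print(history.history["loss"])
print(history.history["val_accuracy"])
```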
CustomDataHandler Objects
class CustomDataHandler(data_adapter.DataHandler)
Handles iterating over epoch-level `tf.data.Iterator` objects.
enumerate_epochs
def enumerate_epochs() -> Generator[Tuple[int, Iterator], None, None]
Yields `(epoch, tf.data.Iterator)` tuples.
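For intuition, the contract can be mimicked with a plain `tf.data.Dataset`; the following stand-in is a simplification for illustration, not the actual `CustomDataHandler` implementation (which also tracks steps and stop conditions):

```python
import tensorflow as tf
from typing import Generator, Iterator, Tuple

# Simplified stand-in for the (epoch, iterator) contract:
# one fresh epoch-level iterator is yielded per epoch.
def enumerate_epochs_demo(
    dataset: tf.data.Dataset, epochs: int
) -> Generator[Tuple[int, Iterator], None, None]:
    for epoch in range(epochs):
        yield epoch, iter(dataset)

dataset = tf.data.Dataset.range(6).batch(2)
for epoch, iterator in enumerate_epochs_demo(dataset, epochs=2):
    for batch in iterator:
        print(epoch, batch.numpy())
```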