TimeDistributed and Dense

The Keras documentation explains TimeDistributed with the following code:

from keras.models import Sequential
from keras.layers import Dense, TimeDistributed

# as the first layer in a model
model = Sequential()
model.add(TimeDistributed(Dense(8), input_shape=(10, 16)))
# now model.output_shape == (None, 10, 8)

As you can see from the code above, TimeDistributed wraps Dense and is mainly used in one-to-many and many-to-many situations. input_shape=(10, 16) means the step size (number of timesteps) is 10 and the dimension of each step is 16 (i.e., the attribute length of each step is 16).

First, TimeDistributed(Dense(8), input_shape=(10, 16)) changes the dimension of each step from 16 to 8 without changing the step size.

If the batch input shape to this layer is (50, 10, 16), then the output shape after this layer is (50, 10, 8).
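The shape transformation above can be sketched in plain NumPy (a hedged illustration, not the Keras implementation): TimeDistributed(Dense(8)) applies the same weight matrix W of shape (16, 8) to every one of the 10 timesteps independently, which is exactly a matrix product over the last axis. All names here (x, W, b) are illustrative.

```python
import numpy as np

batch, steps, features, units = 50, 10, 16, 8

x = np.random.rand(batch, steps, features)  # batch input, shape (50, 10, 16)
W = np.random.rand(features, units)         # SHARED Dense weights, shape (16, 8)
b = np.zeros(units)                         # shared Dense bias

# One matmul over the last axis applies W to each of the 10 steps independently.
y = x @ W + b

print(y.shape)  # (50, 10, 8): step size unchanged, per-step dimension 16 -> 8
```

The key point this illustrates is weight sharing: every timestep is transformed by the same (16, 8) matrix, which is why the step size stays 10 while only the per-step dimension changes.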

This is the explanation given on the Keras official website:

from keras.models import Sequential
from keras.layers import Dense, RepeatVector

model = Sequential()
model.add(Dense(32, input_dim=32))
# now: model.output_shape == (None, 32)
# note: `None` is the batch dimension

model.add(RepeatVector(3))
# now: model.output_shape == (None, 3, 32)

If the input shape is (None, 32), then after adding a RepeatVector(3) layer the output is (None, 3, 32). RepeatVector does not change the dimension of each step (the attribute length stays 32); instead, it repeats the input vector 3 times along a new time axis, setting the step size to 3.
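The repetition above can be sketched in NumPy (again an illustrative sketch, not the Keras source): the input vector gets a new time axis inserted and is copied 3 times along it, so every timestep is an identical copy.

```python
import numpy as np

# Batch of 1 sample with attribute length 32, mirroring the (None, 32) input.
x = np.arange(32, dtype=float).reshape(1, 32)

# Insert a time axis, then repeat the vector 3 times along it: (1, 32) -> (1, 3, 32).
y = np.repeat(x[:, np.newaxis, :], 3, axis=1)

print(y.shape)                            # (1, 3, 32)
print(np.array_equal(y[0, 0], y[0, 2]))   # True: every step is an identical copy
```

This is commonly used to feed a single encoded vector into a decoder RNN that expects one input per timestep, e.g. in a simple sequence-to-sequence setup.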