I'm trying to build a network for the Actor-Critic method as described here. Specifically, I want to connect the last fully-connected layer (with ReLU activation) to two output layers, one for the policy function and one for the value function. But I can't work out from the documentation how to do this with a graph model in tiny-dnn.
(See edit)
What I tried (this is wrong):
layers::input in(size_inputs);
layers::fc h1(size_inputs, size_hidden);
layers::fc h2(size_hidden, size_hidden);
layers::fc h3(size_hidden, size_hidden);
layers::fc h4(size_hidden, size_hidden);
layers::fc out_policy(size_hidden, size_outputs);
layers::fc out_value(size_hidden, 1);
activation::leaky_relu activation_h;
activation::softmax activation_out_policy;
layers::linear activation_out_value(1);
auto &t1 = in << h1 << activation_h;
auto &t2 = t1 << h2 << activation_h;
auto &t3 = t2 << h3 << activation_h;
auto &t4 = t3 << h4 << activation_h;
auto &t5 = t4 << (out_policy,out_value);
construct_graph(m_network, {&in}, {&out_policy, &out_value});
(This gives a "vector subscript out of range" error in the connect function, at "auto out_shape = head->out_shape()[head_index];", during the last call to the << operator.)
Edit: Oh, I'm an idiot, but the docs could provide a fuller example. Two things. First, the layer objects must live at least as long as the network itself; that isn't obvious from the documentation. Second, the version below actually works, up to a point: it constructs a network that produces two outputs when run, but the softmax output is all wrong (it returns negative numbers).
auto &t1 = in << h1 << activation_h;
auto &t2 = t1 << h2 << activation_h;
auto &t3 = t2 << h3 << activation_h;
auto &t4 = t3 << h4 << activation_h;
auto &t5 = t4 << out_policy;
auto &t6 = t4 << out_value;
construct_graph(m_network, {&in}, {&out_policy, &out_value});