Second opinion needed on predictions from neural network

So I have created a neural network in Python, but my predictions seem to just copy the previous value, so they are slightly shifted. Does this happen a lot with neural networks? This is my first time really using one, so I'm not sure whether this is normal or whether my results are terrible.
Any information would be helpful.


Convolutional Neural Network algorithm

I am curious to know whether there is any reference that explains the CNN algorithm clearly enough that it is easy to translate into code. In other words, I want to write my very own CNN.
I am aware of many frameworks that are already written to do the job. But I am curious to write my very own one.

Q-Learning Neural Network in Lasagne

I'm just beginning to experiment with neural networks and was hoping to create a neural network capable of learning to play the game Gomoku via q-learning. After reading through some of the Lasagne tutorials and API, I'm unsure how to proceed with my project. Also, looking at the mnist example that comes with Lasagne, I'm uncertain what code, if any, applies to what I'm trying to do. So I guess my question is: what Lasagne code do I need to create and train such a network? I don't need the most effective solution; something simple and comprehensible for a beginner would be much appreciated.
Some additional details:
I would like to have two instances of the network play against each other
I've written a basic program that can take in player moves (a single integer value in range(0, total board positions - 1)) to simulate a Gomoku game and return a victor, which should be what's needed to provide reinforcement to the networks

Multiple artificial neural networks

I am trying to set up a Multiple Artificial Neural Network as shown in image (a):
I want each of the networks to work independently on its own domain. The individual networks must be built and trained for their specific tasks. The final decision will be made based on the results of the individual networks, which are often called expert networks or agents.
Because of privacy, I cannot share my data.
I am trying to set this up with TensorFlow in Python. Do you have an idea of how I could do this, if it is achievable? So far I have not found any examples of it.
The way to go about this is to take the outputs of the two networks, concatenate the resulting output tensors (reshaping them if needed), and then pass them into the final network. Take a look here for the concatenation documentation and here for an example of taking the output from one network and feeding it into another. This should give you a place to start from.
As for (a), it is simple: just train the networks beforehand and load them when you are training the final network. Then do the concatenation on the outputs.
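As a rough illustration of the wiring (using NumPy stand-ins in place of TensorFlow ops — in TF you would use tf.concat on the output tensors; the "expert" functions, weights, and shapes here are invented for the example):

```python
import numpy as np

def expert_a(x):
    # stand-in for a pretrained expert network producing 3 features
    return np.tanh(x @ np.full((4, 3), 0.1))

def expert_b(x):
    # stand-in for a second pretrained expert producing 2 features
    return np.tanh(x @ np.full((4, 2), 0.2))

def final_net(features):
    # stand-in for the final decision network
    return features @ np.full((5, 1), 0.5)

x = np.ones((8, 4))  # a batch of 8 inputs with 4 features each

# concatenate the experts' outputs along the feature axis,
# then feed the combined tensor into the final network
combined = np.concatenate([expert_a(x), expert_b(x)], axis=1)
decision = final_net(combined)
print(combined.shape, decision.shape)
```

In TensorFlow the only structural difference is that the experts are restored graphs/models and the concatenation is a tf.concat op on their output tensors.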
Hope this helps

sknn multi layer perceptron classifier

I am using the following neural net classifier in python
from sknn import mlp
nn = mlp.Classifier(
    layers=[
        mlp.Layer("Tanh", units=n_feat / 8),
        mlp.Layer("Sigmoid", units=n_feat / 16),
        mlp.Layer("Softmax", units=n_targets)])
which is working just fine. My question is: how should I proceed if I require, for example, 100, 200 or 500 hidden layers? Do I have to specify each layer manually, or does someone have a better idea for building MLPs in Python?
You could create some loop-based mechanism to build the list of layers, I suppose, but there's a bigger issue here. A standard MLP with hundreds of layers is likely to be extremely expensive to train, both in terms of computational speed and memory usage. MLPs typically have only one or two hidden layers, or occasionally a few more. But for problems that can truly benefit from more hidden layers, it becomes important to incorporate some of the lessons learned in the field of deep learning. For example, for object classification on images, using all fully-connected layers is incredibly inefficient, because you're interested in identifying spatially-local patterns, and therefore interactions between spatially-distant pixels or regions are largely noise. (This is a perfect case for using a deep convolutional neural net.)
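The loop-based mechanism mentioned above might look roughly like this. The sketch builds plain (type, units) tuples; with sknn you would wrap each one as mlp.Layer(type, units=units). The halving schedule and minimum size are illustrative assumptions, not anything sknn prescribes:

```python
def make_layer_specs(n_feat, n_targets, n_hidden=5):
    """Build a list of (activation, units) layer specs in a loop,
    halving the width at each hidden layer."""
    specs = []
    units = n_feat // 8
    for _ in range(n_hidden):
        specs.append(("Tanh", max(units, 2)))  # never shrink below 2 units
        units //= 2
    specs.append(("Softmax", n_targets))       # output layer
    return specs

print(make_layer_specs(1024, 10, n_hidden=3))
```

You would then pass the wrapped layers as the `layers=` argument to mlp.Classifier instead of listing each one by hand.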
Although some very deep networks have been created, it's worth pointing out that even Google's very powerful Inception-v3 model is only 42 layers deep. Anyway, if you're interested in building deep models, I'd recommend reading this Deep Learning book. From what I've read of it, it seems to be a very good introduction. Hope that helps!

TensorFlow: simple recurrent neural network

I've built some neural networks with TensorFlow, like basic MLPs and convolutional neural networks. Now I want to move on to recurrent neural networks. However, I'm not experienced in natural language processing, so the TensorFlow NLP tutorials for RNNs are not easy for me to read (and not really interesting to me either).
Basically I want to start off with something simple, not an LSTM.
How would one build a simple recurrent neural network, like an Elman network, in TensorFlow?
I was only able to find GRU or LSTM RNN examples for TensorFlow, mostly for NLP. Does anyone know of some simple recurrent neural network tutorials or examples for TensorFlow?
This figure shows a basic Elman network, which is often simply called an SRN (simple recurrent network):
One option is to use the built-in RNNCell located in tensorflow/python/ops/
If you don't want to do that, you can make your own RNN. The RNN will train using back-propagation through time. Try unrolling the network for a fixed number of steps, e.g. consider input sequences of length ten. Then you can write a loop in Python to do all of the matrix multiplications for each step of the network. At each step you take the output from the previous step and combine it with the input to that step. It should not take too many lines of code to get this working.
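As a minimal sketch of that unrolled loop, here is the forward pass of an Elman network in plain NumPy (not TensorFlow; porting it means swapping the arrays for tf variables and ops). The weight names, sizes, and random initialization are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, seq_len = 3, 5, 2, 10

# illustrative weights: input->hidden, hidden->hidden, hidden->output
W_xh = rng.standard_normal((n_in, n_hidden)) * 0.1
W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
W_hy = rng.standard_normal((n_hidden, n_out)) * 0.1

xs = rng.standard_normal((seq_len, n_in))  # a length-10 input sequence
h = np.zeros(n_hidden)                     # initial hidden state

outputs = []
for x in xs:                               # unroll a fixed number of steps
    h = np.tanh(x @ W_xh + h @ W_hh)       # mix input with previous state
    outputs.append(h @ W_hy)               # readout at this step
outputs = np.stack(outputs)
print(outputs.shape)
```

Training it with back-propagation through time is then a matter of differentiating through this loop, which TensorFlow's autodiff does for you once the same loop is written with tf ops.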