I have installed Keras on a Red Hat 6 server. It is a really attractive framework because it makes it so easy to build a deep neural network. However, I found that my Keras uses only a single thread (a single core). I ran the examples shipped with the source package, such as "mnist_mlp.py", and watched the "top" command: CPU usage stays at 100%, never more than that. I have 6 cores, each with four threads, and no GPU.
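One common cause is that the BLAS/backend library defaults to one OpenMP thread. A minimal sketch of how to request more threads, assuming the thread counts below (6 physical cores) match your machine; the environment variables must be set before the backend is imported:

```python
import os

# Set BEFORE importing keras/theano/tensorflow: number of OpenMP/MKL
# threads the numeric backend may use (6 is a hypothetical value for
# a 6-core box; adjust to your hardware).
os.environ["OMP_NUM_THREADS"] = "6"
os.environ["MKL_NUM_THREADS"] = "6"

# With the TensorFlow backend you can instead configure the session
# (same hypothetical thread counts):
# import tensorflow as tf
# from keras import backend as K
# K.set_session(tf.Session(config=tf.ConfigProto(
#     intra_op_parallelism_threads=6,
#     inter_op_parallelism_threads=2)))
```

After this, "top" should show the Python process exceeding 100% CPU during training.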
The nb_epoch argument has been renamed epochs everywhere. The methods fit_generator, evaluate_generator, and predict_generator now work by drawing a number of batches from a generator (a number of training steps), rather than a number of samples. samples_per_epoch was changed to steps_per_epoch in fit_generator; it now refers to the number of batches after which an epoch is considered done. nb_val_samples was
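The practical consequence is that a Keras 1 call counting samples must be converted to a Keras 2 call counting batches. A small sketch of the conversion, using a hypothetical dataset size and batch size:

```python
import math

num_samples = 50000   # hypothetical dataset size
batch_size = 32

# Keras 1 counted samples:
#   model.fit_generator(gen, samples_per_epoch=50000, nb_epoch=10)
# Keras 2 counts batches, so divide and round up:
steps_per_epoch = int(math.ceil(num_samples / float(batch_size)))
#   model.fit_generator(gen, steps_per_epoch=steps_per_epoch, epochs=10)

print(steps_per_epoch)  # -> 1563
```

Forgetting this conversion makes each "epoch" draw batch_size times more data than before.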
>>> model = Sequential()
>>> model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(3, None, None)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/nfs/isicvlnas01/share/anaconda/lib/python2.7/site-packages/keras/models.py", line 422, in add
    layer(x)
  File "/nfs/isicvlnas01/share/anaconda/lib/python2.7/site-packages/keras/engine/topology.p
I am already aware of some discussions on how to use Keras with very large datasets (>1,000,000 images), such as this and this. However, for my scenario I can't figure out the appropriate way to use the ImageDataGenerator or to write my own data generator. Specifically, I have the following four questions. From this link: when we do datagen.fit(X_sample), do we assume that X_sample is a big enough chun
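For context, what datagen.fit(X_sample) does conceptually is estimate featurewise normalization statistics from a sample that fits in memory, which are then applied to every generated batch. A hedged NumPy stand-in for that behavior (the array shapes and sample size are illustrative assumptions, not from the question):

```python
import numpy as np

rng = np.random.RandomState(0)
# A representative chunk of the dataset that fits in memory,
# shaped (samples, channels, height, width) -- hypothetical sizes.
X_sample = rng.rand(1000, 3, 32, 32).astype("float32")

# What featurewise_center / featurewise_std_normalization compute
# inside datagen.fit(X_sample): per-position mean and std.
mean = X_sample.mean(axis=0)
std = X_sample.std(axis=0) + 1e-7

def standardize(batch):
    """Apply the fitted statistics to any later batch."""
    return (batch - mean) / std

batch = standardize(X_sample[:32])
```

So the sample only needs to be large enough that these statistics approximate those of the full dataset; the generator itself can still stream the remaining images from disk.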
Problem 1: I changed my code from fit to fit_generator, but I found it converges more slowly than with fit. With fit it converges at the 5th epoch; with fit_generator it has not converged even at the 10th epoch. Moreover, when validation_data also comes from a generator, the performance becomes much worse. Does my generator function have errors? def batch_iter(x, y, batch_size): """ Generat
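The definition above is truncated, so here is a hedged reconstruction of what a correct batch_iter usually looks like. A common cause of slower convergence with fit_generator is a generator that does not reshuffle between epochs (fit shuffles by default), so this sketch reshuffles on every pass and loops forever, as fit_generator requires:

```python
import numpy as np

def batch_iter(x, y, batch_size):
    """Yield (x_batch, y_batch) forever, reshuffling each epoch.
    A sketch under assumptions -- the original definition is truncated."""
    num_samples = len(x)
    while True:  # fit_generator expects an infinite generator
        idx = np.random.permutation(num_samples)  # new order each epoch
        for start in range(0, num_samples, batch_size):
            batch = idx[start:start + batch_size]
            yield x[batch], y[batch]

# Usage sketch:
x = np.arange(20).reshape(10, 2)
y = np.arange(10)
gen = batch_iter(x, y, batch_size=4)
xb, yb = next(gen)
```

If your generator instead iterates in a fixed order, that alone can explain the convergence gap you see.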
Hi! Keras: 2.0.4. I recently spent some time trying to build metrics for multi-class classification that output a per-class precision, recall, and F1 score. I want a metric that correctly aggregates the values across the different batches and gives me a result on the global training process at per-class granularity. The way I understand it, it currently works by calling the functio
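The aggregation problem is that averaging per-batch precision/recall is not the same as computing them globally. A minimal NumPy sketch (not the Keras metric API, just the bookkeeping it would need): accumulate per-class TP/FP/FN counts across batches, then derive the global scores once at the end:

```python
import numpy as np

num_classes = 3  # hypothetical
tp = np.zeros(num_classes)
fp = np.zeros(num_classes)
fn = np.zeros(num_classes)

def update(y_true, y_pred):
    """Accumulate per-class counts from one batch of integer labels."""
    for c in range(num_classes):
        tp[c] += np.sum((y_pred == c) & (y_true == c))
        fp[c] += np.sum((y_pred == c) & (y_true != c))
        fn[c] += np.sum((y_pred != c) & (y_true == c))

def result():
    """Global per-class precision, recall, F1 over all batches seen."""
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-7)
    return precision, recall, f1

update(np.array([0, 0, 1, 2]), np.array([0, 1, 1, 2]))
precision, recall, f1 = result()
```

Because the counts (not the ratios) are what get summed across batches, the final ratios are exact regardless of batch boundaries.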