---
title: Softmax with Loss Layer
---

# Softmax with Loss Layer

* Layer type: `SoftmaxWithLoss`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1SoftmaxWithLossLayer.html)
* Header: [`./include/caffe/layers/softmax_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/softmax_loss_layer.hpp)
* CPU implementation: [`./src/caffe/layers/softmax_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/softmax_loss_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/softmax_loss_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/softmax_loss_layer.cu)

The softmax loss layer computes the multinomial logistic loss of the softmax of its inputs. It is conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient: the combined backward pass reduces to the softmax output minus the one-hot target, which avoids dividing by softmax probabilities that can underflow to zero.
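
As a sketch of typical usage, the layer takes two bottom blobs (predicted scores and ground-truth integer labels) and produces a scalar loss. The blob names `fc8` and `label` below are placeholders, not names mandated by the layer:

{% highlight Protobuf %}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"    # predicted class scores, e.g. N x C
  bottom: "label"  # ground-truth integer labels, e.g. N x 1
  top: "loss"      # scalar multinomial logistic loss
}
{% endhighlight %}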

## Parameters

* Parameters (`SoftmaxParameter softmax_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/SoftmaxParameter.txt %}
{% endhighlight %}

* Parameters (`LossParameter loss_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/LossParameter.txt %}
{% endhighlight %}
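
As a minimal sketch of how these parameters combine, the following assumes a segmentation-style setup in which the label value 255 marks instances to exclude; the blob names and the ignore value are illustrative, while `axis`, `ignore_label`, and `normalization` are fields defined in `caffe.proto`:

{% highlight Protobuf %}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  softmax_param {
    axis: 1               # take the softmax over the channel (class) dimension; 1 is the default
  }
  loss_param {
    ignore_label: 255     # instances with this label do not contribute to the loss or gradient
    normalization: VALID  # average the loss over non-ignored instances only
  }
}
{% endhighlight %}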

## See also

* [Softmax layer](softmax.html)