Commit fdb978a

askhadelinkerzhang authored and committed

Promote Thresholded Relu Op (#1856)

* promote thresholded relu
* removing thresholdedrelu from old.cc
* Update test models
* doc updates
* update test coverage doc
* bug fix
* update doc

1 parent ce33262 · commit fdb978a

File tree

9 files changed: +209 −208 lines

docs/Changelog.md

Lines changed: 38 additions & 37 deletions
@@ -4501,43 +4501,6 @@ This version of the operator has been available since version 1 of the default O
 <dd>Constrain input and output types to float tensors.</dd>
 </dl>
 
-### <a name="ThresholdedRelu-1"></a>**ThresholdedRelu-1**</a>
-
-  ThresholdedRelu takes one input data (Tensor<T>) and produces one output data
-  (Tensor<T>) where the rectified linear function, y = x for x > alpha, y = 0 otherwise,
-  is applied to the tensor elementwise.
-
-#### Version
-
-No versioning maintained for experimental ops.
-#### Attributes
-
-<dl>
-<dt><tt>alpha</tt> : float (default is 1.0)</dt>
-<dd>Threshold value</dd>
-</dl>
-
-#### Inputs
-
-<dl>
-<dt><tt>X</tt> : T</dt>
-<dd>Input tensor</dd>
-</dl>
-
-#### Outputs
-
-<dl>
-<dt><tt>Y</tt> : T</dt>
-<dd>Output tensor</dd>
-</dl>
-
-#### Type Constraints
-
-<dl>
-<dt><tt>T</tt> : tensor(float16), tensor(float), tensor(double)</dt>
-<dd>Constrain input and output types to float tensors.</dd>
-</dl>
-
 ### <a name="Tile-1"></a>**Tile-1**</a>
 
 Repeat the elements of a tensor along an axis.
@@ -9606,6 +9569,44 @@ This version of the operator has been available since version 10 of the default
 #### Type Constraints
 
 
+### <a name="ThresholdedRelu-10"></a>**ThresholdedRelu-10**</a>
+
+  ThresholdedRelu takes one input data (Tensor<T>) and produces one output data
+  (Tensor<T>) where the rectified linear function, y = x for x > alpha, y = 0 otherwise,
+  is applied to the tensor elementwise.
+
+#### Version
+
+This version of the operator has been available since version 10 of the default ONNX operator set.
+
+#### Attributes
+
+<dl>
+<dt><tt>alpha</tt> : float (default is 1.0)</dt>
+<dd>Threshold value</dd>
+</dl>
+
+#### Inputs
+
+<dl>
+<dt><tt>X</tt> : T</dt>
+<dd>Input tensor</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>Y</tt> : T</dt>
+<dd>Output tensor</dd>
+</dl>
+
+#### Type Constraints
+
+<dl>
+<dt><tt>T</tt> : tensor(float16), tensor(float), tensor(double)</dt>
+<dd>Constrain input and output types to float tensors.</dd>
+</dl>
+
 ### <a name="TopK-10"></a>**TopK-10**</a>
 
 Retrieve the top-K elements along a specified axis. Given an input tensor of
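The promoted ThresholdedRelu-10 definition above (y = x for x > alpha, y = 0 otherwise, applied elementwise) can be sketched as a small NumPy reference implementation. This is an illustrative sketch; the function name is not part of the commit:

```python
import numpy as np

def thresholded_relu(x, alpha=1.0):
    # Elementwise: keep x where it is strictly greater than alpha,
    # otherwise output 0 (alpha defaults to 1.0, as in the spec).
    return np.where(x > alpha, x, np.zeros_like(x))

x = np.array([-1.5, 0.0, 1.2, 2.0, 2.2], dtype=np.float32)
print(thresholded_relu(x, alpha=2.0))  # only the entry strictly above alpha survives
```

Note that the comparison is strict: an input exactly equal to alpha maps to 0.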

docs/Operators.md

Lines changed: 93 additions & 92 deletions
@@ -120,6 +120,7 @@
 * <a href="#Tan">Tan</a>
 * <a href="#Tanh">Tanh</a>
 * <a href="#TfIdfVectorizer">TfIdfVectorizer</a>
+* <a href="#ThresholdedRelu">ThresholdedRelu</a>
 * <a href="#Tile">Tile</a>
 * <a href="#TopK">TopK</a>
 * <a href="#Transpose">Transpose</a>
@@ -132,7 +133,6 @@
 * <sub>experimental</sub> <a href="#GivenTensorFill">GivenTensorFill</a>
 * <sub>experimental</sub> <a href="#Scale">Scale</a>
 * <sub>experimental</sub> <a href="#ScaledTanh">ScaledTanh</a>
-* <sub>experimental</sub> <a href="#ThresholdedRelu">ThresholdedRelu</a>
 
 **Operators with function registered:**
 * <a href="#MeanVarianceNormalization">MeanVarianceNormalization</a>
@@ -12310,6 +12310,98 @@ expect(node, inputs=[input], outputs=[output], name='test_tfidfvectorizer_tf_uni
 </details>
 
 
+### <a name="ThresholdedRelu"></a><a name="thresholdedrelu">**ThresholdedRelu**</a>
+
+  ThresholdedRelu takes one input data (Tensor<T>) and produces one output data
+  (Tensor<T>) where the rectified linear function, y = x for x > alpha, y = 0 otherwise,
+  is applied to the tensor elementwise.
+
+#### Version
+
+This version of the operator has been available since version 10 of the default ONNX operator set.
+
+#### Attributes
+
+<dl>
+<dt><tt>alpha</tt> : float (default is 1.0)</dt>
+<dd>Threshold value</dd>
+</dl>
+
+#### Inputs
+
+<dl>
+<dt><tt>X</tt> : T</dt>
+<dd>Input tensor</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>Y</tt> : T</dt>
+<dd>Output tensor</dd>
+</dl>
+
+#### Type Constraints
+
+<dl>
+<dt><tt>T</tt> : tensor(float16), tensor(float), tensor(double)</dt>
+<dd>Constrain input and output types to float tensors.</dd>
+</dl>
+
+
+#### Examples
+
+<details>
+<summary>default</summary>
+
+```python
+default_alpha = 1.0
+node = onnx.helper.make_node(
+    'ThresholdedRelu',
+    inputs=['x'],
+    outputs=['y']
+)
+x = np.random.randn(3, 4, 5).astype(np.float32)
+y = np.clip(x, default_alpha, np.inf)
+y[y == default_alpha] = 0
+
+expect(node, inputs=[x], outputs=[y],
+       name='test_thresholdedrelu_default')
+```
+
+</details>
+
+
+<details>
+<summary>thresholdedrelu</summary>
+
+```python
+alpha = 2.0
+node = onnx.helper.make_node(
+    'ThresholdedRelu',
+    inputs=['x'],
+    outputs=['y'],
+    alpha=alpha
+)
+
+x = np.array([-1.5, 0., 1.2, 2.0, 2.2]).astype(np.float32)
+y = np.clip(x, alpha, np.inf)  # expected output [0., 0., 0., 0., 2.2]
+y[y == alpha] = 0
+
+expect(node, inputs=[x], outputs=[y],
+       name='test_thresholdedrelu_example')
+
+x = np.random.randn(3, 4, 5).astype(np.float32)
+y = np.clip(x, alpha, np.inf)
+y[y == alpha] = 0
+
+expect(node, inputs=[x], outputs=[y],
+       name='test_thresholdedrelu')
+```
+
+</details>
+
+
 ### <a name="Tile"></a><a name="tile">**Tile**</a>
 
 Constructs a tensor by tiling a given tensor.
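The test code in the examples above computes ThresholdedRelu via a clip-then-zero trick rather than the direct definition. A quick equivalence check (an illustrative sketch, not part of the commit) shows the two formulations agree:

```python
import numpy as np

alpha = 2.0
x = np.random.randn(3, 4, 5).astype(np.float32)

# Clip-then-zero, as in the example code: values <= alpha become alpha
# after clipping, then every entry equal to alpha is zeroed out.
y_clip = np.clip(x, alpha, np.inf)
y_clip[y_clip == alpha] = 0

# Direct elementwise definition: y = x for x > alpha, else 0.
y_direct = np.where(x > alpha, x, 0).astype(np.float32)

print(np.array_equal(y_clip, y_direct))  # the two formulations match
```

The zeroing step is what makes clipping correct here: without it, sub-threshold inputs would saturate at alpha instead of mapping to 0.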
@@ -13104,94 +13196,3 @@ No versioning maintained for experimental ops.
 </dl>
 
 
-### <sub>experimental</sub> <a name="ThresholdedRelu"></a><a name="thresholdedrelu">**ThresholdedRelu**</a>
-
-  ThresholdedRelu takes one input data (Tensor<T>) and produces one output data
-  (Tensor<T>) where the rectified linear function, y = x for x > alpha, y = 0 otherwise,
-  is applied to the tensor elementwise.
-
-#### Version
-
-No versioning maintained for experimental ops.
-#### Attributes
-
-<dl>
-<dt><tt>alpha</tt> : float (default is 1.0)</dt>
-<dd>Threshold value</dd>
-</dl>
-
-#### Inputs
-
-<dl>
-<dt><tt>X</tt> : T</dt>
-<dd>Input tensor</dd>
-</dl>
-
-#### Outputs
-
-<dl>
-<dt><tt>Y</tt> : T</dt>
-<dd>Output tensor</dd>
-</dl>
-
-#### Type Constraints
-
-<dl>
-<dt><tt>T</tt> : tensor(float16), tensor(float), tensor(double)</dt>
-<dd>Constrain input and output types to float tensors.</dd>
-</dl>
-
-
-#### Examples
-
-<details>
-<summary>default</summary>
-
-```python
-default_alpha = 1.0
-node = onnx.helper.make_node(
-    'ThresholdedRelu',
-    inputs=['x'],
-    outputs=['y']
-)
-x = np.random.randn(3, 4, 5).astype(np.float32)
-y = np.clip(x, default_alpha, np.inf)
-y[y == default_alpha] = 0
-
-expect(node, inputs=[x], outputs=[y],
-       name='test_thresholdedrelu_default')
-```
-
-</details>
-
-
-<details>
-<summary>thresholdedrelu</summary>
-
-```python
-alpha = 2.0
-node = onnx.helper.make_node(
-    'ThresholdedRelu',
-    inputs=['x'],
-    outputs=['y'],
-    alpha=alpha
-)
-
-x = np.array([-1.5, 0., 1.2, 2.0, 2.2]).astype(np.float32)
-y = np.clip(x, alpha, np.inf)  # expected output [0., 0., 0., 0., 2.2]
-y[y == alpha] = 0
-
-expect(node, inputs=[x], outputs=[y],
-       name='test_thresholdedrelu_example')
-
-x = np.random.randn(3, 4, 5).astype(np.float32)
-y = np.clip(x, alpha, np.inf)
-y[y == alpha] = 0
-
-expect(node, inputs=[x], outputs=[y],
-       name='test_thresholdedrelu')
-```
-
-</details>