【Hackathon 7th No.38】Add API conversion rules for the Paddle code conversion tool (Group 5) #6885

Open · wants to merge 12 commits into base: develop
@@ -12,8 +12,8 @@ Paddle has no such API; it needs to be implemented by composition.

```python
# PyTorch code
y = x.float_power(2)
y = x.float_power(y)

# Paddle code
y = x.cast(paddle.float64).pow(2)
y = x.cast(paddle.float64).pow(y)
```
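For context, the composite pattern shown in this diff works because `torch.Tensor.float_power` computes in double precision for real inputs; below is a hedged sketch, assuming `x` is a Tensor and `y` a scalar or Tensor exponent:

```python
# PyTorch code
out = x.float_power(y)

# Paddle code: cast to float64 first to mirror float_power's double-precision computation
out = x.cast(paddle.float64).pow(y)
```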
@@ -0,0 +1,15 @@
## [No parameters] torch.Tensor.isneginf

### [torch.Tensor.isneginf](https://pytorch.org/docs/stable/generated/torch.Tensor.isneginf.html#torch.Tensor.isneginf)

```python
torch.Tensor.isneginf()
```

### [paddle.Tensor.isneginf](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/Tensor_cn.html#isneginf-name-none)

```python
paddle.Tensor.isneginf(name=None)
```

Both are functionally identical and take no parameters.
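For illustration, a minimal self-contained sketch (assuming the `paddle.Tensor.isneginf` method documented above; the sample data is made up):

```python
import paddle

# Identical call on both sides: x.isneginf() in PyTorch and in Paddle
x = paddle.to_tensor([1.0, float("-inf"), float("inf")])
print(x.isneginf())  # [False, True, False]
```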
@@ -0,0 +1,15 @@
## [No parameters] torch.Tensor.isposinf

### [torch.Tensor.isposinf](https://pytorch.org/docs/stable/generated/torch.Tensor.isposinf.html#torch.Tensor.isposinf)

```python
torch.Tensor.isposinf()
```

### [paddle.Tensor.isposinf](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/Tensor_cn.html#isposinf-name-none)

```python
paddle.Tensor.isposinf(name=None)
```

Both are functionally identical and take no parameters.
@@ -0,0 +1,15 @@
## [No parameters] torch.Tensor.isreal

### [torch.Tensor.isreal](https://pytorch.org/docs/stable/generated/torch.Tensor.isreal.html#torch.Tensor.isreal)

```python
torch.Tensor.isreal()
```

### [paddle.Tensor.isreal](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/Tensor_cn.html#isreal-name-none)

```python
paddle.Tensor.isreal(name=None)
```

Both are functionally identical and take no parameters.
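Likewise, a minimal sketch for `isreal` (assuming the `paddle.Tensor.isreal` method documented above; the sample data is made up):

```python
import paddle

# Elements with a zero imaginary part count as real
x = paddle.to_tensor([1 + 0j, 2 + 3j, 4 + 0j])
print(x.isreal())  # [True, False, True]
```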
@@ -0,0 +1,27 @@
## [Composite implementation] torch.Tensor.positive

### [torch.Tensor.positive](https://pytorch.org/docs/stable/generated/torch.Tensor.positive.html#torch.Tensor.positive)

```python
torch.Tensor.positive()
```

If `input` is a bool Tensor, a RuntimeError is raised; otherwise `input` is returned.

Paddle has no such API; it needs to be implemented by composition.

### Conversion example

```python
# PyTorch code
x.positive()

# Paddle code
def positive(x):
    if x.dtype != paddle.bool:
        return x
    else:
        raise RuntimeError("boolean tensors are not supported.")

positive(x)
```
@@ -0,0 +1,26 @@
## [paddle has more parameters] torch.Tensor.scatter_reduce

### [torch.Tensor.scatter_reduce](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce.html#torch-tensor-scatter-reduce)

```python
torch.Tensor.scatter_reduce(dim, index, src, reduce, *, include_self=True)
```

### [paddle.Tensor.put_along_axis](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/Tensor_cn.html#put-along-axis-indices-value-axis-reduce-assign-include-self-true-broadcast-true)

```python
paddle.Tensor.put_along_axis(indices, values, axis, reduce="assign", include_self=True, broadcast=True)
```

Paddle supports more parameters than PyTorch, as detailed below:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------------ | ------------ | ------------------------------------------------------------ |
| dim          | axis         | The dimension along which to scatter; only the parameter name differs. |
| index        | indices      | The index Tensor; only the parameter name differs. |
| src          | values       | The values to insert; only the parameter name differs. |
| reduce       | reduce       | The reduction applied when inserting values; the defaults differ. PyTorch has no default and requires this argument, while Paddle defaults to `assign`, so set it explicitly to match PyTorch. PyTorch's `sum` corresponds to Paddle's `add`, and PyTorch's `prod` corresponds to Paddle's `multiply`. |
| include_self | include_self | Whether values already present in the input take part in the reduction. |
| -            | broadcast    | Whether to broadcast the index Tensor. PyTorch has no such parameter; set it to `False` in Paddle to match PyTorch. |

Author: The Paddle API documentation appears to be wrong here (params_in_docs, params_in_code).

Collaborator: Just go by the code.
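A conversion sketch based on the mapping above, assuming `x`, `index`, and `src` are suitably shaped Tensors:

```python
# PyTorch code
y = x.scatter_reduce(0, index, src, reduce="sum", include_self=True)

# Paddle code: `sum` maps to `add`; broadcast=False matches PyTorch's behavior
y = x.put_along_axis(index, src, 0, reduce="add", include_self=True, broadcast=False)
```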
@@ -0,0 +1,33 @@
## [Input parameter type mismatch] torch.block_diag

### [torch.block_diag](https://pytorch.org/docs/stable/generated/torch.block_diag.html#torch-block-diag)

```python
torch.block_diag(*tensors)
```

### [paddle.block_diag](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/block_diag_cn.html)

```python
paddle.block_diag(inputs, name=None)
```

The two are functionally identical, but the parameter types differ, as detailed below:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| -------- | ------------ | ------------------------------------------------------------ |
| *tensors | inputs | A group of input Tensors. The PyTorch parameter tensors is variadic, while the Paddle parameter inputs is a list(Tensor) or tuple(Tensor). |

### Conversion example

#### *tensors: a group of input Tensors

```python
# PyTorch code
torch.block_diag(x, y, z)

# Paddle code
paddle.block_diag([x, y, z])
```
@@ -0,0 +1,194 @@
## [Composite implementation] torch.can_cast

### [torch.can_cast](https://pytorch.org/docs/stable/generated/torch.can_cast.html#torch-can-cast)

```python
torch.can_cast(from_, to)
```

Determines whether casting from one type to another is allowed under PyTorch's [casting rules](https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc).

Paddle has no such API; it needs to be implemented by composition.

### Conversion example

```python
# PyTorch code
torch.can_cast(x, y)

# Paddle code
def can_cast(from_, to):
    can_cast_dict = {
        paddle.bfloat16: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: False,
            paddle.int8: False,
            paddle.int16: False,
            paddle.int32: False,
            paddle.int64: False,
            paddle.bool: False,
        },
        paddle.float16: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: False,
            paddle.int8: False,
            paddle.int16: False,
            paddle.int32: False,
            paddle.int64: False,
            paddle.bool: False,
        },
        paddle.float32: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: False,
            paddle.int8: False,
            paddle.int16: False,
            paddle.int32: False,
            paddle.int64: False,
            paddle.bool: False,
        },
        paddle.float64: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: False,
            paddle.int8: False,
            paddle.int16: False,
            paddle.int32: False,
            paddle.int64: False,
            paddle.bool: False,
        },
        paddle.complex64: {
            paddle.bfloat16: False,
            paddle.float16: False,
            paddle.float32: False,
            paddle.float64: False,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: False,
            paddle.int8: False,
            paddle.int16: False,
            paddle.int32: False,
            paddle.int64: False,
            paddle.bool: False,
        },
        paddle.complex128: {
            paddle.bfloat16: False,
            paddle.float16: False,
            paddle.float32: False,
            paddle.float64: False,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: False,
            paddle.int8: False,
            paddle.int16: False,
            paddle.int32: False,
            paddle.int64: False,
            paddle.bool: False,
        },
        paddle.uint8: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: True,
            paddle.int8: True,
            paddle.int16: True,
            paddle.int32: True,
            paddle.int64: True,
            paddle.bool: False,
        },
        paddle.int8: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: True,
            paddle.int8: True,
            paddle.int16: True,
            paddle.int32: True,
            paddle.int64: True,
            paddle.bool: False,
        },
        paddle.int16: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: True,
            paddle.int8: True,
            paddle.int16: True,
            paddle.int32: True,
            paddle.int64: True,
            paddle.bool: False,
        },
        paddle.int32: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: True,
            paddle.int8: True,
            paddle.int16: True,
            paddle.int32: True,
            paddle.int64: True,
            paddle.bool: False,
        },
        paddle.int64: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: True,
            paddle.int8: True,
            paddle.int16: True,
            paddle.int32: True,
            paddle.int64: True,
            paddle.bool: False,
        },
        paddle.bool: {
            paddle.bfloat16: True,
            paddle.float16: True,
            paddle.float32: True,
            paddle.float64: True,
            paddle.complex64: True,
            paddle.complex128: True,
            paddle.uint8: True,
            paddle.int8: True,
            paddle.int16: True,
            paddle.int32: True,
            paddle.int64: True,
            paddle.bool: True,
        },
    }
    return can_cast_dict[from_][to]

can_cast(x, y)
```
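As a quick sanity check, a hedged usage sketch whose expected outputs follow the dictionary defined above:

```python
print(can_cast(paddle.float64, paddle.int32))  # False: floating-point -> integer is not allowed
print(can_cast(paddle.int32, paddle.float64))  # True: integer -> floating-point is allowed
print(can_cast(paddle.bool, paddle.int8))      # True: bool casts to every dtype
```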
Collaborator: There are quite a few cases here; please double-check that they all match Torch's results.

Author: I wrote a test that replaces every `paddle` in the example's `can_cast_dict` with `torch`; the results match `torch.can_cast`:

```python
import torch

can_cast_dict = {
    torch.bfloat16: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: False,
        torch.int8: False,
        torch.int16: False,
        torch.int32: False,
        torch.int64: False,
        torch.bool: False
    },
    torch.float16: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: False,
        torch.int8: False,
        torch.int16: False,
        torch.int32: False,
        torch.int64: False,
        torch.bool: False,
    },
    torch.float32: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: False,
        torch.int8: False,
        torch.int16: False,
        torch.int32: False,
        torch.int64: False,
        torch.bool: False,
    },
    torch.float64: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: False,
        torch.int8: False,
        torch.int16: False,
        torch.int32: False,
        torch.int64: False,
        torch.bool: False,
    },
    torch.complex64: {
        torch.bfloat16: False,
        torch.float16: False,
        torch.float32: False,
        torch.float64: False,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: False,
        torch.int8: False,
        torch.int16: False,
        torch.int32: False,
        torch.int64: False,
        torch.bool: False,
    },
    torch.complex128: {
        torch.bfloat16: False,
        torch.float16: False,
        torch.float32: False,
        torch.float64: False,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: False,
        torch.int8: False,
        torch.int16: False,
        torch.int32: False,
        torch.int64: False,
        torch.bool: False,
    },
    torch.uint8: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: True,
        torch.int8: True,
        torch.int16: True,
        torch.int32: True,
        torch.int64: True,
        torch.bool: False,
    },
    torch.int8: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: True,
        torch.int8: True,
        torch.int16: True,
        torch.int32: True,
        torch.int64: True,
        torch.bool: False,
    },
    torch.int16: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: True,
        torch.int8: True,
        torch.int16: True,
        torch.int32: True,
        torch.int64: True,
        torch.bool: False,
    },
    torch.int32: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: True,
        torch.int8: True,
        torch.int16: True,
        torch.int32: True,
        torch.int64: True,
        torch.bool: False,
    },
    torch.int64: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: True,
        torch.int8: True,
        torch.int16: True,
        torch.int32: True,
        torch.int64: True,
        torch.bool: False,
    },
    torch.bool: {
        torch.bfloat16: True,
        torch.float16: True,
        torch.float32: True,
        torch.float64: True,
        torch.complex64: True,
        torch.complex128: True,
        torch.uint8: True,
        torch.int8: True,
        torch.int16: True,
        torch.int32: True,
        torch.int64: True,
        torch.bool: True,
    }
}
for _from_dtype in can_cast_dict.keys():
    for _to_dtype in can_cast_dict[_from_dtype].keys():
        assert torch.can_cast(_from_dtype, _to_dtype) == can_cast_dict[_from_dtype][_to_dtype], "can_cast error"
        print(f"_from_dtype={_from_dtype}, _to_dtype={_to_dtype}")

print("can_cast test pass")

```
@@ -0,0 +1,33 @@
## [Input parameter type mismatch] torch.cartesian_prod

### [torch.cartesian_prod](https://pytorch.org/docs/stable/generated/torch.cartesian_prod.html#torch-cartesian-prod)

```python
torch.cartesian_prod(*tensors)
```

### [paddle.cartesian_prod](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/cartesian_prod_cn.html)

```python
paddle.cartesian_prod(x, name=None)
```

The two are functionally identical, but the parameter types differ, as detailed below:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| -------- | ------------ | ------------------------------------------------------------ |
| *tensors | x | A group of input Tensors. The PyTorch parameter tensors is variadic, while the Paddle parameter x is a list(Tensor) or tuple(Tensor). |

### Conversion example

#### *tensors: a group of input Tensors

```python
# PyTorch code
torch.cartesian_prod(a, b)

# Paddle code
paddle.cartesian_prod([a, b])
```