Mirror of https://github.com/OpenListTeam/OpenList.git (synced 2025-09-20 12:46:17 +08:00)

Compare commits: v4.1.0 ... renovate/g (54 commits)
Commits (SHA1 only; the author and date columns were not captured in this view):
7350b44036, 87cf95f50b, 8ab26cb823, 5880c8e1af, 14bf4ecb4c, 04a5e58781, bbd4389345, f350ccdf95, 4f2de9395e, b0dbbebfb0, 0c27b4bd47, 736cd9e5f2, c7a603c926, a28d6d5693, e59d2233e2, 01914a06ef, 6499374d1c, b054919d5c, 048ee9c2e5, 23394548ca, b04677b806, e4c902dd93, 5d8bd258c0, 08c5283c8c, 10a14f10cd, f86ebc52a0, 016ed90efa, d76407b201, 5de6b660f2, 71ada3b656, dc42f0e226, 74bf9f6467, d0c22a1ecb, 57fceabcf4, 8c244a984d, df479ba806, 5ae8e96237, aa0ced47b0, ab747d9052, 93c06213d4, b9b8eed285, 317d190b77, 52d7d819ad, 0483e0f868, 08dae4f55f, 9ac0484bc0, 8cf15183a0, c8f2aaaa55, 1208bd0a83, 6b096bcad4, 58dbf088f9, 05ff7908f2, a703b736c9, e458f2ab53
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 56 lines):

```markdown
<!--
Provide a general summary of your changes in the Title above.

The PR title must start with `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, or `chore(): `. For example: `feat(component): add new feature`.

If it spans multiple components, use the main component as the prefix and enumerate in the title, describe in the body.
-->

<!--
在上方标题中提供您更改的总体摘要。

PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`。

如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。
-->

## Description / 描述

<!-- Describe your changes in detail -->
<!-- 详细描述您的更改 -->

## Motivation and Context / 背景

<!-- Why is this change required? What problem does it solve? -->
<!-- 为什么需要此更改?它解决了什么问题? -->

<!-- If it fixes an open issue, please link to the issue here. -->
<!-- 如果修复了一个打开的 issue,请在此处链接到该 issue -->

Closes #XXXX

<!-- or -->
<!-- 或者 -->

Relates to #XXXX

## How Has This Been Tested? / 测试

<!-- Please describe in detail how you tested your changes. -->
<!-- 请详细描述您如何测试更改 -->

## Checklist / 检查清单

<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!-- 检查以下所有要点,并在所有适用的框中打 `x` -->

<!-- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
<!-- 如果您对其中任何一项不确定,请不要犹豫提问。我们会帮助您! -->

- [ ] I have read the [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) document.
      我已阅读 [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) 文档。
- [ ] I have formatted my code with `go fmt` or [prettier](https://prettier.io/).
      我已使用 `go fmt` 或 [prettier](https://prettier.io/) 格式化提交的代码。
- [ ] I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
      我已为此 PR 添加了适当的标签(如无权限或需要的标签不存在,请在描述中说明,管理员将后续处理)。
- [ ] I have requested review from relevant code authors using the "Request review" feature when applicable.
      我已在适当情况下使用 "Request review" 功能请求相关代码作者进行审查。
- [ ] I have updated the repository accordingly (if needed).
      我已相应更新了相关仓库(若适用)。
  - [ ] [OpenList-Frontend](https://github.com/OpenListTeam/OpenList-Frontend) #XXXX
  - [ ] [OpenList-Docs](https://github.com/OpenListTeam/OpenList-Docs) #XXXX
```
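A title that satisfies this convention could look like the following (the component and wording are purely illustrative):

```
fix(115): handle empty app version response
```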
.github/workflows/beta_release.yml (vendored, 1 changed line):

```diff
@@ -93,6 +93,7 @@ jobs:
         run: bash build.sh dev web
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}

       - name: Build
         uses: OpenListTeam/cgo-actions@v1.2.2
```
.github/workflows/build.yml (vendored, 1 changed line):

```diff
@@ -39,6 +39,7 @@ jobs:
         run: bash build.sh dev web
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}

       - name: Build
         uses: OpenListTeam/cgo-actions@v1.2.2
```
.github/workflows/release.yml (vendored, 1 changed line):

```diff
@@ -66,6 +66,7 @@ jobs:
           bash build.sh release ${{ matrix.build-type == 'lite' && 'lite' || '' }} ${{ matrix.target-platform }}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}

       - name: Upload assets
         uses: softprops/action-gh-release@v2
```
.github/workflows/release_docker.yml (vendored, 2 changed lines):

```diff
@@ -66,6 +66,7 @@ jobs:
         run: bash build.sh release docker-multiplatform
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}

       - name: Upload artifacts
         uses: actions/upload-artifact@v4
@@ -105,6 +106,7 @@ jobs:
         run: bash build.sh release lite docker-multiplatform
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}

       - name: Upload artifacts
         uses: actions/upload-artifact@v4
```
.github/workflows/sync_repo.yml (vendored, new file, 38 lines):

```yaml
name: Sync to Gitee

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  sync:
    runs-on: ubuntu-latest
    name: Sync GitHub to Gitee
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.GITEE_SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan gitee.com >> ~/.ssh/known_hosts

      - name: Create single commit and push
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"

          # Create a new branch
          git checkout --orphan new-main
          git add .
          git commit -m "Sync from GitHub: $(date)"

          # Add Gitee remote and force push
          git remote add gitee ${{ vars.GITEE_REPO_URL }}
          git push --force gitee new-main:main
```
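The orphan-branch trick makes the mirror carry a single squashed commit instead of the full history. The same effect can be reproduced locally; a sketch, with an illustrative remote URL:

```shell
git checkout --orphan snapshot       # new branch with no parent commits
git add .
git commit -m "Sync from GitHub: $(date)"
git push --force git@gitee.com:example/OpenList.git snapshot:main
```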
.github/workflows/test_docker.yml (vendored, 1 changed line):

```diff
@@ -55,6 +55,7 @@ jobs:
         run: bash build.sh beta docker-multiplatform
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}

       - name: Upload artifacts
         uses: actions/upload-artifact@v4
```
CONTRIBUTING.md (110 changed lines):

````diff
@@ -2,106 +2,76 @@
 
 ## Setup your machine
 
-`OpenList` is written in [Go](https://golang.org/) and [React](https://reactjs.org/).
+`OpenList` is written in [Go](https://golang.org/) and [SolidJS](https://www.solidjs.com/).
 
 Prerequisites:
 
 - [git](https://git-scm.com)
-- [Go 1.20+](https://golang.org/doc/install)
+- [Go 1.24+](https://golang.org/doc/install)
 - [gcc](https://gcc.gnu.org/)
 - [nodejs](https://nodejs.org/)
 
-Clone `OpenList` and `OpenList-Frontend` anywhere:
+## Cloning a fork
+
+Fork and clone `OpenList` and `OpenList-Frontend` anywhere:
 
 ```shell
-$ git clone https://github.com/OpenListTeam/OpenList.git
-$ git clone --recurse-submodules https://github.com/OpenListTeam/OpenList-Frontend.git
+$ git clone https://github.com/<your-username>/OpenList.git
+$ git clone --recurse-submodules https://github.com/<your-username>/OpenList-Frontend.git
+```
+
+## Creating a branch
+
+Create a new branch from the `main` branch, with an appropriate name.
+
+```shell
+$ git checkout -b <branch-name>
 ```
-You should switch to the `main` branch for development.
 
 ## Preview your change
 
 ### backend
 
 ```shell
 $ go run main.go
 ```
 
 ### frontend
 
 ```shell
 $ pnpm dev
 ```
 
 ## Add a new driver
 
 Copy `drivers/template` folder and rename it, and follow the comments in it.
 
 ## Create a commit
 
 Commit messages should be well formatted, and to make that "standardized".
 
-### Commit Message Format
-Each commit message consists of a **header**, a **body** and a **footer**. The header has a special
-format that includes a **type**, a **scope** and a **subject**:
-
-```
-<type>(<scope>): <subject>
-<BLANK LINE>
-<body>
-<BLANK LINE>
-<footer>
-```
-
-The **header** is mandatory and the **scope** of the header is optional.
-
-Any line of the commit message cannot be longer than 100 characters! This allows the message to be easier
-to read on GitHub as well as in various git tools.
-
-### Revert
-If the commit reverts a previous commit, it should begin with `revert: `, followed by the header
-of the reverted commit.
-In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit
-being reverted.
-
-### Type
-Must be one of the following:
-
-* **feat**: A new feature
-* **fix**: A bug fix
-* **docs**: Documentation only changes
-* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing
-  semi-colons, etc)
-* **refactor**: A code change that neither fixes a bug nor adds a feature
-* **perf**: A code change that improves performance
-* **test**: Adding missing or correcting existing tests
-* **build**: Affects project builds or dependency modifications
-* **revert**: Restore the previous commit
-* **ci**: Continuous integration of related file modifications
-* **chore**: Changes to the build process or auxiliary tools and libraries such as documentation
-  generation
-* **release**: Release a new version
-
-### Scope
-The scope could be anything specifying place of the commit change. For example `$location`,
-`$browser`, `$compile`, `$rootScope`, `ngHref`, `ngClick`, `ngView`, etc...
-
-You can use `*` when the change affects more than a single scope.
-
-### Subject
-The subject contains succinct description of the change:
-
-* use the imperative, present tense: "change" not "changed" nor "changes"
-* don't capitalize first letter
-* no dot (.) at the end
-
-### Body
-Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes".
-The body should include the motivation for the change and contrast this with previous behavior.
-
-### Footer
-The footer should contain any information about **Breaking Changes** and is also the place to
-[reference GitHub issues that this commit closes](https://help.github.com/articles/closing-issues-via-commit-messages/).
-
-**Breaking Changes** should start with the word `BREAKING CHANGE:` with a space or two newlines.
-The rest of the commit message is then used for this.
+Submit your pull request. For PR titles, follow [Conventional Commits](https://www.conventionalcommits.org).
+
+https://github.com/OpenListTeam/OpenList/issues/376
+
+It's suggested to sign your commits. See: [How to sign commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
 
 ## Submit a pull request
 
-Push your branch to your `openlist` fork and open a pull request against the
-`main` branch.
+Please make sure your code has been formatted with `go fmt` or [prettier](https://prettier.io/) before submitting.
+
+Push your branch to your `openlist` fork and open a pull request against the `main` branch.
+
+## Merge your pull request
+
+Your pull request will be merged after review. Please wait for the maintainer to merge your pull request after review.
+
+At least 1 approving review is required by reviewers with write access. You can also request a review from maintainers.
+
+## Delete your branch
+
+(Optional) After your pull request is merged, you can delete your branch.
+
+---
+
+Thank you for your contribution! Let's make OpenList better together!
````
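Since the revised guide suggests signing commits, a minimal GPG setup might look like this (the key id is illustrative; SSH-based signing is configured analogously):

```shell
git config --global user.signingkey 3AA5C34371567BD2   # your GPG key id
git config --global commit.gpgsign true                # sign every commit by default
git commit -S -m "feat(drivers): add example driver"   # or sign one commit explicitly
```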
Dockerfile (23 changed lines):

```diff
@@ -1,3 +1,6 @@
+### Default image is base. You can add other support by modifying BASE_IMAGE_TAG. The following parameters are supported: base (default), aria2, ffmpeg, aio
+ARG BASE_IMAGE_TAG=base
+
 FROM alpine:edge AS builder
 LABEL stage=go-builder
 WORKDIR /app/
@@ -7,21 +10,27 @@ RUN go mod download
 COPY ./ ./
 RUN bash build.sh release docker
 
-### Default image is base. You can add other support by modifying BASE_IMAGE_TAG. The following parameters are supported: base (default), aria2, ffmpeg, aio
-ARG BASE_IMAGE_TAG=base
 FROM openlistteam/openlist-base-image:${BASE_IMAGE_TAG}
+LABEL MAINTAINER="OpenList"
 ARG INSTALL_FFMPEG=false
 ARG INSTALL_ARIA2=false
-LABEL MAINTAINER="OpenList"
+ARG USER=openlist
+ARG UID=1001
+ARG GID=1001
 
 WORKDIR /opt/openlist/
 
-COPY --chmod=755 --from=builder /app/bin/openlist ./
-COPY --chmod=755 entrypoint.sh /entrypoint.sh
+RUN addgroup -g ${GID} ${USER} && \
+    adduser -D -u ${UID} -G ${USER} ${USER} && \
+    mkdir -p /opt/openlist/data
+
+COPY --from=builder --chmod=755 --chown=${UID}:${GID} /app/bin/openlist ./
+COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
+
+USER ${USER}
 RUN /entrypoint.sh version
 
-ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
+ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
 VOLUME /opt/openlist/data/
 EXPOSE 5244 5245
 CMD [ "/entrypoint.sh" ]
```
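With the new build arguments, the image variant and the runtime identity are chosen at build time; a sketch (the tag and ids are illustrative):

```shell
docker build \
  --build-arg BASE_IMAGE_TAG=ffmpeg \
  --build-arg UID=1000 --build-arg GID=1000 \
  -t openlist:local .
```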
A second Dockerfile (its name is not shown in this view) gets the same non-root treatment for the ghcr.io base image:

```diff
@@ -1,18 +1,26 @@
 ARG BASE_IMAGE_TAG=base
 FROM ghcr.io/openlistteam/openlist-base-image:${BASE_IMAGE_TAG}
+LABEL MAINTAINER="OpenList"
 ARG TARGETPLATFORM
 ARG INSTALL_FFMPEG=false
 ARG INSTALL_ARIA2=false
-LABEL MAINTAINER="OpenList"
+ARG USER=openlist
+ARG UID=1001
+ARG GID=1001
 
 WORKDIR /opt/openlist/
 
-COPY --chmod=755 /build/${TARGETPLATFORM}/openlist ./
-COPY --chmod=755 entrypoint.sh /entrypoint.sh
+RUN addgroup -g ${GID} ${USER} && \
+    adduser -D -u ${UID} -G ${USER} ${USER} && \
+    mkdir -p /opt/openlist/data
+
+COPY --chmod=755 --chown=${UID}:${GID} /build/${TARGETPLATFORM}/openlist ./
+COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
+
+USER ${USER}
 RUN /entrypoint.sh version
 
-ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
+ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
 VOLUME /opt/openlist/data/
 EXPOSE 5244 5245
 CMD [ "/entrypoint.sh" ]
```
build.sh (9 changed lines):

```diff
@@ -4,6 +4,9 @@ builtAt="$(date +'%F %T %z')"
 gitAuthor="The OpenList Projects Contributors <noreply@openlist.team>"
 gitCommit=$(git log --pretty=format:"%h" -1)
 
+# Set frontend repository, default to OpenListTeam/OpenList-Frontend
+frontendRepo="${FRONTEND_REPO:-OpenListTeam/OpenList-Frontend}"
+
 githubAuthArgs=""
 if [ -n "$GITHUB_TOKEN" ]; then
     githubAuthArgs="--header \"Authorization: Bearer $GITHUB_TOKEN\""
@@ -25,7 +28,7 @@ else
   git tag -d beta || true
   # Always true if there's no tag
   version=$(git describe --abbrev=0 --tags 2>/dev/null || echo "v0.0.0")
-  webVersion=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
+  webVersion=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/$frontendRepo/releases/latest\"" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
 fi
 
 echo "backend version: $version"
@@ -46,7 +49,7 @@ ldflags="\
 "
 
 FetchWebRolling() {
-  pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/tags/rolling\"")
+  pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/$frontendRepo/releases/tags/rolling\"")
   pre_release_assets=$(echo "$pre_release_json" | jq -r '.assets[].browser_download_url')
 
   # There is no lite for rolling
@@ -59,7 +62,7 @@ FetchWebRolling() {
 }
 
 FetchWebRelease() {
-  release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"")
+  release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/$frontendRepo/releases/latest\"")
  release_assets=$(echo "$release_json" | jq -r '.assets[].browser_download_url')
 
   if [ "$useLite" = true ]; then
```
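A fork can now point the script at its own frontend releases, for example (the repository name is illustrative):

```shell
FRONTEND_REPO=my-org/OpenList-Frontend GITHUB_TOKEN="$GITHUB_TOKEN" bash build.sh dev web
```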
In the storage CLI command (the file header was not captured in this view), `disable` gets a clearer signature, a `delete` subcommand is added, and the nested `else` chains are flattened. The confirmation check is written as `!force` here, matching the flag's "force delete without confirmation" semantics (the source view showed the inverted condition):

```diff
@@ -9,6 +9,7 @@ import (
 	"strconv"
 
 	"github.com/OpenListTeam/OpenList/v4/internal/db"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/charmbracelet/bubbles/table"
 	tea "github.com/charmbracelet/bubbletea"
 	"github.com/charmbracelet/lipgloss"
@@ -22,8 +23,8 @@ var storageCmd = &cobra.Command{
 }
 
 var disableStorageCmd = &cobra.Command{
-	Use:   "disable",
-	Short: "Disable a storage",
+	Use:   "disable [mount path]",
+	Short: "Disable a storage by mount path",
 	RunE: func(cmd *cobra.Command, args []string) error {
 		if len(args) < 1 {
 			return fmt.Errorf("mount path is required")
@@ -34,15 +35,48 @@ var disableStorageCmd = &cobra.Command{
 		storage, err := db.GetStorageByMountPath(mountPath)
 		if err != nil {
 			return fmt.Errorf("failed to query storage: %+v", err)
-		} else {
-			storage.Disabled = true
-			err = db.UpdateStorage(storage)
-			if err != nil {
-				return fmt.Errorf("failed to update storage: %+v", err)
-			} else {
-				fmt.Printf("Storage with mount path [%s] have been disabled\n", mountPath)
-			}
 		}
+		storage.Disabled = true
+		err = db.UpdateStorage(storage)
+		if err != nil {
+			return fmt.Errorf("failed to update storage: %+v", err)
+		}
+		utils.Log.Infof("Storage with mount path [%s] has been disabled from CLI", mountPath)
+		fmt.Printf("Storage with mount path [%s] has been disabled\n", mountPath)
+		return nil
+	},
+}
+
+var deleteStorageCmd = &cobra.Command{
+	Use:   "delete [id]",
+	Short: "Delete a storage by id",
+	RunE: func(cmd *cobra.Command, args []string) error {
+		if len(args) < 1 {
+			return fmt.Errorf("id is required")
+		}
+		id, err := strconv.Atoi(args[0])
+		if err != nil {
+			return fmt.Errorf("id must be a number")
+		}
+
+		// prompt unless --force is given
+		if force, _ := cmd.Flags().GetBool("force"); !force {
+			fmt.Printf("Are you sure you want to delete storage with id [%d]? [y/N]: ", id)
+			var confirm string
+			fmt.Scanln(&confirm)
+			if confirm != "y" && confirm != "Y" {
+				fmt.Println("Delete operation cancelled.")
+				return nil
+			}
+		}
+
+		Init()
+		defer Release()
+		err = db.DeleteStorageById(uint(id))
+		if err != nil {
+			return fmt.Errorf("failed to delete storage by id: %+v", err)
+		}
+		utils.Log.Infof("Storage with id [%d] has been deleted from CLI", id)
+		fmt.Printf("Storage with id [%d] has been deleted\n", id)
 		return nil
 	},
 }
@@ -152,6 +186,8 @@ func init() {
 	storageCmd.AddCommand(disableStorageCmd)
 	storageCmd.AddCommand(listStorageCmd)
 	storageCmd.PersistentFlags().IntVarP(&storageTableHeight, "height", "H", 10, "Table height")
+	storageCmd.AddCommand(deleteStorageCmd)
+	deleteStorageCmd.Flags().BoolP("force", "f", false, "Force delete without confirmation")
 	// Here you will define your flags and configuration settings.
 
 	// Cobra supports Persistent Flags which will work for this command
```
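A sketch of driving the new subcommands, assuming the binary is invoked as `openlist` (ids come from `openlist storage list`):

```shell
openlist storage disable /115-mount   # disable a storage by mount path
openlist storage delete 5             # asks for confirmation first
openlist storage delete 5 --force     # deletes without the prompt
```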
The Docker compose example drops the `PUID`/`PGID` environment variables in favor of a `user:` mapping:

```diff
@@ -6,10 +6,9 @@ services:
     ports:
       - '5244:5244'
       - '5245:5245'
+    user: '0:0'
     environment:
-      - PUID=0
-      - PGID=0
       - UMASK=022
-      - TZ=UTC
+      - TZ=Asia/Shanghai
     container_name: openlist
     image: 'openlistteam/openlist:latest'
```
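The image now defaults to UID/GID 1001, and the compose example pins `user: '0:0'` to preserve the old root behavior. To adopt the non-root default instead, the override might look like this (a sketch matching the Dockerfile's defaults):

```yaml
services:
  openlist:
    user: '1001:1001'
```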
In the `_115` driver, the app-version probe is rewritten to parse the endpoint itself instead of relying on the 115driver SDK's response types, and the pinned fallback version is bumped:

```diff
@@ -1,43 +1,60 @@
 package _115
 
 import (
+	"errors"
+
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	driver115 "github.com/SheltonZhu/115driver/pkg/driver"
 	log "github.com/sirupsen/logrus"
 )
 
 var (
 	md5Salt = "Qclm8MGWUv59TnrR0XPg"
-	appVer  = "27.0.5.7"
+	appVer  = "35.6.0.3"
 )
 
-func (d *Pan115) getAppVersion() ([]driver115.AppVersion, error) {
-	result := driver115.VersionResp{}
-	resp, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
-
-	err = driver115.CheckErr(err, &result, resp)
+func (d *Pan115) getAppVersion() (string, error) {
+	result := VersionResp{}
+	res, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
 	if err != nil {
-		return nil, err
+		return "", err
 	}
-
-	return result.Data.GetAppVersions(), nil
+	err = utils.Json.Unmarshal(res.Body(), &result)
+	if err != nil {
+		return "", err
+	}
+	if len(result.Error) > 0 {
+		return "", errors.New(result.Error)
+	}
+	return result.Data.Win.Version, nil
 }
 
 func (d *Pan115) getAppVer() string {
-	// todo add some cache?
-	vers, err := d.getAppVersion()
+	ver, err := d.getAppVersion()
 	if err != nil {
 		log.Warnf("[115] get app version failed: %v", err)
 		return appVer
 	}
-	for _, ver := range vers {
-		if ver.AppName == "win" {
-			return ver.Version
-		}
+	if len(ver) > 0 {
+		return ver
 	}
 	return appVer
 }
 
 func (d *Pan115) initAppVer() {
 	appVer = d.getAppVer()
+	log.Debugf("use app version: %v", appVer)
+}
+
+type VersionResp struct {
+	Error string   `json:"error,omitempty"`
+	Data  Versions `json:"data"`
+}
+
+type Versions struct {
+	Win Version `json:"win"`
+}
+
+type Version struct {
+	Version string `json:"version_code"`
 }
```
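The new structs imply the version endpoint responds with JSON of roughly this shape (inferred from the struct tags; the values are illustrative):

```json
{
  "error": "",
  "data": {
    "win": { "version_code": "35.6.0.3" }
  }
}
```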
Elsewhere in the `_115` driver, the split progress ranges and the temp-file cache are replaced by a single `CacheFullAndHash` call:

```diff
@@ -186,9 +186,7 @@ func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	preHash = strings.ToUpper(preHash)
 	fullHash := stream.GetHash().GetHash(utils.SHA1)
 	if len(fullHash) != utils.SHA1.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, cacheFileProgress, utils.SHA1)
+		_, fullHash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA1)
 		if err != nil {
 			return nil, err
 		}
```
```diff
@@ -321,7 +321,7 @@ func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.Upload
 		err error
 	)
 
-	tmpF, err := s.CacheFullInTempFile()
+	tmpF, err := s.CacheFullAndWriter(&up, nil)
 	if err != nil {
 		return nil, err
 	}
```
The `Open115` driver gets the same hashing simplification:

```diff
@@ -239,9 +239,7 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
 	}
 	sha1 := file.GetHash().GetHash(utils.SHA1)
 	if len(sha1) != utils.SHA1.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, sha1, err = stream.CacheFullInTempFileAndHash(file, cacheFileProgress, utils.SHA1)
+		_, sha1, err = stream.CacheFullAndHash(file, &up, utils.SHA1)
 		if err != nil {
 			return err
 		}
```
```diff
@@ -9,6 +9,7 @@ import (
 	sdk "github.com/OpenListTeam/115-sdk-go"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/aliyun/aliyun-oss-go-sdk/oss"
 	"github.com/avast/retry-go"
```
|
|||||||
// }
|
// }
|
||||||
|
|
||||||
func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
|
func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
|
||||||
fileSize := stream.GetSize()
|
|
||||||
chunkSize := calPartSize(fileSize)
|
|
||||||
|
|
||||||
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
|
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@ -86,6 +84,13 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
fileSize := stream.GetSize()
|
||||||
|
chunkSize := calPartSize(fileSize)
|
||||||
|
ss, err := streamPkg.NewStreamSectionReader(stream, int(chunkSize), &up)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
|
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
|
||||||
parts := make([]oss.UploadPart, partNum)
|
parts := make([]oss.UploadPart, partNum)
|
||||||
offset := int64(0)
|
offset := int64(0)
|
||||||
@ -98,10 +103,13 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
|
|||||||
if i == partNum {
|
if i == partNum {
|
||||||
partSize = fileSize - (i-1)*chunkSize
|
partSize = fileSize - (i-1)*chunkSize
|
||||||
}
|
}
|
||||||
rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
|
rd, err := ss.GetSectionReader(offset, partSize)
|
||||||
err = retry.Do(func() error {
|
if err != nil {
|
||||||
_ = rd.Reset()
|
return err
|
||||||
|
}
|
||||||
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
|
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
|
||||||
|
err = retry.Do(func() error {
|
||||||
|
rd.Seek(0, io.SeekStart)
|
||||||
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
|
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
@ -112,6 +120,7 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
|
|||||||
retry.Attempts(3),
|
retry.Attempts(3),
|
||||||
retry.DelayType(retry.BackOffDelay),
|
retry.DelayType(retry.BackOffDelay),
|
||||||
retry.Delay(time.Second))
|
retry.Delay(time.Second))
|
||||||
|
ss.FreeSectionReader(rd)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
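The pattern these upload rewrites share: take a reusable section reader per chunk, rewind it before every retry, and release it afterwards. A minimal, self-contained sketch of the idea (simplified types; not the repository's actual API):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// uploadChunk retries an upload of one section, rewinding the reader on each attempt.
func uploadChunk(rd io.ReadSeeker, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		rd.Seek(0, io.SeekStart) // rewind before every attempt
		var buf bytes.Buffer
		if _, err := io.Copy(&buf, rd); err != nil { // stand-in for the HTTP PUT
			lastErr = err
			continue
		}
		fmt.Printf("attempt %d uploaded %d bytes\n", i+1, buf.Len())
		return nil
	}
	return lastErr
}

func main() {
	data := strings.NewReader("chunk payload")
	section := io.NewSectionReader(data, 0, 5) // one chunk of the stream
	_ = uploadChunk(section, 3)
}
```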
Back in the `_123` driver (`Pan123`):

```diff
@@ -182,9 +182,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
 	etag := file.GetHash().GetHash(utils.MD5)
 	var err error
 	if len(etag) < utils.MD5.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, etag, err = stream.CacheFullInTempFileAndHash(file, cacheFileProgress, utils.MD5)
+		_, etag, err = stream.CacheFullAndHash(file, &up, utils.MD5)
 		if err != nil {
 			return err
 		}
```
```diff
@@ -12,6 +12,7 @@ type Addition struct {
 	//OrderBy        string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"`
 	//OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
 	AccessToken string
+	UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
 }
 
 var config = driver.Config{
@@ -22,6 +23,11 @@ var config = driver.Config{
 
 func init() {
 	op.RegisterDriver(func() driver.Driver {
-		return &Pan123{}
+		// New defaults must be set during RegisterDriver initialization
+		// so they take effect for storages that are already in use.
+		return &Pan123{
+			Addition: Addition{
+				UploadThread: 3,
+			},
+		}
 	})
 }
```
The `_123` upload path is rebuilt around the same section-reader pool plus an ordered error group; the recursive `uploadS3Chunk` helper is inlined into a lifecycle closure, and 403-triggered URL refreshes are deduplicated with singleflight:

```diff
@@ -6,11 +6,16 @@ import (
 	"io"
 	"net/http"
 	"strconv"
+	"time"
 
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/internal/stream"
+	"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
+	"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
 )
 
@@ -69,18 +74,21 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
 }
 
 func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
-	tmpF, err := file.CacheFullInTempFile()
+	// fetch s3 pre signed urls
+	size := file.GetSize()
+	chunkSize := int64(16 * utils.MB)
+	chunkCount := 1
+	if size > chunkSize {
+		chunkCount = int((size + chunkSize - 1) / chunkSize)
+	}
+
+	ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
 	if err != nil {
 		return err
 	}
-	// fetch s3 pre signed urls
-	size := file.GetSize()
-	chunkSize := min(size, 16*utils.MB)
-	chunkCount := int(size / chunkSize)
 	lastChunkSize := size % chunkSize
-	if lastChunkSize > 0 {
-		chunkCount++
-	} else {
+	if lastChunkSize == 0 {
 		lastChunkSize = chunkSize
 	}
 	// only 1 batch is allowed
@@ -90,46 +98,57 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
 		batchSize = 10
 		getS3UploadUrl = d.getS3PreSignedUrls
 	}
+
+	thread := min(int(chunkCount), d.UploadThread)
+	threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
+		retry.Attempts(3),
+		retry.Delay(time.Second),
+		retry.DelayType(retry.BackOffDelay))
 	for i := 1; i <= chunkCount; i += batchSize {
-		if utils.IsCanceled(ctx) {
-			return ctx.Err()
+		if utils.IsCanceled(uploadCtx) {
+			break
 		}
 		start := i
 		end := min(i+batchSize, chunkCount+1)
-		s3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, start, end)
+		s3PreSignedUrls, err := getS3UploadUrl(uploadCtx, upReq, start, end)
 		if err != nil {
 			return err
 		}
 		// upload each chunk
-		for j := start; j < end; j++ {
-			if utils.IsCanceled(ctx) {
-				return ctx.Err()
+		for cur := start; cur < end; cur++ {
+			if utils.IsCanceled(uploadCtx) {
+				break
 			}
+			offset := int64(cur-1) * chunkSize
 			curSize := chunkSize
-			if j == chunkCount {
+			if cur == chunkCount {
 				curSize = lastChunkSize
 			}
-			err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.NewSectionReader(tmpF, chunkSize*int64(j-1), curSize), curSize, false, getS3UploadUrl)
+			var reader *stream.SectionReader
+			var rateLimitedRd io.Reader
+			threadG.GoWithLifecycle(errgroup.Lifecycle{
+				Before: func(ctx context.Context) error {
+					if reader == nil {
+						var err error
+						reader, err = ss.GetSectionReader(offset, curSize)
 						if err != nil {
 							return err
 						}
-			up(float64(j) * 100 / float64(chunkCount))
-		}
-	}
-	// complete s3 upload
-	return d.completeS3(ctx, upReq, file, chunkCount > 1)
-}
-
-func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader *io.SectionReader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
+						rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
+					}
+					return nil
+				},
+				Do: func(ctx context.Context) error {
+					reader.Seek(0, io.SeekStart)
 					uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
 					if uploadUrl == "" {
 						return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
 					}
-	req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, reader))
+					reader.Seek(0, io.SeekStart)
+					req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, rateLimitedRd)
 					if err != nil {
 						return err
 					}
-	req = req.WithContext(ctx)
 					req.ContentLength = curSize
 					//req.Header.Set("Content-Length", strconv.FormatInt(curSize, 10))
 					res, err := base.HttpClient.Do(req)
@@ -138,18 +157,18 @@ func (d *Pan123) uploadS3Chunk(...) error {
 					}
 					defer res.Body.Close()
 					if res.StatusCode == http.StatusForbidden {
-		if retry {
-			return fmt.Errorf("upload s3 chunk %d failed, status code: %d", cur, res.StatusCode)
-		}
-		// refresh s3 pre signed urls
+						singleflight.AnyGroup.Do(fmt.Sprintf("Pan123.newUpload_%p", threadG), func() (any, error) {
 							newS3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, cur, end)
+							if err != nil {
+								return nil, err
+							}
+							s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
+							return nil, nil
+						})
 						if err != nil {
 							return err
 						}
-		s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
-		// retry
-		reader.Seek(0, io.SeekStart)
-		return d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, cur, end, reader, curSize, true, getS3UploadUrl)
+						return fmt.Errorf("upload s3 chunk %d failed, status code: %d", cur, res.StatusCode)
 					}
 					if res.StatusCode != http.StatusOK {
 						body, err := io.ReadAll(res.Body)
@@ -158,5 +177,20 @@ func (d *Pan123) uploadS3Chunk(...) error {
 						}
 						return fmt.Errorf("upload s3 chunk %d failed, status code: %d, body: %s", cur, res.StatusCode, body)
 					}
+					progress := 10.0 + 85.0*float64(threadG.Success())/float64(chunkCount)
+					up(progress)
 					return nil
+				},
+				After: func(err error) {
+					ss.FreeSectionReader(reader)
+				},
+			})
+		}
+	}
+	if err := threadG.Wait(); err != nil {
+		return err
+	}
+	defer up(100)
+	// complete s3 upload
+	return d.completeS3(ctx, upReq, file, chunkCount > 1)
 }
```
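On a 403, every in-flight worker would otherwise re-request the pre-signed URL batch; `singleflight.AnyGroup.Do` keyed on the group pointer collapses those refreshes into a single call. A minimal, runnable illustration of the pattern using golang.org/x/sync/singleflight (the repository ships its own pkg/singleflight with a similar shape):

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

func main() {
	var g singleflight.Group
	var calls int
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All five goroutines share one execution of the refresh function.
			v, _, shared := g.Do("refresh-presigned-urls", func() (any, error) {
				calls++
				return "fresh-urls", nil
			})
			fmt.Println(v, "shared:", shared)
		}()
	}
	wg.Wait()
	fmt.Println("refresh executed", calls, "time(s)") // typically 1
}
```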
The `Open123` driver adds direct-link downloads, an instant-transfer `Copy`, and a `Put` that returns the created object (comments translated from Chinese):

```diff
@@ -2,7 +2,9 @@ package _123_open
 
 import (
 	"context"
+	"fmt"
 	"strconv"
+	"time"
 
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
@@ -67,13 +69,45 @@ func (d *Open123) List(ctx context.Context, dir model.Obj, args model.ListArgs)
 func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	fileId, _ := strconv.ParseInt(file.GetID(), 10, 64)
 
+	if d.DirectLink {
+		res, err := d.getDirectLink(fileId)
+		if err != nil {
+			return nil, err
+		}
+
+		if d.DirectLinkPrivateKey == "" {
+			duration := 365 * 24 * time.Hour // cache for one year
+			return &model.Link{
+				URL:        res.Data.URL,
+				Expiration: &duration,
+			}, nil
+		}
+
+		u, err := d.getUserInfo()
+		if err != nil {
+			return nil, err
+		}
+		duration := time.Duration(d.DirectLinkValidDuration) * time.Minute
+		newURL, err := d.SignURL(res.Data.URL, d.DirectLinkPrivateKey,
+			u.Data.UID, duration)
+		if err != nil {
+			return nil, err
+		}
+		return &model.Link{
+			URL:        newURL,
+			Expiration: &duration,
+		}, nil
+	}
+
 	res, err := d.getDownloadInfo(fileId)
 	if err != nil {
 		return nil, err
 	}
 
-	link := model.Link{URL: res.Data.DownloadUrl}
-	return &link, nil
+	return &model.Link{URL: res.Data.DownloadUrl}, nil
 }
 
 func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
@@ -95,6 +129,22 @@ func (d *Open123) Rename(ctx context.Context, srcObj model.Obj, newName string)
 }
 
 func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
+	// Try to implement copy via upload plus MD5-based instant transfer (秒传).
+	// 1. Create the file; parentFileID is the parent directory id, 0 for the root directory.
+	parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
+	if err != nil {
+		return fmt.Errorf("parse parentFileID error: %v", err)
+	}
+	etag := srcObj.(File).Etag
+	createResp, err := d.create(parentFileId, srcObj.GetName(), etag, srcObj.GetSize(), 2, false)
+	if err != nil {
+		return err
+	}
+	// Did the instant transfer succeed?
+	if createResp.Data.Reuse {
+		return nil
+	}
 	return errs.NotSupport
 }
 
@@ -104,27 +154,64 @@ func (d *Open123) Remove(ctx context.Context, obj model.Obj) error {
 	return d.trash(fileId)
 }
 
-func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
+func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
+	// 1. Create the file; parentFileID is the parent directory id, 0 for the root directory.
 	parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
-	etag := file.GetHash().GetHash(utils.MD5)
-
-	if len(etag) < utils.MD5.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, etag, err = stream.CacheFullInTempFileAndHash(file, cacheFileProgress, utils.MD5)
-		if err != nil {
-			return err
+	if err != nil {
+		return nil, fmt.Errorf("parse parentFileID error: %v", err)
+	}
+	// etag is the file's MD5
+	etag := file.GetHash().GetHash(utils.MD5)
+	if len(etag) < utils.MD5.Width {
+		_, etag, err = stream.CacheFullAndHash(file, &up, utils.MD5)
+		if err != nil {
+			return nil, err
 		}
 	}
 	createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false)
 	if err != nil {
-		return err
+		return nil, err
 	}
+	// Instant transfer?
 	if createResp.Data.Reuse {
-		return nil
+		// A valid FileID is returned only when the instant transfer succeeded; otherwise it is 0.
+		if createResp.Data.FileID != 0 {
+			return File{
+				FileName: file.GetName(),
+				Size:     file.GetSize(),
+				FileId:   createResp.Data.FileID,
+				Type:     2,
+				Etag:     etag,
+			}, nil
+		}
 	}
 
-	return d.Upload(ctx, file, createResp, up)
+	// 2. Upload the parts.
+	err = d.Upload(ctx, file, createResp, up)
+	if err != nil {
+		return nil, err
+	}
+
+	// 3. Finalize the upload.
+	for range 60 {
+		uploadCompleteResp, err := d.complete(createResp.Data.PreuploadID)
+		// The error codes returned here are undocumented (e.g. 20103).
+		if err == nil && uploadCompleteResp.Data.Completed && uploadCompleteResp.Data.FileID != 0 {
+			up(100)
+			return File{
+				FileName: file.GetName(),
+				Size:     file.GetSize(),
+				FileId:   uploadCompleteResp.Data.FileID,
+				Type:     2,
+				Etag:     etag,
+			}, nil
+		}
+		// If the API reports completed == false, poll again after one second for the final result.
+		time.Sleep(time.Second)
+	}
+	return nil, fmt.Errorf("upload complete timeout")
 }
 
 var _ driver.Driver = (*Open123)(nil)
+var _ driver.PutResult = (*Open123)(nil)
```
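The completion poll uses Go 1.22's integer range form: `for range 60` runs the body exactly 60 times with no index variable. A runnable miniature of the polling loop:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	attempts := 0
	// "for range 60" iterates exactly 60 times (requires Go 1.22+).
	for range 60 {
		attempts++
		if attempts == 3 { // pretend the upload completed on the third poll
			break
		}
		time.Sleep(10 * time.Millisecond)
	}
	fmt.Println("polled", attempts, "time(s)")
}
```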
```diff
@@ -23,6 +23,11 @@ type Addition struct {
 	// Number of upload threads
 	UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
 
+	// Direct link options
+	DirectLink              bool   `json:"DirectLink" type:"bool" default:"false" required:"false" help:"use direct link when download file"`
+	DirectLinkPrivateKey    string `json:"DirectLinkPrivateKey" required:"false" help:"private key for direct link, if URL authentication is enabled"`
+	DirectLinkValidDuration int64  `json:"DirectLinkValidDuration" type:"number" default:"30" required:"false" help:"minutes, if URL authentication is enabled"`
+
 	driver.RootID
 }
```
```diff
@@ -73,7 +73,9 @@ func (f File) GetName() string {
 }
 
 func (f File) CreateTime() time.Time {
-	parsedTime, err := time.Parse("2006-01-02 15:04:05", f.CreateAt)
+	// The API returns timestamps without zone information; they are UTC+8 by default.
+	loc := time.FixedZone("UTC+8", 8*60*60)
+	parsedTime, err := time.ParseInLocation("2006-01-02 15:04:05", f.CreateAt, loc)
 	if err != nil {
 		return time.Now()
 	}
@@ -81,7 +83,9 @@ func (f File) CreateTime() time.Time {
 }
 
 func (f File) ModTime() time.Time {
-	parsedTime, err := time.Parse("2006-01-02 15:04:05", f.UpdateAt)
+	// The API returns timestamps without zone information; they are UTC+8 by default.
+	loc := time.FixedZone("UTC+8", 8*60*60)
+	parsedTime, err := time.ParseInLocation("2006-01-02 15:04:05", f.UpdateAt, loc)
 	if err != nil {
 		return time.Now()
 	}
```
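The switch to `time.ParseInLocation` matters because `time.Parse` treats a zone-less timestamp as UTC, skewing every time from this API by eight hours. A runnable comparison:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05"
	const stamp = "2024-01-02 08:00:00"

	utc, _ := time.Parse(layout, stamp) // assumes UTC: wrong for this API
	cst := time.FixedZone("UTC+8", 8*60*60)
	local, _ := time.ParseInLocation(layout, stamp, cst) // correct: UTC+8

	fmt.Println(utc.Unix() - local.Unix()) // 28800, an eight-hour discrepancy
}
```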
```diff
@@ -123,7 +127,7 @@ type RefreshTokenResp struct {
 type UserInfoResp struct {
 	BaseResp
 	Data struct {
-		UID         int64  `json:"uid"`
+		UID         uint64 `json:"uid"`
 		Username    string `json:"username"`
 		DisplayName string `json:"displayName"`
 		HeadImage   string `json:"headImage"`
@@ -154,6 +158,14 @@ type DownloadInfoResp struct {
 	} `json:"data"`
 }
 
+type DirectLinkResp struct {
+	BaseResp
+	Data struct {
+		URL string `json:"url"`
+	} `json:"data"`
+}
+
+// Response of create-file V2
 type UploadCreateResp struct {
 	BaseResp
 	Data struct {
@@ -161,45 +173,15 @@ type UploadCreateResp struct {
 		PreuploadID string `json:"preuploadID"`
 		Reuse       bool   `json:"reuse"`
 		SliceSize   int64  `json:"sliceSize"`
+		Servers     []string `json:"servers"`
 	} `json:"data"`
 }
 
-type UploadUrlResp struct {
-	BaseResp
-	Data struct {
-		PresignedURL string `json:"presignedURL"`
-	}
-}
-
+// Response of upload-complete V2
 type UploadCompleteResp struct {
 	BaseResp
 	Data struct {
-		Async     bool  `json:"async"`
 		Completed bool  `json:"completed"`
 		FileID    int64 `json:"fileID"`
 	} `json:"data"`
 }
-
-type UploadAsyncResp struct {
-	BaseResp
-	Data struct {
-		Completed bool  `json:"completed"`
-		FileID    int64 `json:"fileID"`
-	} `json:"data"`
-}
-
-type UploadResp struct {
-	BaseResp
-	Data struct {
-		AccessKeyId     string `json:"AccessKeyId"`
-		Bucket          string `json:"Bucket"`
-		Key             string `json:"Key"`
-		SecretAccessKey string `json:"SecretAccessKey"`
-		SessionToken    string `json:"SessionToken"`
-		FileId          int64  `json:"FileId"`
-		Reuse           bool   `json:"Reuse"`
-		EndPoint        string `json:"EndPoint"`
-		StorageNode     string `json:"StorageNode"`
-		UploadId        string `json:"UploadId"`
-	} `json:"data"`
-}
```
@@ -1,21 +1,28 @@
 package _123_open
 
 import (
+	"bytes"
 	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"mime/multipart"
 	"net/http"
+	"strconv"
 	"strings"
 	"time"
 
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
-	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
 )
 
+// create file V2
 func (d *Open123) create(parentFileID int64, filename string, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) {
 	var resp UploadCreateResp
 	_, err := d.Request(UploadCreate, http.MethodPost, func(req *resty.Request) {

@@ -34,21 +41,136 @@ func (d *Open123) create(parentFileID int64, filename string, etag string, size
 	return &resp, nil
 }
 
-func (d *Open123) url(preuploadID string, sliceNo int64) (string, error) {
-	// get upload url
-	var resp UploadUrlResp
-	_, err := d.Request(UploadUrl, http.MethodPost, func(req *resty.Request) {
-		req.SetBody(base.Json{
-			"preuploadId": preuploadID,
-			"sliceNo":     sliceNo,
-		})
-	}, &resp)
+// upload slices V2
+func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createResp *UploadCreateResp, up driver.UpdateProgress) error {
+	uploadDomain := createResp.Data.Servers[0]
+	size := file.GetSize()
+	chunkSize := createResp.Data.SliceSize
+
+	ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
 	if err != nil {
-		return "", err
+		return err
 	}
-	return resp.Data.PresignedURL, nil
+
+	uploadNums := (size + chunkSize - 1) / chunkSize
+	thread := min(int(uploadNums), d.UploadThread)
+	threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
+		retry.Attempts(3),
+		retry.Delay(time.Second),
+		retry.DelayType(retry.BackOffDelay))
+
+	for partIndex := range uploadNums {
+		if utils.IsCanceled(uploadCtx) {
+			break
+		}
+		partIndex := partIndex
+		partNumber := partIndex + 1 // slice numbers start at 1
+		offset := partIndex * chunkSize
+		size := min(chunkSize, size-offset)
+		var reader *stream.SectionReader
+		var rateLimitedRd io.Reader
+		sliceMD5 := ""
+		// multipart form
+		b := bytes.NewBuffer(make([]byte, 0, 2048))
+		threadG.GoWithLifecycle(errgroup.Lifecycle{
+			Before: func(ctx context.Context) error {
+				if reader == nil {
+					var err error
+					// one reader per slice
+					reader, err = ss.GetSectionReader(offset, size)
+					if err != nil {
+						return err
+					}
+					// compute the MD5 of the current slice
+					sliceMD5, err = utils.HashReader(utils.MD5, reader)
+					if err != nil {
+						return err
+					}
+				}
+				return nil
+			},
+			Do: func(ctx context.Context) error {
+				// rewind the slice reader: HashReader (or a failed attempt) has already read it to EOF
+				reader.Seek(0, io.SeekStart)
+
+				b.Reset()
+				w := multipart.NewWriter(b)
+				// add the form fields
+				err = w.WriteField("preuploadID", createResp.Data.PreuploadID)
+				if err != nil {
+					return err
+				}
+				err = w.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
+				if err != nil {
+					return err
+				}
+				err = w.WriteField("sliceMD5", sliceMD5)
+				if err != nil {
+					return err
+				}
+				// write the file part
+				_, err = w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", file.GetName(), partNumber))
+				if err != nil {
+					return err
+				}
+				headSize := b.Len()
+				err = w.Close()
+				if err != nil {
+					return err
+				}
+				head := bytes.NewReader(b.Bytes()[:headSize])
+				tail := bytes.NewReader(b.Bytes()[headSize:])
+				rateLimitedRd = driver.NewLimitedUploadStream(ctx, io.MultiReader(head, reader, tail))
+				// build the request and set the headers
+				req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadDomain+"/upload/v2/file/slice", rateLimitedRd)
+				if err != nil {
+					return err
+				}
+
+				req.Header.Add("Authorization", "Bearer "+d.AccessToken)
+				req.Header.Add("Content-Type", w.FormDataContentType())
+				req.Header.Add("Platform", "open_platform")
+
+				res, err := base.HttpClient.Do(req)
+				if err != nil {
+					return err
+				}
+				defer res.Body.Close()
+				if res.StatusCode != 200 {
+					return fmt.Errorf("slice %d upload failed, status code: %d", partNumber, res.StatusCode)
+				}
+				var resp BaseResp
+				respBody, err := io.ReadAll(res.Body)
+				if err != nil {
+					return err
+				}
+				err = json.Unmarshal(respBody, &resp)
+				if err != nil {
+					return err
+				}
+				if resp.Code != 0 {
+					return fmt.Errorf("slice %d upload failed: %s", partNumber, resp.Message)
+				}
+
+				progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
+				up(progress)
+				return nil
+			},
+			After: func(err error) {
+				ss.FreeSectionReader(reader)
+			},
+		})
+	}
+
+	if err := threadG.Wait(); err != nil {
+		return err
+	}
+
+	return nil
 }
 
+// upload complete
 func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) {
 	var resp UploadCompleteResp
 	_, err := d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) {

@@ -61,91 +183,3 @@ func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) {
 	}
 	return &resp, nil
 }
-
-func (d *Open123) async(preuploadID string) (*UploadAsyncResp, error) {
-	var resp UploadAsyncResp
-	_, err := d.Request(UploadAsync, http.MethodPost, func(req *resty.Request) {
-		req.SetBody(base.Json{
-			"preuploadID": preuploadID,
-		})
-	}, &resp)
-	if err != nil {
-		return nil, err
-	}
-	return &resp, nil
-}
-
-func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createResp *UploadCreateResp, up driver.UpdateProgress) error {
-	size := file.GetSize()
-	chunkSize := createResp.Data.SliceSize
-	uploadNums := (size + chunkSize - 1) / chunkSize
-	threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.UploadThread,
-		retry.Attempts(3),
-		retry.Delay(time.Second),
-		retry.DelayType(retry.BackOffDelay))
-
-	for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
-		if utils.IsCanceled(uploadCtx) {
-			return ctx.Err()
-		}
-		partIndex := partIndex
-		partNumber := partIndex + 1 // slice numbers start at 1
-		offset := partIndex * chunkSize
-		size := min(chunkSize, size-offset)
-		limitedReader, err := file.RangeRead(http_range.Range{
-			Start:  offset,
-			Length: size})
-		if err != nil {
-			return err
-		}
-		limitedReader = driver.NewLimitedUploadStream(ctx, limitedReader)
-
-		threadG.Go(func(ctx context.Context) error {
-			uploadPartUrl, err := d.url(createResp.Data.PreuploadID, partNumber)
-			if err != nil {
-				return err
-			}
-
-			req, err := http.NewRequestWithContext(ctx, "PUT", uploadPartUrl, limitedReader)
-			if err != nil {
-				return err
-			}
-			req = req.WithContext(ctx)
-			req.ContentLength = size
-
-			res, err := base.HttpClient.Do(req)
-			if err != nil {
-				return err
-			}
-			_ = res.Body.Close()
-
-			progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
-			up(progress)
-			return nil
-		})
-	}
-
-	if err := threadG.Wait(); err != nil {
-		return err
-	}
-
-	uploadCompleteResp, err := d.complete(createResp.Data.PreuploadID)
-	if err != nil {
-		return err
-	}
-	if uploadCompleteResp.Data.Async == false || uploadCompleteResp.Data.Completed {
-		return nil
-	}
-
-	for {
-		uploadAsyncResp, err := d.async(createResp.Data.PreuploadID)
-		if err != nil {
-			return err
-		}
-		if uploadAsyncResp.Data.Completed {
-			break
-		}
-	}
-	up(100)
-	return nil
-}
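The new `Upload` avoids buffering each slice inside the multipart body: it encodes the form prologue and epilogue in memory, then streams the slice between them with `io.MultiReader`. A minimal sketch of that head/slice/tail trick, using the same field names as the diff (`slice` stands in for the section reader):

```go
package sketch

import (
	"bytes"
	"io"
	"mime/multipart"
	"strconv"
)

// buildSliceBody returns a streaming multipart body plus its Content-Type.
func buildSliceBody(preuploadID, sliceMD5, partName string, sliceNo int64, slice io.Reader) (io.Reader, string, error) {
	b := &bytes.Buffer{}
	w := multipart.NewWriter(b)
	fields := [][2]string{
		{"preuploadID", preuploadID},
		{"sliceNo", strconv.FormatInt(sliceNo, 10)},
		{"sliceMD5", sliceMD5},
	}
	for _, f := range fields {
		if err := w.WriteField(f[0], f[1]); err != nil {
			return nil, "", err
		}
	}
	// CreateFormFile writes only the part header; the payload is streamed later.
	if _, err := w.CreateFormFile("slice", partName); err != nil {
		return nil, "", err
	}
	headSize := b.Len()                // everything up to the file part's header
	if err := w.Close(); err != nil { // appends the closing boundary
		return nil, "", err
	}
	head := bytes.NewReader(b.Bytes()[:headSize])
	tail := bytes.NewReader(b.Bytes()[headSize:])
	return io.MultiReader(head, slice, tail), w.FormDataContentType(), nil
}
```

The buffer is never larger than the form framing itself, which is why the slice can be retried by rewinding the section reader instead of re-encoding anything.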
@@ -1,15 +1,20 @@
 package _123_open
 
 import (
+	"crypto/md5"
 	"encoding/json"
 	"errors"
+	"fmt"
 	"net/http"
+	"net/url"
 	"strconv"
+	"strings"
 	"time"
 
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"github.com/go-resty/resty/v2"
+	"github.com/google/uuid"
 	log "github.com/sirupsen/logrus"
 )
 
@@ -19,16 +24,15 @@ var ( // the AccessToken QPS limits differ by scenario; kept modular so they are easy to adjust
 	AccessToken  = InitApiInfo(Api+"/api/v1/access_token", 1)
 	RefreshToken = InitApiInfo(Api+"/api/v1/oauth2/access_token", 1)
 	UserInfo     = InitApiInfo(Api+"/api/v1/user/info", 1)
-	FileList     = InitApiInfo(Api+"/api/v2/file/list", 4)
-	DownloadInfo = InitApiInfo(Api+"/api/v1/file/download_info", 0)
+	FileList     = InitApiInfo(Api+"/api/v2/file/list", 3)
+	DownloadInfo = InitApiInfo(Api+"/api/v1/file/download_info", 5)
+	DirectLink   = InitApiInfo(Api+"/api/v1/direct-link/url", 5)
 	Mkdir        = InitApiInfo(Api+"/upload/v1/file/mkdir", 2)
 	Move         = InitApiInfo(Api+"/api/v1/file/move", 1)
 	Rename       = InitApiInfo(Api+"/api/v1/file/name", 1)
 	Trash        = InitApiInfo(Api+"/api/v1/file/trash", 2)
-	UploadCreate   = InitApiInfo(Api+"/upload/v1/file/create", 2)
-	UploadUrl      = InitApiInfo(Api+"/upload/v1/file/get_upload_url", 0)
-	UploadComplete = InitApiInfo(Api+"/upload/v1/file/upload_complete", 0)
-	UploadAsync    = InitApiInfo(Api+"/upload/v1/file/upload_async_result", 1)
+	UploadCreate   = InitApiInfo(Api+"/upload/v2/file/create", 2)
+	UploadComplete = InitApiInfo(Api+"/upload/v2/file/upload_complete", 0)
 )
 
 func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {

@@ -82,8 +86,24 @@ func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCall
 }
 
 func (d *Open123) flushAccessToken() error {
-	if d.Addition.ClientID != "" {
-		if d.Addition.ClientSecret != "" {
+	if d.ClientID != "" {
+		if d.RefreshToken != "" {
+			var resp RefreshTokenResp
+			_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
+				req.SetQueryParam("client_id", d.ClientID)
+				if d.ClientSecret != "" {
+					req.SetQueryParam("client_secret", d.ClientSecret)
+				}
+				req.SetQueryParam("grant_type", "refresh_token")
+				req.SetQueryParam("refresh_token", d.RefreshToken)
+			}, &resp)
+			if err != nil {
+				return err
+			}
+			d.AccessToken = resp.AccessToken
+			d.RefreshToken = resp.RefreshToken
+			op.MustSaveDriverStorage(d)
+		} else if d.ClientSecret != "" {
 			var resp AccessTokenResp
 			_, err := d.Request(AccessToken, http.MethodPost, func(req *resty.Request) {
 				req.SetBody(base.Json{

@@ -96,24 +116,38 @@ func (d *Open123) flushAccessToken() error {
 			}
 			d.AccessToken = resp.Data.AccessToken
 			op.MustSaveDriverStorage(d)
-		} else if d.Addition.RefreshToken != "" {
-			var resp RefreshTokenResp
-			_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
-				req.SetQueryParam("client_id", d.ClientID)
-				req.SetQueryParam("grant_type", "refresh_token")
-				req.SetQueryParam("refresh_token", d.Addition.RefreshToken)
-			}, &resp)
-			if err != nil {
-				return err
-			}
-			d.AccessToken = resp.AccessToken
-			d.RefreshToken = resp.RefreshToken
-			op.MustSaveDriverStorage(d)
 		}
 	}
 	return nil
 }
+
+func (d *Open123) SignURL(originURL, privateKey string, uid uint64, validDuration time.Duration) (newURL string, err error) {
+	// generate the Unix timestamp
+	ts := time.Now().Add(validDuration).Unix()
+
+	// generate a nonce (a UUID is suggested; it must not contain hyphens)
+	rand := strings.ReplaceAll(uuid.New().String(), "-", "")
+
+	// parse the URL
+	objURL, err := url.Parse(originURL)
+	if err != nil {
+		return "", err
+	}
+
+	// string to sign, format: path-timestamp-rand-uid-privateKey
+	unsignedStr := fmt.Sprintf("%s-%d-%s-%d-%s", objURL.Path, ts, rand, uid, privateKey)
+	md5Hash := md5.Sum([]byte(unsignedStr))
+	// auth parameter, format: timestamp-rand-uid-md5hash
+	authKey := fmt.Sprintf("%d-%s-%d-%x", ts, rand, uid, md5Hash)
+
+	// append the auth parameter to the URL query
+	v := objURL.Query()
+	v.Add("auth_key", authKey)
+	objURL.RawQuery = v.Encode()
+
+	return objURL.String(), nil
+}
 
 func (d *Open123) getUserInfo() (*UserInfoResp, error) {
 	var resp UserInfoResp
 
@@ -161,6 +195,21 @@ func (d *Open123) getDownloadInfo(fileId int64) (*DownloadInfoResp, error) {
 	return &resp, nil
 }
 
+func (d *Open123) getDirectLink(fileId int64) (*DirectLinkResp, error) {
+	var resp DirectLinkResp
+
+	_, err := d.Request(DirectLink, http.MethodGet, func(req *resty.Request) {
+		req.SetQueryParams(map[string]string{
+			"fileId": strconv.FormatInt(fileId, 10),
+		})
+	}, &resp)
+	if err != nil {
+		return nil, err
+	}
+
+	return &resp, nil
+}
+
 func (d *Open123) mkdir(parentID int64, name string) error {
 	_, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
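The new `SignURL` implements the direct-link signing scheme end to end, and the whole pipeline fits in a few lines, so here is a self-contained rehearsal of it (the URL, key, and UID are made up for illustration):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"net/url"
	"strings"
	"time"

	"github.com/google/uuid"
)

func main() {
	origin := "https://example.com/file/demo.bin" // hypothetical direct link
	privateKey := "secret"                        // hypothetical signing key
	uid := uint64(42)

	ts := time.Now().Add(30 * time.Minute).Unix() // link expiry
	nonce := strings.ReplaceAll(uuid.New().String(), "-", "")

	u, err := url.Parse(origin)
	if err != nil {
		panic(err)
	}
	// md5 over "path-ts-nonce-uid-privateKey", exactly as in SignURL
	sum := md5.Sum([]byte(fmt.Sprintf("%s-%d-%s-%d-%s", u.Path, ts, nonce, uid, privateKey)))
	q := u.Query()
	q.Add("auth_key", fmt.Sprintf("%d-%s-%d-%x", ts, nonce, uid, sum))
	u.RawQuery = q.Encode()
	fmt.Println(u.String())
}
```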
@@ -522,32 +522,27 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	var err error
 	fullHash := stream.GetHash().GetHash(utils.SHA256)
 	if len(fullHash) != utils.SHA256.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, cacheFileProgress, utils.SHA256)
+		_, fullHash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA256)
 		if err != nil {
 			return err
 		}
 	}
 
 	size := stream.GetSize()
-	var partSize = d.getPartSize(size)
-	part := size / partSize
-	if size%partSize > 0 {
-		part++
-	} else if part == 0 {
-		part = 1
+	partSize := d.getPartSize(size)
+	part := int64(1)
+	if size > partSize {
+		part = (size + partSize - 1) / partSize
 	}
 
+	// build all partInfos
 	partInfos := make([]PartInfo, 0, part)
 	for i := int64(0); i < part; i++ {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		start := i * partSize
-		byteSize := size - start
-		if byteSize > partSize {
-			byteSize = partSize
-		}
+		byteSize := min(size-start, partSize)
 		partNumber := i + 1
 		partInfo := PartInfo{
 			PartNumber: partNumber,

@@ -595,17 +590,20 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	// resp.Data.RapidUpload: true means rapid upload is supported; here we simply check whether part upload URLs were returned
 	// conflicts still need manual handling even on the rapid-upload path
 	if resp.Data.PartInfos != nil {
-		// read the upload URLs of the first 100 parts
-		uploadPartInfos := resp.Data.PartInfos
+		// Progress
+		p := driver.NewProgress(size, up)
+		rateLimited := driver.NewLimitedUploadStream(ctx, stream)
 
-		// fetch upload URLs for the later parts
-		for i := 101; i < len(partInfos); i += 100 {
-			end := i + 100
-			if end > len(partInfos) {
-				end = len(partInfos)
+		// upload the first 100 parts first
+		err = d.uploadPersonalParts(ctx, partInfos, resp.Data.PartInfos, rateLimited, p)
+		if err != nil {
+			return err
 		}
-			batchPartInfos := partInfos[i:end]
 
+		// if parts remain, fetch their upload URLs in batches and upload them
+		for i := 100; i < len(partInfos); i += 100 {
+			end := min(i+100, len(partInfos))
+			batchPartInfos := partInfos[i:end]
 			moredata := base.Json{
 				"fileId":   resp.Data.FileId,
 				"uploadId": resp.Data.UploadId,

@@ -621,45 +619,13 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 			if err != nil {
 				return err
 			}
-			uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
-		}
-
-		// Progress
-		p := driver.NewProgress(size, up)
-
-		rateLimited := driver.NewLimitedUploadStream(ctx, stream)
-		// upload all parts
-		for _, uploadPartInfo := range uploadPartInfos {
-			index := uploadPartInfo.PartNumber - 1
-			partSize := partInfos[index].PartSize
-			log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
-			limitReader := io.LimitReader(rateLimited, partSize)
-
-			// Update Progress
-			r := io.TeeReader(limitReader, p)
-
-			req, err := http.NewRequest("PUT", uploadPartInfo.UploadUrl, r)
+			err = d.uploadPersonalParts(ctx, partInfos, moreresp.Data.PartInfos, rateLimited, p)
 			if err != nil {
 				return err
 			}
-			req = req.WithContext(ctx)
-			req.Header.Set("Content-Type", "application/octet-stream")
-			req.Header.Set("Content-Length", fmt.Sprint(partSize))
-			req.Header.Set("Origin", "https://yun.139.com")
-			req.Header.Set("Referer", "https://yun.139.com/")
-			req.ContentLength = partSize
-
-			res, err := base.HttpClient.Do(req)
-			if err != nil {
-				return err
-			}
-			_ = res.Body.Close()
-			log.Debugf("[139] uploaded: %+v", res)
-			if res.StatusCode != http.StatusOK {
-				return fmt.Errorf("unexpected status code: %d", res.StatusCode)
-			}
 		}
 
+		// after every part is uploaded, complete
 		data = base.Json{
 			"contentHash":          fullHash,
 			"contentHashAlgorithm": "SHA256",

@@ -788,12 +754,10 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	size := stream.GetSize()
 	// Progress
 	p := driver.NewProgress(size, up)
-	var partSize = d.getPartSize(size)
-	part := size / partSize
-	if size%partSize > 0 {
-		part++
-	} else if part == 0 {
-		part = 1
+	partSize := d.getPartSize(size)
+	part := int64(1)
+	if size > partSize {
+		part = (size + partSize - 1) / partSize
 	}
 	rateLimited := driver.NewLimitedUploadStream(ctx, stream)
 	for i := int64(0); i < part; i++ {

@@ -807,12 +771,10 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 		limitReader := io.LimitReader(rateLimited, byteSize)
 		// Update Progress
 		r := io.TeeReader(limitReader, p)
-		req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
+		req, err := http.NewRequestWithContext(ctx, http.MethodPost, resp.Data.UploadResult.RedirectionURL, r)
 		if err != nil {
 			return err
 		}
 
-		req = req.WithContext(ctx)
 		req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
 		req.Header.Set("contentSize", strconv.FormatInt(size, 10))
 		req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))
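Both part-count hunks above replace the divide-then-maybe-increment dance with a single ceiling division, clamped to one part so empty files still get a slice. The arithmetic, isolated:

```go
package sketch

// partCount mirrors the new computation from the diff: ceiling division with
// a floor of one, so a zero-byte stream still yields a single (empty) part.
func partCount(size, partSize int64) int64 {
	part := int64(1)
	if size > partSize {
		part = (size + partSize - 1) / partSize
	}
	return part
}

// partCount(0, 8)  == 1   (the old code needed the "else if part == 0" patch-up)
// partCount(8, 8)  == 1
// partCount(9, 8)  == 2
// partCount(16, 8) == 2
```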
@@ -1,9 +1,11 @@
 package _139
 
 import (
+	"context"
 	"encoding/base64"
 	"errors"
 	"fmt"
+	"io"
 	"net/http"
 	"net/url"
 	"path"

@@ -13,6 +15,7 @@ import (
 	"time"
 
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
+	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"

@@ -623,3 +626,47 @@ func (d *Yun139) getPersonalCloudHost() string {
 	}
 	return d.PersonalCloudHost
 }
+
+func (d *Yun139) uploadPersonalParts(ctx context.Context, partInfos []PartInfo, uploadPartInfos []PersonalPartInfo, rateLimited *driver.RateLimitReader, p *driver.Progress) error {
+	// make sure the slice is sorted by PartNumber in ascending order
+	sort.Slice(uploadPartInfos, func(i, j int) bool {
+		return uploadPartInfos[i].PartNumber < uploadPartInfos[j].PartNumber
+	})
+
+	for _, uploadPartInfo := range uploadPartInfos {
+		index := uploadPartInfo.PartNumber - 1
+		if index < 0 || index >= len(partInfos) {
+			return fmt.Errorf("invalid PartNumber %d: index out of bounds (partInfos length: %d)", uploadPartInfo.PartNumber, len(partInfos))
+		}
+		partSize := partInfos[index].PartSize
+		log.Debugf("[139] uploading part %+v/%+v", index, len(partInfos))
+		limitReader := io.LimitReader(rateLimited, partSize)
+		r := io.TeeReader(limitReader, p)
+		req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
+		if err != nil {
+			return err
+		}
+		req.Header.Set("Content-Type", "application/octet-stream")
+		req.Header.Set("Content-Length", fmt.Sprint(partSize))
+		req.Header.Set("Origin", "https://yun.139.com")
+		req.Header.Set("Referer", "https://yun.139.com/")
+		req.ContentLength = partSize
+		err = func() error {
+			res, err := base.HttpClient.Do(req)
+			if err != nil {
+				return err
+			}
+			defer res.Body.Close()
+			log.Debugf("[139] uploaded: %+v", res)
+			if res.StatusCode != http.StatusOK {
+				body, _ := io.ReadAll(res.Body)
+				return fmt.Errorf("unexpected status code: %d, body: %s", res.StatusCode, string(body))
+			}
+			return nil
+		}()
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
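`uploadPersonalParts` wraps each HTTP round trip in an immediately invoked closure so that `defer res.Body.Close()` fires once per part rather than piling up until the whole function returns, the classic defer-in-a-loop fix. The pattern in isolation (client and request construction are left to the caller):

```go
package sketch

import (
	"fmt"
	"io"
	"net/http"
)

// doPart performs one request and always releases the response body before
// the next loop iteration, mirroring the closure in uploadPersonalParts.
func doPart(client *http.Client, req *http.Request) error {
	return func() error {
		res, err := client.Do(req)
		if err != nil {
			return err
		}
		defer res.Body.Close() // runs at the end of this closure, not of the caller
		if res.StatusCode != http.StatusOK {
			body, _ := io.ReadAll(res.Body)
			return fmt.Errorf("unexpected status code: %d, body: %s", res.StatusCode, string(body))
		}
		return nil
	}()
}
```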
@@ -365,11 +365,10 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
 	log.Debugf("uploadData: %+v", uploadData)
 	requestURL := uploadData.RequestURL
 	uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&")
-	req, err := http.NewRequest(http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+	req, err := http.NewRequestWithContext(ctx, http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
 	if err != nil {
 		return err
 	}
-	req = req.WithContext(ctx)
 	for _, v := range uploadHeaders {
 		i := strings.Index(v, "=")
 		req.Header.Set(v[0:i], v[i+1:])
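This is one of several hunks in the compare that fold `http.NewRequest` plus `req.WithContext(ctx)` into `http.NewRequestWithContext`. Besides saving the shallow copy that `WithContext` makes, the one-call form removes a footgun: forgetting to reassign the copy leaves a request that ignores cancellation. A side-by-side sketch:

```go
package sketch

import (
	"context"
	"io"
	"net/http"
)

func newPut(ctx context.Context, requestURL string, body io.Reader) (*http.Request, error) {
	// Old style: construct, then remember to keep the WithContext copy.
	req, err := http.NewRequest(http.MethodPut, requestURL, body)
	if err != nil {
		return nil, err
	}
	req = req.WithContext(ctx) // dropping this reassignment is a silent bug
	_ = req

	// New style, as used throughout this compare: context attached at birth.
	return http.NewRequestWithContext(ctx, http.MethodPut, requestURL, body)
}
```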
@@ -5,17 +5,19 @@ import (
 	"encoding/base64"
 	"encoding/xml"
 	"fmt"
-	"github.com/skip2/go-qrcode"
 	"io"
 	"net/http"
 	"strconv"
 	"strings"
 	"time"
 
+	"github.com/skip2/go-qrcode"
+
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 
 	"github.com/go-resty/resty/v2"

@@ -129,6 +131,7 @@ func (y *Cloud189TV) put(ctx context.Context, url string, headers map[string]str
 		}
 	}
 
+	// http.Client closes Request.Body once the request completes
 	resp, err := base.HttpClient.Do(req)
 	if err != nil {
 		return nil, err

@@ -311,11 +314,14 @@ func (y *Cloud189TV) RapidUpload(ctx context.Context, dstDir model.Obj, stream m
 
 // legacy upload; the family cloud does not support overwrite
 func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
-	tempFile, err := file.CacheFullInTempFile()
-	if err != nil {
-		return nil, err
+	fileMd5 := file.GetHash().GetHash(utils.MD5)
+	var tempFile = file.GetFile()
+	var err error
+	if len(fileMd5) != utils.MD5.Width {
+		tempFile, fileMd5, err = stream.CacheFullAndHash(file, &up, utils.MD5)
+	} else if tempFile == nil {
+		tempFile, err = file.CacheFullAndWriter(&up, nil)
 	}
-	fileMd5, err := utils.HashFile(utils.MD5, tempFile)
 	if err != nil {
 		return nil, err
 	}

@@ -328,6 +334,10 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
 
 	// the file does not exist in the drive yet; start uploading
 	status := GetUploadFileStatusResp{CreateUploadFileResp: *uploadInfo}
+	// driver.RateLimitReader will try to Close the underlying reader,
+	// but tempFile here is an *os.File that cannot be read again once closed,
+	// so wrap it with io.NopCloser
+	rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
 	for status.GetSize() < file.GetSize() && status.FileDataExists != 1 {
 		if utils.IsCanceled(ctx) {
 			return nil, ctx.Err()

@@ -345,7 +355,7 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
 			header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
 		}
 
-		_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile), isFamily)
+		_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimitedRd, isFamily)
 		if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
 			return nil, err
 		}
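The `io.NopCloser` wrapper added to `OldUpload` exists because the rate-limited stream closes whatever reader it wraps, and a closed `*os.File` cannot be rewound for the retry loop. A sketch of the idea, independent of the driver types:

```go
package sketch

import (
	"io"
	"os"
)

// retryBody hands out a rewound view of f for each attempt while shielding
// the file from Close calls made by the consumer.
func retryBody(f *os.File) (io.ReadCloser, error) {
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	return io.NopCloser(f), nil // Close becomes a no-op; f stays reusable
}
```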
@@ -7,6 +7,7 @@ import (
 	"encoding/hex"
 	"encoding/xml"
 	"fmt"
+	"hash"
 	"io"
 	"net/http"
 	"net/http/cookiejar"

@@ -471,14 +472,16 @@ func (y *Cloud189PC) refreshSession() (err error) {
 // normal upload
 // zero-byte files cannot be uploaded
 func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
-	size := file.GetSize()
-	sliceSize := partSize(size)
+	// file size
+	fileSize := file.GetSize()
+	// slice size; must not be the file size
+	sliceSize := partSize(fileSize)
+
 	params := Params{
 		"parentFolderId": dstDir.GetID(),
 		"fileName":       url.QueryEscape(file.GetName()),
-		"fileSize":       fmt.Sprint(file.GetSize()),
-		"sliceSize":      fmt.Sprint(sliceSize),
+		"fileSize":       fmt.Sprint(fileSize),
+		"sliceSize":      fmt.Sprint(sliceSize), // must be the fixed slice size
 		"lazyCheck":      "1",
 	}
 
@@ -500,43 +503,71 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 		return nil, err
 	}
 
-	threadG, upCtx := errgroup.NewGroupWithContext(ctx, y.uploadThread,
+	ss, err := stream.NewStreamSectionReader(file, int(sliceSize), &up)
+	if err != nil {
+		return nil, err
+	}
+
+	threadG, upCtx := errgroup.NewOrderedGroupWithContext(ctx, y.uploadThread,
 		retry.Attempts(3),
 		retry.Delay(time.Second),
 		retry.DelayType(retry.BackOffDelay))
 
-	count := int(size / sliceSize)
-	lastPartSize := size % sliceSize
-	if lastPartSize > 0 {
-		count++
-	} else {
+	count := 1
+	if fileSize > sliceSize {
+		count = int((fileSize + sliceSize - 1) / sliceSize)
+	}
+	lastPartSize := fileSize % sliceSize
+	if lastPartSize == 0 {
 		lastPartSize = sliceSize
 	}
-	fileMd5 := utils.MD5.NewFunc()
-	silceMd5 := utils.MD5.NewFunc()
 	silceMd5Hexs := make([]string, 0, count)
-	teeReader := io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5))
-	byteSize := sliceSize
+	silceMd5 := utils.MD5.NewFunc()
+	var writers io.Writer = silceMd5
+
+	fileMd5Hex := file.GetHash().GetHash(utils.MD5)
+	var fileMd5 hash.Hash
+	if len(fileMd5Hex) != utils.MD5.Width {
+		fileMd5 = utils.MD5.NewFunc()
+		writers = io.MultiWriter(silceMd5, fileMd5)
+	}
 	for i := 1; i <= count; i++ {
 		if utils.IsCanceled(upCtx) {
 			break
 		}
+		offset := int64((i)-1) * sliceSize
+		partSize := sliceSize
 		if i == count {
-			byteSize = lastPartSize
+			partSize = lastPartSize
 		}
+		partInfo := ""
+		var reader *stream.SectionReader
+		var rateLimitedRd io.Reader
+		threadG.GoWithLifecycle(errgroup.Lifecycle{
+			Before: func(ctx context.Context) error {
+				if reader == nil {
+					var err error
+					reader, err = ss.GetSectionReader(offset, partSize)
+					if err != nil {
+						return err
+					}
+				}
-		byteData := make([]byte, byteSize)
-		// read the chunk
 				silceMd5.Reset()
-		if _, err := io.ReadFull(teeReader, byteData); err != io.EOF && err != nil {
-			return nil, err
+				w, err := utils.CopyWithBuffer(writers, reader)
+				if w != partSize {
+					return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", partSize, w, err)
 				}
 
 				// compute the chunk MD5, then hex- and base64-encode it
 				md5Bytes := silceMd5.Sum(nil)
 				silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Bytes)))
-		partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
+				partInfo = fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
 
-		threadG.Go(func(ctx context.Context) error {
-			uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
+				rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
+				return nil
+			},
+			Do: func(ctx context.Context) error {
+				reader.Seek(0, io.SeekStart)
+				uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
 				if err != nil {
 					return err

@@ -545,21 +576,28 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 				// step.4 upload the slice
 				uploadUrl := uploadUrls[0]
 				_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false,
-				driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)), isFamily)
+					driver.NewLimitedUploadStream(ctx, rateLimitedRd), isFamily)
 				if err != nil {
 					return err
 				}
 				up(float64(threadG.Success()) * 100 / float64(count))
 				return nil
-		})
+			},
+			After: func(err error) {
+				ss.FreeSectionReader(reader)
+			},
+		},
+		)
 	}
 	if err = threadG.Wait(); err != nil {
 		return nil, err
 	}
 
-	fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
+	if fileMd5 != nil {
+		fileMd5Hex = strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
+	}
 	sliceMd5Hex := fileMd5Hex
-	if file.GetSize() > sliceSize {
+	if fileSize > sliceSize {
 		sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
 	}
 
@@ -620,11 +658,12 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 		cache = tmpF
 	}
 	sliceSize := partSize(size)
-	count := int(size / sliceSize)
+	count := 1
+	if size > sliceSize {
+		count = int((size + sliceSize - 1) / sliceSize)
+	}
 	lastSliceSize := size % sliceSize
-	if lastSliceSize > 0 {
-		count++
-	} else {
+	if lastSliceSize == 0 {
 		lastSliceSize = sliceSize
 	}
 
@@ -738,7 +777,8 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 			}
 
 			// step.4 upload the slice
-			_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(cache, offset, byteSize), isFamily)
+			rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize))
+			_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, rateLimitedRd, isFamily)
 			if err != nil {
 				return err
 			}

@@ -820,9 +860,7 @@ func (y *Cloud189PC) GetMultiUploadUrls(ctx context.Context, isFamily bool, uplo
 
 // legacy upload; the family cloud does not support overwrite
 func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
-	cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-	up = model.UpdateProgressWithRange(up, 50, 100)
-	tempFile, fileMd5, err := stream.CacheFullInTempFileAndHash(file, cacheFileProgress, utils.MD5)
+	tempFile, fileMd5, err := stream.CacheFullAndHash(file, &up, utils.MD5)
 	if err != nil {
 		return nil, err
 	}
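A theme across the 189 changes: before hashing the whole stream again, check whether it already carries an MD5 (`file.GetHash().GetHash(utils.MD5)`), and only add a whole-file hasher to the writer chain when it does not. A sketch using only the standard library, where `md5.Size*2` plays the role of `utils.MD5.Width`:

```go
package sketch

import (
	"crypto/md5"
	"hash"
	"io"
)

// md5Writers builds the writer chain for slice hashing. The per-slice hasher
// is always present; the whole-file hasher is added only when knownMD5 is not
// already a valid 32-character hex digest.
func md5Writers(knownMD5 string) (sliceMd5, fileMd5 hash.Hash, w io.Writer) {
	sliceMd5 = md5.New()
	w = sliceMd5
	if len(knownMD5) != md5.Size*2 { // no usable digest: compute it ourselves
		fileMd5 = md5.New()
		w = io.MultiWriter(sliceMd5, fileMd5)
	}
	return sliceMd5, fileMd5, w
}
```

Copying every slice through `w` then yields the per-slice digests unconditionally, and the full-file digest only when it was actually needed.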
@@ -5,6 +5,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	"net/url"
 	stdpath "path"
 	"strings"
 
@@ -12,6 +13,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/fs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"github.com/OpenListTeam/OpenList/v4/internal/sign"
 	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"

@@ -78,10 +80,18 @@ func (d *Alias) Get(ctx context.Context, path string) (model.Obj, error) {
 		return nil, errs.ObjectNotFound
 	}
 	for _, dst := range dsts {
-		obj, err := d.get(ctx, path, dst, sub)
-		if err == nil {
-			return obj, nil
+		obj, err := fs.Get(ctx, stdpath.Join(dst, sub), &fs.GetArgs{NoLog: true})
+		if err != nil {
+			continue
 		}
+		return &model.Object{
+			Path:     path,
+			Name:     obj.GetName(),
+			Size:     obj.GetSize(),
+			Modified: obj.ModTime(),
+			IsFolder: obj.IsDir(),
+			HashInfo: obj.GetHash(),
+		}, nil
 	}
 	return nil, errs.ObjectNotFound
 }

@@ -99,7 +109,27 @@ func (d *Alias) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
 	var objs []model.Obj
 	fsArgs := &fs.ListArgs{NoLog: true, Refresh: args.Refresh}
 	for _, dst := range dsts {
-		tmp, err := d.list(ctx, dst, sub, fsArgs)
+		tmp, err := fs.List(ctx, stdpath.Join(dst, sub), fsArgs)
+		if err == nil {
+			tmp, err = utils.SliceConvert(tmp, func(obj model.Obj) (model.Obj, error) {
+				thumb, ok := model.GetThumb(obj)
+				objRes := model.Object{
+					Name:     obj.GetName(),
+					Size:     obj.GetSize(),
+					Modified: obj.ModTime(),
+					IsFolder: obj.IsDir(),
+				}
+				if !ok {
+					return &objRes, nil
+				}
+				return &model.ObjThumb{
+					Object: objRes,
+					Thumbnail: model.Thumbnail{
+						Thumbnail: thumb,
+					},
+				}, nil
+			})
+		}
 		if err == nil {
 			objs = append(objs, tmp...)
 		}

@@ -113,45 +143,45 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
 	if !ok {
 		return nil, errs.ObjectNotFound
 	}
+	// proxy || ftp,s3
+	if common.GetApiUrl(ctx) == "" {
+		args.Redirect = false
+	}
 	for _, dst := range dsts {
 		reqPath := stdpath.Join(dst, sub)
-		link, file, err := d.link(ctx, reqPath, args)
+		link, fi, err := d.link(ctx, reqPath, args)
 		if err != nil {
 			continue
 		}
-		var resultLink *model.Link
-		if link != nil {
-			resultLink = &model.Link{
-				URL:           link.URL,
-				Header:        link.Header,
-				RangeReader:   link.RangeReader,
-				SyncClosers:   utils.NewSyncClosers(link),
-				ContentLength: link.ContentLength,
-			}
-			if link.MFile != nil {
-				resultLink.RangeReader = &model.FileRangeReader{
-					RangeReaderIF: stream.GetRangeReaderFromMFile(file.GetSize(), link.MFile),
-				}
-			}
-
-		} else {
-			resultLink = &model.Link{
+		if link == nil {
+			// redirected, but it has to go through the proxy
+			return &model.Link{
 				URL: fmt.Sprintf("%s/p%s?sign=%s",
 					common.GetApiUrl(ctx),
 					utils.EncodePath(reqPath, true),
 					sign.Sign(reqPath)),
+			}, nil
 		}
+
+		resultLink := *link
+		resultLink.SyncClosers = utils.NewSyncClosers(link)
+		if args.Redirect {
+			return &resultLink, nil
+		}
+
+		if resultLink.ContentLength == 0 {
+			resultLink.ContentLength = fi.GetSize()
+		}
+		if resultLink.MFile != nil {
+			return &resultLink, nil
 		}
-		if !args.Redirect {
 		if d.DownloadConcurrency > 0 {
 			resultLink.Concurrency = d.DownloadConcurrency
 		}
 		if d.DownloadPartSize > 0 {
 			resultLink.PartSize = d.DownloadPartSize * utils.KB
 		}
-		}
-		return resultLink, nil
+		return &resultLink, nil
 	}
 	return nil, errs.ObjectNotFound
 }

@@ -278,24 +308,29 @@ func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer,
 	reqPath, err := d.getReqPath(ctx, dstDir, true)
 	if err == nil {
 		if len(reqPath) == 1 {
-			return fs.PutDirectly(ctx, *reqPath[0], &stream.FileStream{
-				Obj:          s,
-				Mimetype:     s.GetMimetype(),
-				WebPutAsTask: s.NeedStore(),
-				Reader:       s,
-			})
-		} else {
-			file, err := s.CacheFullInTempFile()
+			storage, reqActualPath, err := op.GetStorageAndActualPath(*reqPath[0])
 			if err != nil {
 				return err
 			}
-			for _, path := range reqPath {
+			return op.Put(ctx, storage, reqActualPath, &stream.FileStream{
+				Obj:      s,
+				Mimetype: s.GetMimetype(),
+				Reader:   s,
+			}, up)
+		} else {
+			file, err := s.CacheFullAndWriter(nil, nil)
+			if err != nil {
+				return err
+			}
+			count := float64(len(reqPath) + 1)
+			up(100 / count)
+			for i, path := range reqPath {
 				err = errors.Join(err, fs.PutDirectly(ctx, *path, &stream.FileStream{
 					Obj:      s,
 					Mimetype: s.GetMimetype(),
-					WebPutAsTask: s.NeedStore(),
 					Reader:   file,
 				}))
+				up(float64(i+2) / float64(count) * 100)
 				_, e := file.Seek(0, io.SeekStart)
 				if e != nil {
 					return errors.Join(err, e)

@@ -367,10 +402,24 @@ func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveIn
 		return nil, errs.ObjectNotFound
 	}
 	for _, dst := range dsts {
-		link, err := d.extract(ctx, dst, sub, args)
-		if err == nil {
-			return link, nil
+		reqPath := stdpath.Join(dst, sub)
+		link, err := d.extract(ctx, reqPath, args)
+		if err != nil {
+			continue
 		}
+		if link == nil {
+			return &model.Link{
+				URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
+					common.GetApiUrl(ctx),
+					utils.EncodePath(reqPath, true),
+					utils.EncodePath(args.InnerPath, true),
+					url.QueryEscape(args.Password),
+					sign.SignArchive(reqPath)),
+			}, nil
+		}
+		resultLink := *link
+		resultLink.SyncClosers = utils.NewSyncClosers(link)
+		return &resultLink, nil
 	}
 	return nil, errs.NotImplement
 }
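Both `Link` and `Extract` now return a shallow copy of the inner driver's link with a fresh `SyncClosers` attached, instead of rebuilding the struct field by field (which is what previously dropped `ContentLength` and special-cased `MFile` by hand). The pattern, written against the project's own types:

```go
// A sketch of the copy-and-rewrap step; model.Link and utils.NewSyncClosers
// are the project's types. Copying the struct preserves every field the inner
// driver set, while NewSyncClosers ties the copy's lifetime to the original.
func rewrap(link *model.Link) *model.Link {
	resultLink := *link // value copy: URL, Header, RangeReader, ContentLength, ...
	resultLink.SyncClosers = utils.NewSyncClosers(link)
	return &resultLink
}
```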
@@ -2,8 +2,6 @@ package alias
 
 import (
 	"context"
-	"fmt"
-	"net/url"
 	stdpath "path"
 	"strings"
 
@@ -12,8 +10,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/fs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
-	"github.com/OpenListTeam/OpenList/v4/internal/sign"
-	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/OpenListTeam/OpenList/v4/server/common"
 )
 
@@ -54,55 +50,12 @@ func (d *Alias) getRootAndPath(path string) (string, string) {
 	return parts[0], parts[1]
 }
 
-func (d *Alias) get(ctx context.Context, path string, dst, sub string) (model.Obj, error) {
-	obj, err := fs.Get(ctx, stdpath.Join(dst, sub), &fs.GetArgs{NoLog: true})
-	if err != nil {
-		return nil, err
-	}
-	return &model.Object{
-		Path:     path,
-		Name:     obj.GetName(),
-		Size:     obj.GetSize(),
-		Modified: obj.ModTime(),
-		IsFolder: obj.IsDir(),
-		HashInfo: obj.GetHash(),
-	}, nil
-}
-
-func (d *Alias) list(ctx context.Context, dst, sub string, args *fs.ListArgs) ([]model.Obj, error) {
-	objs, err := fs.List(ctx, stdpath.Join(dst, sub), args)
-	// the obj must implement the model.SetPath interface
-	// return objs, err
-	if err != nil {
-		return nil, err
-	}
-	return utils.SliceConvert(objs, func(obj model.Obj) (model.Obj, error) {
-		thumb, ok := model.GetThumb(obj)
-		objRes := model.Object{
-			Name:     obj.GetName(),
-			Size:     obj.GetSize(),
-			Modified: obj.ModTime(),
-			IsFolder: obj.IsDir(),
-		}
-		if !ok {
-			return &objRes, nil
-		}
-		return &model.ObjThumb{
-			Object: objRes,
-			Thumbnail: model.Thumbnail{
-				Thumbnail: thumb,
-			},
-		}, nil
-	})
-}
-
 func (d *Alias) link(ctx context.Context, reqPath string, args model.LinkArgs) (*model.Link, model.Obj, error) {
 	storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
 	if err != nil {
 		return nil, nil, err
 	}
-	// proxy || ftp,s3
-	if !args.Redirect || len(common.GetApiUrl(ctx)) == 0 {
+	if !args.Redirect {
 		return op.Link(ctx, storage, reqActualPath, args)
 	}
 	obj, err := fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})

@@ -183,8 +136,7 @@ func (d *Alias) listArchive(ctx context.Context, dst, sub string, args model.Arc
 	return nil, errs.NotImplement
 }
 
-func (d *Alias) extract(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) (*model.Link, error) {
-	reqPath := stdpath.Join(dst, sub)
+func (d *Alias) extract(ctx context.Context, reqPath string, args model.ArchiveInnerArgs) (*model.Link, error) {
 	storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
 	if err != nil {
 		return nil, err

@@ -192,20 +144,12 @@ func (d *Alias) extract(ctx context.Context, dst, sub string, args model.Archive
 	if _, ok := storage.(driver.ArchiveReader); !ok {
 		return nil, errs.NotImplement
 	}
-	if args.Redirect && common.ShouldProxy(storage, stdpath.Base(sub)) {
-		_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
-		if err != nil {
+	if args.Redirect && common.ShouldProxy(storage, stdpath.Base(reqPath)) {
+		_, err := fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
+		if err == nil {
 			return nil, err
 		}
-		link := &model.Link{
-			URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
-				common.GetApiUrl(ctx),
-				utils.EncodePath(reqPath, true),
-				utils.EncodePath(args.InnerPath, true),
-				url.QueryEscape(args.Password),
-				sign.SignArchive(reqPath)),
-		}
-		return link, nil
+		return nil, nil
 	}
 	link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
 	return link, err
@@ -297,11 +297,10 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
 		if d.InternalUpload {
 			url = partInfo.InternalUploadUrl
 		}
-		req, err := http.NewRequest("PUT", url, io.LimitReader(rateLimited, DEFAULT))
+		req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, io.LimitReader(rateLimited, DEFAULT))
 		if err != nil {
 			return err
 		}
-		req = req.WithContext(ctx)
 		res, err := base.HttpClient.Do(req)
 		if err != nil {
 			return err
@@ -3,7 +3,6 @@ package aliyundrive_open
 import (
 	"context"
 	"errors"
-	"fmt"
 	"net/http"
 	"path/filepath"
 	"time"
@@ -13,7 +12,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
-	"github.com/OpenListTeam/rateg"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
 )
@@ -24,8 +22,7 @@ type AliyundriveOpen struct {
 
 	DriveId string
 
-	limitList func(ctx context.Context, data base.Json) (*Files, error)
-	limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
+	limiter *limiter
 	ref *AliyundriveOpen
 }
 
@@ -38,25 +35,23 @@ func (d *AliyundriveOpen) GetAddition() driver.Additional {
 }
 
 func (d *AliyundriveOpen) Init(ctx context.Context) error {
+	d.limiter = getLimiterForUser(globalLimiterUserID) // First create a globally shared limiter to limit the initial requests.
 	if d.LIVPDownloadFormat == "" {
 		d.LIVPDownloadFormat = "jpeg"
 	}
 	if d.DriveType == "" {
 		d.DriveType = "default"
 	}
-	res, err := d.request("/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
+	res, err := d.request(ctx, limiterOther, "/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
 	if err != nil {
+		d.limiter.free()
+		d.limiter = nil
 		return err
 	}
 	d.DriveId = utils.Json.Get(res, d.DriveType+"_drive_id").ToString()
-	d.limitList = rateg.LimitFnCtx(d.list, rateg.LimitFnOption{
-		Limit:  4,
-		Bucket: 1,
-	})
-	d.limitLink = rateg.LimitFnCtx(d.link, rateg.LimitFnOption{
-		Limit:  1,
-		Bucket: 1,
-	})
+	userid := utils.Json.Get(res, "user_id").ToString()
+	d.limiter.free()
+	d.limiter = getLimiterForUser(userid) // Allocate a corresponding limiter for each user.
 	return nil
 }
 
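Why Init changes shape: the OpenAPI quota is enforced per user per app, but the user ID is only known after getDriveInfo returns. The new flow therefore bootstraps under a shared global limiter, then releases it and re-acquires a per-user limiter (the helpers appear in the new limiter.go further down). Condensed, the handoff looks like this sketch, with helper names taken from that file:

```go
// Sketch of the limiter handoff performed in Init above.
d.limiter = getLimiterForUser(globalLimiterUserID) // shared bucket for bootstrap calls
res, err := d.request(ctx, limiterOther, "/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
if err != nil {
	d.limiter.free() // drop the reference so the map entry can be swept later
	d.limiter = nil
	return err
}
userid := utils.Json.Get(res, "user_id").ToString()
d.limiter.free()                      // release the global bucket...
d.limiter = getLimiterForUser(userid) // ...and rebind to the per-user one
```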
@@ -70,6 +65,8 @@ func (d *AliyundriveOpen) InitReference(storage driver.Driver) error {
 }
 
 func (d *AliyundriveOpen) Drop(ctx context.Context) error {
+	d.limiter.free()
+	d.limiter = nil
 	d.ref = nil
 	return nil
 }
@@ -87,9 +84,6 @@ func (d *AliyundriveOpen) GetRoot(ctx context.Context) (model.Obj, error) {
 }
 
 func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
-	if d.limitList == nil {
-		return nil, fmt.Errorf("driver not init")
-	}
 	files, err := d.getFiles(ctx, dir.GetID())
 	if err != nil {
 		return nil, err
@@ -107,8 +101,8 @@ func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.Li
 	return objs, err
 }
 
-func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link, error) {
-	res, err := d.request("/adrive/v1.0/openFile/getDownloadUrl", http.MethodPost, func(req *resty.Request) {
+func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
+	res, err := d.request(ctx, limiterLink, "/adrive/v1.0/openFile/getDownloadUrl", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  file.GetID(),
@@ -132,17 +126,10 @@ func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link
 	}, nil
 }
 
-func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
-	if d.limitLink == nil {
-		return nil, fmt.Errorf("driver not init")
-	}
-	return d.limitLink(ctx, file)
-}
-
 func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
 	nowTime, _ := getNowTime()
 	newDir := File{CreatedAt: nowTime, UpdatedAt: nowTime}
-	_, err := d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id":       d.DriveId,
 			"parent_file_id": parentDir.GetID(),
@@ -168,7 +155,7 @@ func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirN
 
 func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
 	var resp MoveOrCopyResp
-	_, err := d.request("/adrive/v1.0/openFile/move", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/move", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  srcObj.GetID(),
@@ -198,7 +185,7 @@ func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (m
 
 func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
 	var newFile File
-	_, err := d.request("/adrive/v1.0/openFile/update", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/update", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  srcObj.GetID(),
@@ -230,7 +217,7 @@ func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName
 
 func (d *AliyundriveOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
 	var resp MoveOrCopyResp
-	_, err := d.request("/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  srcObj.GetID(),
@@ -256,7 +243,7 @@ func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
 	if d.RemoveWay == "delete" {
 		uri = "/adrive/v1.0/openFile/delete"
 	}
-	_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, uri, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  obj.GetID(),
@@ -295,7 +282,7 @@ func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (inte
 	default:
 		return nil, errs.NotSupport
 	}
-	_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, uri, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
drivers/aliyundrive_open/limiter.go (new file, 96 lines)
@@ -0,0 +1,96 @@
+package aliyundrive_open
+
+import (
+	"context"
+	"fmt"
+	"sync"
+
+	"golang.org/x/time/rate"
+)
+
+// See document https://www.yuque.com/aliyundrive/zpfszx/mqocg38hlxzc5vcd
+// See issue https://github.com/OpenListTeam/OpenList/issues/724
+// We got limit per user per app, so the limiter should be global.
+
+type limiterType int
+
+const (
+	limiterList limiterType = iota
+	limiterLink
+	limiterOther
+)
+
+const (
+	listRateLimit       = 3.9  // 4 per second in document, but we use 3.9 per second to be safe
+	linkRateLimit       = 0.9  // 1 per second in document, but we use 0.9 per second to be safe
+	otherRateLimit      = 14.9 // 15 per second in document, but we use 14.9 per second to be safe
+	globalLimiterUserID = ""   // Global limiter user ID, used to limit the initial requests.
+)
+
+type limiter struct {
+	usedBy int
+	list   *rate.Limiter
+	link   *rate.Limiter
+	other  *rate.Limiter
+}
+
+var limiters = make(map[string]*limiter)
+var limitersLock = &sync.Mutex{}
+
+func getLimiterForUser(userid string) *limiter {
+	limitersLock.Lock()
+	defer limitersLock.Unlock()
+	defer func() {
+		// Clean up limiters that are no longer used.
+		for id, lim := range limiters {
+			if lim.usedBy <= 0 && id != globalLimiterUserID { // Do not delete the global limiter.
+				delete(limiters, id)
+			}
+		}
+	}()
+	if lim, ok := limiters[userid]; ok {
+		lim.usedBy++
+		return lim
+	}
+	lim := &limiter{
+		usedBy: 1,
+		list:   rate.NewLimiter(rate.Limit(listRateLimit), 1),
+		link:   rate.NewLimiter(rate.Limit(linkRateLimit), 1),
+		other:  rate.NewLimiter(rate.Limit(otherRateLimit), 1),
+	}
+	limiters[userid] = lim
+	return lim
+}
+
+func (l *limiter) wait(ctx context.Context, typ limiterType) error {
+	if l == nil {
+		return fmt.Errorf("driver not init")
+	}
+	switch typ {
+	case limiterList:
+		return l.list.Wait(ctx)
+	case limiterLink:
+		return l.link.Wait(ctx)
+	case limiterOther:
+		return l.other.Wait(ctx)
+	default:
+		return fmt.Errorf("unknown limiter type")
+	}
+}
+func (l *limiter) free() {
+	if l == nil {
+		return
+	}
+	limitersLock.Lock()
+	defer limitersLock.Unlock()
+	l.usedBy--
+}
+func (d *AliyundriveOpen) wait(ctx context.Context, typ limiterType) error {
+	if d == nil {
+		return fmt.Errorf("driver not init")
+	}
+	if d.ref != nil {
+		return d.ref.wait(ctx, typ) // If this is a reference driver, wait on the reference driver.
+	}
+	return d.limiter.wait(ctx, typ)
+}
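The new file builds on `golang.org/x/time/rate`: a limiter constructed with `rate.NewLimiter(rate.Limit(3.9), 1)` refills roughly 3.9 tokens per second with a burst of 1, and `Wait(ctx)` blocks until a token is available or fails fast when the context cannot outlast the wait. A small runnable illustration of that behaviour (values mirror linkRateLimit above):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	lim := rate.NewLimiter(rate.Limit(0.9), 1) // ~0.9 req/s, burst 1, as for link requests
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	for i := 0; i < 4; i++ {
		// The first call passes immediately; later calls are spaced ~1.1s apart.
		// The fourth would land past the 3s deadline, so Wait returns an error at once.
		if err := lim.Wait(ctx); err != nil {
			fmt.Println("request", i, "rejected:", err)
			return
		}
		fmt.Println("request", i, "allowed")
	}
}
```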
@@ -50,10 +50,10 @@ func calPartSize(fileSize int64) int64 {
 	return partSize
 }
 
-func (d *AliyundriveOpen) getUploadUrl(count int, fileId, uploadId string) ([]PartInfo, error) {
+func (d *AliyundriveOpen) getUploadUrl(ctx context.Context, count int, fileId, uploadId string) ([]PartInfo, error) {
 	partInfoList := makePartInfos(count)
 	var resp CreateResp
-	_, err := d.request("/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  fileId,
@@ -69,7 +69,7 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
 	if d.InternalUpload {
 		uploadUrl = strings.ReplaceAll(uploadUrl, "https://cn-beijing-data.aliyundrive.net/", "http://ccp-bj29-bj-1592982087.oss-cn-beijing-internal.aliyuncs.com/")
 	}
-	req, err := http.NewRequestWithContext(ctx, "PUT", uploadUrl, r)
+	req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, r)
 	if err != nil {
 		return err
 	}
@@ -84,10 +84,10 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
 	return nil
 }
 
-func (d *AliyundriveOpen) completeUpload(fileId, uploadId string) (model.Obj, error) {
+func (d *AliyundriveOpen) completeUpload(ctx context.Context, fileId, uploadId string) (model.Obj, error) {
 	// 3. complete
 	var newFile File
-	_, err := d.request("/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  fileId,
@@ -137,11 +137,8 @@ func (d *AliyundriveOpen) calProofCode(stream model.FileStreamer) (string, error
 	}
 	buf := make([]byte, length)
 	n, err := io.ReadFull(reader, buf)
-	if err == io.ErrUnexpectedEOF {
-		return "", fmt.Errorf("can't read data, expected=%d, got=%d", len(buf), n)
-	}
-	if err != nil {
-		return "", err
+	if n != int(length) {
+		return "", fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
 	}
 	return base64.StdEncoding.EncodeToString(buf), nil
 }
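The rewritten check above leans on `io.ReadFull`'s contract: it returns `io.ErrUnexpectedEOF` after a partial read and `io.EOF` after reading nothing, so comparing `n` with the expected length covers both cases, and wrapping with `%w` keeps the cause inspectable. A compact runnable illustration (`readExactly` is an illustrative helper, not driver code):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
)

func readExactly(r io.Reader, length int) ([]byte, error) {
	buf := make([]byte, length)
	n, err := io.ReadFull(r, buf)
	if n != length {
		// %w preserves io.ErrUnexpectedEOF / io.EOF for errors.Is checks.
		return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
	}
	return buf, nil
}

func main() {
	_, err := readExactly(bytes.NewReader([]byte("abc")), 8)
	fmt.Println(err, errors.Is(err, io.ErrUnexpectedEOF)) // "... unexpected EOF true"
}
```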
@@ -183,7 +180,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 		createData["pre_hash"] = hash
 	}
 	var createResp CreateResp
-	_, err, e := d.requestReturnErrResp("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
+	_, err, e := d.requestReturnErrResp(ctx, limiterOther, "/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(createData).SetResult(&createResp)
 	})
 	if err != nil {
@@ -194,9 +191,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 
 	hash := stream.GetHash().GetHash(utils.SHA1)
 	if len(hash) != utils.SHA1.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, hash, err = streamPkg.CacheFullInTempFileAndHash(stream, cacheFileProgress, utils.SHA1)
+		_, hash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA1)
 		if err != nil {
 			return nil, err
 		}
@@ -210,7 +205,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	if err != nil {
 		return nil, fmt.Errorf("cal proof code error: %s", err.Error())
 	}
-	_, err = d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
+	_, err = d.request(ctx, limiterOther, "/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(createData).SetResult(&createResp)
 	})
 	if err != nil {
@@ -221,17 +216,20 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	if !createResp.RapidUpload {
 		// 2. normal upload
 		log.Debugf("[aliyundive_open] normal upload")
+		ss, err := streamPkg.NewStreamSectionReader(stream, int(partSize), &up)
+		if err != nil {
+			return nil, err
+		}
 
 		preTime := time.Now()
 		var offset, length int64 = 0, partSize
-		//var length
 		for i := 0; i < len(createResp.PartInfoList); i++ {
 			if utils.IsCanceled(ctx) {
 				return nil, ctx.Err()
 			}
 			// refresh upload url if 50 minutes passed
 			if time.Since(preTime) > 50*time.Minute {
-				createResp.PartInfoList, err = d.getUploadUrl(count, createResp.FileId, createResp.UploadId)
+				createResp.PartInfoList, err = d.getUploadUrl(ctx, count, createResp.FileId, createResp.UploadId)
 				if err != nil {
 					return nil, err
 				}
@@ -240,22 +238,19 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 			if remain := stream.GetSize() - offset; length > remain {
 				length = remain
 			}
-			rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
-			if rapidUpload {
-				srd, err := stream.RangeRead(http_range.Range{Start: offset, Length: length})
+			rd, err := ss.GetSectionReader(offset, length)
 			if err != nil {
 				return nil, err
 			}
-				rd = utils.NewMultiReadable(srd)
-			}
-			err = retry.Do(func() error {
-				_ = rd.Reset()
 			rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
+			err = retry.Do(func() error {
+				rd.Seek(0, io.SeekStart)
 				return d.uploadPart(ctx, rateLimitedRd, createResp.PartInfoList[i])
 			},
 				retry.Attempts(3),
 				retry.DelayType(retry.BackOffDelay),
 				retry.Delay(time.Second))
+			ss.FreeSectionReader(rd)
 			if err != nil {
 				return nil, err
 			}
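The pattern introduced above replaces per-attempt buffering with a reusable, seekable section: the reader is rewound to its start before every retry so each attempt re-sends identical bytes, and the section is handed back to the pool afterwards. `NewStreamSectionReader` is OpenList's own API; the sketch below shows the same rewind-and-retry shape with a plain `io.ReadSeeker` and a stand-in upload function:

```go
import (
	"context"
	"io"
	"time"

	"github.com/avast/retry-go"
)

// uploadPart is a stand-in for the driver's real per-part upload call.
func uploadPart(ctx context.Context, r io.Reader) error {
	_, err := io.Copy(io.Discard, r)
	return err
}

// uploadWithRetry rewinds rd before each attempt so retries re-send the same bytes.
func uploadWithRetry(ctx context.Context, rd io.ReadSeeker) error {
	return retry.Do(func() error {
		if _, err := rd.Seek(0, io.SeekStart); err != nil {
			return err
		}
		return uploadPart(ctx, rd)
	},
		retry.Attempts(3),
		retry.DelayType(retry.BackOffDelay),
		retry.Delay(time.Second))
}
```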
@@ -268,5 +263,5 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 
 	log.Debugf("[aliyundrive_open] create file success, resp: %+v", createResp)
 	// 3. complete
-	return d.completeUpload(createResp.FileId, createResp.UploadId)
+	return d.completeUpload(ctx, createResp.FileId, createResp.UploadId)
 }
@@ -19,7 +19,7 @@ import (
 
 // do others that not defined in Driver interface
 
-func (d *AliyundriveOpen) _refreshToken() (string, string, error) {
+func (d *AliyundriveOpen) _refreshToken(ctx context.Context) (string, string, error) {
 	if d.UseOnlineAPI && d.APIAddress != "" {
 		u := d.APIAddress
 		var resp struct {
@@ -33,8 +33,11 @@ func (d *AliyundriveOpen) _refreshToken() (string, string, error) {
 		if d.AlipanType == "alipanTV" {
 			driverTxt = "alicloud_tv"
 		}
-		_, err := base.RestyClient.R().
+		err := d.wait(ctx, limiterOther)
+		if err != nil {
+			return "", "", err
+		}
+		_, err = base.RestyClient.R().
 			SetHeader("User-Agent", "Mozilla/5.0 (Macintosh; Apple macOS 15_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36 Chrome/138.0.0.0 Openlist/425.6.30").
 			SetResult(&resp).
 			SetQueryParams(map[string]string{
@@ -54,11 +57,14 @@ func (d *AliyundriveOpen) _refreshToken() (string, string, error) {
 	}
 	return resp.RefreshToken, resp.AccessToken, nil
 }
 
 	// Local refresh logic; requires client_id and client_secret
 	if d.ClientID == "" || d.ClientSecret == "" {
 		return "", "", fmt.Errorf("empty ClientID or ClientSecret")
 	}
+	err := d.wait(ctx, limiterOther)
+	if err != nil {
+		return "", "", err
+	}
 	url := API_URL + "/oauth/access_token"
 	//var resp base.TokenResp
 	var e ErrResp
@@ -110,18 +116,18 @@ func getSub(token string) (string, error) {
 	return utils.Json.Get(bs, "sub").ToString(), nil
 }
 
-func (d *AliyundriveOpen) refreshToken() error {
+func (d *AliyundriveOpen) refreshToken(ctx context.Context) error {
 	if d.ref != nil {
-		return d.ref.refreshToken()
+		return d.ref.refreshToken(ctx)
 	}
-	refresh, access, err := d._refreshToken()
+	refresh, access, err := d._refreshToken(ctx)
 	for i := 0; i < 3; i++ {
 		if err == nil {
 			break
 		} else {
 			log.Errorf("[ali_open] failed to refresh token: %s", err)
 		}
-		refresh, access, err = d._refreshToken()
+		refresh, access, err = d._refreshToken(ctx)
 	}
 	if err != nil {
 		return err
@@ -132,12 +138,12 @@ func (d *AliyundriveOpen) refreshToken() error {
 	return nil
 }
 
-func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
-	b, err, _ := d.requestReturnErrResp(uri, method, callback, retry...)
+func (d *AliyundriveOpen) request(ctx context.Context, limitTy limiterType, uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
+	b, err, _ := d.requestReturnErrResp(ctx, limitTy, uri, method, callback, retry...)
 	return b, err
 }
 
-func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
+func (d *AliyundriveOpen) requestReturnErrResp(ctx context.Context, limitTy limiterType, uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
 	req := base.RestyClient.R()
 	// TODO check whether access_token is expired
 	req.SetHeader("Authorization", "Bearer "+d.getAccessToken())
@@ -149,6 +155,10 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
 	}
 	var e ErrResp
 	req.SetError(&e)
+	err := d.wait(ctx, limitTy)
+	if err != nil {
+		return nil, err, nil
+	}
 	res, err := req.Execute(method, API_URL+uri)
 	if err != nil {
 		if res != nil {
@@ -159,11 +169,11 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
 	isRetry := len(retry) > 0 && retry[0]
 	if e.Code != "" {
 		if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.getAccessToken() == "") {
-			err = d.refreshToken()
+			err = d.refreshToken(ctx)
 			if err != nil {
 				return nil, err, nil
 			}
-			return d.requestReturnErrResp(uri, method, callback, true)
+			return d.requestReturnErrResp(ctx, limitTy, uri, method, callback, true)
 		}
 		return nil, fmt.Errorf("%s:%s", e.Code, e.Message), &e
 	}
@@ -172,7 +182,7 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
 
 func (d *AliyundriveOpen) list(ctx context.Context, data base.Json) (*Files, error) {
 	var resp Files
-	_, err := d.request("/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterList, "/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
@@ -201,7 +211,7 @@ func (d *AliyundriveOpen) getFiles(ctx context.Context, fileId string) ([]File,
 		//"video_thumbnail_width": 480,
 		//"image_thumbnail_width": 480,
 	}
-	resp, err := d.limitList(ctx, data)
+	resp, err := d.list(ctx, data)
 	if err != nil {
 		return nil, err
 	}
@@ -2,7 +2,6 @@ package aliyundrive_share
 
 import (
 	"context"
-	"fmt"
 	"net/http"
 	"time"
 
@@ -12,7 +11,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/pkg/cron"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
-	"github.com/OpenListTeam/rateg"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
 )
@@ -25,8 +23,7 @@ type AliyundriveShare struct {
 	DriveId string
 	cron    *cron.Cron
 
-	limitList func(ctx context.Context, dir model.Obj) ([]model.Obj, error)
-	limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
+	limiter *limiter
 }
 
 func (d *AliyundriveShare) Config() driver.Config {
@@ -38,29 +35,26 @@ func (d *AliyundriveShare) GetAddition() driver.Additional {
 }
 
 func (d *AliyundriveShare) Init(ctx context.Context) error {
-	err := d.refreshToken()
+	d.limiter = getLimiter()
+	err := d.refreshToken(ctx)
 	if err != nil {
+		d.limiter.free()
+		d.limiter = nil
 		return err
 	}
-	err = d.getShareToken()
+	err = d.getShareToken(ctx)
 	if err != nil {
+		d.limiter.free()
+		d.limiter = nil
 		return err
 	}
 	d.cron = cron.NewCron(time.Hour * 2)
 	d.cron.Do(func() {
-		err := d.refreshToken()
+		err := d.refreshToken(ctx)
 		if err != nil {
 			log.Errorf("%+v", err)
 		}
 	})
-	d.limitList = rateg.LimitFnCtx(d.list, rateg.LimitFnOption{
-		Limit:  4,
-		Bucket: 1,
-	})
-	d.limitLink = rateg.LimitFnCtx(d.link, rateg.LimitFnOption{
-		Limit:  1,
-		Bucket: 1,
-	})
 	return nil
 }
 
@@ -68,19 +62,14 @@ func (d *AliyundriveShare) Drop(ctx context.Context) error {
 	if d.cron != nil {
 		d.cron.Stop()
 	}
+	d.limiter.free()
+	d.limiter = nil
 	d.DriveId = ""
 	return nil
 }
 
 func (d *AliyundriveShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
-	if d.limitList == nil {
-		return nil, fmt.Errorf("driver not init")
-	}
-	return d.limitList(ctx, dir)
-}
-
-func (d *AliyundriveShare) list(ctx context.Context, dir model.Obj) ([]model.Obj, error) {
-	files, err := d.getFiles(dir.GetID())
+	files, err := d.getFiles(ctx, dir.GetID())
 	if err != nil {
 		return nil, err
 	}
|
|
||||||
func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||||
if d.limitLink == nil {
|
|
||||||
return nil, fmt.Errorf("driver not init")
|
|
||||||
}
|
|
||||||
return d.limitLink(ctx, file)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (d *AliyundriveShare) link(ctx context.Context, file model.Obj) (*model.Link, error) {
|
|
||||||
data := base.Json{
|
data := base.Json{
|
||||||
"drive_id": d.DriveId,
|
"drive_id": d.DriveId,
|
||||||
"file_id": file.GetID(),
|
"file_id": file.GetID(),
|
||||||
@@ -105,7 +87,7 @@ func (d *AliyundriveShare) link(ctx context.Context, file model.Obj) (*model.Lin
 		"share_id": d.ShareId,
 	}
 	var resp ShareLinkResp
-	_, err := d.request("https://api.alipan.com/v2/file/get_share_link_download_url", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterLink, "https://api.alipan.com/v2/file/get_share_link_download_url", http.MethodPost, func(req *resty.Request) {
 		req.SetHeader(CanaryHeaderKey, CanaryHeaderValue).SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
@@ -135,7 +117,7 @@ func (d *AliyundriveShare) Other(ctx context.Context, args model.OtherArgs) (int
 	default:
 		return nil, errs.NotSupport
 	}
-	_, err := d.request(url, http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(ctx, limiterOther, url, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
drivers/aliyundrive_share/limiter.go (new file, 67 lines)
@@ -0,0 +1,67 @@
+package aliyundrive_share
+
+import (
+	"context"
+	"fmt"
+
+	"golang.org/x/time/rate"
+)
+
+// See issue https://github.com/OpenListTeam/OpenList/issues/724
+// Seems there is no limit per user.
+
+type limiterType int
+
+const (
+	limiterList limiterType = iota
+	limiterLink
+	limiterOther
+)
+
+const (
+	listRateLimit  = 3.9  // 4 per second in document, but we use 3.9 per second to be safe
+	linkRateLimit  = 0.9  // 1 per second in document, but we use 0.9 per second to be safe
+	otherRateLimit = 14.9 // 15 per second in document, but we use 14.9 per second to be safe
+)
+
+type limiter struct {
+	list  *rate.Limiter
+	link  *rate.Limiter
+	other *rate.Limiter
+}
+
+func getLimiter() *limiter {
+	return &limiter{
+		list:  rate.NewLimiter(rate.Limit(listRateLimit), 1),
+		link:  rate.NewLimiter(rate.Limit(linkRateLimit), 1),
+		other: rate.NewLimiter(rate.Limit(otherRateLimit), 1),
+	}
+}
+
+func (l *limiter) wait(ctx context.Context, typ limiterType) error {
+	if l == nil {
+		return fmt.Errorf("driver not init")
+	}
+	switch typ {
+	case limiterList:
+		return l.list.Wait(ctx)
+	case limiterLink:
+		return l.link.Wait(ctx)
+	case limiterOther:
+		return l.other.Wait(ctx)
+	default:
+		return fmt.Errorf("unknown limiter type")
+	}
+}
+func (l *limiter) free() {
+
+}
+func (d *AliyundriveShare) wait(ctx context.Context, typ limiterType) error {
+	if d == nil {
+		return fmt.Errorf("driver not init")
+	}
+	//if d.ref != nil {
+	//	return d.ref.wait(ctx, typ) // If this is a reference driver, wait on the reference driver.
+	//}
+	return d.limiter.wait(ctx, typ)
+}
@@ -1,6 +1,7 @@
 package aliyundrive_share
 
 import (
+	"context"
 	"errors"
 	"fmt"
 
@@ -15,11 +16,15 @@ const (
 	CanaryHeaderValue = "client=web,app=share,version=v2.3.1"
 )
 
-func (d *AliyundriveShare) refreshToken() error {
+func (d *AliyundriveShare) refreshToken(ctx context.Context) error {
+	err := d.wait(ctx, limiterOther)
+	if err != nil {
+		return err
+	}
 	url := "https://auth.alipan.com/v2/account/token"
 	var resp base.TokenResp
 	var e ErrorResp
-	_, err := base.RestyClient.R().
+	_, err = base.RestyClient.R().
 		SetBody(base.Json{"refresh_token": d.RefreshToken, "grant_type": "refresh_token"}).
 		SetResult(&resp).
 		SetError(&e).
@@ -36,7 +41,11 @@ func (d *AliyundriveShare) refreshToken() error {
 }
 
 // do others that not defined in Driver interface
-func (d *AliyundriveShare) getShareToken() error {
+func (d *AliyundriveShare) getShareToken(ctx context.Context) error {
+	err := d.wait(ctx, limiterOther)
+	if err != nil {
+		return err
+	}
 	data := base.Json{
 		"share_id": d.ShareId,
 	}
@@ -45,7 +54,7 @@ func (d *AliyundriveShare) getShareToken() error {
 	}
 	var e ErrorResp
 	var resp ShareTokenResp
-	_, err := base.RestyClient.R().
+	_, err = base.RestyClient.R().
 		SetResult(&resp).SetError(&e).SetBody(data).
 		Post("https://api.alipan.com/v2/share_link/get_share_token")
 	if err != nil {
@@ -58,7 +67,7 @@ func (d *AliyundriveShare) getShareToken() error {
 	return nil
 }
 
-func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback) ([]byte, error) {
+func (d *AliyundriveShare) request(ctx context.Context, limitTy limiterType, url, method string, callback base.ReqCallback) ([]byte, error) {
 	var e ErrorResp
 	req := base.RestyClient.R().
 		SetError(&e).
@@ -71,6 +80,10 @@ func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback
 	} else {
 		req.SetBody("{}")
 	}
+	err := d.wait(ctx, limitTy)
+	if err != nil {
+		return nil, err
+	}
 	resp, err := req.Execute(method, url)
 	if err != nil {
 		return nil, err
@@ -78,14 +91,14 @@ func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback
 	if e.Code != "" {
 		if e.Code == "AccessTokenInvalid" || e.Code == "ShareLinkTokenInvalid" {
 			if e.Code == "AccessTokenInvalid" {
-				err = d.refreshToken()
+				err = d.refreshToken(ctx)
 			} else {
-				err = d.getShareToken()
+				err = d.getShareToken(ctx)
 			}
 			if err != nil {
 				return nil, err
 			}
-			return d.request(url, method, callback)
+			return d.request(ctx, limitTy, url, method, callback)
 		} else {
 			return nil, errors.New(e.Code + ": " + e.Message)
 		}
@@ -93,7 +106,7 @@ func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback
 	return resp.Body(), nil
 }
 
-func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
+func (d *AliyundriveShare) getFiles(ctx context.Context, fileId string) ([]File, error) {
 	files := make([]File, 0)
 	data := base.Json{
 		"image_thumbnail_process": "image/resize,w_160/format,jpeg",
@@ -110,6 +123,10 @@ func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
 		if data["marker"] == "first" {
 			data["marker"] = ""
 		}
+		err := d.wait(ctx, limiterList)
+		if err != nil {
+			return nil, err
+		}
 		var e ErrorResp
 		var resp ListResp
 		res, err := base.RestyClient.R().
@@ -123,11 +140,11 @@ func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
 		log.Debugf("aliyundrive share get files: %s", res.String())
 		if e.Code != "" {
 			if e.Code == "AccessTokenInvalid" || e.Code == "ShareLinkTokenInvalid" {
-				err = d.getShareToken()
+				err = d.getShareToken(ctx)
 				if err != nil {
 					return nil, err
 				}
-				return d.getFiles(fileId)
+				return d.getFiles(ctx, fileId)
 			}
 			return nil, errors.New(e.Message)
 		}
@@ -23,6 +23,7 @@ import (
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve_v4"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/crypt"
+	_ "github.com/OpenListTeam/OpenList/v4/drivers/degoo"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/doubao"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/doubao_share"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/dropbox"
@@ -48,6 +49,7 @@ import (
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/onedrive_app"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/onedrive_sharelink"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/openlist"
+	_ "github.com/OpenListTeam/OpenList/v4/drivers/openlist_share"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/pikpak"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/pikpak_share"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/quark_open"
@@ -59,6 +61,7 @@ import (
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/smb"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/strm"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/teambition"
+	_ "github.com/OpenListTeam/OpenList/v4/drivers/teldrive"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/terabox"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder_browser"
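These underscore imports are side-effect only: each driver package registers itself from an `init()` function when the binary links it in, so a new backend (degoo, openlist_share, teldrive) needs just one line here. A toy sketch of the idiom, spread across files; the package paths and registry names are illustrative, not OpenList's exact API:

```go
// registry/registry.go
package registry

var ctors = map[string]func() any{}

// Register is called by driver packages from init().
func Register(name string, ctor func() any) { ctors[name] = ctor }
```

```go
// mydriver/driver.go — self-registers when the package is linked in.
package mydriver

import "example.com/registry"

func init() { registry.Register("mydriver", func() any { return struct{}{} }) }
```

```go
// drivers/all.go — the blank import triggers the init() above.
package drivers

import _ "example.com/mydriver"
```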
@@ -203,11 +203,12 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
 
 	streamSize := stream.GetSize()
 	sliceSize := d.getSliceSize(streamSize)
-	count := int(streamSize / sliceSize)
+	count := 1
+	if streamSize > sliceSize {
+		count = int((streamSize + sliceSize - 1) / sliceSize)
+	}
 	lastBlockSize := streamSize % sliceSize
-	if lastBlockSize > 0 {
-		count++
-	} else {
+	if lastBlockSize == 0 {
 		lastBlockSize = sliceSize
 	}
 
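The rewritten count is plain ceiling division, and it also fixes an edge case: the old `int(streamSize / sliceSize)` with a conditional `count++` produced 0 slices for an empty stream, while the new form always yields at least one. An equivalent helper with sample values:

```go
// sliceCount mirrors the new logic above: at least 1, otherwise ceil(size/slice).
func sliceCount(streamSize, sliceSize int64) int {
	count := 1
	if streamSize > sliceSize {
		count = int((streamSize + sliceSize - 1) / sliceSize)
	}
	return count
}

// sliceCount(0, 4<<20)       == 1  (empty stream still uploads one slice)
// sliceCount(4<<20, 4<<20)   == 1  (exact fit: no spurious extra slice)
// sliceCount(4<<20+1, 4<<20) == 2
```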
@@ -262,11 +262,12 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
 
 	// Compute the required values
 	streamSize := stream.GetSize()
-	count := int(streamSize / DEFAULT)
+	count := 1
+	if streamSize > DEFAULT {
+		count = int((streamSize + DEFAULT - 1) / DEFAULT)
+	}
 	lastBlockSize := streamSize % DEFAULT
-	if lastBlockSize > 0 {
-		count++
-	} else {
+	if lastBlockSize == 0 {
 		lastBlockSize = DEFAULT
 	}
 
@@ -255,7 +255,7 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, file model.FileStr
 		},
 		UpdateProgress: up,
 	})
-	req, err := http.NewRequestWithContext(ctx, "POST", "https://pan-yz.chaoxing.com/upload", r)
+	req, err := http.NewRequestWithContext(ctx, http.MethodPost, "https://pan-yz.chaoxing.com/upload", r)
 	if err != nil {
 		return err
 	}
@@ -167,7 +167,7 @@ func (d *ChaoXing) Login() (string, error) {
 		return "", err
 	}
 	// Create the request
-	req, err := http.NewRequest("POST", "https://passport2.chaoxing.com/fanyalogin", body)
+	req, err := http.NewRequest(http.MethodPost, "https://passport2.chaoxing.com/fanyalogin", body)
 	if err != nil {
 		return "", err
 	}
@@ -18,6 +18,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/setting"
+	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/cookie"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
@@ -236,28 +237,32 @@ func (d *Cloudreve) upLocal(ctx context.Context, stream model.FileStreamer, u Up
 }
 
 func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
+	DEFAULT := int64(u.ChunkSize)
+	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	uploadUrl := u.UploadURLs[0]
 	credential := u.Credential
 	var finish int64 = 0
 	var chunk int = 0
-	DEFAULT := int64(u.ChunkSize)
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err := retry.Do(
-			func() error {
 		utils.Log.Debugf("[Cloudreve-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		byteData := make([]byte, byteSize)
-		n, err := io.ReadFull(stream, byteData)
-		utils.Log.Debug(err, n)
+		rd, err := ss.GetSectionReader(finish, byteSize)
 		if err != nil {
 			return err
 		}
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
 				req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadUrl+"?chunk="+strconv.Itoa(chunk),
-					driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+					driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -290,6 +295,7 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
@@ -301,26 +307,29 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
 }
 
 func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
+	DEFAULT := int64(u.ChunkSize)
+	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	uploadUrl := u.UploadURLs[0]
 	var finish int64 = 0
-	DEFAULT := int64(u.ChunkSize)
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err := retry.Do(
-			func() error {
 		utils.Log.Debugf("[Cloudreve-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		byteData := make([]byte, byteSize)
-		n, err := io.ReadFull(stream, byteData)
-		utils.Log.Debug(err, n)
+		rd, err := ss.GetSectionReader(finish, byteSize)
 		if err != nil {
 			return err
 		}
-				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl,
-					driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
+				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -346,6 +355,7 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
@@ -359,27 +369,31 @@ func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u Uploa
 }
 
 func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
+	DEFAULT := int64(u.ChunkSize)
+	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	var finish int64 = 0
 	var chunk int = 0
 	var etags []string
-	DEFAULT := int64(u.ChunkSize)
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err := retry.Do(
-			func() error {
 		utils.Log.Debugf("[Cloudreve-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		byteData := make([]byte, byteSize)
-		n, err := io.ReadFull(stream, byteData)
-		utils.Log.Debug(err, n)
+		rd, err := ss.GetSectionReader(finish, byteSize)
 		if err != nil {
 			return err
 		}
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
 				req, err := http.NewRequestWithContext(ctx, http.MethodPut, u.UploadURLs[chunk],
-					driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
+					driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -404,6 +418,7 @@ func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u Uploa
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
@@ -19,6 +19,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"github.com/OpenListTeam/OpenList/v4/internal/setting"
+	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
@@ -251,28 +252,32 @@ func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u Fi
 }
 
 func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
+	DEFAULT := int64(u.ChunkSize)
+	ss, err := stream.NewStreamSectionReader(file, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	uploadUrl := u.UploadUrls[0]
 	credential := u.Credential
 	var finish int64 = 0
 	var chunk int = 0
-	DEFAULT := int64(u.ChunkSize)
 	for finish < file.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		left := file.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err := retry.Do(
-			func() error {
 		utils.Log.Debugf("[CloudreveV4-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
-		byteData := make([]byte, byteSize)
-		n, err := io.ReadFull(file, byteData)
-		utils.Log.Debug(err, n)
+		rd, err := ss.GetSectionReader(finish, byteSize)
 		if err != nil {
 			return err
 		}
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
 				req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadUrl+"?chunk="+strconv.Itoa(chunk),
-					driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+					driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -305,6 +310,7 @@ func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u F
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
@@ -316,26 +322,29 @@ func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u F
 	}
 }
 
 func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
+	DEFAULT := int64(u.ChunkSize)
+	ss, err := stream.NewStreamSectionReader(file, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	uploadUrl := u.UploadUrls[0]
 	var finish int64 = 0
-	DEFAULT := int64(u.ChunkSize)
 	for finish < file.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		left := file.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err := retry.Do(
-			func() error {
-				utils.Log.Debugf("[CloudreveV4-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
-				byteData := make([]byte, byteSize)
-				n, err := io.ReadFull(file, byteData)
-				utils.Log.Debug(err, n)
-				if err != nil {
-					return err
-				}
-				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl,
-					driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+		utils.Log.Debugf("[CloudreveV4-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
+		rd, err := ss.GetSectionReader(finish, byteSize)
+		if err != nil {
+			return err
+		}
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
+				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -362,6 +371,7 @@ func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
|
func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
|
||||||
|
DEFAULT := int64(u.ChunkSize)
|
||||||
|
ss, err := stream.NewStreamSectionReader(file, int(DEFAULT), &up)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
var finish int64 = 0
|
var finish int64 = 0
|
||||||
var chunk int = 0
|
var chunk int = 0
|
||||||
var etags []string
|
var etags []string
|
||||||
DEFAULT := int64(u.ChunkSize)
|
|
||||||
for finish < file.GetSize() {
|
for finish < file.GetSize() {
|
||||||
if utils.IsCanceled(ctx) {
|
if utils.IsCanceled(ctx) {
|
||||||
return ctx.Err()
|
return ctx.Err()
|
||||||
}
|
}
|
||||||
left := file.GetSize() - finish
|
left := file.GetSize() - finish
|
||||||
byteSize := min(left, DEFAULT)
|
byteSize := min(left, DEFAULT)
|
||||||
err := retry.Do(
|
|
||||||
func() error {
|
|
||||||
utils.Log.Debugf("[CloudreveV4-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
|
utils.Log.Debugf("[CloudreveV4-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
|
||||||
byteData := make([]byte, byteSize)
|
rd, err := ss.GetSectionReader(finish, byteSize)
|
||||||
n, err := io.ReadFull(file, byteData)
|
|
||||||
utils.Log.Debug(err, n)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
err = retry.Do(
|
||||||
|
func() error {
|
||||||
|
rd.Seek(0, io.SeekStart)
|
||||||
req, err := http.NewRequestWithContext(ctx, http.MethodPut, u.UploadUrls[chunk],
|
req, err := http.NewRequestWithContext(ctx, http.MethodPut, u.UploadUrls[chunk],
|
||||||
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
|
driver.NewLimitedUploadStream(ctx, rd))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -421,6 +435,7 @@ func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileU
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
@@ -292,10 +292,10 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
 
 	if offset == 0 && limit > 0 {
 		fileHeader = make([]byte, fileHeaderSize)
-		n, _ := io.ReadFull(remoteReader, fileHeader)
+		n, err := io.ReadFull(remoteReader, fileHeader)
 		if n != fileHeaderSize {
 			fileHeader = nil
-			return nil, fmt.Errorf("can't read data, expected=%d, got=%d", fileHeaderSize, n)
+			return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", fileHeaderSize, n, err)
 		}
 		if limit <= fileHeaderSize {
 			remoteReader.Close()
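This fix is sound because io.ReadFull reports a short read through its error return: n < len(buf) always comes with io.ErrUnexpectedEOF (or io.EOF when nothing was read at all), so wrapping err with %w preserves the cause that the old `n, _ :=` form discarded. A quick illustration:

package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	buf := make([]byte, 8)
	// Only 3 of the requested 8 bytes are available.
	n, err := io.ReadFull(strings.NewReader("abc"), buf)
	fmt.Printf("n=%d err=%v\n", n, err) // n=3 err=unexpected EOF
	// Wrapping err with %w, as the new code does, keeps this cause visible.
	wrapped := fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", 8, n, err)
	fmt.Println(wrapped)
}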
@ -401,7 +401,6 @@ func (d *Crypt) Put(ctx context.Context, dstDir model.Obj, streamer model.FileSt
|
|||||||
},
|
},
|
||||||
Reader: wrappedIn,
|
Reader: wrappedIn,
|
||||||
Mimetype: "application/octet-stream",
|
Mimetype: "application/octet-stream",
|
||||||
WebPutAsTask: streamer.NeedStore(),
|
|
||||||
ForceStreamUpload: true,
|
ForceStreamUpload: true,
|
||||||
Exist: streamer.GetExist(),
|
Exist: streamer.GetExist(),
|
||||||
}
|
}
|
||||||
|
203	drivers/degoo/driver.go	Normal file
@@ -0,0 +1,203 @@
package degoo

import (
	"context"
	"fmt"
	"net/http"
	"strconv"
	"time"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/errs"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)

type Degoo struct {
	model.Storage
	Addition
	client *http.Client
}

func (d *Degoo) Config() driver.Config {
	return config
}

func (d *Degoo) GetAddition() driver.Additional {
	return &d.Addition
}

func (d *Degoo) Init(ctx context.Context) error {
	d.client = base.HttpClient

	// Ensure we have a valid token (will login if needed or refresh if expired)
	if err := d.ensureValidToken(ctx); err != nil {
		return fmt.Errorf("failed to initialize token: %w", err)
	}

	return d.getDevices(ctx)
}

func (d *Degoo) Drop(ctx context.Context) error {
	return nil
}

func (d *Degoo) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	items, err := d.getAllFileChildren5(ctx, dir.GetID())
	if err != nil {
		return nil, err
	}
	return utils.MustSliceConvert(items, func(s DegooFileItem) model.Obj {
		isFolder := s.Category == 2 || s.Category == 1 || s.Category == 10

		createTime, modTime, _ := humanReadableTimes(s.CreationTime, s.LastModificationTime, s.LastUploadTime)

		size, err := strconv.ParseInt(s.Size, 10, 64)
		if err != nil {
			size = 0 // Default to 0 if size parsing fails
		}

		return &model.Object{
			ID:       s.ID,
			Path:     s.FilePath,
			Name:     s.Name,
			Size:     size,
			Modified: modTime,
			Ctime:    createTime,
			IsFolder: isFolder,
		}
	}), nil
}

func (d *Degoo) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	item, err := d.getOverlay4(ctx, file.GetID())
	if err != nil {
		return nil, err
	}

	return &model.Link{URL: item.URL}, nil
}

func (d *Degoo) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
	// This is done by calling the setUploadFile3 API with a special checksum and size.
	const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) { setUploadFile3(Token: $Token, FileInfos: $FileInfos) }`

	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"FileInfos": []map[string]interface{}{
			{
				"Checksum":     folderChecksum,
				"Name":         dirName,
				"CreationTime": time.Now().UnixMilli(),
				"ParentID":     parentDir.GetID(),
				"Size":         0,
			},
		},
	}

	_, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
	if err != nil {
		return err
	}

	return nil
}

func (d *Degoo) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
	const query = `mutation SetMoveFile($Token: String!, $Copy: Boolean, $NewParentID: String!, $FileIDs: [String]!) { setMoveFile(Token: $Token, Copy: $Copy, NewParentID: $NewParentID, FileIDs: $FileIDs) }`

	variables := map[string]interface{}{
		"Token":       d.AccessToken,
		"Copy":        false,
		"NewParentID": dstDir.GetID(),
		"FileIDs":     []string{srcObj.GetID()},
	}

	_, err := d.apiCall(ctx, "SetMoveFile", query, variables)
	if err != nil {
		return nil, err
	}

	return srcObj, nil
}

func (d *Degoo) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
	const query = `mutation SetRenameFile($Token: String!, $FileRenames: [FileRenameInfo]!) { setRenameFile(Token: $Token, FileRenames: $FileRenames) }`

	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"FileRenames": []DegooFileRenameInfo{
			{
				ID:      srcObj.GetID(),
				NewName: newName,
			},
		},
	}

	_, err := d.apiCall(ctx, "SetRenameFile", query, variables)
	if err != nil {
		return err
	}
	return nil
}

func (d *Degoo) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
	// Copy is not implemented, Degoo API does not support direct copy.
	return nil, errs.NotImplement
}

func (d *Degoo) Remove(ctx context.Context, obj model.Obj) error {
	// Remove deletes a file or folder (moves to trash).
	const query = `mutation SetDeleteFile5($Token: String!, $IsInRecycleBin: Boolean!, $IDs: [IDType]!) { setDeleteFile5(Token: $Token, IsInRecycleBin: $IsInRecycleBin, IDs: $IDs) }`

	variables := map[string]interface{}{
		"Token":          d.AccessToken,
		"IsInRecycleBin": false,
		"IDs":            []map[string]string{{"FileID": obj.GetID()}},
	}

	_, err := d.apiCall(ctx, "SetDeleteFile5", query, variables)
	return err
}

func (d *Degoo) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
	tmpF, err := file.CacheFullAndWriter(&up, nil)
	if err != nil {
		return err
	}

	parentID := dstDir.GetID()

	// Calculate the checksum for the file.
	checksum, err := d.checkSum(tmpF)
	if err != nil {
		return err
	}

	// 1. Get upload authorization via getBucketWriteAuth4.
	auths, err := d.getBucketWriteAuth4(ctx, file, parentID, checksum)
	if err != nil {
		return err
	}

	// 2. Upload file.
	// support rapid upload
	if auths.GetBucketWriteAuth4[0].Error != "Already exist!" {
		err = d.uploadS3(ctx, auths, tmpF, file, checksum)
		if err != nil {
			return err
		}
	}

	// 3. Register metadata with setUploadFile3.
	data, err := d.SetUploadFile3(ctx, file, parentID, checksum)
	if err != nil {
		return err
	}
	if !data.SetUploadFile3 {
		return fmt.Errorf("setUploadFile3 failed: %v", data)
	}
	return nil
}
27	drivers/degoo/meta.go	Normal file
@@ -0,0 +1,27 @@
package degoo

import (
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
)

type Addition struct {
	driver.RootID
	Username     string `json:"username" help:"Your Degoo account email"`
	Password     string `json:"password" help:"Your Degoo account password"`
	RefreshToken string `json:"refresh_token" help:"Refresh token for automatic token renewal, obtained automatically"`
	AccessToken  string `json:"access_token" help:"Access token for Degoo API, obtained automatically"`
}

var config = driver.Config{
	Name:              "Degoo",
	LocalSort:         true,
	DefaultRoot:       "0",
	NoOverwriteUpload: true,
}

func init() {
	op.RegisterDriver(func() driver.Driver {
		return &Degoo{}
	})
}
110	drivers/degoo/types.go	Normal file
@@ -0,0 +1,110 @@
package degoo

import (
	"encoding/json"
)

// DegooLoginRequest represents the login request body.
type DegooLoginRequest struct {
	GenerateToken bool   `json:"GenerateToken"`
	Username      string `json:"Username"`
	Password      string `json:"Password"`
}

// DegooLoginResponse represents a successful login response.
type DegooLoginResponse struct {
	Token        string `json:"Token"`
	RefreshToken string `json:"RefreshToken"`
}

// DegooAccessTokenRequest represents the token refresh request body.
type DegooAccessTokenRequest struct {
	RefreshToken string `json:"RefreshToken"`
}

// DegooAccessTokenResponse represents the token refresh response.
type DegooAccessTokenResponse struct {
	AccessToken string `json:"AccessToken"`
}

// DegooFileItem represents a Degoo file or folder.
type DegooFileItem struct {
	ID                   string `json:"ID"`
	ParentID             string `json:"ParentID"`
	Name                 string `json:"Name"`
	Category             int    `json:"Category"`
	Size                 string `json:"Size"`
	URL                  string `json:"URL"`
	CreationTime         string `json:"CreationTime"`
	LastModificationTime string `json:"LastModificationTime"`
	LastUploadTime       string `json:"LastUploadTime"`
	MetadataID           string `json:"MetadataID"`
	DeviceID             int64  `json:"DeviceID"`
	FilePath             string `json:"FilePath"`
	IsInRecycleBin       bool   `json:"IsInRecycleBin"`
}

type DegooErrors struct {
	Path      []string    `json:"path"`
	Data      interface{} `json:"data"`
	ErrorType string      `json:"errorType"`
	ErrorInfo interface{} `json:"errorInfo"`
	Message   string      `json:"message"`
}

// DegooGraphqlResponse is the common structure for GraphQL API responses.
type DegooGraphqlResponse struct {
	Data   json.RawMessage `json:"data"`
	Errors []DegooErrors   `json:"errors,omitempty"`
}

// DegooGetChildren5Data is the data field for getFileChildren5.
type DegooGetChildren5Data struct {
	GetFileChildren5 struct {
		Items     []DegooFileItem `json:"Items"`
		NextToken string          `json:"NextToken"`
	} `json:"getFileChildren5"`
}

// DegooGetOverlay4Data is the data field for getOverlay4.
type DegooGetOverlay4Data struct {
	GetOverlay4 DegooFileItem `json:"getOverlay4"`
}

// DegooFileRenameInfo represents a file rename operation.
type DegooFileRenameInfo struct {
	ID      string `json:"ID"`
	NewName string `json:"NewName"`
}

// DegooFileIDs represents a list of file IDs for move operations.
type DegooFileIDs struct {
	FileIDs []string `json:"FileIDs"`
}

// DegooGetBucketWriteAuth4Data is the data field for GetBucketWriteAuth4.
type DegooGetBucketWriteAuth4Data struct {
	GetBucketWriteAuth4 []struct {
		AuthData struct {
			PolicyBase64 string `json:"PolicyBase64"`
			Signature    string `json:"Signature"`
			BaseURL      string `json:"BaseURL"`
			KeyPrefix    string `json:"KeyPrefix"`
			AccessKey    struct {
				Key   string `json:"Key"`
				Value string `json:"Value"`
			} `json:"AccessKey"`
			ACL            string `json:"ACL"`
			AdditionalBody []struct {
				Key   string `json:"Key"`
				Value string `json:"Value"`
			} `json:"AdditionalBody"`
		} `json:"AuthData"`
		Error interface{} `json:"Error"`
	} `json:"getBucketWriteAuth4"`
}

// DegooSetUploadFile3Data is the data field for SetUploadFile3.
type DegooSetUploadFile3Data struct {
	SetUploadFile3 bool `json:"setUploadFile3"`
}
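Keeping Data as json.RawMessage lets every call site decode the shared GraphQL envelope once and then unmarshal the payload into its own typed struct. A small sketch of that two-stage decode (the sample JSON is invented for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

type graphqlResponse struct {
	Data   json.RawMessage `json:"data"`
	Errors []struct {
		Message string `json:"message"`
	} `json:"errors,omitempty"`
}

func main() {
	raw := []byte(`{"data":{"setUploadFile3":true}}`) // invented sample payload
	var envelope graphqlResponse
	if err := json.Unmarshal(raw, &envelope); err != nil {
		panic(err)
	}
	if len(envelope.Errors) > 0 {
		panic(envelope.Errors[0].Message)
	}
	// Second stage: decode the raw data into the operation-specific type.
	var result struct {
		SetUploadFile3 bool `json:"setUploadFile3"`
	}
	if err := json.Unmarshal(envelope.Data, &result); err != nil {
		panic(err)
	}
	fmt.Println(result.SetUploadFile3) // true
}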
198	drivers/degoo/upload.go	Normal file
@@ -0,0 +1,198 @@
package degoo

import (
	"bytes"
	"context"
	"crypto/sha1"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"strconv"
	"strings"

	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)

func (d *Degoo) getBucketWriteAuth4(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooGetBucketWriteAuth4Data, error) {
	const query = `query GetBucketWriteAuth4(
    $Token: String!
    $ParentID: String!
    $StorageUploadInfos: [StorageUploadInfo2]
  ) {
    getBucketWriteAuth4(
      Token: $Token
      ParentID: $ParentID
      StorageUploadInfos: $StorageUploadInfos
    ) {
      AuthData {
        PolicyBase64
        Signature
        BaseURL
        KeyPrefix
        AccessKey {
          Key
          Value
        }
        ACL
        AdditionalBody {
          Key
          Value
        }
      }
      Error
    }
  }`

	variables := map[string]interface{}{
		"Token":    d.AccessToken,
		"ParentID": parentID,
		"StorageUploadInfos": []map[string]string{{
			"FileName": file.GetName(),
			"Checksum": checksum,
			"Size":     strconv.FormatInt(file.GetSize(), 10),
		}}}

	data, err := d.apiCall(ctx, "GetBucketWriteAuth4", query, variables)
	if err != nil {
		return nil, err
	}

	var resp DegooGetBucketWriteAuth4Data
	err = json.Unmarshal(data, &resp)
	if err != nil {
		return nil, err
	}

	return &resp, nil
}

// checkSum calculates the SHA1-based checksum for Degoo upload API.
func (d *Degoo) checkSum(file io.Reader) (string, error) {
	seed := []byte{13, 7, 2, 2, 15, 40, 75, 117, 13, 10, 19, 16, 29, 23, 3, 36}
	hasher := sha1.New()
	hasher.Write(seed)

	if _, err := utils.CopyWithBuffer(hasher, file); err != nil {
		return "", err
	}

	cs := hasher.Sum(nil)

	csBytes := []byte{10, byte(len(cs))}
	csBytes = append(csBytes, cs...)
	csBytes = append(csBytes, 16, 0)

	return strings.ReplaceAll(base64.StdEncoding.EncodeToString(csBytes), "/", "_"), nil
}

func (d *Degoo) uploadS3(ctx context.Context, auths *DegooGetBucketWriteAuth4Data, tmpF model.File, file model.FileStreamer, checksum string) error {
	a := auths.GetBucketWriteAuth4[0].AuthData

	_, err := tmpF.Seek(0, io.SeekStart)
	if err != nil {
		return err
	}

	ext := utils.Ext(file.GetName())
	key := fmt.Sprintf("%s%s/%s.%s", a.KeyPrefix, ext, checksum, ext)

	var b bytes.Buffer
	w := multipart.NewWriter(&b)
	err = w.WriteField("key", key)
	if err != nil {
		return err
	}
	err = w.WriteField("acl", a.ACL)
	if err != nil {
		return err
	}
	err = w.WriteField("policy", a.PolicyBase64)
	if err != nil {
		return err
	}
	err = w.WriteField("signature", a.Signature)
	if err != nil {
		return err
	}
	err = w.WriteField(a.AccessKey.Key, a.AccessKey.Value)
	if err != nil {
		return err
	}
	for _, additional := range a.AdditionalBody {
		err = w.WriteField(additional.Key, additional.Value)
		if err != nil {
			return err
		}
	}
	err = w.WriteField("Content-Type", "")
	if err != nil {
		return err
	}

	_, err = w.CreateFormFile("file", key)
	if err != nil {
		return err
	}

	headSize := b.Len()
	err = w.Close()
	if err != nil {
		return err
	}
	head := bytes.NewReader(b.Bytes()[:headSize])
	tail := bytes.NewReader(b.Bytes()[headSize:])

	rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, tmpF, tail))
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, a.BaseURL, rateLimitedRd)
	if err != nil {
		return err
	}
	req.Header.Add("ngsw-bypass", "1")
	req.Header.Add("Content-Type", w.FormDataContentType())

	res, err := d.client.Do(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusNoContent {
		return fmt.Errorf("upload failed with status code %d", res.StatusCode)
	}
	return nil
}

var _ driver.Driver = (*Degoo)(nil)

func (d *Degoo) SetUploadFile3(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooSetUploadFile3Data, error) {
	const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) {
    setUploadFile3(Token: $Token, FileInfos: $FileInfos)
  }`

	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"FileInfos": []map[string]string{{
			"Checksum":     checksum,
			"CreationTime": strconv.FormatInt(file.CreateTime().UnixMilli(), 10),
			"Name":         file.GetName(),
			"ParentID":     parentID,
			"Size":         strconv.FormatInt(file.GetSize(), 10),
		}}}

	data, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
	if err != nil {
		return nil, err
	}

	var resp DegooSetUploadFile3Data
	err = json.Unmarshal(data, &resp)
	if err != nil {
		return nil, err
	}

	return &resp, nil
}
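For reference, the checksum in checkSum above is a seeded SHA-1 wrapped in a small two-byte prefix/suffix framing and then base64-encoded with "/" swapped for "_". A self-contained sketch of the same construction over an in-memory reader (io.Copy stands in for utils.CopyWithBuffer):

package main

import (
	"bytes"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"io"
	"strings"
)

// degooChecksum mirrors the construction in drivers/degoo/upload.go:
// SHA-1 over a fixed seed plus the file bytes, framed and base64-encoded.
func degooChecksum(file io.Reader) (string, error) {
	seed := []byte{13, 7, 2, 2, 15, 40, 75, 117, 13, 10, 19, 16, 29, 23, 3, 36}
	h := sha1.New()
	h.Write(seed)
	if _, err := io.Copy(h, file); err != nil {
		return "", err
	}
	cs := h.Sum(nil)
	framed := append([]byte{10, byte(len(cs))}, cs...)
	framed = append(framed, 16, 0)
	return strings.ReplaceAll(base64.StdEncoding.EncodeToString(framed), "/", "_"), nil
}

func main() {
	sum, err := degooChecksum(bytes.NewReader([]byte("hello")))
	if err != nil {
		panic(err)
	}
	fmt.Println(sum)
}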
462	drivers/degoo/util.go	Normal file
@@ -0,0 +1,462 @@
package degoo

import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
)

// Thanks to https://github.com/bernd-wechner/Degoo for API research.

const (
	// API endpoints
	loginURL       = "https://rest-api.degoo.com/login"
	accessTokenURL = "https://rest-api.degoo.com/access-token/v2"
	apiURL         = "https://production-appsync.degoo.com/graphql"

	// API configuration
	apiKey         = "da2-vs6twz5vnjdavpqndtbzg3prra"
	folderChecksum = "CgAQAg"

	// Token management
	tokenRefreshThreshold = 5 * time.Minute

	// Rate limiting
	minRequestInterval = 1 * time.Second

	// Error messages
	errRateLimited  = "rate limited (429), please try again later"
	errUnauthorized = "unauthorized access"
)

var (
	// Global rate limiting - protects against concurrent API calls
	lastRequestTime time.Time
	requestMutex    sync.Mutex
)

// JWT payload structure for token expiration checking
type JWTPayload struct {
	UserID string `json:"userID"`
	Exp    int64  `json:"exp"`
	Iat    int64  `json:"iat"`
}

// Rate limiting helper functions

// applyRateLimit ensures minimum interval between API requests
func applyRateLimit() {
	requestMutex.Lock()
	defer requestMutex.Unlock()

	if !lastRequestTime.IsZero() {
		if elapsed := time.Since(lastRequestTime); elapsed < minRequestInterval {
			time.Sleep(minRequestInterval - elapsed)
		}
	}
	lastRequestTime = time.Now()
}

// HTTP request helper functions

// createJSONRequest creates a new HTTP request with JSON body
func createJSONRequest(ctx context.Context, method, url string, body interface{}) (*http.Request, error) {
	jsonBody, err := json.Marshal(body)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal request body: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, method, url, bytes.NewBuffer(jsonBody))
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}

	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", base.UserAgent)
	return req, nil
}

// checkHTTPResponse checks for common HTTP error conditions
func checkHTTPResponse(resp *http.Response, operation string) error {
	if resp.StatusCode == http.StatusTooManyRequests {
		return fmt.Errorf("%s %s", operation, errRateLimited)
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s failed: %s", operation, resp.Status)
	}
	return nil
}

// isTokenExpired checks if the JWT token is expired or will expire soon
func (d *Degoo) isTokenExpired() bool {
	if d.AccessToken == "" {
		return true
	}

	payload, err := extractJWTPayload(d.AccessToken)
	if err != nil {
		return true // Invalid token format
	}

	// Check if token expires within the threshold
	expireTime := time.Unix(payload.Exp, 0)
	return time.Now().Add(tokenRefreshThreshold).After(expireTime)
}

// extractJWTPayload extracts and parses JWT payload
func extractJWTPayload(token string) (*JWTPayload, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("invalid JWT format")
	}

	// Decode the payload (second part)
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, fmt.Errorf("failed to decode JWT payload: %w", err)
	}

	var jwtPayload JWTPayload
	if err := json.Unmarshal(payload, &jwtPayload); err != nil {
		return nil, fmt.Errorf("failed to parse JWT payload: %w", err)
	}

	return &jwtPayload, nil
}

// refreshToken attempts to refresh the access token using the refresh token
func (d *Degoo) refreshToken(ctx context.Context) error {
	if d.RefreshToken == "" {
		return fmt.Errorf("no refresh token available")
	}

	// Create request
	tokenReq := DegooAccessTokenRequest{RefreshToken: d.RefreshToken}
	req, err := createJSONRequest(ctx, "POST", accessTokenURL, tokenReq)
	if err != nil {
		return fmt.Errorf("failed to create refresh token request: %w", err)
	}

	// Execute request
	resp, err := d.client.Do(req)
	if err != nil {
		return fmt.Errorf("refresh token request failed: %w", err)
	}
	defer resp.Body.Close()

	// Check response
	if err := checkHTTPResponse(resp, "refresh token"); err != nil {
		return err
	}

	var accessTokenResp DegooAccessTokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&accessTokenResp); err != nil {
		return fmt.Errorf("failed to parse access token response: %w", err)
	}

	if accessTokenResp.AccessToken == "" {
		return fmt.Errorf("empty access token received")
	}

	d.AccessToken = accessTokenResp.AccessToken
	// Save the updated token to storage
	op.MustSaveDriverStorage(d)

	return nil
}

// ensureValidToken ensures we have a valid, non-expired token
func (d *Degoo) ensureValidToken(ctx context.Context) error {
	// Check if token is expired or will expire soon
	if d.isTokenExpired() {
		// Try to refresh token first if we have a refresh token
		if d.RefreshToken != "" {
			if refreshErr := d.refreshToken(ctx); refreshErr == nil {
				return nil // Successfully refreshed
			} else {
				// If refresh failed, fall back to full login
				fmt.Printf("Token refresh failed, falling back to full login: %v\n", refreshErr)
			}
		}

		// Perform full login
		if d.Username != "" && d.Password != "" {
			return d.login(ctx)
		}
	}

	return nil
}

// login performs the login process and retrieves the access token.
func (d *Degoo) login(ctx context.Context) error {
	if d.Username == "" || d.Password == "" {
		return fmt.Errorf("username or password not provided")
	}

	creds := DegooLoginRequest{
		GenerateToken: true,
		Username:      d.Username,
		Password:      d.Password,
	}

	jsonCreds, err := json.Marshal(creds)
	if err != nil {
		return fmt.Errorf("failed to serialize login credentials: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, "POST", loginURL, bytes.NewBuffer(jsonCreds))
	if err != nil {
		return fmt.Errorf("failed to create login request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", base.UserAgent)
	req.Header.Set("Origin", "https://app.degoo.com")

	resp, err := d.client.Do(req)
	if err != nil {
		return fmt.Errorf("login request failed: %w", err)
	}
	defer resp.Body.Close()

	// Handle rate limiting (429 Too Many Requests)
	if resp.StatusCode == http.StatusTooManyRequests {
		return fmt.Errorf("login rate limited (429), please try again later")
	}

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("login failed: %s", resp.Status)
	}

	var loginResp DegooLoginResponse
	if err := json.NewDecoder(resp.Body).Decode(&loginResp); err != nil {
		return fmt.Errorf("failed to parse login response: %w", err)
	}

	if loginResp.RefreshToken != "" {
		tokenReq := DegooAccessTokenRequest{RefreshToken: loginResp.RefreshToken}
		jsonTokenReq, err := json.Marshal(tokenReq)
		if err != nil {
			return fmt.Errorf("failed to serialize access token request: %w", err)
		}

		tokenReqHTTP, err := http.NewRequestWithContext(ctx, "POST", accessTokenURL, bytes.NewBuffer(jsonTokenReq))
		if err != nil {
			return fmt.Errorf("failed to create access token request: %w", err)
		}

		tokenReqHTTP.Header.Set("User-Agent", base.UserAgent)

		tokenResp, err := d.client.Do(tokenReqHTTP)
		if err != nil {
			return fmt.Errorf("failed to get access token: %w", err)
		}
		defer tokenResp.Body.Close()

		var accessTokenResp DegooAccessTokenResponse
		if err := json.NewDecoder(tokenResp.Body).Decode(&accessTokenResp); err != nil {
			return fmt.Errorf("failed to parse access token response: %w", err)
		}
		d.AccessToken = accessTokenResp.AccessToken
		d.RefreshToken = loginResp.RefreshToken // Save refresh token
	} else if loginResp.Token != "" {
		d.AccessToken = loginResp.Token
		d.RefreshToken = "" // Direct token, no refresh token available
	} else {
		return fmt.Errorf("login failed, no valid token returned")
	}

	// Save the updated tokens to storage
	op.MustSaveDriverStorage(d)

	return nil
}

// apiCall performs a Degoo GraphQL API request.
func (d *Degoo) apiCall(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
	// Apply rate limiting
	applyRateLimit()

	// Ensure we have a valid token before making the API call
	if err := d.ensureValidToken(ctx); err != nil {
		return nil, fmt.Errorf("failed to ensure valid token: %w", err)
	}

	// Update the Token in variables if it exists (after potential refresh)
	d.updateTokenInVariables(variables)

	return d.executeGraphQLRequest(ctx, operationName, query, variables)
}

// updateTokenInVariables updates the Token field in GraphQL variables
func (d *Degoo) updateTokenInVariables(variables map[string]interface{}) {
	if variables != nil {
		if _, hasToken := variables["Token"]; hasToken {
			variables["Token"] = d.AccessToken
		}
	}
}

// executeGraphQLRequest executes a GraphQL request with retry logic
func (d *Degoo) executeGraphQLRequest(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
	reqBody := map[string]interface{}{
		"operationName": operationName,
		"query":         query,
		"variables":     variables,
	}

	// Create and configure request
	req, err := createJSONRequest(ctx, "POST", apiURL, reqBody)
	if err != nil {
		return nil, err
	}

	// Set Degoo-specific headers
	req.Header.Set("x-api-key", apiKey)
	if d.AccessToken != "" {
		req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", d.AccessToken))
	}

	// Execute request
	resp, err := d.client.Do(req)
	if err != nil {
		return nil, fmt.Errorf("GraphQL API request failed: %w", err)
	}
	defer resp.Body.Close()

	// Check for HTTP errors
	if err := checkHTTPResponse(resp, "GraphQL API"); err != nil {
		return nil, err
	}

	// Parse GraphQL response
	var degooResp DegooGraphqlResponse
	if err := json.NewDecoder(resp.Body).Decode(&degooResp); err != nil {
		return nil, fmt.Errorf("failed to decode GraphQL response: %w", err)
	}

	// Handle GraphQL errors
	if len(degooResp.Errors) > 0 {
		return d.handleGraphQLError(ctx, degooResp.Errors[0], operationName, query, variables)
	}

	return degooResp.Data, nil
}

// handleGraphQLError handles GraphQL-level errors with retry logic
func (d *Degoo) handleGraphQLError(ctx context.Context, gqlError DegooErrors, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
	if gqlError.ErrorType == "Unauthorized" {
		// Re-login and retry
		if err := d.login(ctx); err != nil {
			return nil, fmt.Errorf("%s, login failed: %w", errUnauthorized, err)
		}

		// Update token in variables and retry
		d.updateTokenInVariables(variables)
		return d.apiCall(ctx, operationName, query, variables)
	}

	return nil, fmt.Errorf("GraphQL API error: %s", gqlError.Message)
}

// humanReadableTimes converts Degoo timestamps to Go time.Time.
func humanReadableTimes(creation, modification, upload string) (cTime, mTime, uTime time.Time) {
	cTime, _ = time.Parse(time.RFC3339, creation)
	if modification != "" {
		modMillis, _ := strconv.ParseInt(modification, 10, 64)
		mTime = time.Unix(0, modMillis*int64(time.Millisecond))
	}
	if upload != "" {
		upMillis, _ := strconv.ParseInt(upload, 10, 64)
		uTime = time.Unix(0, upMillis*int64(time.Millisecond))
	}
	return cTime, mTime, uTime
}

// getDevices fetches and caches top-level devices and folders.
func (d *Degoo) getDevices(ctx context.Context) error {
	const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ParentID } NextToken } }`
	variables := map[string]interface{}{
		"Token":    d.AccessToken,
		"ParentID": "0",
		"Limit":    10,
		"Order":    3,
	}
	data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
	if err != nil {
		return err
	}
	var resp DegooGetChildren5Data
	if err := json.Unmarshal(data, &resp); err != nil {
		return fmt.Errorf("failed to parse device list: %w", err)
	}
	if d.RootFolderID == "0" {
		if len(resp.GetFileChildren5.Items) > 0 {
			d.RootFolderID = resp.GetFileChildren5.Items[0].ParentID
		}
		op.MustSaveDriverStorage(d)
	}
	return nil
}

// getAllFileChildren5 fetches all children of a directory with pagination.
func (d *Degoo) getAllFileChildren5(ctx context.Context, parentID string) ([]DegooFileItem, error) {
	const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime FilePath IsInRecycleBin DeviceID MetadataID } NextToken } }`
	var allItems []DegooFileItem
	nextToken := ""
	for {
		variables := map[string]interface{}{
			"Token":    d.AccessToken,
			"ParentID": parentID,
			"Limit":    1000,
			"Order":    3,
		}
		if nextToken != "" {
			variables["NextToken"] = nextToken
		}
		data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
		if err != nil {
			return nil, err
		}
		var resp DegooGetChildren5Data
		if err := json.Unmarshal(data, &resp); err != nil {
			return nil, err
		}
		allItems = append(allItems, resp.GetFileChildren5.Items...)
		if resp.GetFileChildren5.NextToken == "" {
			break
		}
		nextToken = resp.GetFileChildren5.NextToken
	}
	return allItems, nil
}

// getOverlay4 fetches metadata for a single item by ID.
func (d *Degoo) getOverlay4(ctx context.Context, id string) (DegooFileItem, error) {
	const query = `query GetOverlay4($Token: String!, $ID: IDType!) { getOverlay4(Token: $Token, ID: $ID) { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime URL FilePath IsInRecycleBin DeviceID MetadataID } }`
	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"ID": map[string]string{
			"FileID": id,
		},
	}
	data, err := d.apiCall(ctx, "GetOverlay4", query, variables)
	if err != nil {
		return DegooFileItem{}, err
	}
	var resp DegooGetOverlay4Data
	if err := json.Unmarshal(data, &resp); err != nil {
		return DegooFileItem{}, fmt.Errorf("failed to parse item metadata: %w", err)
	}
	return resp.GetOverlay4, nil
}
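The expiry check in isTokenExpired decodes the JWT payload without verifying the signature, which is sufficient for deciding when to refresh. A standalone sketch of the same check (the token here is assembled locally with dummy header and signature, purely for illustration):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// expiresSoon decodes the JWT payload (second dot-separated segment) and
// reports whether the token expires within the given threshold.
func expiresSoon(token string, threshold time.Duration) (bool, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return false, fmt.Errorf("invalid JWT format")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false, err
	}
	var claims struct {
		Exp int64 `json:"exp"`
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return false, err
	}
	return time.Now().Add(threshold).After(time.Unix(claims.Exp, 0)), nil
}

func main() {
	// Dummy token: only the payload segment matters for this check.
	payload, _ := json.Marshal(map[string]int64{"exp": time.Now().Add(time.Minute).Unix()})
	token := "x." + base64.RawURLEncoding.EncodeToString(payload) + ".y"
	soon, err := expiresSoon(token, 5*time.Minute)
	fmt.Println(soon, err) // true <nil>
}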
@@ -236,7 +236,7 @@ func (d *Doubao) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
 
 	// Choose the upload method based on file size
 	if file.GetSize() <= 1*utils.MB { // under 1 MB, use the plain upload mode
-		return d.Upload(&uploadConfig, dstDir, file, up, dataType)
+		return d.Upload(ctx, &uploadConfig, dstDir, file, up, dataType)
 	}
 	// Large files use multipart upload
 	return d.UploadByMultipart(ctx, &uploadConfig, file.GetSize(), dstDir, file, up, dataType)
@@ -24,6 +24,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
@@ -447,39 +448,65 @@ func (d *Doubao) uploadNode(uploadConfig *UploadConfig, dir model.Obj, file mode
 }
 
 // Upload: plain (non-multipart) upload implementation
-func (d *Doubao) Upload(config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
-	data, err := io.ReadAll(file)
+func (d *Doubao) Upload(ctx context.Context, config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
+	ss, err := stream.NewStreamSectionReader(file, int(file.GetSize()), &up)
+	if err != nil {
+		return nil, err
+	}
+
+	reader, err := ss.GetSectionReader(0, file.GetSize())
 	if err != nil {
 		return nil, err
 	}
+
 	// Compute CRC32
 	crc32Hash := crc32.NewIEEE()
-	crc32Hash.Write(data)
+	w, err := utils.CopyWithBuffer(crc32Hash, reader)
+	if w != file.GetSize() {
+		return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", file.GetSize(), w, err)
+	}
 	crc32Value := hex.EncodeToString(crc32Hash.Sum(nil))
+
 	// Build the request path
 	uploadNode := config.InnerUploadAddress.UploadNodes[0]
 	storeInfo := uploadNode.StoreInfos[0]
 	uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
-	uploadResp := UploadResp{}
-	if _, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
-		req.SetHeaders(map[string]string{
-			"Content-Type":        "application/octet-stream",
-			"Content-Crc32":       crc32Value,
-			"Content-Length":      fmt.Sprintf("%d", len(data)),
-			"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
-		})
-		req.SetBody(data)
-	}, &uploadResp); err != nil {
-		return nil, err
-	}
-	if uploadResp.Code != 2000 {
-		return nil, fmt.Errorf("upload failed: %s", uploadResp.Message)
-	}
+	rateLimitedRd := driver.NewLimitedUploadStream(ctx, reader)
+	err = d._retryOperation("Upload", func() error {
+		reader.Seek(0, io.SeekStart)
+		req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadUrl, rateLimitedRd)
+		if err != nil {
+			return err
+		}
+		req.Header = map[string][]string{
+			"Referer":             {BaseURL + "/"},
+			"Origin":              {BaseURL},
+			"User-Agent":          {UserAgent},
+			"X-Storage-U":         {d.UserId},
+			"Authorization":       {storeInfo.Auth},
+			"Content-Type":        {"application/octet-stream"},
+			"Content-Crc32":       {crc32Value},
+			"Content-Length":      {fmt.Sprintf("%d", file.GetSize())},
+			"Content-Disposition": {fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI))},
+		}
+		res, err := base.HttpClient.Do(req)
+		if err != nil {
+			return err
+		}
+		defer res.Body.Close()
+		bytes, _ := io.ReadAll(res.Body)
+		resp := UploadResp{}
+		utils.Json.Unmarshal(bytes, &resp)
+		if resp.Code != 2000 {
+			return fmt.Errorf("upload part failed: %s", resp.Message)
+		} else if resp.Data.Crc32 != crc32Value {
+			return fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, resp.Data.Crc32)
+		}
+		return nil
+	})
+	ss.FreeSectionReader(reader)
+	if err != nil {
+		return nil, err
+	}
 
 	uploadNodeResp, err := d.uploadNode(config, dstDir, file, dataType)
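The refactored Upload streams the payload through the CRC32 hasher instead of materializing it with io.ReadAll, so memory stays bounded by the section buffer while the Content-Crc32 header is still computed up front. A minimal sketch of that streaming hash:

package main

import (
	"encoding/hex"
	"fmt"
	"hash/crc32"
	"io"
	"strings"
)

func main() {
	// Streaming CRC32 as in the refactored Upload: hash while copying
	// rather than buffering the whole chunk in memory first.
	h := crc32.NewIEEE()
	n, err := io.Copy(h, strings.NewReader("chunk payload"))
	if err != nil {
		panic(err)
	}
	sum := hex.EncodeToString(h.Sum(nil))
	fmt.Printf("hashed %d bytes, Content-Crc32=%s\n", n, sum)
	// The server echoes its own CRC32; the uploader rejects a mismatch.
}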
@ -516,68 +543,107 @@ func (d *Doubao) UploadByMultipart(ctx context.Context, config *UploadConfig, fi
|
|||||||
if config.InnerUploadAddress.AdvanceOption.SliceSize > 0 {
|
if config.InnerUploadAddress.AdvanceOption.SliceSize > 0 {
|
||||||
chunkSize = int64(config.InnerUploadAddress.AdvanceOption.SliceSize)
|
chunkSize = int64(config.InnerUploadAddress.AdvanceOption.SliceSize)
|
||||||
}
|
}
|
||||||
|
ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
totalParts := (fileSize + chunkSize - 1) / chunkSize
|
totalParts := (fileSize + chunkSize - 1) / chunkSize
|
||||||
// 创建分片信息组
|
-	// build the list of part infos
 	parts := make([]UploadPart, totalParts)
-	// cache the file locally
-	tempFile, err := file.CacheFullInTempFile()
-	if err != nil {
-		return nil, fmt.Errorf("failed to cache file: %w", err)
-	}
 	up(10.0) // update progress
 	// set up the parallel upload
-	threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
-		retry.Attempts(1),
+	thread := min(int(totalParts), d.uploadThread)
+	threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
+		retry.Attempts(MaxRetryAttempts),
 		retry.Delay(time.Second),
-		retry.DelayType(retry.BackOffDelay))
+		retry.DelayType(retry.BackOffDelay),
+		retry.MaxJitter(200*time.Millisecond),
+	)

 	var partsMutex sync.Mutex
 	// upload all parts in parallel
-	for partIndex := int64(0); partIndex < totalParts; partIndex++ {
+	hash := crc32.NewIEEE()
+	for partIndex := range totalParts {
 		if utils.IsCanceled(uploadCtx) {
 			break
 		}
-		partIndex := partIndex
 		partNumber := partIndex + 1 // part numbers start at 1

-		threadG.Go(func(ctx context.Context) error {
 		// compute this part's size and offset
 		offset := partIndex * chunkSize
 		size := chunkSize
 		if partIndex == totalParts-1 {
 			size = fileSize - offset
 		}
-		limitedReader := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, size))
-		// read the part into memory
-		data, err := io.ReadAll(limitedReader)
-		if err != nil {
-			return fmt.Errorf("failed to read part %d: %w", partNumber, err)
-		}
-		// compute the CRC32
-		crc32Value := calculateCRC32(data)
-		// upload the part via _retryOperation
-		var uploadPart UploadPart
-		if err = d._retryOperation(fmt.Sprintf("Upload part %d", partNumber), func() error {
-			var err error
-			uploadPart, err = d.uploadPart(config, uploadUrl, uploadID, partNumber, data, crc32Value)
-			return err
-		}); err != nil {
-			return fmt.Errorf("part %d upload failed: %w", partNumber, err)
-		}
+		var reader *stream.SectionReader
+		var rateLimitedRd io.Reader
+		crc32Value := ""
+		threadG.GoWithLifecycle(errgroup.Lifecycle{
+			Before: func(ctx context.Context) error {
+				if reader == nil {
+					var err error
+					reader, err = ss.GetSectionReader(offset, size)
+					if err != nil {
+						return err
+					}
+					hash.Reset()
+					w, err := utils.CopyWithBuffer(hash, reader)
+					if w != size {
+						return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", size, w, err)
+					}
+					crc32Value = hex.EncodeToString(hash.Sum(nil))
+					rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
+				}
+				return nil
+			},
+			Do: func(ctx context.Context) error {
+				reader.Seek(0, io.SeekStart)
+				req, err := http.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("%s?uploadid=%s&part_number=%d&phase=transfer", uploadUrl, uploadID, partNumber), rateLimitedRd)
+				if err != nil {
+					return err
+				}
+				req.Header = map[string][]string{
+					"Referer":             {BaseURL + "/"},
+					"Origin":              {BaseURL},
+					"User-Agent":          {UserAgent},
+					"X-Storage-U":         {d.UserId},
+					"Authorization":       {storeInfo.Auth},
+					"Content-Type":        {"application/octet-stream"},
+					"Content-Crc32":       {crc32Value},
+					"Content-Length":      {fmt.Sprintf("%d", size)},
+					"Content-Disposition": {fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI))},
+				}
+				res, err := base.HttpClient.Do(req)
+				if err != nil {
+					return err
+				}
+				defer res.Body.Close()
+				bytes, _ := io.ReadAll(res.Body)
+				uploadResp := UploadResp{}
+				utils.Json.Unmarshal(bytes, &uploadResp)
+				if uploadResp.Code != 2000 {
+					return fmt.Errorf("upload part failed: %s", uploadResp.Message)
+				} else if uploadResp.Data.Crc32 != crc32Value {
+					return fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
+				}
 				// record the successfully uploaded part
 				partsMutex.Lock()
 				parts[partIndex] = UploadPart{
 					PartNumber: strconv.FormatInt(partNumber, 10),
-					Etag:       uploadPart.Etag,
+					Etag:       uploadResp.Data.Etag,
 					Crc32:      crc32Value,
 				}
 				partsMutex.Unlock()
 				// update progress
 				progress := 10.0 + 90.0*float64(threadG.Success()+1)/float64(totalParts)
 				up(math.Min(progress, 95.0))

 				return nil
+			},
+			After: func(err error) {
+				ss.FreeSectionReader(reader)
+			},
 		})
 	}
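A note on the rewritten loop above: each part is now read once through a pooled stream.SectionReader, CRC32-hashed up front in the Before hook, and rewound with Seek before every retried Do attempt, so retries resend identical bytes without re-buffering the whole file. A stdlib-only sketch of that idea, with a hypothetical endpoint URL and plain net/http standing in for OpenList's errgroup.Lifecycle and rate-limited reader:

package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
	"io"
	"net/http"
)

// uploadSection: hash the section once up front, then rewind the same reader
// before every attempt so retries resend identical bytes (no re-buffering).
func uploadSection(url string, section io.ReadSeeker, size int64, attempts int) error {
	h := crc32.NewIEEE()
	if n, err := io.Copy(h, section); err != nil || n != size {
		return fmt.Errorf("short read: want %d got %d: %v", size, n, err)
	}
	crc := fmt.Sprintf("%08x", h.Sum32())
	var lastErr error
	for i := 0; i < attempts; i++ {
		if _, err := section.Seek(0, io.SeekStart); err != nil {
			return err
		}
		req, err := http.NewRequest(http.MethodPost, url, io.LimitReader(section, size))
		if err != nil {
			return err
		}
		req.Header.Set("Content-Crc32", crc)
		req.ContentLength = size
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			lastErr = err
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		lastErr = fmt.Errorf("unexpected status %s", resp.Status)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	payload := bytes.NewReader([]byte("example part payload"))
	_ = uploadSection("http://127.0.0.1:8080/upload", payload, int64(payload.Len()), 3)
}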
@@ -680,42 +746,6 @@ func (d *Doubao) initMultipartUpload(config *UploadConfig, uploadUrl string, sto
 	return uploadResp.Data.UploadId, nil
 }

-// part upload implementation
-func (d *Doubao) uploadPart(config *UploadConfig, uploadUrl, uploadID string, partNumber int64, data []byte, crc32Value string) (resp UploadPart, err error) {
-	uploadResp := UploadResp{}
-	storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
-
-	_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
-		req.SetHeaders(map[string]string{
-			"Content-Type":        "application/octet-stream",
-			"Content-Crc32":       crc32Value,
-			"Content-Length":      fmt.Sprintf("%d", len(data)),
-			"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
-		})
-
-		req.SetQueryParams(map[string]string{
-			"uploadid":    uploadID,
-			"part_number": strconv.FormatInt(partNumber, 10),
-			"phase":       "transfer",
-		})
-
-		req.SetBody(data)
-		req.SetContentLength(true)
-	}, &uploadResp)
-
-	if err != nil {
-		return resp, err
-	}
-
-	if uploadResp.Code != 2000 {
-		return resp, fmt.Errorf("upload part failed: %s", uploadResp.Message)
-	} else if uploadResp.Data.Crc32 != crc32Value {
-		return resp, fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
-	}
-
-	return uploadResp.Data, nil
-}
-
 // complete the multipart upload
 func (d *Doubao) completeMultipartUpload(config *UploadConfig, uploadUrl, uploadID string, parts []UploadPart) error {
 	uploadResp := UploadResp{}
@@ -784,13 +814,6 @@ func (d *Doubao) commitMultipartUpload(uploadConfig *UploadConfig) error {
 	return nil
 }

-// compute the CRC32 of a byte slice
-func calculateCRC32(data []byte) string {
-	hash := crc32.NewIEEE()
-	hash.Write(data)
-	return hex.EncodeToString(hash.Sum(nil))
-}
-
 // _retryOperation retries the given operation
 func (d *Doubao) _retryOperation(operation string, fn func() error) error {
 	return retry.Do(
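With the part body now hashed incrementally while it is copied into the section buffer, the two helpers removed above (uploadPart, calculateCRC32) had no remaining callers. Incremental and one-shot CRC32 agree over the same bytes, which is what makes the swap safe; a quick check:

package main

import (
	"fmt"
	"hash/crc32"
)

func main() {
	data := []byte("hello world")
	// one-shot, as the removed calculateCRC32 did
	oneShot := crc32.ChecksumIEEE(data)
	// incremental, as the new upload path does while copying the section
	h := crc32.NewIEEE()
	h.Write(data[:5])
	h.Write(data[5:])
	fmt.Println(oneShot == h.Sum32()) // true
}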
@@ -192,12 +192,11 @@ func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
 	url := d.contentBase + "/2/files/upload_session/append_v2"
 	reader := driver.NewLimitedUploadStream(ctx, io.LimitReader(stream, PartSize))
-	req, err := http.NewRequest(http.MethodPost, url, reader)
+	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, reader)
 	if err != nil {
 		log.Errorf("failed to update file when append to upload session, err: %+v", err)
 		return err
 	}
-	req = req.WithContext(ctx)
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
@@ -13,7 +13,7 @@ type Addition struct {
 	ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
 	AccessToken  string
 	RefreshToken string `json:"refresh_token" required:"true"`
-	RootNamespaceId string
+	RootNamespaceId string `json:"RootNamespaceId" required:"false"`
 }

 var config = driver.Config{
@@ -169,13 +169,19 @@ func (d *Dropbox) getFiles(ctx context.Context, path string) ([]File, error) {
 func (d *Dropbox) finishUploadSession(ctx context.Context, toPath string, offset int64, sessionId string) error {
 	url := d.contentBase + "/2/files/upload_session/finish"
-	req, err := http.NewRequest(http.MethodPost, url, nil)
+	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
 	if err != nil {
 		return err
 	}
-	req = req.WithContext(ctx)
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
+	if d.RootNamespaceId != "" {
+		apiPathRootJson, err := d.buildPathRootHeader()
+		if err != nil {
+			return err
+		}
+		req.Header.Set("Dropbox-API-Path-Root", apiPathRootJson)
+	}

 	uploadFinishArgs := UploadFinishArgs{
 		Commit: struct {
@@ -214,13 +220,19 @@ func (d *Dropbox) finishUploadSession(ctx context.Context, toPath string, offset
 func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
 	url := d.contentBase + "/2/files/upload_session/start"
-	req, err := http.NewRequest(http.MethodPost, url, nil)
+	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
 	if err != nil {
 		return "", err
 	}
-	req = req.WithContext(ctx)
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
+	if d.RootNamespaceId != "" {
+		apiPathRootJson, err := d.buildPathRootHeader()
+		if err != nil {
+			return "", err
+		}
+		req.Header.Set("Dropbox-API-Path-Root", apiPathRootJson)
+	}
 	req.Header.Set("Dropbox-API-Arg", "{\"close\":false}")

 	res, err := base.HttpClient.Do(req)
@@ -235,3 +247,11 @@ func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
 	_ = res.Body.Close()
 	return sessionId, nil
 }
+
+func (d *Dropbox) buildPathRootHeader() (string, error) {
+	return utils.Json.MarshalToString(map[string]interface{}{
+		".tag": "root",
+		"root": d.RootNamespaceId,
+	})
+}
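For reference, the Dropbox-API-Path-Root header produced by buildPathRootHeader is a small JSON object that rescopes path-based calls from the member's home folder to a namespace root. A minimal sketch of the value it marshals, assuming a made-up namespace id of "2":

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Mirrors buildPathRootHeader: ".tag" selects root mode, "root" carries
	// the namespace id ("2" is a made-up example value).
	v, _ := json.Marshal(map[string]interface{}{".tag": "root", "root": "2"})
	fmt.Println(string(v)) // {".tag":"root","root":"2"}
}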
@@ -31,13 +31,13 @@ func (c *customTokenSource) Token() (*oauth2.Token, error) {
 	v.Set("client_id", c.config.ClientID)
 	v.Set("client_secret", c.config.ClientSecret)

-	req, err := http.NewRequest("POST", c.config.TokenURL, strings.NewReader(v.Encode()))
+	req, err := http.NewRequestWithContext(c.ctx, http.MethodPost, c.config.TokenURL, strings.NewReader(v.Encode()))
 	if err != nil {
 		return nil, err
 	}
 	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

-	resp, err := http.DefaultClient.Do(req.WithContext(c.ctx))
+	resp, err := http.DefaultClient.Do(req)
 	if err != nil {
 		return nil, err
 	}
@@ -2,12 +2,15 @@ package ftp

 import (
 	"context"
+	"errors"
+	"io"
 	stdpath "path"

 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/stream"
+	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/jlaffaye/ftp"
 )
|
|||||||
model.Storage
|
model.Storage
|
||||||
Addition
|
Addition
|
||||||
conn *ftp.ServerConn
|
conn *ftp.ServerConn
|
||||||
|
|
||||||
|
ctx context.Context
|
||||||
|
cancel context.CancelFunc
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d *FTP) Config() driver.Config {
|
func (d *FTP) Config() driver.Config {
|
||||||
@ -27,12 +33,16 @@ func (d *FTP) GetAddition() driver.Additional {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (d *FTP) Init(ctx context.Context) error {
|
func (d *FTP) Init(ctx context.Context) error {
|
||||||
return d._login()
|
d.ctx, d.cancel = context.WithCancel(context.Background())
|
||||||
|
var err error
|
||||||
|
d.conn, err = d._login(ctx)
|
||||||
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d *FTP) Drop(ctx context.Context) error {
|
func (d *FTP) Drop(ctx context.Context) error {
|
||||||
if d.conn != nil {
|
if d.conn != nil {
|
||||||
_ = d.conn.Logout()
|
_ = d.conn.Quit()
|
||||||
|
d.cancel()
|
||||||
}
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
@ -62,25 +72,51 @@ func (d *FTP) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]m
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (d *FTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
func (d *FTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||||
if err := d.login(); err != nil {
|
conn, err := d._login(ctx)
|
||||||
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
remoteFile := NewFileReader(d.conn, encode(file.GetPath(), d.Encoding), file.GetSize())
|
path := encode(file.GetPath(), d.Encoding)
|
||||||
if remoteFile != nil && !d.Config().OnlyLinkMFile {
|
size := file.GetSize()
|
||||||
return &model.Link{
|
resultRangeReader := func(context context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
|
||||||
RangeReader: &model.FileRangeReader{
|
length := httpRange.Length
|
||||||
RangeReaderIF: stream.RateLimitRangeReaderFunc(stream.GetRangeReaderFromMFile(file.GetSize(), remoteFile)),
|
if length < 0 || httpRange.Start+length > size {
|
||||||
},
|
length = size - httpRange.Start
|
||||||
SyncClosers: utils.NewSyncClosers(remoteFile),
|
}
|
||||||
|
var c *ftp.ServerConn
|
||||||
|
if ctx == context {
|
||||||
|
c = conn
|
||||||
|
} else {
|
||||||
|
var err error
|
||||||
|
c, err = d._login(context)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
resp, err := c.RetrFrom(path, uint64(httpRange.Start))
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
var close utils.CloseFunc
|
||||||
|
if context == ctx {
|
||||||
|
close = resp.Close
|
||||||
|
} else {
|
||||||
|
close = func() error {
|
||||||
|
return errors.Join(resp.Close(), c.Quit())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return utils.ReadCloser{
|
||||||
|
Reader: io.LimitReader(resp, length),
|
||||||
|
Closer: close,
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
return &model.Link{
|
return &model.Link{
|
||||||
MFile: &stream.RateLimitFile{
|
RangeReader: &model.FileRangeReader{
|
||||||
File: remoteFile,
|
RangeReaderIF: stream.RateLimitRangeReaderFunc(resultRangeReader),
|
||||||
Limiter: stream.ServerDownloadLimit,
|
|
||||||
Ctx: ctx,
|
|
||||||
},
|
},
|
||||||
|
SyncClosers: utils.NewSyncClosers(utils.CloseFunc(conn.Quit)),
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
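A note on the Link rewrite above: an HTTP range maps onto FTP by restarting the transfer at the requested offset (RetrFrom issues REST before RETR) and truncating the stream with io.LimitReader. A self-contained sketch of the clamping logic, with a hypothetical openAt func standing in for _login plus RetrFrom:

package main

import (
	"fmt"
	"io"
	"strings"
)

// rangeReader clamps (start, length) against size the same way the driver
// does, then reads from a stream already positioned at start. openAt is a
// stand-in for dialing FTP and calling RetrFrom(path, uint64(start)).
func rangeReader(size, start, length int64, openAt func(int64) (io.ReadCloser, error)) (io.ReadCloser, error) {
	if length < 0 || start+length > size {
		length = size - start
	}
	rc, err := openAt(start)
	if err != nil {
		return nil, err
	}
	return struct {
		io.Reader
		io.Closer
	}{io.LimitReader(rc, length), rc}, nil
}

func main() {
	content := "0123456789"
	openAt := func(off int64) (io.ReadCloser, error) {
		return io.NopCloser(strings.NewReader(content[off:])), nil
	}
	rc, _ := rangeReader(int64(len(content)), 3, -1, openAt)
	b, _ := io.ReadAll(rc)
	fmt.Println(string(b)) // 3456789
}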
@@ -33,7 +33,7 @@ type Addition struct {
 var config = driver.Config{
 	Name:      "FTP",
 	LocalSort: true,
-	OnlyLinkMFile: true,
+	OnlyLinkMFile: false,
 	DefaultRoot: "/",
 	NoLinkURL:   true,
 }
@@ -1,11 +1,8 @@
 package ftp

 import (
+	"context"
 	"fmt"
-	"io"
-	"os"
-	"sync"
-	"sync/atomic"
 	"time"

 	"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"

@@ -15,112 +12,32 @@ import (
 // do others that not defined in Driver interface

 func (d *FTP) login() error {
-	err, _, _ := singleflight.ErrorGroup.Do(fmt.Sprintf("FTP.login:%p", d), func() (error, error) {
-		return d._login(), nil
+	_, err, _ := singleflight.AnyGroup.Do(fmt.Sprintf("FTP.login:%p", d), func() (any, error) {
+		var err error
+		if d.conn != nil {
+			err = d.conn.NoOp()
+			if err != nil {
+				d.conn.Quit()
+				d.conn = nil
+			}
+		}
+		if d.conn == nil {
+			d.conn, err = d._login(d.ctx)
+		}
+		return nil, err
 	})
 	return err
 }

-func (d *FTP) _login() error {
-	if d.conn != nil {
-		_, err := d.conn.CurrentDir()
-		if err == nil {
-			return nil
-		}
-	}
-	conn, err := ftp.Dial(d.Address, ftp.DialWithShutTimeout(10*time.Second))
+func (d *FTP) _login(ctx context.Context) (*ftp.ServerConn, error) {
+	conn, err := ftp.Dial(d.Address, ftp.DialWithShutTimeout(10*time.Second), ftp.DialWithContext(ctx))
 	if err != nil {
-		return err
+		return nil, err
 	}
 	err = conn.Login(d.Username, d.Password)
 	if err != nil {
-		return err
+		conn.Quit()
+		return nil, err
 	}
-	d.conn = conn
-	return nil
-}
-
-// FileReader An FTP file reader that implements io.MFile for seeking.
-type FileReader struct {
-	conn         *ftp.ServerConn
-	resp         *ftp.Response
-	offset       atomic.Int64
-	readAtOffset int64
-	mu           sync.Mutex
-	path         string
-	size         int64
-}
-
-func NewFileReader(conn *ftp.ServerConn, path string, size int64) *FileReader {
-	return &FileReader{
-		conn: conn,
-		path: path,
-		size: size,
-	}
-}
-
-func (r *FileReader) Read(buf []byte) (n int, err error) {
-	n, err = r.ReadAt(buf, r.offset.Load())
-	r.offset.Add(int64(n))
-	return
-}
-
-func (r *FileReader) ReadAt(buf []byte, off int64) (n int, err error) {
-	if off < 0 {
-		return -1, os.ErrInvalid
-	}
-	r.mu.Lock()
-	defer r.mu.Unlock()
-
-	if off != r.readAtOffset {
-		// have to restart the connection to correct the offset
-		_ = r.resp.Close()
-		r.resp = nil
-	}
-
-	if r.resp == nil {
-		r.resp, err = r.conn.RetrFrom(r.path, uint64(off))
-		r.readAtOffset = off
-		if err != nil {
-			return 0, err
-		}
-	}
-
-	n, err = r.resp.Read(buf)
-	r.readAtOffset += int64(n)
-	return
-}
-
-func (r *FileReader) Seek(offset int64, whence int) (int64, error) {
-	oldOffset := r.offset.Load()
-	var newOffset int64
-	switch whence {
-	case io.SeekStart:
-		newOffset = offset
-	case io.SeekCurrent:
-		newOffset = oldOffset + offset
-	case io.SeekEnd:
-		return r.size, nil
-	default:
-		return -1, os.ErrInvalid
-	}
-
-	if newOffset < 0 {
-		// offset out of range
-		return oldOffset, os.ErrInvalid
-	}
-	if newOffset == oldOffset {
-		// offset not changed, so return directly
-		return oldOffset, nil
-	}
-	r.offset.Store(newOffset)
-	return newOffset, nil
-}
-
-func (r *FileReader) Close() error {
-	if r.resp != nil {
-		return r.resp.Close()
-	}
-	return nil
-}
+	return conn, nil
+}
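A note on the login rewrite above: concurrent callers are collapsed onto a single probe keyed by the driver pointer, and the cached connection is health-checked with NoOp before being reused. The sketch below shows the same shape with golang.org/x/sync/singleflight, which OpenList's pkg/singleflight.AnyGroup resembles here (an assumption on my part):

package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var g singleflight.Group

// login mirrors the driver's pattern: concurrent callers collapse onto one
// probe per key; dial only runs once even if many goroutines call login.
func login(key string, dial func() (string, error)) (string, error) {
	v, err, _ := g.Do(key, func() (interface{}, error) {
		return dial()
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	conn, _ := login("FTP.login:0xc000", func() (string, error) { return "conn#1", nil })
	fmt.Println(conn)
}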
@@ -162,7 +162,7 @@ func (d *GoogleDrive) Put(ctx context.Context, dstDir model.Obj, stream model.Fi
 			SetBody(driver.NewLimitedUploadStream(ctx, stream))
 		}, nil)
 	} else {
-		err = d.chunkUpload(ctx, stream, putUrl)
+		err = d.chunkUpload(ctx, stream, putUrl, up)
 	}
 	return err
 }
@@ -5,17 +5,20 @@ import (
 	"crypto/x509"
 	"encoding/pem"
 	"fmt"
-	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	"io"
 	"net/http"
 	"os"
 	"regexp"
 	"strconv"
 	"time"

+	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	"github.com/OpenListTeam/OpenList/v4/internal/stream"
+	"github.com/avast/retry-go"
+
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
-	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/go-resty/resty/v2"
 	"github.com/golang-jwt/jwt/v4"
|
|||||||
return res, nil
|
return res, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d *GoogleDrive) chunkUpload(ctx context.Context, stream model.FileStreamer, url string) error {
|
func (d *GoogleDrive) chunkUpload(ctx context.Context, file model.FileStreamer, url string, up driver.UpdateProgress) error {
|
||||||
var defaultChunkSize = d.ChunkSize * 1024 * 1024
|
var defaultChunkSize = d.ChunkSize * 1024 * 1024
|
||||||
var offset int64 = 0
|
ss, err := stream.NewStreamSectionReader(file, int(defaultChunkSize), &up)
|
||||||
for offset < stream.GetSize() {
|
|
||||||
if utils.IsCanceled(ctx) {
|
|
||||||
return ctx.Err()
|
|
||||||
}
|
|
||||||
chunkSize := stream.GetSize() - offset
|
|
||||||
if chunkSize > defaultChunkSize {
|
|
||||||
chunkSize = defaultChunkSize
|
|
||||||
}
|
|
||||||
reader, err := stream.RangeRead(http_range.Range{Start: offset, Length: chunkSize})
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
reader = driver.NewLimitedUploadStream(ctx, reader)
|
|
||||||
_, err = d.request(url, http.MethodPut, func(req *resty.Request) {
|
var offset int64 = 0
|
||||||
req.SetHeaders(map[string]string{
|
url += "?includeItemsFromAllDrives=true&supportsAllDrives=true"
|
||||||
"Content-Length": strconv.FormatInt(chunkSize, 10),
|
for offset < file.GetSize() {
|
||||||
"Content-Range": fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, stream.GetSize()),
|
if utils.IsCanceled(ctx) {
|
||||||
}).SetBody(reader).SetContext(ctx)
|
return ctx.Err()
|
||||||
}, nil)
|
}
|
||||||
|
chunkSize := min(file.GetSize()-offset, defaultChunkSize)
|
||||||
|
reader, err := ss.GetSectionReader(offset, chunkSize)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
limitedReader := driver.NewLimitedUploadStream(ctx, reader)
|
||||||
|
err = retry.Do(func() error {
|
||||||
|
reader.Seek(0, io.SeekStart)
|
||||||
|
req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, limitedReader)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
req.Header = map[string][]string{
|
||||||
|
"Authorization": {"Bearer " + d.AccessToken},
|
||||||
|
"Content-Length": {strconv.FormatInt(chunkSize, 10)},
|
||||||
|
"Content-Range": {fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, file.GetSize())},
|
||||||
|
}
|
||||||
|
res, err := base.HttpClient.Do(req)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer res.Body.Close()
|
||||||
|
bytes, _ := io.ReadAll(res.Body)
|
||||||
|
var e Error
|
||||||
|
utils.Json.Unmarshal(bytes, &e)
|
||||||
|
if e.Error.Code != 0 {
|
||||||
|
if e.Error.Code == 401 {
|
||||||
|
err = d.refreshToken()
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return fmt.Errorf("%s: %v", e.Error.Message, e.Error.Errors)
|
||||||
|
}
|
||||||
|
up(float64(offset+chunkSize) / float64(file.GetSize()) * 100)
|
||||||
|
return nil
|
||||||
|
},
|
||||||
|
retry.Attempts(3),
|
||||||
|
retry.DelayType(retry.BackOffDelay),
|
||||||
|
retry.Delay(time.Second))
|
||||||
|
ss.FreeSectionReader(reader)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
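For context on the chunked PUT above: Google's resumable uploads describe each slice with a Content-Range header covering first-last/total bytes. A small sketch of how those values fall out of offset and chunkSize (a 10 MiB chunk size is assumed here, standing in for the driver's configurable ChunkSize):

package main

import "fmt"

func main() {
	const chunk = int64(10 << 20) // assumed 10 MiB chunk size
	total := int64(25 << 20)      // a 25 MiB file
	for offset := int64(0); offset < total; offset += chunk {
		size := min(total-offset, chunk)
		// matches the driver: bytes offset-(offset+size-1)/total
		fmt.Printf("Content-Range: bytes %d-%d/%d\n", offset, offset+size-1, total)
	}
	// Output:
	// Content-Range: bytes 0-10485759/26214400
	// Content-Range: bytes 10485760-20971519/26214400
	// Content-Range: bytes 20971520-26214399/26214400
}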
@@ -276,9 +276,7 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreame
 	etag := s.GetHash().GetHash(utils.MD5)
 	var err error
 	if len(etag) != utils.MD5.Width {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, etag, err = stream.CacheFullInTempFileAndHash(s, cacheFileProgress, utils.MD5)
+		_, etag, err = stream.CacheFullAndHash(s, &up, utils.MD5)
 		if err != nil {
 			return nil, err
 		}
|
|||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
upToken := utils.Json.Get(res, "upToken").ToString()
|
upToken := utils.Json.Get(res, "upToken").ToString()
|
||||||
|
if upToken == "-1" {
|
||||||
|
// 支持秒传
|
||||||
|
var resp UploadTokenRapidResp
|
||||||
|
err := utils.Json.Unmarshal(res, &resp)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return &model.Object{
|
||||||
|
ID: strconv.FormatInt(resp.Map.FileID, 10),
|
||||||
|
Name: resp.Map.FileName,
|
||||||
|
Size: s.GetSize(),
|
||||||
|
Modified: s.ModTime(),
|
||||||
|
Ctime: s.CreateTime(),
|
||||||
|
IsFolder: false,
|
||||||
|
HashInfo: utils.NewHashInfo(utils.MD5, etag),
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
now := time.Now()
|
now := time.Now()
|
||||||
key := fmt.Sprintf("disk/%d/%d/%d/%s/%016d", now.Year(), now.Month(), now.Day(), d.account, now.UnixMilli())
|
key := fmt.Sprintf("disk/%d/%d/%d/%s/%016d", now.Year(), now.Month(), now.Day(), d.account, now.UnixMilli())
|
||||||
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
|
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
|
||||||
|
@@ -32,6 +32,7 @@ func init() {
 			Name:        "ILanZou",
 			DefaultRoot: "0",
 			LocalSort:   true,
+			NoOverwriteUpload: true,
 		},
 		conf: Conf{
 			base: "https://api.ilanzou.com",

@@ -50,6 +51,7 @@ func init() {
 			Name:        "FeijiPan",
 			DefaultRoot: "0",
 			LocalSort:   true,
+			NoOverwriteUpload: true,
 		},
 		conf: Conf{
 			base: "https://api.feijipan.com",
@@ -43,6 +43,18 @@ type Part struct {
 	ETag string `json:"etag"`
 }

+type UploadTokenRapidResp struct {
+	Msg     string `json:"msg"`
+	Code    int    `json:"code"`
+	UpToken string `json:"upToken"`
+	Map     struct {
+		FileIconID int    `json:"fileIconId"`
+		FileName   string `json:"fileName"`
+		FileIcon   string `json:"fileIcon"`
+		FileID     int64  `json:"fileId"`
+	} `json:"map"`
+}
+
 type UploadResultResp struct {
 	Msg  string `json:"msg"`
 	Code int    `json:"code"`
@@ -3,6 +3,7 @@ package LenovoNasShare
 import (
 	"context"
 	"net/http"
+	"net/url"
 	"strings"
 	"time"
@@ -71,7 +72,23 @@ func (d *LenovoNasShare) List(ctx context.Context, dir model.Obj, args model.Lis
 	files = append(files, resp.Data.List...)

 	return utils.SliceConvert(files, func(src File) (model.Obj, error) {
+		if src.IsDir() {
 			return src, nil
+		}
+		return &model.ObjThumb{
+			Object: model.Object{
+				Name:     src.GetName(),
+				Size:     src.GetSize(),
+				Modified: src.ModTime(),
+				IsFolder: src.IsDir(),
+			},
+			Thumbnail: model.Thumbnail{
+				Thumbnail: func() string {
+					thumbUrl := d.Host + "/oneproxy/api/share/v1/file/thumb?code=" + d.ShareId + "&stoken=" + d.stoken + "&path=" + url.QueryEscape(src.GetPath())
+					return thumbUrl
+				}(),
+			},
+		}, nil
 	})
 }

drivers/local/benchmark_calculatedirsize_test.go (new file, 92 lines)
@@ -0,0 +1,92 @@
+package local
+
+// TestDirCalculateSize tests the directory size calculation
+// It should be run with the local driver enabled and directory size calculation set to true
+import (
+	"os"
+	"path/filepath"
+	"strconv"
+	"testing"
+
+	"github.com/OpenListTeam/OpenList/v4/internal/driver"
+)
+
+func generatedTestDir(dir string, dep, filecount int) {
+	if dep == 0 {
+		return
+	}
+	for i := 0; i < dep; i++ {
+		subDir := dir + "/dir" + strconv.Itoa(i)
+		os.Mkdir(subDir, 0755)
+		generatedTestDir(subDir, dep-1, filecount)
+		generatedFiles(subDir, filecount)
+	}
+}
+
+func generatedFiles(path string, count int) error {
+	for i := 0; i < count; i++ {
+		filePath := filepath.Join(path, "file"+strconv.Itoa(i)+".txt")
+		file, err := os.Create(filePath)
+		if err != nil {
+			return err
+		}
+		// fill the file with ASCII characters
+		content := make([]byte, 1024) // 1KB file
+		for j := range content {
+			content[j] = byte('a' + j%26) // fill with 'a' to 'z'
+		}
+		_, err = file.Write(content)
+		if err != nil {
+			return err
+		}
+		file.Close()
+	}
+	return nil
+}
+
+// performance test for directory size calculation
+func BenchmarkCalculateDirSize(t *testing.B) {
+	// log the start of the benchmark
+	t.Logf("Starting performance test for directory size calculation")
+	// make sure the benchmark is allowed to run
+	if testing.Short() {
+		t.Skip("Skipping performance test in short mode")
+	}
+	// create a tmp directory for testing
+	testTempDir := t.TempDir()
+	err := os.MkdirAll(testTempDir, 0755)
+	if err != nil {
+		t.Fatalf("Failed to create test directory: %v", err)
+	}
+	defer os.RemoveAll(testTempDir) // clean up after the test
+	// build a directory tree of depth 5 with 10 files and 10 directories per level
+	generatedTestDir(testTempDir, 5, 10)
+	// initialize the local driver with directory size calculation enabled
+	d := &Local{
+		directoryMap: DirectoryMap{
+			root: testTempDir,
+		},
+		Addition: Addition{
+			DirectorySize: true,
+			RootPath: driver.RootPath{
+				RootFolderPath: testTempDir,
+			},
+		},
+	}
+	// record the start time
+	t.StartTimer()
+	// calculate the directory size
+	err = d.directoryMap.RecalculateDirSize()
+	if err != nil {
+		t.Fatalf("Failed to calculate directory size: %v", err)
+	}
+	// record the end time
+	t.StopTimer()
+	// print the size and duration
+	node, ok := d.directoryMap.Get(d.directoryMap.root)
+	if !ok {
+		t.Fatalf("Failed to get root node from directory map")
+	}
+	t.Logf("Directory size: %d bytes", node.fileSum+node.directorySum)
+	t.Logf("Performance test completed successfully")
+}
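If generatedTestDir is read as above — each level-d call creates d subdirectories, each holding 10 one-KiB files, then recurses with d-1 — the benchmark tree is fully determined, so the size it reports can be sanity-checked; a quick calculation:

package main

import "fmt"

func main() {
	// generatedTestDir(root, 5, 10): directories per level are
	// 5, 5*4, 5*4*3, 5*4*3*2, 5*4*3*2*1, each with 10 files of 1 KiB.
	dirs, files, width := 0, 0, 1
	for d := 5; d >= 1; d-- {
		width *= d
		dirs += width
		files += width * 10
	}
	fmt.Println(dirs, files, files*1024) // 325 3250 3328000
}

So the benchmark should log a directory size of 3,328,000 bytes; it can be run with the standard tooling, e.g. go test -bench BenchmarkCalculateDirSize ./drivers/local (module path assumed).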
@@ -33,6 +33,9 @@ type Local struct {
 	Addition
 	mkdirPerm int32

+	// directory size data
+	directoryMap DirectoryMap
+
 	// zero means no limit
 	thumbConcurrency int
 	thumbTokenBucket TokenBucket
|
|||||||
}
|
}
|
||||||
d.Addition.RootFolderPath = abs
|
d.Addition.RootFolderPath = abs
|
||||||
}
|
}
|
||||||
|
if d.DirectorySize {
|
||||||
|
d.directoryMap.root = d.GetRootPath()
|
||||||
|
_, err := d.directoryMap.CalculateDirSize(d.GetRootPath())
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
d.directoryMap.Clear()
|
||||||
|
}
|
||||||
if d.ThumbCacheFolder != "" && !utils.Exists(d.ThumbCacheFolder) {
|
if d.ThumbCacheFolder != "" && !utils.Exists(d.ThumbCacheFolder) {
|
||||||
err := os.MkdirAll(d.ThumbCacheFolder, os.FileMode(d.mkdirPerm))
|
err := os.MkdirAll(d.ThumbCacheFolder, os.FileMode(d.mkdirPerm))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@ -124,6 +136,9 @@ func (d *Local) GetAddition() driver.Additional {
|
|||||||
func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
|
func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
|
||||||
fullPath := dir.GetPath()
|
fullPath := dir.GetPath()
|
||||||
rawFiles, err := readDir(fullPath)
|
rawFiles, err := readDir(fullPath)
|
||||||
|
if d.DirectorySize && args.Refresh {
|
||||||
|
d.directoryMap.RecalculateDirSize()
|
||||||
|
}
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@ -147,7 +162,12 @@ func (d *Local) FileInfoToObj(ctx context.Context, f fs.FileInfo, reqPath string
|
|||||||
}
|
}
|
||||||
isFolder := f.IsDir() || isSymlinkDir(f, fullPath)
|
isFolder := f.IsDir() || isSymlinkDir(f, fullPath)
|
||||||
var size int64
|
var size int64
|
||||||
if !isFolder {
|
if isFolder {
|
||||||
|
node, ok := d.directoryMap.Get(filepath.Join(fullPath, f.Name()))
|
||||||
|
if ok {
|
||||||
|
size = node.fileSum + node.directorySum
|
||||||
|
}
|
||||||
|
} else {
|
||||||
size = f.Size()
|
size = f.Size()
|
||||||
}
|
}
|
||||||
var ctime time.Time
|
var ctime time.Time
|
||||||
@ -186,7 +206,12 @@ func (d *Local) Get(ctx context.Context, path string) (model.Obj, error) {
|
|||||||
isFolder := f.IsDir() || isSymlinkDir(f, path)
|
isFolder := f.IsDir() || isSymlinkDir(f, path)
|
||||||
size := f.Size()
|
size := f.Size()
|
||||||
if isFolder {
|
if isFolder {
|
||||||
size = 0
|
node, ok := d.directoryMap.Get(path)
|
||||||
|
if ok {
|
||||||
|
size = node.fileSum + node.directorySum
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
size = f.Size()
|
||||||
}
|
}
|
||||||
var ctime time.Time
|
var ctime time.Time
|
||||||
t, err := times.Stat(path)
|
t, err := times.Stat(path)
|
||||||
@ -245,13 +270,12 @@ func (d *Local) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
link.ContentLength = file.GetSize()
|
||||||
link.MFile = open
|
link.MFile = open
|
||||||
}
|
}
|
||||||
if link.MFile != nil && !d.Config().OnlyLinkMFile {
|
|
||||||
link.AddIfCloser(link.MFile)
|
link.AddIfCloser(link.MFile)
|
||||||
link.RangeReader = &model.FileRangeReader{
|
if !d.Config().OnlyLinkMFile {
|
||||||
RangeReaderIF: stream.GetRangeReaderFromMFile(file.GetSize(), link.MFile),
|
link.RangeReader = stream.GetRangeReaderFromMFile(link.ContentLength, link.MFile)
|
||||||
}
|
|
||||||
link.MFile = nil
|
link.MFile = nil
|
||||||
}
|
}
|
||||||
return link, nil
|
return link, nil
|
||||||
@ -272,22 +296,31 @@ func (d *Local) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
|
|||||||
if utils.IsSubPath(srcPath, dstPath) {
|
if utils.IsSubPath(srcPath, dstPath) {
|
||||||
return fmt.Errorf("the destination folder is a subfolder of the source folder")
|
return fmt.Errorf("the destination folder is a subfolder of the source folder")
|
||||||
}
|
}
|
||||||
if err := os.Rename(srcPath, dstPath); err != nil && strings.Contains(err.Error(), "invalid cross-device link") {
|
err := os.Rename(srcPath, dstPath)
|
||||||
// Handle cross-device file move in local driver
|
if err != nil && strings.Contains(err.Error(), "invalid cross-device link") {
|
||||||
if err = d.Copy(ctx, srcObj, dstDir); err != nil {
|
// 跨设备移动,先复制再删除
|
||||||
|
if err := d.Copy(ctx, srcObj, dstDir); err != nil {
|
||||||
return err
|
return err
|
||||||
} else {
|
}
|
||||||
// Directly remove file without check recycle bin if successfully copied
|
// 复制成功后直接删除源文件/文件夹
|
||||||
if srcObj.IsDir() {
|
if srcObj.IsDir() {
|
||||||
err = os.RemoveAll(srcObj.GetPath())
|
return os.RemoveAll(srcObj.GetPath())
|
||||||
} else {
|
}
|
||||||
err = os.Remove(srcObj.GetPath())
|
return os.Remove(srcObj.GetPath())
|
||||||
|
}
|
||||||
|
if err == nil {
|
||||||
|
srcParent := filepath.Dir(srcPath)
|
||||||
|
dstParent := filepath.Dir(dstPath)
|
||||||
|
if d.directoryMap.Has(srcParent) {
|
||||||
|
d.directoryMap.UpdateDirSize(srcParent)
|
||||||
|
d.directoryMap.UpdateDirParents(srcParent)
|
||||||
|
}
|
||||||
|
if d.directoryMap.Has(dstParent) {
|
||||||
|
d.directoryMap.UpdateDirSize(dstParent)
|
||||||
|
d.directoryMap.UpdateDirParents(dstParent)
|
||||||
|
}
|
||||||
}
|
}
|
||||||
return err
|
return err
|
||||||
}
|
|
||||||
} else {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d *Local) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
|
func (d *Local) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
|
||||||
@ -297,6 +330,14 @@ func (d *Local) Rename(ctx context.Context, srcObj model.Obj, newName string) er
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if srcObj.IsDir() {
|
||||||
|
if d.directoryMap.Has(srcPath) {
|
||||||
|
d.directoryMap.DeleteDirNode(srcPath)
|
||||||
|
d.directoryMap.CalculateDirSize(dstPath)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -307,11 +348,21 @@ func (d *Local) Copy(_ context.Context, srcObj, dstDir model.Obj) error {
|
|||||||
return fmt.Errorf("the destination folder is a subfolder of the source folder")
|
return fmt.Errorf("the destination folder is a subfolder of the source folder")
|
||||||
}
|
}
|
||||||
// Copy using otiai10/copy to perform more secure & efficient copy
|
// Copy using otiai10/copy to perform more secure & efficient copy
|
||||||
return cp.Copy(srcPath, dstPath, cp.Options{
|
err := cp.Copy(srcPath, dstPath, cp.Options{
|
||||||
Sync: true, // Sync file to disk after copy, may have performance penalty in filesystem such as ZFS
|
Sync: true, // Sync file to disk after copy, may have performance penalty in filesystem such as ZFS
|
||||||
PreserveTimes: true,
|
PreserveTimes: true,
|
||||||
PreserveOwner: true,
|
PreserveOwner: true,
|
||||||
})
|
})
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.directoryMap.Has(filepath.Dir(dstPath)) {
|
||||||
|
d.directoryMap.UpdateDirSize(filepath.Dir(dstPath))
|
||||||
|
d.directoryMap.UpdateDirParents(filepath.Dir(dstPath))
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (d *Local) Remove(ctx context.Context, obj model.Obj) error {
|
func (d *Local) Remove(ctx context.Context, obj model.Obj) error {
|
||||||
@ -332,6 +383,19 @@ func (d *Local) Remove(ctx context.Context, obj model.Obj) error {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
if obj.IsDir() {
|
||||||
|
if d.directoryMap.Has(obj.GetPath()) {
|
||||||
|
d.directoryMap.DeleteDirNode(obj.GetPath())
|
||||||
|
d.directoryMap.UpdateDirSize(filepath.Dir(obj.GetPath()))
|
||||||
|
d.directoryMap.UpdateDirParents(filepath.Dir(obj.GetPath()))
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
if d.directoryMap.Has(filepath.Dir(obj.GetPath())) {
|
||||||
|
d.directoryMap.UpdateDirSize(filepath.Dir(obj.GetPath()))
|
||||||
|
d.directoryMap.UpdateDirParents(filepath.Dir(obj.GetPath()))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -355,6 +419,11 @@ func (d *Local) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
log.Errorf("[local] failed to change time of %s: %s", fullPath, err)
|
log.Errorf("[local] failed to change time of %s: %s", fullPath, err)
|
||||||
}
|
}
|
||||||
|
if d.directoryMap.Has(dstDir.GetPath()) {
|
||||||
|
d.directoryMap.UpdateDirSize(dstDir.GetPath())
|
||||||
|
d.directoryMap.UpdateDirParents(dstDir.GetPath())
|
||||||
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -7,6 +7,7 @@ import (
|
|||||||
|
|
||||||
type Addition struct {
|
type Addition struct {
|
||||||
driver.RootPath
|
driver.RootPath
|
||||||
|
DirectorySize bool `json:"directory_size" default:"false" help:"This might impact host performance"`
|
||||||
Thumbnail bool `json:"thumbnail" required:"true" help:"enable thumbnail"`
|
Thumbnail bool `json:"thumbnail" required:"true" help:"enable thumbnail"`
|
||||||
ThumbCacheFolder string `json:"thumb_cache_folder"`
|
ThumbCacheFolder string `json:"thumb_cache_folder"`
|
||||||
ThumbConcurrency string `json:"thumb_concurrency" default:"16" required:"false" help:"Number of concurrent thumbnail generation goroutines. This controls how many thumbnails can be generated in parallel."`
|
ThumbConcurrency string `json:"thumb_concurrency" default:"16" required:"false" help:"Number of concurrent thumbnail generation goroutines. This controls how many thumbnails can be generated in parallel."`
|
||||||
@ -27,6 +28,8 @@ var config = driver.Config{
|
|||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
op.RegisterDriver(func() driver.Driver {
|
op.RegisterDriver(func() driver.Driver {
|
||||||
return &Local{}
|
return &Local{
|
||||||
|
directoryMap: DirectoryMap{},
|
||||||
|
}
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
@@ -8,9 +8,11 @@ import (
 	"os"
 	"path/filepath"
 	"runtime"
+	"slices"
 	"sort"
 	"strconv"
 	"strings"
+	"sync"

 	"github.com/OpenListTeam/OpenList/v4/internal/conf"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
|
|||||||
}
|
}
|
||||||
return &buf, nil, nil
|
return &buf, nil, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
type DirectoryMap struct {
|
||||||
|
root string
|
||||||
|
data sync.Map
|
||||||
|
}
|
||||||
|
|
||||||
|
type DirectoryNode struct {
|
||||||
|
fileSum int64
|
||||||
|
directorySum int64
|
||||||
|
children []string
|
||||||
|
}
|
||||||
|
|
||||||
|
type DirectoryTask struct {
|
||||||
|
path string
|
||||||
|
cache *DirectoryTaskCache
|
||||||
|
}
|
||||||
|
|
||||||
|
type DirectoryTaskCache struct {
|
||||||
|
fileSum int64
|
||||||
|
children []string
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) Has(path string) bool {
|
||||||
|
_, ok := m.data.Load(path)
|
||||||
|
|
||||||
|
return ok
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) Get(path string) (*DirectoryNode, bool) {
|
||||||
|
value, ok := m.data.Load(path)
|
||||||
|
if !ok {
|
||||||
|
return &DirectoryNode{}, false
|
||||||
|
}
|
||||||
|
|
||||||
|
node, ok := value.(*DirectoryNode)
|
||||||
|
if !ok {
|
||||||
|
return &DirectoryNode{}, false
|
||||||
|
}
|
||||||
|
|
||||||
|
return node, true
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) Set(path string, node *DirectoryNode) {
|
||||||
|
m.data.Store(path, node)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) Delete(path string) {
|
||||||
|
m.data.Delete(path)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) Clear() {
|
||||||
|
m.data.Clear()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) RecalculateDirSize() error {
|
||||||
|
m.Clear()
|
||||||
|
if m.root == "" {
|
||||||
|
return fmt.Errorf("root path is not set")
|
||||||
|
}
|
||||||
|
|
||||||
|
size, err := m.CalculateDirSize(m.root)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if node, ok := m.Get(m.root); ok {
|
||||||
|
node.fileSum = size
|
||||||
|
node.directorySum = size
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) CalculateDirSize(dirname string) (int64, error) {
|
||||||
|
stack := []DirectoryTask{
|
||||||
|
{path: dirname},
|
||||||
|
}
|
||||||
|
|
||||||
|
for len(stack) > 0 {
|
||||||
|
task := stack[len(stack)-1]
|
||||||
|
stack = stack[:len(stack)-1]
|
||||||
|
|
||||||
|
if task.cache != nil {
|
||||||
|
directorySum := int64(0)
|
||||||
|
|
||||||
|
for _, filename := range task.cache.children {
|
||||||
|
child, ok := m.Get(filepath.Join(task.path, filename))
|
||||||
|
if !ok {
|
||||||
|
return 0, fmt.Errorf("child node not found")
|
||||||
|
}
|
||||||
|
directorySum += child.fileSum + child.directorySum
|
||||||
|
}
|
||||||
|
|
||||||
|
m.Set(task.path, &DirectoryNode{
|
||||||
|
fileSum: task.cache.fileSum,
|
||||||
|
directorySum: directorySum,
|
||||||
|
children: task.cache.children,
|
||||||
|
})
|
||||||
|
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
files, err := readDir(task.path)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
|
||||||
|
fileSum := int64(0)
|
||||||
|
directorySum := int64(0)
|
||||||
|
|
||||||
|
children := []string{}
|
||||||
|
queue := []DirectoryTask{}
|
||||||
|
|
||||||
|
for _, f := range files {
|
||||||
|
fullpath := filepath.Join(task.path, f.Name())
|
||||||
|
isFolder := f.IsDir() || isSymlinkDir(f, fullpath)
|
||||||
|
|
||||||
|
if isFolder {
|
||||||
|
if node, ok := m.Get(fullpath); ok {
|
||||||
|
directorySum += node.fileSum + node.directorySum
|
||||||
|
} else {
|
||||||
|
queue = append(queue, DirectoryTask{
|
||||||
|
path: fullpath,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
children = append(children, f.Name())
|
||||||
|
} else {
|
||||||
|
fileSum += f.Size()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(queue) > 0 {
|
||||||
|
stack = append(stack, DirectoryTask{
|
||||||
|
path: task.path,
|
||||||
|
cache: &DirectoryTaskCache{
|
||||||
|
fileSum: fileSum,
|
||||||
|
children: children,
|
||||||
|
},
|
||||||
|
})
|
||||||
|
|
||||||
|
stack = append(stack, queue...)
|
||||||
|
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
m.Set(task.path, &DirectoryNode{
|
||||||
|
fileSum: fileSum,
|
||||||
|
directorySum: directorySum,
|
||||||
|
children: children,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
if node, ok := m.Get(dirname); ok {
|
||||||
|
return node.fileSum + node.directorySum, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
return 0, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) UpdateDirSize(dirname string) (int64, error) {
|
||||||
|
node, ok := m.Get(dirname)
|
||||||
|
if !ok {
|
||||||
|
return 0, fmt.Errorf("directory node not found")
|
||||||
|
}
|
||||||
|
|
||||||
|
files, err := readDir(dirname)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
fileSum := int64(0)
|
||||||
|
directorySum := int64(0)
|
||||||
|
|
||||||
|
children := []string{}
|
||||||
|
|
||||||
|
for _, f := range files {
|
||||||
|
fullpath := filepath.Join(dirname, f.Name())
|
||||||
|
isFolder := f.IsDir() || isSymlinkDir(f, fullpath)
|
||||||
|
|
||||||
|
if isFolder {
|
||||||
|
if node, ok := m.Get(fullpath); ok {
|
||||||
|
directorySum += node.fileSum + node.directorySum
|
||||||
|
} else {
|
||||||
|
value, err := m.CalculateDirSize(fullpath)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
directorySum += value
|
||||||
|
}
|
||||||
|
|
||||||
|
children = append(children, f.Name())
|
||||||
|
} else {
|
||||||
|
fileSum += f.Size()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, c := range node.children {
|
||||||
|
if !slices.Contains(children, c) {
|
||||||
|
m.DeleteDirNode(filepath.Join(dirname, c))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
node.fileSum = fileSum
|
||||||
|
node.directorySum = directorySum
|
||||||
|
node.children = children
|
||||||
|
|
||||||
|
return fileSum + directorySum, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) UpdateDirParents(dirname string) error {
|
||||||
|
parentPath := filepath.Dir(dirname)
|
||||||
|
for parentPath != m.root && !strings.HasPrefix(m.root, parentPath) {
|
||||||
|
if node, ok := m.Get(parentPath); ok {
|
||||||
|
directorySum := int64(0)
|
||||||
|
|
||||||
|
for _, c := range node.children {
|
||||||
|
child, ok := m.Get(filepath.Join(parentPath, c))
|
||||||
|
if !ok {
|
||||||
|
return fmt.Errorf("child node not found")
|
||||||
|
}
|
||||||
|
directorySum += child.fileSum + child.directorySum
|
||||||
|
}
|
||||||
|
|
||||||
|
node.directorySum = directorySum
|
||||||
|
}
|
||||||
|
|
||||||
|
parentPath = filepath.Dir(parentPath)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *DirectoryMap) DeleteDirNode(dirname string) error {
|
||||||
|
stack := []string{dirname}
|
||||||
|
|
||||||
|
for len(stack) > 0 {
|
||||||
|
current := stack[len(stack)-1]
|
||||||
|
stack = stack[:len(stack)-1]
|
||||||
|
|
||||||
|
if node, ok := m.Get(current); ok {
|
||||||
|
for _, filename := range node.children {
|
||||||
|
stack = append(stack, filepath.Join(current, filename))
|
||||||
|
}
|
||||||
|
|
||||||
|
m.Delete(current)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
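The invariant the new DirectoryMap maintains is that a directory's reported size is node.fileSum (its own regular files) plus node.directorySum (the recursive total of its child directories), which UpdateDirParents re-derives while walking up to the configured root. A toy model of that invariant:

package main

import "fmt"

// A toy mirror of DirectoryNode: size(dir) = fileSum + directorySum, where
// directorySum is the sum of size(child) over the child directories.
type node struct {
	fileSum  int64
	children []*node
}

func size(n *node) int64 {
	s := n.fileSum
	for _, c := range n.children {
		s += size(c)
	}
	return s
}

func main() {
	leaf := &node{fileSum: 300}
	mid := &node{fileSum: 200, children: []*node{leaf}}
	root := &node{fileSum: 100, children: []*node{mid}}
	fmt.Println(size(root)) // 600
}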
@@ -180,7 +180,7 @@ func (d *MediaTrack) Put(ctx context.Context, dstDir model.Obj, file model.FileS
 	if err != nil {
 		return err
 	}
-	tempFile, err := file.CacheFullInTempFile()
+	tempFile, err := file.CacheFullAndWriter(&up, nil)
 	if err != nil {
 		return err
 	}
@ -4,6 +4,7 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"errors"
|
"errors"
|
||||||
"io"
|
"io"
|
||||||
|
"net/http"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/go-resty/resty/v2"
|
"github.com/go-resty/resty/v2"
|
||||||
@ -72,7 +73,7 @@ func (d *Misskey) getFiles(dir model.Obj) ([]model.Obj, error) {
|
|||||||
} else {
|
} else {
|
||||||
body = map[string]string{}
|
body = map[string]string{}
|
||||||
}
|
}
|
||||||
err := d.request("/files", "POST", setBody(body), &files)
|
err := d.request("/files", http.MethodPost, setBody(body), &files)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return []model.Obj{}, err
|
return []model.Obj{}, err
|
||||||
}
|
}
|
||||||
@ -89,7 +90,7 @@ func (d *Misskey) getFolders(dir model.Obj) ([]model.Obj, error) {
|
|||||||
} else {
|
} else {
|
||||||
body = map[string]string{}
|
body = map[string]string{}
|
||||||
}
|
}
|
||||||
err := d.request("/folders", "POST", setBody(body), &folders)
|
err := d.request("/folders", http.MethodPost, setBody(body), &folders)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return []model.Obj{}, err
|
return []model.Obj{}, err
|
||||||
}
|
}
|
||||||
@ -106,7 +107,7 @@ func (d *Misskey) list(dir model.Obj) ([]model.Obj, error) {
|
|||||||
|
|
||||||
func (d *Misskey) link(file model.Obj) (*model.Link, error) {
|
func (d *Misskey) link(file model.Obj) (*model.Link, error) {
|
||||||
var mFile MFile
|
var mFile MFile
|
||||||
err := d.request("/files/show", "POST", setBody(map[string]string{"fileId": file.GetID()}), &mFile)
|
err := d.request("/files/show", http.MethodPost, setBody(map[string]string{"fileId": file.GetID()}), &mFile)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@ -117,7 +118,7 @@ func (d *Misskey) link(file model.Obj) (*model.Link, error) {
|
|||||||
|
|
||||||
func (d *Misskey) makeDir(parentDir model.Obj, dirName string) (model.Obj, error) {
|
func (d *Misskey) makeDir(parentDir model.Obj, dirName string) (model.Obj, error) {
|
||||||
var folder MFolder
|
var folder MFolder
|
||||||
err := d.request("/folders/create", "POST", setBody(map[string]interface{}{"parentId": handleFolderId(parentDir), "name": dirName}), &folder)
|
err := d.request("/folders/create", http.MethodPost, setBody(map[string]interface{}{"parentId": handleFolderId(parentDir), "name": dirName}), &folder)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@ -127,11 +128,11 @@ func (d *Misskey) makeDir(parentDir model.Obj, dirName string) (model.Obj, error
|
|||||||
func (d *Misskey) move(srcObj, dstDir model.Obj) (model.Obj, error) {
|
func (d *Misskey) move(srcObj, dstDir model.Obj) (model.Obj, error) {
|
||||||
if srcObj.IsDir() {
|
if srcObj.IsDir() {
|
||||||
var folder MFolder
|
var folder MFolder
|
||||||
err := d.request("/folders/update", "POST", setBody(map[string]interface{}{"folderId": srcObj.GetID(), "parentId": handleFolderId(dstDir)}), &folder)
|
err := d.request("/folders/update", http.MethodPost, setBody(map[string]interface{}{"folderId": srcObj.GetID(), "parentId": handleFolderId(dstDir)}), &folder)
|
||||||
return mFolder2Object(folder), err
|
return mFolder2Object(folder), err
|
||||||
} else {
|
} else {
|
||||||
var file MFile
|
var file MFile
|
||||||
err := d.request("/files/update", "POST", setBody(map[string]interface{}{"fileId": srcObj.GetID(), "folderId": handleFolderId(dstDir)}), &file)
|
err := d.request("/files/update", http.MethodPost, setBody(map[string]interface{}{"fileId": srcObj.GetID(), "folderId": handleFolderId(dstDir)}), &file)
|
||||||
return mFile2Object(file), err
|
return mFile2Object(file), err
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -139,11 +140,11 @@ func (d *Misskey) move(srcObj, dstDir model.Obj) (model.Obj, error) {
|
|||||||
func (d *Misskey) rename(srcObj model.Obj, newName string) (model.Obj, error) {
|
func (d *Misskey) rename(srcObj model.Obj, newName string) (model.Obj, error) {
|
||||||
if srcObj.IsDir() {
|
if srcObj.IsDir() {
|
||||||
var folder MFolder
|
var folder MFolder
|
||||||
err := d.request("/folders/update", "POST", setBody(map[string]string{"folderId": srcObj.GetID(), "name": newName}), &folder)
|
err := d.request("/folders/update", http.MethodPost, setBody(map[string]string{"folderId": srcObj.GetID(), "name": newName}), &folder)
|
||||||
return mFolder2Object(folder), err
|
return mFolder2Object(folder), err
|
||||||
} else {
|
} else {
|
||||||
var file MFile
|
var file MFile
|
||||||
err := d.request("/files/update", "POST", setBody(map[string]string{"fileId": srcObj.GetID(), "name": newName}), &file)
|
err := d.request("/files/update", http.MethodPost, setBody(map[string]string{"fileId": srcObj.GetID(), "name": newName}), &file)
|
||||||
return mFile2Object(file), err
|
return mFile2Object(file), err
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -171,7 +172,7 @@ func (d *Misskey) copy(srcObj, dstDir model.Obj) (model.Obj, error)
 	if err != nil {
 		return nil, err
 	}
-	err = d.request("/files/upload-from-url", "POST", setBody(map[string]interface{}{"url": url.URL, "folderId": handleFolderId(dstDir)}), &file)
+	err = d.request("/files/upload-from-url", http.MethodPost, setBody(map[string]interface{}{"url": url.URL, "folderId": handleFolderId(dstDir)}), &file)
 	if err != nil {
 		return nil, err
 	}
@@ -181,10 +182,10 @@ func (d *Misskey) copy(srcObj, dstDir model.Obj) (model.Obj, error
 
 func (d *Misskey) remove(obj model.Obj) error {
 	if obj.IsDir() {
-		err := d.request("/folders/delete", "POST", setBody(map[string]string{"folderId": obj.GetID()}), nil)
+		err := d.request("/folders/delete", http.MethodPost, setBody(map[string]string{"folderId": obj.GetID()}), nil)
 		return err
 	} else {
-		err := d.request("/files/delete", "POST", setBody(map[string]string{"fileId": obj.GetID()}), nil)
+		err := d.request("/files/delete", http.MethodPost, setBody(map[string]string{"fileId": obj.GetID()}), nil)
 		return err
 	}
 }
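
The Misskey hunks above replace "POST" string literals with the net/http method constants. A minimal sketch of why the constant form is preferable; the request helper here is a hypothetical stand-in for the driver's d.request:

package main

import (
	"fmt"
	"net/http"
)

// request is a hypothetical stand-in for a driver's HTTP helper.
func request(endpoint, method string) error {
	fmt.Printf("%s %s\n", method, endpoint)
	return nil
}

func main() {
	// http.MethodPost is the same string as "POST", but a typo in
	// the constant name fails to compile, while "PSOT" would only
	// fail at runtime.
	_ = request("/folders/create", http.MethodPost)
}
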
@@ -263,7 +263,7 @@ func (d *MoPan) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
-	file, err := stream.CacheFullInTempFile()
+	file, err := stream.CacheFullAndWriter(&up, nil)
 	if err != nil {
 		return nil, err
 	}
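
Several drivers in this diff swap CacheFullInTempFile for CacheFullAndWriter, which takes the upload-progress callback directly. A hedged sketch of what such a helper plausibly does, inferred only from the call sites here; every name below is illustrative, not the project's actual implementation:

package main

import (
	"io"
	"os"
	"strings"
)

type UpdateProgress func(percentage float64)

// cacheFullAndWriter copies r (of known size) into a temp file,
// reporting progress as it goes, and returns the file positioned
// at the start for re-reading. An optional extra writer (e.g. a
// hasher) can be fed during the same pass.
func cacheFullAndWriter(r io.Reader, size int64, up UpdateProgress, extra io.Writer) (*os.File, error) {
	f, err := os.CreateTemp("", "cache-*")
	if err != nil {
		return nil, err
	}
	var w io.Writer = f
	if extra != nil {
		w = io.MultiWriter(f, extra)
	}
	var done int64
	buf := make([]byte, 32*1024)
	for {
		n, rerr := r.Read(buf)
		if n > 0 {
			if _, werr := w.Write(buf[:n]); werr != nil {
				f.Close()
				return nil, werr
			}
			done += int64(n)
			if up != nil && size > 0 {
				up(float64(done) / float64(size) * 100)
			}
		}
		if rerr == io.EOF {
			break
		}
		if rerr != nil {
			f.Close()
			return nil, rerr
		}
	}
	_, err = f.Seek(0, io.SeekStart)
	return f, err
}

func main() {
	src := strings.NewReader("example payload")
	f, err := cacheFullAndWriter(src, int64(src.Len()), func(p float64) {}, nil)
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
}

The design benefit over the old two-step shape is that the progress callback covers the caching phase too, instead of being silent until the upload starts.
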
@@ -55,9 +55,7 @@ func (lrc *LyricObj) getProxyLink(ctx context.Context) *model.Link {
 
 func (lrc *LyricObj) getLyricLink() *model.Link {
 	return &model.Link{
-		RangeReader: &model.FileRangeReader{
-			RangeReaderIF: stream.GetRangeReaderFromMFile(int64(len(lrc.lyric)), strings.NewReader(lrc.lyric)),
-		},
+		RangeReader: stream.GetRangeReaderFromMFile(int64(len(lrc.lyric)), strings.NewReader(lrc.lyric)),
 	}
 }
 
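
This hunk drops the FileRangeReader wrapper and assigns the range reader from GetRangeReaderFromMFile directly. An illustrative sketch of the pattern that helper appears to wrap: given anything addressable with ReadAt, each range request becomes an io.SectionReader. The names other than the stdlib's are assumptions for the example:

package main

import (
	"fmt"
	"io"
	"strings"
)

type rangeReaderFunc func(off, length int64) (io.Reader, error)

// rangeReaderFromMemory serves arbitrary byte ranges of an
// in-memory object without copying it.
func rangeReaderFromMemory(size int64, r io.ReaderAt) rangeReaderFunc {
	return func(off, length int64) (io.Reader, error) {
		if off < 0 || off > size {
			return nil, fmt.Errorf("offset %d out of range", off)
		}
		if length < 0 || off+length > size {
			length = size - off // clamp open-ended ranges
		}
		return io.NewSectionReader(r, off, length), nil
	}
}

func main() {
	lyric := "[00:01.00]hello\n[00:02.00]world\n"
	rr := rangeReaderFromMemory(int64(len(lyric)), strings.NewReader(lyric))
	r, _ := rr(0, 15)
	b, _ := io.ReadAll(r)
	fmt.Printf("%q\n", b)
}
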
@@ -223,7 +223,7 @@ func (d *NeteaseMusic) removeSongObj(file model.Obj) error {
 }
 
 func (d *NeteaseMusic) putSongStream(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tmp, err := stream.CacheFullInTempFile()
+	tmp, err := stream.CacheFullAndWriter(&up, nil)
 	if err != nil {
 		return err
 	}
@@ -1,7 +1,6 @@
 package onedrive
 
 import (
-	"bytes"
 	"context"
 	"errors"
 	"fmt"
@@ -15,6 +14,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
@@ -238,26 +238,29 @@ func (d *Onedrive) upBig(ctx context.Context, dstDir model.Obj, stream model.Fil
 	if err != nil {
 		return err
 	}
+	DEFAULT := d.ChunkSize * 1024 * 1024
+	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	uploadUrl := jsoniter.Get(res, "uploadUrl").ToString()
 	var finish int64 = 0
-	DEFAULT := d.ChunkSize * 1024 * 1024
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err = retry.Do(
-			func() error {
 		utils.Log.Debugf("[Onedrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		byteData := make([]byte, byteSize)
-		n, err := io.ReadFull(stream, byteData)
-		utils.Log.Debug(err, n)
+		rd, err := ss.GetSectionReader(finish, byteSize)
 		if err != nil {
 			return err
 		}
-		req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl,
-			driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
+				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -283,6 +286,7 @@ func (d *Onedrive) upBig(ctx context.Context, dstDir model.Obj, stream model.Fil
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
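
The OneDrive hunks (and the matching OnedriveAPP ones below) replace the read-a-chunk-inside-retry loop with a StreamSectionReader: each chunk becomes an independently seekable section, so a failed PUT can rewind and retry without re-reading the whole stream. A simplified, self-contained sketch of that pattern; uploadChunk and the inner loop stand in for the driver's retry.Do plus HTTP PUT, and the per-chunk buffer stands in for whatever reuse FreeSectionReader provides:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"strings"
)

// uploadChunk simulates a transient failure on the first attempt,
// consuming part of the reader before erroring out.
func uploadChunk(rd io.Reader, attempt int) error {
	if attempt == 0 {
		io.CopyN(io.Discard, rd, 3)
		return errors.New("transient network error")
	}
	b, _ := io.ReadAll(rd)
	fmt.Printf("uploaded %q\n", b)
	return nil
}

func main() {
	data := "0123456789abcdef"
	src := strings.NewReader(data)
	const chunk = int64(6)
	size := int64(len(data))
	var finish int64
	for finish < size {
		byteSize := min(size-finish, chunk) // min is built in since Go 1.21
		buf := make([]byte, byteSize)
		if _, err := io.ReadFull(src, buf); err != nil {
			panic(err)
		}
		rd := bytes.NewReader(buf)
		var err error
		for attempt := 0; attempt < 3; attempt++ {
			rd.Seek(0, io.SeekStart) // rewind before every retry
			if err = uploadChunk(rd, attempt); err == nil {
				break
			}
		}
		if err != nil {
			panic(err)
		}
		finish += byteSize
	}
}

The key point the rewrite captures is that a retry must restart the chunk from byte zero; rewinding a buffered section is cheap, whereas the old code had to allocate and refill byteData inside the retry closure.
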
@@ -1,7 +1,6 @@
 package onedrive_app
 
 import (
-	"bytes"
 	"context"
 	"errors"
 	"fmt"
@@ -15,6 +14,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
@@ -152,26 +152,29 @@ func (d *OnedriveAPP) upBig(ctx context.Context, dstDir model.Obj, stream model.
 	if err != nil {
 		return err
 	}
+	DEFAULT := d.ChunkSize * 1024 * 1024
+	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
+	if err != nil {
+		return err
+	}
+
 	uploadUrl := jsoniter.Get(res, "uploadUrl").ToString()
 	var finish int64 = 0
-	DEFAULT := d.ChunkSize * 1024 * 1024
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
-		err = retry.Do(
-			func() error {
 		utils.Log.Debugf("[OnedriveAPP] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		byteData := make([]byte, byteSize)
-		n, err := io.ReadFull(stream, byteData)
-		utils.Log.Debug(err, n)
+		rd, err := ss.GetSectionReader(finish, byteSize)
 		if err != nil {
 			return err
 		}
-		req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl,
-			driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+		err = retry.Do(
+			func() error {
+				rd.Seek(0, io.SeekStart)
+				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, rd))
 				if err != nil {
 					return err
 				}
@@ -197,6 +200,7 @@ func (d *OnedriveAPP) upBig(ctx context.Context, dstDir model.Obj, stream model.
 			retry.DelayType(retry.BackOffDelay),
 			retry.Delay(time.Second),
 		)
+		ss.FreeSectionReader(rd)
 		if err != nil {
 			return err
 		}
@@ -38,14 +38,14 @@ func (d *OnedriveSharelink) Init(ctx context.Context) error {
 	d.cron = cron.NewCron(time.Hour * 1)
 	d.cron.Do(func() {
 		var err error
-		d.Headers, err = d.getHeaders()
+		d.Headers, err = d.getHeaders(ctx)
 		if err != nil {
 			log.Errorf("%+v", err)
 		}
 	})
 
 	// Get initial headers
-	d.Headers, err = d.getHeaders()
+	d.Headers, err = d.getHeaders(ctx)
 	if err != nil {
 		return err
 	}
@@ -59,7 +59,7 @@ func (d *OnedriveSharelink) Drop(ctx context.Context) error {
 
 func (d *OnedriveSharelink) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
 	path := dir.GetPath()
-	files, err := d.getFiles(path)
+	files, err := d.getFiles(ctx, path)
 	if err != nil {
 		return nil, err
 	}
@@ -82,7 +82,7 @@ func (d *OnedriveSharelink) Link(ctx context.Context, file model.Obj, args model
 	if d.HeaderTime < time.Now().Unix()-1800 {
 		var err error
 		log.Debug("headers are older than 30 minutes, get new headers")
-		header, err = d.getHeaders()
+		header, err = d.getHeaders(ctx)
 		if err != nil {
 			return nil, err
 		}
@@ -1,6 +1,7 @@
 package onedrive_sharelink
 
 import (
+	"context"
 	"crypto/tls"
 	"encoding/json"
 	"fmt"
@@ -131,7 +132,7 @@ func getAttrValue(n *html.Node, key string) string {
 }
 
 // getHeaders constructs and returns the necessary HTTP headers for accessing the OneDrive share link
-func (d *OnedriveSharelink) getHeaders() (http.Header, error) {
+func (d *OnedriveSharelink) getHeaders(ctx context.Context) (http.Header, error) {
 	header := http.Header{}
 	header.Set("User-Agent", base.UserAgent)
 	header.Set("accept-language", "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6")
@@ -142,7 +143,7 @@ func (d *OnedriveSharelink) getHeaders() (http.Header, error) {
 	if d.ShareLinkPassword == "" {
 		// Create a no-redirect client
 		clientNoDirect := NewNoRedirectCLient()
-		req, err := http.NewRequest("GET", d.ShareLinkURL, nil)
+		req, err := http.NewRequestWithContext(ctx, http.MethodGet, d.ShareLinkURL, nil)
 		if err != nil {
 			return nil, err
 		}
@@ -180,9 +181,9 @@ func (d *OnedriveSharelink) getHeaders() (http.Header, error) {
 }
 
 // getFiles retrieves the files from the OneDrive share link at the specified path
-func (d *OnedriveSharelink) getFiles(path string) ([]Item, error) {
+func (d *OnedriveSharelink) getFiles(ctx context.Context, path string) ([]Item, error) {
 	clientNoDirect := NewNoRedirectCLient()
-	req, err := http.NewRequest("GET", d.ShareLinkURL, nil)
+	req, err := http.NewRequestWithContext(ctx, http.MethodGet, d.ShareLinkURL, nil)
 	if err != nil {
 		return nil, err
 	}
@@ -221,11 +222,11 @@ func (d *OnedriveSharelink) getFiles(path string) ([]Item, error) {
 	// Get redirectUrl
 	answer, err := clientNoDirect.Do(req)
 	if err != nil {
-		d.Headers, err = d.getHeaders()
+		d.Headers, err = d.getHeaders(ctx)
 		if err != nil {
 			return nil, err
 		}
-		return d.getFiles(path)
+		return d.getFiles(ctx, path)
 	}
 	defer answer.Body.Close()
 	re := regexp.MustCompile(`templateUrl":"(.*?)"`)
@@ -290,7 +291,7 @@ func (d *OnedriveSharelink) getFiles(path string) ([]Item, error) {
 
 	client := &http.Client{}
 	postUrl := strings.Join(redirectSplitURL[:len(redirectSplitURL)-3], "/") + "/_api/v2.1/graphql"
-	req, err = http.NewRequest("POST", postUrl, strings.NewReader(graphqlVar))
+	req, err = http.NewRequest(http.MethodPost, postUrl, strings.NewReader(graphqlVar))
 	if err != nil {
 		return nil, err
 	}
@@ -298,11 +299,11 @@ func (d *OnedriveSharelink) getFiles(path string) ([]Item, error) {
 
 	resp, err := client.Do(req)
 	if err != nil {
-		d.Headers, err = d.getHeaders()
+		d.Headers, err = d.getHeaders(ctx)
 		if err != nil {
 			return nil, err
 		}
-		return d.getFiles(path)
+		return d.getFiles(ctx, path)
 	}
 	defer resp.Body.Close()
 	var graphqlReq GraphQLRequest
@@ -323,31 +324,31 @@ func (d *OnedriveSharelink) getFiles(path string) ([]Item, error) {
 
 	graphqlReqNEW := GraphQLNEWRequest{}
 	postUrl = strings.Join(redirectSplitURL[:len(redirectSplitURL)-3], "/") + "/_api/web/GetListUsingPath(DecodedUrl=@a1)/RenderListDataAsStream" + nextHref
-	req, _ = http.NewRequest("POST", postUrl, strings.NewReader(renderListDataAsStreamVar))
+	req, _ = http.NewRequest(http.MethodPost, postUrl, strings.NewReader(renderListDataAsStreamVar))
 	req.Header = tempHeader
 
 	resp, err := client.Do(req)
 	if err != nil {
-		d.Headers, err = d.getHeaders()
+		d.Headers, err = d.getHeaders(ctx)
 		if err != nil {
 			return nil, err
 		}
-		return d.getFiles(path)
+		return d.getFiles(ctx, path)
 	}
 	defer resp.Body.Close()
 	json.NewDecoder(resp.Body).Decode(&graphqlReqNEW)
 	for graphqlReqNEW.ListData.NextHref != "" {
 		graphqlReqNEW = GraphQLNEWRequest{}
 		postUrl = strings.Join(redirectSplitURL[:len(redirectSplitURL)-3], "/") + "/_api/web/GetListUsingPath(DecodedUrl=@a1)/RenderListDataAsStream" + nextHref
-		req, _ = http.NewRequest("POST", postUrl, strings.NewReader(renderListDataAsStreamVar))
+		req, _ = http.NewRequest(http.MethodPost, postUrl, strings.NewReader(renderListDataAsStreamVar))
 		req.Header = tempHeader
 		resp, err := client.Do(req)
 		if err != nil {
-			d.Headers, err = d.getHeaders()
+			d.Headers, err = d.getHeaders(ctx)
 			if err != nil {
 				return nil, err
 			}
-			return d.getFiles(path)
+			return d.getFiles(ctx, path)
 		}
 		defer resp.Body.Close()
 		json.NewDecoder(resp.Body).Decode(&graphqlReqNEW)
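
The theme of the onedrive_sharelink hunks is threading a context.Context into every outgoing HTTP call via http.NewRequestWithContext, so cancelling the caller aborts in-flight network requests. A small self-contained sketch of the idiom; the URL and function name are placeholders:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// fetchHeaders issues a context-aware GET and returns the response
// headers; cancellation or deadline expiry surfaces as the Do error.
func fetchHeaders(ctx context.Context, shareURL string) (http.Header, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, shareURL, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return resp.Header, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if _, err := fetchHeaders(ctx, "https://example.com"); err != nil {
		fmt.Println("request aborted or failed:", err)
	}
}
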
drivers/openlist_share/driver.go (new file, 181 lines)
@@ -0,0 +1,181 @@
+package openlist_share
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"net/url"
+	stdpath "path"
+	"strings"
+
+	"github.com/OpenListTeam/OpenList/v4/internal/driver"
+	"github.com/OpenListTeam/OpenList/v4/internal/errs"
+	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+	"github.com/OpenListTeam/OpenList/v4/server/common"
+	"github.com/go-resty/resty/v2"
+)
+
+type OpenListShare struct {
+	model.Storage
+	Addition
+	serverArchivePreview bool
+}
+
+func (d *OpenListShare) Config() driver.Config {
+	return config
+}
+
+func (d *OpenListShare) GetAddition() driver.Additional {
+	return &d.Addition
+}
+
+func (d *OpenListShare) Init(ctx context.Context) error {
+	d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
+	var settings common.Resp[map[string]string]
+	_, _, err := d.request("/public/settings", http.MethodGet, func(req *resty.Request) {
+		req.SetResult(&settings)
+	})
+	if err != nil {
+		return err
+	}
+	d.serverArchivePreview = settings.Data["share_archive_preview"] == "true"
+	return nil
+}
+
+func (d *OpenListShare) Drop(ctx context.Context) error {
+	return nil
+}
+
+func (d *OpenListShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
+	var resp common.Resp[FsListResp]
+	_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
+		req.SetResult(&resp).SetBody(ListReq{
+			PageReq: model.PageReq{
+				Page:    1,
+				PerPage: 0,
+			},
+			Path:     stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), dir.GetPath()),
+			Password: d.Pwd,
+			Refresh:  false,
+		})
+	})
+	if err != nil {
+		return nil, err
+	}
+	var files []model.Obj
+	for _, f := range resp.Data.Content {
+		file := model.ObjThumb{
+			Object: model.Object{
+				Name:     f.Name,
+				Modified: f.Modified,
+				Ctime:    f.Created,
+				Size:     f.Size,
+				IsFolder: f.IsDir,
+				HashInfo: utils.FromString(f.HashInfo),
+			},
+			Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
+		}
+		files = append(files, &file)
+	}
+	return files, nil
+}
+
+func (d *OpenListShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
+	path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, file.GetPath()))
+	u := fmt.Sprintf("%s/sd%s?pwd=%s", d.Address, path, d.Pwd)
+	return &model.Link{URL: u}, nil
+}
+
+func (d *OpenListShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
+	if !d.serverArchivePreview || !d.ForwardArchiveReq {
+		return nil, errs.NotImplement
+	}
+	var resp common.Resp[ArchiveMetaResp]
+	_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
+		req.SetResult(&resp).SetBody(ArchiveMetaReq{
+			ArchivePass: args.Password,
+			Path:        stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
+			Password:    d.Pwd,
+			Refresh:     false,
+		})
+	})
+	if code == 202 {
+		return nil, errs.WrongArchivePassword
+	}
+	if err != nil {
+		return nil, err
+	}
+	var tree []model.ObjTree
+	if resp.Data.Content != nil {
+		tree = make([]model.ObjTree, 0, len(resp.Data.Content))
+		for _, content := range resp.Data.Content {
+			tree = append(tree, &content)
+		}
+	}
+	return &model.ArchiveMetaInfo{
+		Comment:   resp.Data.Comment,
+		Encrypted: resp.Data.Encrypted,
+		Tree:      tree,
+	}, nil
+}
+
+func (d *OpenListShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
+	if !d.serverArchivePreview || !d.ForwardArchiveReq {
+		return nil, errs.NotImplement
+	}
+	var resp common.Resp[ArchiveListResp]
+	_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
+		req.SetResult(&resp).SetBody(ArchiveListReq{
+			ArchiveMetaReq: ArchiveMetaReq{
+				ArchivePass: args.Password,
+				Path:        stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
+				Password:    d.Pwd,
+				Refresh:     false,
+			},
+			PageReq: model.PageReq{
+				Page:    1,
+				PerPage: 0,
+			},
+			InnerPath: args.InnerPath,
+		})
+	})
+	if code == 202 {
+		return nil, errs.WrongArchivePassword
+	}
+	if err != nil {
+		return nil, err
+	}
+	var files []model.Obj
+	for _, f := range resp.Data.Content {
+		file := model.ObjThumb{
+			Object: model.Object{
+				Name:     f.Name,
+				Modified: f.Modified,
+				Ctime:    f.Created,
+				Size:     f.Size,
+				IsFolder: f.IsDir,
+				HashInfo: utils.FromString(f.HashInfo),
+			},
+			Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
+		}
+		files = append(files, &file)
+	}
+	return files, nil
+}
+
+func (d *OpenListShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
+	if !d.serverArchivePreview || !d.ForwardArchiveReq {
+		return nil, errs.NotSupport
+	}
+	path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, obj.GetPath()))
+	u := fmt.Sprintf("%s/sad%s?pwd=%s&inner=%s&pass=%s",
+		d.Address,
+		path,
+		d.Pwd,
+		utils.EncodePath(args.InnerPath, true),
+		url.QueryEscape(args.Password))
+	return &model.Link{URL: u}, nil
+}
+
+var _ driver.Driver = (*OpenListShare)(nil)
drivers/openlist_share/meta.go (new file, 27 lines)
@@ -0,0 +1,27 @@
+package openlist_share
+
+import (
+	"github.com/OpenListTeam/OpenList/v4/internal/driver"
+	"github.com/OpenListTeam/OpenList/v4/internal/op"
+)
+
+type Addition struct {
+	driver.RootPath
+	Address           string `json:"url" required:"true"`
+	ShareId           string `json:"sid" required:"true"`
+	Pwd               string `json:"pwd"`
+	ForwardArchiveReq bool   `json:"forward_archive_requests" default:"true"`
+}
+
+var config = driver.Config{
+	Name:        "OpenListShare",
+	LocalSort:   true,
+	NoUpload:    true,
+	DefaultRoot: "/",
+}
+
+func init() {
+	op.RegisterDriver(func() driver.Driver {
+		return &OpenListShare{}
+	})
+}
drivers/openlist_share/types.go (new file, 111 lines)
@@ -0,0 +1,111 @@
+package openlist_share
+
+import (
+	"time"
+
+	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+)
+
+type ListReq struct {
+	model.PageReq
+	Path     string `json:"path" form:"path"`
+	Password string `json:"password" form:"password"`
+	Refresh  bool   `json:"refresh"`
+}
+
+type ObjResp struct {
+	Name     string    `json:"name"`
+	Size     int64     `json:"size"`
+	IsDir    bool      `json:"is_dir"`
+	Modified time.Time `json:"modified"`
+	Created  time.Time `json:"created"`
+	Sign     string    `json:"sign"`
+	Thumb    string    `json:"thumb"`
+	Type     int       `json:"type"`
+	HashInfo string    `json:"hashinfo"`
+}
+
+type FsListResp struct {
+	Content  []ObjResp `json:"content"`
+	Total    int64     `json:"total"`
+	Readme   string    `json:"readme"`
+	Write    bool      `json:"write"`
+	Provider string    `json:"provider"`
+}
+
+type ArchiveMetaReq struct {
+	ArchivePass string `json:"archive_pass"`
+	Password    string `json:"password"`
+	Path        string `json:"path"`
+	Refresh     bool   `json:"refresh"`
+}
+
+type TreeResp struct {
+	ObjResp
+	Children  []TreeResp `json:"children"`
+	hashCache *utils.HashInfo
+}
+
+func (t *TreeResp) GetSize() int64 {
+	return t.Size
+}
+
+func (t *TreeResp) GetName() string {
+	return t.Name
+}
+
+func (t *TreeResp) ModTime() time.Time {
+	return t.Modified
+}
+
+func (t *TreeResp) CreateTime() time.Time {
+	return t.Created
+}
+
+func (t *TreeResp) IsDir() bool {
+	return t.ObjResp.IsDir
+}
+
+func (t *TreeResp) GetHash() utils.HashInfo {
+	return utils.FromString(t.HashInfo)
+}
+
+func (t *TreeResp) GetID() string {
+	return ""
+}
+
+func (t *TreeResp) GetPath() string {
+	return ""
+}
+
+func (t *TreeResp) GetChildren() []model.ObjTree {
+	ret := make([]model.ObjTree, 0, len(t.Children))
+	for _, child := range t.Children {
+		ret = append(ret, &child)
+	}
+	return ret
+}
+
+func (t *TreeResp) Thumb() string {
+	return t.ObjResp.Thumb
+}
+
+type ArchiveMetaResp struct {
+	Comment   string     `json:"comment"`
+	Encrypted bool       `json:"encrypted"`
+	Content   []TreeResp `json:"content"`
+	RawURL    string     `json:"raw_url"`
+	Sign      string     `json:"sign"`
+}
+
+type ArchiveListReq struct {
+	model.PageReq
+	ArchiveMetaReq
+	InnerPath string `json:"inner_path"`
+}
+
+type ArchiveListResp struct {
+	Content []ObjResp `json:"content"`
+	Total   int64     `json:"total"`
+}
drivers/openlist_share/util.go (new file, 32 lines)
@@ -0,0 +1,32 @@
+package openlist_share
+
+import (
+	"fmt"
+
+	"github.com/OpenListTeam/OpenList/v4/drivers/base"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+)
+
+func (d *OpenListShare) request(api, method string, callback base.ReqCallback) ([]byte, int, error) {
+	url := d.Address + "/api" + api
+	req := base.RestyClient.R()
+	if callback != nil {
+		callback(req)
+	}
+	res, err := req.Execute(method, url)
+	if err != nil {
+		code := 0
+		if res != nil {
+			code = res.StatusCode()
+		}
+		return nil, code, err
+	}
+	if res.StatusCode() >= 400 {
+		return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
+	}
+	code := utils.Json.Get(res.Body(), "code").ToInt()
+	if code != 200 {
+		return nil, code, fmt.Errorf("request failed, code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
+	}
+	return res.Body(), 200, nil
+}
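
The new request helper enforces a two-layer success convention: the HTTP status line must be below 400, and the JSON envelope's "code" field must equal 200. A stand-alone sketch of that checking logic using only the stdlib, with the envelope field names taken from the diff; everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// checkResp validates first the transport-level status, then the
// application-level code embedded in the JSON body.
func checkResp(status int, body []byte) ([]byte, int, error) {
	if status >= 400 {
		return nil, status, fmt.Errorf("request failed, status: %d", status)
	}
	var envelope struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	}
	if err := json.Unmarshal(body, &envelope); err != nil {
		return nil, 0, err
	}
	if envelope.Code != 200 {
		return nil, envelope.Code, fmt.Errorf("request failed, code: %d, message: %s", envelope.Code, envelope.Message)
	}
	return body, 200, nil
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"code":200,"message":"success","data":{}}`)
	}))
	defer srv.Close()
	resp, _ := http.Get(srv.URL)
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	_, code, err := checkResp(resp.StatusCode, body)
	fmt.Println(code, err)
}
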
@@ -12,6 +12,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	hash_extend "github.com/OpenListTeam/OpenList/v4/pkg/utils/hash"
 	"github.com/go-resty/resty/v2"
@@ -212,15 +213,11 @@ func (d *PikPak) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *PikPak) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	hi := stream.GetHash()
-	sha1Str := hi.GetHash(hash_extend.GCID)
-	if len(sha1Str) < hash_extend.GCID.Width {
-		tFile, err := stream.CacheFullInTempFile()
-		if err != nil {
-			return err
-		}
+	sha1Str := stream.GetHash().GetHash(hash_extend.GCID)
 
-		sha1Str, err = utils.HashFile(hash_extend.GCID, tFile, stream.GetSize())
+	if len(sha1Str) < hash_extend.GCID.Width {
+		var err error
+		_, sha1Str, err = streamPkg.CacheFullAndHash(stream, &up, hash_extend.GCID, stream.GetSize())
 		if err != nil {
 			return err
 		}
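
The PikPak hunk collapses cache-then-hash into a single pass: when the stream lacks the GCID digest the API needs, the bytes are hashed while they are being cached, instead of re-reading the temp file afterwards. A hedged sketch of that pattern with SHA-1 standing in for the GCID algorithm; all names are illustrative:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// cacheFullAndHash drains r into a temp file and a hasher in one
// pass, returning the rewound file and the hex digest.
func cacheFullAndHash(r io.Reader) (*os.File, string, error) {
	f, err := os.CreateTemp("", "pikpak-*")
	if err != nil {
		return nil, "", err
	}
	h := sha1.New()
	// Bytes land in the temp file and the hasher together.
	if _, err := io.Copy(io.MultiWriter(f, h), r); err != nil {
		f.Close()
		return nil, "", err
	}
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		f.Close()
		return nil, "", err
	}
	return f, hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	f, sum, err := cacheFullAndHash(strings.NewReader("song bytes"))
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println("gcid-style digest:", sum)
}
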
@@ -438,20 +438,19 @@ func (d *PikPak) UploadByOSS(ctx context.Context, params *S3Params, s model.File
 }
 
 func (d *PikPak) UploadByMultipart(ctx context.Context, params *S3Params, fileSize int64, s model.FileStreamer, up driver.UpdateProgress) error {
+	tmpF, err := s.CacheFullAndWriter(&up, nil)
+	if err != nil {
+		return err
+	}
+
 	var (
 		chunks    []oss.FileChunk
 		parts     []oss.UploadPart
 		imur      oss.InitiateMultipartUploadResult
 		ossClient *oss.Client
 		bucket    *oss.Bucket
-		err       error
 	)
 
-	tmpF, err := s.CacheFullInTempFile()
-	if err != nil {
-		return err
-	}
-
 	if ossClient, err = oss.New(params.Endpoint, params.AccessKeyID, params.AccessKeySecret); err != nil {
 		return err
 	}
@@ -14,7 +14,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
-	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/go-resty/resty/v2"
 )
@@ -158,9 +157,7 @@ func (d *QuarkOpen) Put(ctx context.Context, dstDir model.Obj, stream model.File
 	}
 
 	if len(writers) > 0 {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, err := streamPkg.CacheFullInTempFileAndWriter(stream, cacheFileProgress, io.MultiWriter(writers...))
+		_, err := stream.CacheFullAndWriter(&up, io.MultiWriter(writers...))
 		if err != nil {
 			return err
 		}
@@ -8,14 +8,15 @@ import (
 	"encoding/hex"
 	"errors"
 	"fmt"
-	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
-	"github.com/google/uuid"
 	"io"
 	"net/http"
 	"strconv"
 	"strings"
 	"time"
 
+	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
+	"github.com/google/uuid"
+
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
@@ -244,11 +245,8 @@ func (d *QuarkOpen) generateProofCode(file model.FileStreamer, proofSeed string,
 	// read the data
 	buf := make([]byte, length)
 	n, err := io.ReadFull(reader, buf)
-	if errors.Is(err, io.ErrUnexpectedEOF) {
-		return "", fmt.Errorf("can't read data, expected=%d, got=%d", length, n)
-	}
-	if err != nil {
-		return "", fmt.Errorf("failed to read data: %w", err)
+	if n != int(length) {
+		return "", fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
 	}
 
 	// Base64-encode
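
This hunk leans on the io.ReadFull contract: ReadFull returns n == len(buf) exactly when err == nil, so one length check covers both short reads (io.ErrUnexpectedEOF) and hard errors, and the %w verb keeps the cause inspectable with errors.Is. A minimal demonstration:

package main

import (
	"fmt"
	"io"
	"strings"
)

// readExact reads exactly length bytes or reports how far it got,
// wrapping the underlying ReadFull error.
func readExact(r io.Reader, length int) ([]byte, error) {
	buf := make([]byte, length)
	n, err := io.ReadFull(r, buf)
	if n != length {
		return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
	}
	return buf, nil
}

func main() {
	if _, err := readExact(strings.NewReader("short"), 16); err != nil {
		fmt.Println(err) // wraps io.ErrUnexpectedEOF
	}
}
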
@@ -13,7 +13,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
-	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
@@ -144,9 +143,7 @@ func (d *QuarkOrUC) Put(ctx context.Context, dstDir model.Obj, stream model.File
 	}
 
 	if len(writers) > 0 {
-		cacheFileProgress := model.UpdateProgressWithRange(up, 0, 50)
-		up = model.UpdateProgressWithRange(up, 50, 100)
-		_, err := streamPkg.CacheFullInTempFileAndWriter(stream, cacheFileProgress, io.MultiWriter(writers...))
+		_, err := stream.CacheFullAndWriter(&up, io.MultiWriter(writers...))
 		if err != nil {
 			return err
 		}
@@ -149,12 +149,18 @@ func (d *QuarkOrUC) getTranscodingLink(file model.Obj) (*model.Link, error) {
 		return nil, err
 	}
 
+	for _, info := range resp.Data.VideoList {
+		if info.VideoInfo.URL != "" {
 			return &model.Link{
-				URL:           resp.Data.VideoList[0].VideoInfo.URL,
-				ContentLength: resp.Data.VideoList[0].VideoInfo.Size,
+				URL:           info.VideoInfo.URL,
+				ContentLength: info.VideoInfo.Size,
 				Concurrency:   3,
 				PartSize:      10 * utils.MB,
 			}, nil
+		}
+	}
+
+	return nil, errors.New("no link found")
 }
 
 func (d *QuarkOrUC) upPre(file model.FileStreamer, parentId string) (UpPreResp, error) {
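
Both Quark transcoding-link hunks replace blind indexing of element 0 with a scan for the first entry whose URL is non-empty, failing explicitly when none exists. A trimmed sketch of that selection logic; the struct is a stand-in for the drivers' response types:

package main

import (
	"errors"
	"fmt"
)

type videoInfo struct {
	URL  string
	Size int64
}

// pickPlayable returns the first transcode entry with a usable URL.
func pickPlayable(list []videoInfo) (*videoInfo, error) {
	for i := range list {
		if list[i].URL != "" {
			return &list[i], nil
		}
	}
	return nil, errors.New("no link found")
}

func main() {
	list := []videoInfo{{URL: ""}, {URL: "https://cdn.example/video.m3u8", Size: 1 << 20}}
	link, err := pickPlayable(list)
	if err != nil {
		panic(err)
	}
	fmt.Println(link.URL)
}

The old code would have returned an empty URL (or panicked on an empty slice) whenever the first transcode entry was not ready.
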
@@ -3,6 +3,7 @@ package quark_uc_tv
 import (
 	"context"
 	"fmt"
+	"net/http"
 	"strconv"
 	"time"
 
@@ -96,7 +97,7 @@ func (d *QuarkUCTV) List(ctx context.Context, dir model.Obj, args model.ListArgs
 	pageSize := int64(100)
 	for {
 		var filesData FilesData
-		_, err := d.request(ctx, "/file", "GET", func(req *resty.Request) {
+		_, err := d.request(ctx, "/file", http.MethodGet, func(req *resty.Request) {
 			req.SetQueryParams(map[string]string{
 				"method":     "list",
 				"parent_fid": dir.GetID(),
|
@ -95,7 +95,7 @@ func (d *QuarkUCTV) getLoginCode(ctx context.Context) (string, error) {
|
|||||||
QrData string `json:"qr_data"`
|
QrData string `json:"qr_data"`
|
||||||
QueryToken string `json:"query_token"`
|
QueryToken string `json:"query_token"`
|
||||||
}
|
}
|
||||||
_, err := d.request(ctx, pathname, "GET", func(req *resty.Request) {
|
_, err := d.request(ctx, pathname, http.MethodGet, func(req *resty.Request) {
|
||||||
req.SetQueryParams(map[string]string{
|
req.SetQueryParams(map[string]string{
|
||||||
"auth_type": "code",
|
"auth_type": "code",
|
||||||
"client_id": d.conf.clientID,
|
"client_id": d.conf.clientID,
|
||||||
@@ -123,7 +123,7 @@ func (d *QuarkUCTV) getCode(ctx context.Context) (string, error) {
 		CommonRsp
 		Code string `json:"code"`
 	}
-	_, err := d.request(ctx, pathname, "GET", func(req *resty.Request) {
+	_, err := d.request(ctx, pathname, http.MethodGet, func(req *resty.Request) {
 		req.SetQueryParams(map[string]string{
 			"client_id": d.conf.clientID,
 			"scope":     "netdisk",
@@ -138,7 +138,7 @@ func (d *QuarkUCTV) getCode(ctx context.Context) (string, error) {
 
 func (d *QuarkUCTV) getRefreshTokenByTV(ctx context.Context, code string, isRefresh bool) error {
 	pathname := "/token"
-	_, _, reqID := d.generateReqSign("POST", pathname, d.conf.signKey)
+	_, _, reqID := d.generateReqSign(http.MethodPost, pathname, d.conf.signKey)
 	u := d.conf.codeApi + pathname
 	var resp RefreshTokenAuthResp
 	body := map[string]string{
@@ -228,12 +228,18 @@ func (d *QuarkUCTV) getTranscodingLink(ctx context.Context, file model.Obj) (*mo
 		return nil, err
 	}
 
+	for _, info := range fileLink.Data.VideoInfo {
+		if info.URL != "" {
 			return &model.Link{
-				URL:         fileLink.Data.VideoInfo[0].URL,
+				URL:           info.URL,
+				ContentLength: info.Size,
 				Concurrency:   3,
 				PartSize:      10 * utils.MB,
-				ContentLength: fileLink.Data.VideoInfo[0].Size,
 			}, nil
+		}
+	}
+
+	return nil, errors.New("no link found")
 }
 
 func (d *QuarkUCTV) getDownloadLink(ctx context.Context, file model.Obj) (*model.Link, error) {
@@ -38,7 +38,7 @@ func getCredentials(AccessKey, SecretKey string) (rst Credentials, err error) {
 	sign := hex.EncodeToString(hmacObj.Sum(nil))
 	Authorization := "TOKEN " + AccessKey + ":" + sign
 
-	req, err := http.NewRequest("POST", "https://api.dogecloud.com"+apiPath, strings.NewReader(string(reqBody)))
+	req, err := http.NewRequest(http.MethodPost, "https://api.dogecloud.com"+apiPath, strings.NewReader(string(reqBody)))
 	if err != nil {
 		return rst, err
 	}
@@ -63,20 +63,20 @@ func (d *SFTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*
 	if err != nil {
 		return nil, err
 	}
-	if remoteFile != nil && !d.Config().OnlyLinkMFile {
+	mFile := &stream.RateLimitFile{
+		File:    remoteFile,
+		Limiter: stream.ServerDownloadLimit,
+		Ctx:     ctx,
+	}
+	if !d.Config().OnlyLinkMFile {
 		return &model.Link{
-			RangeReader: &model.FileRangeReader{
-				RangeReaderIF: stream.RateLimitRangeReaderFunc(stream.GetRangeReaderFromMFile(file.GetSize(), remoteFile)),
-			},
+			RangeReader: stream.GetRangeReaderFromMFile(file.GetSize(), mFile),
 			SyncClosers: utils.NewSyncClosers(remoteFile),
 		}, nil
 	}
 	return &model.Link{
-		MFile: &stream.RateLimitFile{
-			File:    remoteFile,
-			Limiter: stream.ServerDownloadLimit,
-			Ctx:     ctx,
-		},
+		MFile:       mFile,
+		SyncClosers: utils.NewSyncClosers(remoteFile),
 	}, nil
 }
 
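
The SFTP (and SMB) Link hunks build one rate-limited view of the remote file up front and then expose it either as a range reader or as a direct MFile. A sketch of the shape of that change; the crude sleep-based limiter below is a deliberately simplified stand-in for the project's stream.RateLimitFile and ServerDownloadLimit:

package main

import (
	"fmt"
	"io"
	"strings"
	"time"
)

// rateLimitedReaderAt paces reads so large downloads cannot
// saturate the server's bandwidth budget.
type rateLimitedReaderAt struct {
	r          io.ReaderAt
	bytesPerMs int
}

func (l *rateLimitedReaderAt) ReadAt(p []byte, off int64) (int, error) {
	n, err := l.r.ReadAt(p, off)
	// Crude pacing: sleep long enough that n bytes fit the budget.
	time.Sleep(time.Duration(n/l.bytesPerMs) * time.Millisecond)
	return n, err
}

func main() {
	remote := strings.NewReader("remote file contents")
	limited := &rateLimitedReaderAt{r: remote, bytesPerMs: 4}
	// One wrapped handle serves both access styles: here, a ranged view.
	section := io.NewSectionReader(limited, 7, 4)
	b, _ := io.ReadAll(section)
	fmt.Printf("%q\n", b)
}

Hoisting the wrapper out of the two branches removes the duplicated construction the old code had, and both branches now also register the remote handle for closing via SyncClosers.
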
@@ -13,8 +13,8 @@ import (
 // do others that not defined in Driver interface
 
 func (d *SFTP) initClient() error {
-	err, _, _ := singleflight.ErrorGroup.Do(fmt.Sprintf("SFTP.initClient:%p", d), func() (error, error) {
-		return d._initClient(), nil
+	_, err, _ := singleflight.AnyGroup.Do(fmt.Sprintf("SFTP.initClient:%p", d), func() (any, error) {
+		return nil, d._initClient()
 	})
 	return err
 }
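
The ErrorGroup-to-AnyGroup swap here (and in SMB's initFS below) moves the call to the classic singleflight shape, where Do returns (value, err, shared) and the closure returns (any, error), so concurrent callers with the same key share one execution. A sketch of the same shape using the well-known golang.org/x/sync/singleflight package; the key format mirrors the diff, the rest is illustrative:

package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

func initClient(addr string) error {
	// Concurrent callers with the same key share one execution;
	// only the error travels back, the value slot is unused.
	_, err, _ := group.Do("initClient:"+addr, func() (any, error) {
		fmt.Println("dialing", addr) // runs once per key at a time
		return nil, nil
	})
	return err
}

func main() {
	_ = initClient("sftp.example:22")
}
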
@@ -81,19 +81,20 @@ func (d *SMB) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*m
 		return nil, err
 	}
 	d.updateLastConnTime()
-	if remoteFile != nil && !d.Config().OnlyLinkMFile {
-		return &model.Link{
-			RangeReader: &model.FileRangeReader{
-				RangeReaderIF: stream.RateLimitRangeReaderFunc(stream.GetRangeReaderFromMFile(file.GetSize(), remoteFile)),
-			},
-		}, nil
-	}
-	return &model.Link{
-		MFile: &stream.RateLimitFile{
+	mFile := &stream.RateLimitFile{
 		File:    remoteFile,
 		Limiter: stream.ServerDownloadLimit,
 		Ctx:     ctx,
-	},
+	}
+	if !d.Config().OnlyLinkMFile {
+		return &model.Link{
+			RangeReader: stream.GetRangeReaderFromMFile(file.GetSize(), mFile),
+			SyncClosers: utils.NewSyncClosers(remoteFile),
+		}, nil
+	}
+	return &model.Link{
+		MFile:       mFile,
+		SyncClosers: utils.NewSyncClosers(remoteFile),
 	}, nil
 }
 
@@ -28,8 +28,8 @@ func (d *SMB) getLastConnTime() time.Time {
 }
 
 func (d *SMB) initFS() error {
-	err, _, _ := singleflight.ErrorGroup.Do(fmt.Sprintf("SMB.initFS:%p", d), func() (error, error) {
-		return d._initFS(), nil
+	_, err, _ := singleflight.AnyGroup.Do(fmt.Sprintf("SMB.initFS:%p", d), func() (any, error) {
+		return nil, d._initFS()
 	})
 	return err
 }
@@ -3,13 +3,17 @@ package strm
 import (
 	"context"
 	"errors"
+	"fmt"
+	stdpath "path"
 	"strings"
 
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/fs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/internal/sign"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+	"github.com/OpenListTeam/OpenList/v4/server/common"
 )
 
 type Strm struct {
@@ -18,6 +22,9 @@ type Strm struct {
 	pathMap     map[string][]string
 	autoFlatten bool
 	oneKey      string
+
+	supportSuffix  map[string]struct{}
+	downloadSuffix map[string]struct{}
 }
 
 func (d *Strm) Config() driver.Config {
@@ -51,12 +58,24 @@ func (d *Strm) Init(ctx context.Context) error {
 		d.autoFlatten = false
 	}
 
+	d.supportSuffix = supportSuffix()
 	if d.FilterFileTypes != "" {
 		types := strings.Split(d.FilterFileTypes, ",")
 		for _, ext := range types {
 			ext = strings.ToLower(strings.TrimSpace(ext))
 			if ext != "" {
-				supportSuffix[ext] = struct{}{}
+				d.supportSuffix[ext] = struct{}{}
+			}
+		}
+	}
+
+	d.downloadSuffix = downloadSuffix()
+	if d.DownloadFileTypes != "" {
+		downloadTypes := strings.Split(d.DownloadFileTypes, ",")
+		for _, ext := range downloadTypes {
+			ext = strings.ToLower(strings.TrimSpace(ext))
+			if ext != "" {
+				d.downloadSuffix[ext] = struct{}{}
 			}
 		}
 	}
@@ -65,6 +84,8 @@ func (d *Strm) Init(ctx context.Context) error {
 
 func (d *Strm) Drop(ctx context.Context) error {
 	d.pathMap = nil
+	d.downloadSuffix = nil
+	d.supportSuffix = nil
 	return nil
 }
 
@@ -82,10 +103,25 @@ func (d *Strm) Get(ctx context.Context, path string) (model.Obj, error) {
 		return nil, errs.ObjectNotFound
 	}
 	for _, dst := range dsts {
-		obj, err := d.get(ctx, path, dst, sub)
-		if err == nil {
-			return obj, nil
+		reqPath := stdpath.Join(dst, sub)
+		obj, err := fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
+		if err != nil {
+			continue
 		}
+		// fs.Get succeeded, so this is not a strm-generated path and must be returned directly
+		size := int64(0)
+		if !obj.IsDir() {
+			size = obj.GetSize()
+			path = reqPath // switch to the real path so Link can read it directly
+		}
+		return &model.Object{
+			Path:     path,
+			Name:     obj.GetName(),
+			Size:     size,
+			Modified: obj.ModTime(),
+			IsFolder: obj.IsDir(),
+			HashInfo: obj.GetHash(),
+		}, nil
 	}
 	return nil, errs.ObjectNotFound
 }
@@ -112,34 +148,34 @@ func (d *Strm) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]
 }
 
 func (d *Strm) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
+	if file.GetID() == "strm" {
 		link := d.getLink(ctx, file.GetPath())
 		return &model.Link{
 			MFile: strings.NewReader(link),
 		}, nil
 	}
+	// ftp,s3
+	if common.GetApiUrl(ctx) == "" {
+		args.Redirect = false
+	}
+	reqPath := file.GetPath()
+	link, _, err := d.link(ctx, reqPath, args)
+	if err != nil {
+		return nil, err
+	}
 
-func (d *Strm) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
-	return errors.New("strm Driver cannot make dir")
-}
+	if link == nil {
+		return &model.Link{
+			URL: fmt.Sprintf("%s/p%s?sign=%s",
+				common.GetApiUrl(ctx),
+				utils.EncodePath(reqPath, true),
+				sign.Sign(reqPath)),
+		}, nil
+	}
 
-func (d *Strm) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
-	return errors.New("strm Driver cannot move file")
-}
-
-func (d *Strm) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
-	return errors.New("strm Driver cannot rename file")
-}
-
-func (d *Strm) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
-	return errors.New("strm Driver cannot copy file")
-}
-
-func (d *Strm) Remove(ctx context.Context, obj model.Obj) error {
-	return errors.New("strm Driver cannot remove file")
-}
-
-func (d *Strm) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
-	return errors.New("strm Driver cannot put file")
+	resultLink := *link
+	resultLink.SyncClosers = utils.NewSyncClosers(link)
+	return &resultLink, nil
 }
 
 var _ driver.Driver = (*Strm)(nil)
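
For real (non-.strm) files the new Strm Link path either proxies the upstream link or, when none is available, builds a signed /p URL that the gateway can serve. A sketch of that URL construction under stated assumptions: the signer is an HMAC stand-in for the project's sign.Sign, encodePath mimics utils.EncodePath by escaping segments while keeping slashes, and the base URL is a placeholder:

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"net/url"
	"strings"
)

// sign produces an HMAC-style signature over the request path.
func sign(secret, path string) string {
	h := hmac.New(sha256.New, []byte(secret))
	h.Write([]byte(path))
	return base64.URLEncoding.EncodeToString(h.Sum(nil))
}

// encodePath escapes each segment but keeps the slashes intact.
func encodePath(p string) string {
	segs := strings.Split(p, "/")
	for i, s := range segs {
		segs[i] = url.PathEscape(s)
	}
	return strings.Join(segs, "/")
}

func main() {
	apiURL := "https://openlist.example" // assumed base URL
	reqPath := "/music/a song.flac"
	u := fmt.Sprintf("%s/p%s?sign=%s", apiURL, encodePath(reqPath), sign("secret", reqPath))
	fmt.Println(u)
}
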
Some files were not shown because too many files have changed in this diff.