Compare commits


18 Commits

Author SHA1 Message Date
f029df88e4 Add Terms of Use and Privacy Policy links to READMEs
Added links to the Terms of Use and Privacy Policy in the English, Chinese, Japanese, and Dutch README files to provide users with easy access to legal information.
2025-07-01 14:00:36 +08:00
00f825d9e2 Update documentation links in README files
Replaced plain documentation URLs with labeled links and icons in English, Chinese, Japanese, and Dutch README files for improved clarity and user experience.
2025-07-01 13:53:29 +08:00
732dcfa5b1 fix format 2025-07-01 13:40:52 +08:00
e2ad8eabb8 Revert "test large name"
This reverts commit affedc845b.
2025-07-01 13:38:28 +08:00
affedc845b test large name 2025-07-01 13:37:43 +08:00
382cd6425f Add Dutch README and update language links
Added a new Dutch translation (README_nl.md) and updated language navigation links in the English, Chinese, and Japanese README files to include Dutch.
2025-07-01 13:34:42 +08:00
e880acb71d Add AGPL-3.0 license links to README files
Updated the English, Chinese, and Japanese README files to include direct links to the AGPL-3.0 license text and the LICENSE file for clarity and easier access.
2025-07-01 13:29:51 +08:00
a0d1eadf3e Move language and links sections below logo in READMEs
Repositioned the language selection and related links sections to appear after the logo and separator in README.md, README_cn.md, and README_ja.md for improved layout consistency.
2025-07-01 13:25:52 +08:00
70a0a32b7b Revise and unify README files across languages
Updated README.md, README_cn.md, and README_ja.md to improve structure, add navigation links, clarify project purpose, and unify feature lists. Enhanced formatting, added acknowledgments to original authors, and improved legal/disclaimer sections for consistency across English, Chinese, and Japanese documentation.
2025-07-01 13:23:36 +08:00
2f32120908 Update Go Report Card badge URL in README
Changed the Go Report Card badge to reference v3 instead of v4. This ensures the badge displays the correct status for the intended version.
2025-07-01 12:56:26 +08:00
0fdfa2b365 Update README.md 2025-07-01 12:53:43 +08:00
82713611c0 Update Go Report Card badge URL in README
Changed the Go Report Card badge link to remove the '/v3' suffix, ensuring it points to the correct repository path.
2025-07-01 12:53:22 +08:00
41acb3e865 Update project description in README
Revised the introductory paragraph to emphasize OpenList's resilience and community-driven nature as a fork of AList, highlighting its commitment to defending open source against trust-based attacks.
2025-07-01 12:52:46 +08:00
77aca6609a Update README.md 2025-07-01 12:50:45 +08:00
63a597f802 Improve README badge formatting and alignment
Reformatted the badge section in the README for better readability and visual alignment. Updated the div to use 'align="center"' and placed each badge on its own line with proper indentation.
2025-07-01 12:49:57 +08:00
fcf7530dd8 Update logo size and remove migration note in README
Set explicit width and height for the logo image and removed the note about migration progress, reflecting project updates.
2025-07-01 12:46:38 +08:00
5f0645ded8 Revert README header to HTML
Replaces markdown-based center alignment and badge/image syntax with HTML tags for better visual formatting and consistency in the README header.
2025-07-01 12:45:04 +08:00
0f7ba9599d Revise README formatting and update project info
Refactored the README to use markdown badge/link syntax, improved formatting, and clarified the disclaimer section. Updated Docker Deploy status, added a Contact Us section, and reordered the Contributors section for better project transparency and communication.
2025-07-01 12:42:59 +08:00
292 changed files with 5479 additions and 8272 deletions


@@ -25,11 +25,11 @@ body:
       - label: |
           我确认我的描述清晰,语法礼貌,能帮助开发者快速定位问题,并符合社区规则。
       - label: |
-          我已确认阅读了[OpenList文档](https://doc.oplist.org)。
+          我已确认阅读了[OpenList文档](https://docs.oplist.org)。
       - label: |
           我已确认没有重复的问题或讨论。
       - label: |
-          我已确认是`OpenList`的问题,而不是其他原因(例如 [网络](https://doc.oplist.org/faq/howto#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host-1) `依赖`或`操作`)。
+          我已确认是`OpenList`的问题,而不是其他原因(例如 [网络](https://docs.oplist.org/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) `依赖`或`操作`)。
       - label: |
           我认为此问题必须由`OpenList`处理,而非第三方。
       - label: |
@@ -72,7 +72,7 @@ body:
     attributes:
       label: 日志(可选)
       description: |
-        请复制粘贴错误日志,或者截图。(可隐藏隐私字段) [查看方法](https://doc.oplist.org/faq/howto#%E5%A6%82%E4%BD%95%E5%BF%AB%E9%80%9F%E5%AE%9A%E4%BD%8Dbug)
+        请复制粘贴错误日志,或者截图。(可隐藏隐私字段)
   - type: textarea
     id: reproduction
     attributes:


@@ -25,11 +25,11 @@ body:
      - label: |
          I confirm my description is clear, polite, helps developers quickly locate the issue, and complies with community rules.
      - label: |
-         I have read the [OpenList documentation](https://doc.oplist.org).
+         I have read the [OpenList documentation](https://docs.oplist.org).
      - label: |
          I confirm there are no duplicate issues or discussions.
      - label: |
-         I confirm this is an `OpenList` issue, not caused by other reasons (such as [network](https://doc.oplist.org/faq/howto#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host-1), dependencies, or operation).
+         I confirm this is an `OpenList` issue, not caused by other reasons (such as [network](https://docs.oplist.org/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host), dependencies, or operation).
      - label: |
          I believe this issue must be handled by `OpenList` and not by a third party.
      - label: |
@@ -72,7 +72,7 @@ body:
    attributes:
      label: Logs (optional)
      description: |
-       Please copy and paste any relevant log output or screenshots. (You may mask sensitive fields) [Guide](https://doc.oplist.org/faq/howto#how-to-quickly-locate-bugs)
+       Please copy and paste any relevant log output or screenshots. (You may mask sensitive fields)
  - type: textarea
    id: reproduction
    attributes:


@@ -19,7 +19,7 @@ body:
      - label: |
          我确认我的描述清晰,语法礼貌,能帮助开发者快速定位问题,并符合社区规则。
      - label: |
-         我已确认阅读了[OpenList文档](https://doc.oplist.org)。
+         我已确认阅读了[OpenList文档](https://docs.oplist.org)。
      - label: |
          我已确认没有重复的问题或讨论。
      - label: |


@@ -19,7 +19,7 @@ body:
      - label: |
          I confirm my description is clear, polite, helps developers quickly locate the issue, and complies with community rules.
      - label: |
-         I have read the [OpenList documentation](https://doc.oplist.org).
+         I have read the [OpenList documentation](https://docs.oplist.org).
      - label: |
          I confirm there are no duplicate issues or discussions.
      - label: |

.github/config.yml (new file, 21 lines)

@@ -0,0 +1,21 @@
# Configuration for welcome - https://github.com/behaviorbot/welcome

# Configuration for new-issue-welcome - https://github.com/behaviorbot/new-issue-welcome
# Comment to be posted to on first time issues
newIssueWelcomeComment: >
  Thanks for opening your first issue here! Be sure to follow the issue template!

# Configuration for new-pr-welcome - https://github.com/behaviorbot/new-pr-welcome
# Comment to be posted to on PRs from first time contributors in your repository
newPRWelcomeComment: >
  Thanks for opening this pull request! Please check out our contributing guidelines.

# Configuration for first-pr-merge - https://github.com/behaviorbot/first-pr-merge
# Comment to be posted to on pull requests merged by a first time user
firstPRMergeComment: >
  Congrats on merging your first pull request! We here at behavior bot are proud of you!

# It is recommend to include as many gifs and emojis as possible
.github/stale.yml (new file, 21 lines)

@@ -0,0 +1,21 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 44
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 20
# Issues with these labels will never be considered stale
exemptLabels:
  - accepted
  - security
  - working
  - pr-welcome
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
  This issue has been automatically marked as stale because it has not had
  recent activity. It will be closed if no further activity occurs. Thank you
  for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
  This issue was closed due to inactive more than 52 days. You can reopen or
  recreate it if you think it should continue. Thank you for your contributions again.


@@ -2,7 +2,7 @@ name: Beta Release builds
on:
  push:
-   branches: ["main"]
+   branches: [ 'main' ]
  workflow_dispatch:
concurrency:
@@ -14,8 +14,12 @@ permissions:
jobs:
  changelog:
+   strategy:
+     matrix:
+       platform: [ ubuntu-latest ]
+       go-version: [ '1.21' ]
    name: Beta Release Changelog
-   runs-on: ubuntu-latest
+   runs-on: ${{ matrix.platform }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
@@ -61,19 +65,17 @@ jobs:
    strategy:
      matrix:
        include:
-         - target: "!(*musl*|*windows-arm64*|*windows7-*|*android*|*freebsd*)" # xgo and loongarch
+         - target: '!(*musl*|*windows-arm64*|*android*|*freebsd*)' # xgo
            hash: "md5"
-         - target: "linux-!(arm*)-musl*" #musl-not-arm
+         - target: 'linux-!(arm*)-musl*' #musl-not-arm
            hash: "md5-linux-musl"
-         - target: "linux-arm*-musl*" #musl-arm
+         - target: 'linux-arm*-musl*' #musl-arm
            hash: "md5-linux-musl-arm"
-         - target: "windows-arm64" #win-arm64
+         - target: 'windows-arm64' #win-arm64
            hash: "md5-windows-arm64"
-         - target: "windows7-*" #win7
-           hash: "md5-windows7"
-         - target: "android-*" #android
+         - target: 'android-*' #android
            hash: "md5-android"
-         - target: "freebsd-*" #freebsd
+         - target: 'freebsd-*' #freebsd
            hash: "md5-freebsd"
    name: Beta Release
@@ -87,29 +89,27 @@ jobs:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
-         go-version: "1.24.5"
+         go-version: '1.22'
      - name: Setup web
        run: bash build.sh dev web
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
      - name: Build
-       uses: OpenListTeam/cgo-actions@v1.2.2
+       uses: OpenListTeam/cgo-actions@v1.1.2
        with:
          targets: ${{ matrix.target }}
          musl-target-format: $os-$musl-$arch
-         github-token: ${{ secrets.GITHUB_TOKEN }}
          out-dir: build
          output: openlist-$target$ext
          musl-base-url: "https://github.com/OpenListTeam/musl-compilers/releases/latest/download/"
          x-flags: |
-           github.com/OpenListTeam/OpenList/v4/internal/conf.BuiltAt=$built_at
-           github.com/OpenListTeam/OpenList/v4/internal/conf.GitAuthor=The OpenList Projects Contributors <noreply@openlist.team>
-           github.com/OpenListTeam/OpenList/v4/internal/conf.GitCommit=$git_commit
-           github.com/OpenListTeam/OpenList/v4/internal/conf.Version=$tag
-           github.com/OpenListTeam/OpenList/v4/internal/conf.WebVersion=rolling
+           github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$built_at
+           github.com/OpenListTeam/OpenList/internal/conf.GitAuthor=OpenList
+           github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$git_commit
+           github.com/OpenListTeam/OpenList/internal/conf.Version=$tag
+           github.com/OpenListTeam/OpenList/internal/conf.WebVersion=dev
      - name: Compress
        run: |
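A note on the `x-flags` list above (and the matching `ldflags` assembled in `build.sh` later in this compare): these rely on Go's `-X` linker flag, which overwrites package-level string variables at link time. A minimal sketch of the receiving side, assuming variable names that match the flags shown; the actual `internal/conf` package in the repository may declare more than this:

```go
// Package conf holds build metadata. Each variable is overwritten at link
// time by the -X flags above, e.g.:
//   go build -ldflags "-X 'github.com/OpenListTeam/OpenList/internal/conf.Version=v1.0.0'"
// Minimal sketch for illustration only.
package conf

import "fmt"

var (
	BuiltAt    string       // "$built_at" from the workflow
	GitAuthor  string       // "OpenList" or the contributors string
	GitCommit  string       // "$git_commit"
	Version    = "dev"      // "$tag" on release builds
	WebVersion = "rolling"  // "rolling" or "dev", depending on the side of this diff
)

// VersionString is a hypothetical consumer of the injected values.
func VersionString() string {
	return fmt.Sprintf("OpenList %s (commit %s, built %s, web %s)",
		Version, GitCommit, BuiltAt, WebVersion)
}
```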


@@ -1,8 +1,10 @@
name: Test Build
on:
+ push:
+   branches: [ 'main' ]
  pull_request:
-   branches: ["main"]
+   branches: [ 'main' ]
  workflow_dispatch:
concurrency:
@@ -13,6 +15,7 @@ jobs:
  build:
    strategy:
      matrix:
+       platform: [ubuntu-latest]
        target:
          - darwin-amd64
          - darwin-arm64
@@ -21,9 +24,10 @@
          - linux-amd64-musl
          - windows-arm64
          - android-arm64
-   name: Build ${{ matrix.target }}
+   name: Build
-   runs-on: ubuntu-latest
+   runs-on: ${{ matrix.platform }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
@@ -33,31 +37,28 @@
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
-         go-version: "1.24.5"
+         go-version: '1.22'
      - name: Setup web
        run: bash build.sh dev web
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
      - name: Build
-       uses: OpenListTeam/cgo-actions@v1.2.2
+       uses: OpenListTeam/cgo-actions@v1.1.2
        with:
          targets: ${{ matrix.target }}
          musl-target-format: $os-$musl-$arch
-         github-token: ${{ secrets.GITHUB_TOKEN }}
          out-dir: build
          x-flags: |
-           github.com/OpenListTeam/OpenList/v4/internal/conf.BuiltAt=$built_at
-           github.com/OpenListTeam/OpenList/v4/internal/conf.GitAuthor=The OpenList Projects Contributors <noreply@openlist.team>
-           github.com/OpenListTeam/OpenList/v4/internal/conf.GitCommit=$git_commit
-           github.com/OpenListTeam/OpenList/v4/internal/conf.Version=$tag
-           github.com/OpenListTeam/OpenList/v4/internal/conf.WebVersion=rolling
+           github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$built_at
+           github.com/OpenListTeam/OpenList/internal/conf.GitAuthor=OpenList
+           github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$git_commit
+           github.com/OpenListTeam/OpenList/internal/conf.Version=$tag
+           github.com/OpenListTeam/OpenList/internal/conf.WebVersion=dev
-         output: openlist$ext
      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
-         name: openlist_${{ steps.short-sha.outputs.sha }}_${{ matrix.target }}
+         name: openlist_${{ env.SHA }}_${{ matrix.target }}
          path: build/*


@@ -1,4 +1,4 @@
-name: Release Automatic changelog
+name: Automatic changelog
on:
  push:


@@ -1,61 +0,0 @@
name: Issue or PR Auto Reply

on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]

permissions:
  issues: write
  pull-requests: write

jobs:
  auto-reply:
    runs-on: ubuntu-latest
    if: github.event_name == 'issues'
    steps:
      - name: Check issue for unchecked tasks and reply
        uses: actions/github-script@v7
        with:
          script: |
            const issueBody = context.payload.issue.body || "";
            const unchecked = /- \[ \] /.test(issueBody);
            let comment = "感谢您联系OpenList。我们会尽快回复您。\n";
            comment += "Thanks for contacting OpenList. We will reply to you as soon as possible.\n\n";
            if (unchecked) {
              comment += "由于您提出的 Issue 中包含部分未确认的项目,为了更好地管理项目,在人工审核后可能会直接关闭此问题。\n";
              comment += "如果您能确认并补充相关未确认项目的信息,欢迎随时重新提交。我们会及时关注并处理。感谢您的理解与支持!\n";
              comment += "Since your issue contains some unchecked tasks, it may be closed after manual review.\n";
              comment += "If you can confirm and provide information for the unchecked tasks, feel free to resubmit.\n";
              comment += "We will pay attention and handle it in a timely manner.\n\n";
              comment += "感谢您的理解与支持!\n";
              comment += "Thank you for your understanding and support!\n";
            }
            await github.rest.issues.createComment({
              ...context.repo,
              issue_number: context.issue.number,
              body: comment
            });
  pr-title-check:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - name: Check PR title for required prefix and comment
        uses: actions/github-script@v7
        with:
          script: |
            const title = context.payload.pull_request.title || "";
            const ok = /^(feat|docs|fix|style|refactor|chore)\(.+?\): /i.test(title);
            if (!ok) {
              let comment = "⚠️ PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`。\n";
              comment += "⚠️ The PR title must start with `feat(): `, `docs(): `, `fix(): `, `style(): `, or `refactor(): `, `chore(): `. For example: `feat(component): add new feature`.\n\n";
              comment += "如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。\n";
              comment += "If it spans multiple components, use the main component as the prefix and enumerate in the title, describe in the body.\n\n";
              await github.rest.issues.createComment({
                ...context.repo,
                issue_number: context.issue.number,
                body: comment
              });
            }
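Both removed jobs above are plain `actions/github-script` steps. The issue job hinges on one detail: an unchecked GitHub task-list item is stored in the issue body as the literal markdown `- [ ]`, so a single regex is enough to detect an incomplete checklist. A rough Go equivalent of that test, for illustration only (the workflow itself used JavaScript, as shown):

```go
package main

import (
	"fmt"
	"regexp"
)

// hasUncheckedTasks mirrors the workflow's JavaScript test /- \[ \] /:
// it reports whether a markdown body still contains an unchecked
// task-list item ("- [ ] ...").
func hasUncheckedTasks(body string) bool {
	return regexp.MustCompile(`- \[ \] `).MatchString(body)
}

func main() {
	body := "- [x] I have read the docs\n- [ ] I confirm there are no duplicates"
	fmt.Println(hasUncheckedTasks(body)) // true -> the bot would post its warning
}
```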


@@ -8,34 +8,24 @@ permissions:
  contents: write
jobs:
- # Set release to prerelease first
- prerelease:
-   name: Set Prerelease
-   runs-on: ubuntu-latest
-   steps:
-     - name: Prerelease
-       uses: irongut/EditRelease@v1.2.0
-       with:
-         token: ${{ secrets.GITHUB_TOKEN }}
-         id: ${{ github.event.release.id }}
-         prerelease: true
- # Main release job for all platforms
  release:
-   needs: prerelease
    strategy:
      matrix:
-       build-type: [ 'standard', 'lite' ]
-       target-platform: [ '', 'android', 'freebsd', 'linux_musl', 'linux_musl_arm' ]
+       platform: [ ubuntu-latest ]
+       go-version: [ '1.21' ]
-   name: Release ${{ matrix.target-platform && format('{0} ', matrix.target-platform) || '' }}${{ matrix.build-type == 'lite' && 'Lite' || '' }}
+   name: Release
-   runs-on: ubuntu-latest
+   runs-on: ${{ matrix.platform }}
    steps:
      - name: Free Disk Space (Ubuntu)
-       if: matrix.target-platform == ''
        uses: jlumbroso/free-disk-space@main
        with:
+         # this might remove tools that are actually needed,
+         # if set to "true" but frees about 6 GB
          tool-cache: false
+         # all of these default to true, but feel free to set to
+         # "false" if necessary for your workflow
          android: true
          dotnet: true
          haskell: true
@@ -43,10 +33,17 @@ jobs:
          docker-images: true
          swap-storage: true
+     - name: Prerelease
+       uses: irongut/EditRelease@v1.2.0
+       with:
+         token: ${{ secrets.GITHUB_TOKEN }}
+         id: ${{ github.event.release.id }}
+         prerelease: true
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
-         go-version: '1.24'
+         go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
@@ -54,7 +51,6 @@
          fetch-depth: 0
      - name: Install dependencies
-       if: matrix.target-platform == ''
        run: |
          sudo snap install zig --classic --beta
          docker pull crazymax/xgo:latest
@@ -63,10 +59,70 @@
      - name: Build
        run: |
-         bash build.sh release ${{ matrix.build-type == 'lite' && 'lite' || '' }} ${{ matrix.target-platform }}
+         bash build.sh release
+       env:
+         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+     - name: Upload assets
+       uses: softprops/action-gh-release@v2
+       with:
+         files: build/compress/*
+         prerelease: false
+ release-lite:
+   strategy:
+     matrix:
+       platform: [ ubuntu-latest ]
+       go-version: [ '1.21' ]
+   name: Release Lite
+   runs-on: ${{ matrix.platform }}
+   steps:
+     - name: Free Disk Space (Ubuntu)
+       uses: jlumbroso/free-disk-space@main
+       with:
+         # this might remove tools that are actually needed,
+         # if set to "true" but frees about 6 GB
+         tool-cache: false
+         # all of these default to true, but feel free to set to
+         # "false" if necessary for your workflow
+         android: true
+         dotnet: true
+         haskell: true
+         large-packages: true
+         docker-images: true
+         swap-storage: true
+     - name: Prerelease
+       uses: irongut/EditRelease@v1.2.0
+       with:
+         token: ${{ secrets.GITHUB_TOKEN }}
+         id: ${{ github.event.release.id }}
+         prerelease: true
+     - name: Setup Go
+       uses: actions/setup-go@v5
+       with:
+         go-version: ${{ matrix.go-version }}
+     - name: Checkout
+       uses: actions/checkout@v4
+       with:
+         fetch-depth: 0
+     - name: Install dependencies
+       run: |
+         sudo snap install zig --classic --beta
+         docker pull crazymax/xgo:latest
+         go install github.com/crazy-max/xgo@latest
+         sudo apt install upx
+     - name: Build
+       run: |
+         bash build.sh release lite
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2

.github/workflows/release_android.yml (new file, 69 lines)

@@ -0,0 +1,69 @@
name: Release builds (Android)

on:
  release:
    types: [ published ]

permissions:
  contents: write

jobs:
  release_android:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release android
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*

  release_android_lite:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release lite android
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*


@@ -31,8 +31,11 @@ env:
  REGISTRY: ghcr.io
  ARTIFACT_NAME: 'binaries_docker_release'
  ARTIFACT_NAME_LITE: 'binaries_docker_release_lite'
- RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/ppc64le,linux/riscv64,linux/loong64' ### Temporarily disable Docker builds for linux/s390x architectures for unknown reasons.
+ RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
  IMAGE_PUSH: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
+ IMAGE_IS_PROD: ${{ github.ref_type == 'tag' || github.event.inputs.as_latest == 'true' }}
+ IMAGE_TAGS_BETA: |
+   type=raw,value=beta,enable={{is_default_branch}}
permissions:
  packages: write
@@ -62,11 +65,17 @@ jobs:
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+     - name: Build go binary (beta)
+       if: env.IMAGE_IS_PROD != 'true'
+       run: bash build.sh beta docker-multiplatform
+       env:
+         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build go binary (release)
+       if: env.IMAGE_IS_PROD == 'true'
        run: bash build.sh release docker-multiplatform
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
@@ -79,7 +88,7 @@ jobs:
          !build/musl-libs/**
  build_binary_lite:
-   name: Build Binaries for Docker Release (Lite)
+   name: Build Binaries for Docker Release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
@@ -102,11 +111,17 @@
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+     - name: Build go binary (beta)
+       if: env.IMAGE_IS_PROD != 'true'
+       run: bash build.sh beta lite docker-multiplatform
+       env:
+         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build go binary (release)
+       if: env.IMAGE_IS_PROD == 'true'
        run: bash build.sh release lite docker-multiplatform
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
@@ -127,19 +142,15 @@
        image: ["latest", "ffmpeg", "aria2", "aio"]
        include:
          - image: "latest"
-           base_image_tag: "base"
            build_arg: ""
            tag_favor: ""
          - image: "ffmpeg"
-           base_image_tag: "ffmpeg"
            build_arg: INSTALL_FFMPEG=true
            tag_favor: "suffix=-ffmpeg,onlatest=true"
          - image: "aria2"
-           base_image_tag: "aria2"
            build_arg: INSTALL_ARIA2=true
            tag_favor: "suffix=-aria2,onlatest=true"
          - image: "aio"
-           base_image_tag: "aio"
            build_arg: |
              INSTALL_FFMPEG=true
              INSTALL_ARIA2=true
@@ -170,7 +181,7 @@
        if: env.IMAGE_PUSH == 'true'
        uses: docker/login-action@v3
        with:
-         username: ${{ vars.DOCKERHUB_ORG_NAME_BACKUP || env.DOCKERHUB_ORG_NAME }}
+         username: ${{ env.DOCKERHUB_ORG_NAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Docker meta
@@ -181,11 +192,13 @@
            ${{ env.REGISTRY }}/${{ env.GHCR_ORG_NAME }}/${{ env.IMAGE_NAME }}
            ${{ env.DOCKERHUB_ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }}
          tags: >
-           ${{ github.event_name == 'workflow_dispatch'
+           ${{ env.IMAGE_IS_PROD == 'true' && (
+             github.event_name == 'workflow_dispatch'
              && format('type=raw,value={0}', github.event.inputs.manual_tag)
-             || format('type=raw,value={0}', github.ref_name) }}
+             || format('type=raw,value={0}', github.ref_name)
+           ) || env.IMAGE_TAGS_BETA }}
          flavor: |
-           latest=${{ github.event_name == 'push' || github.event.inputs.as_latest == 'true' }}
+           latest=${{ env.IMAGE_IS_PROD }}
            ${{ matrix.tag_favor }}
      - name: Build and push
@@ -195,35 +208,29 @@
          context: .
          file: Dockerfile.ci
          push: ${{ env.IMAGE_PUSH == 'true' }}
-         build-args: |
-           BASE_IMAGE_TAG=${{ matrix.base_image_tag }}
-           ${{ matrix.build_arg }}
+         build-args: ${{ matrix.build_arg }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: ${{ env.RELEASE_PLATFORMS }}
  release_docker_lite:
    needs: build_binary_lite
-   name: Release Docker image (Lite)
+   name: Release Docker image
    runs-on: ubuntu-latest
    strategy:
      matrix:
        image: ["latest", "ffmpeg", "aria2", "aio"]
        include:
          - image: "latest"
-           base_image_tag: "base"
            build_arg: ""
            tag_favor: "suffix=-lite,onlatest=true"
          - image: "ffmpeg"
-           base_image_tag: "ffmpeg"
            build_arg: INSTALL_FFMPEG=true
            tag_favor: "suffix=-lite-ffmpeg,onlatest=true"
          - image: "aria2"
-           base_image_tag: "aria2"
            build_arg: INSTALL_ARIA2=true
            tag_favor: "suffix=-lite-aria2,onlatest=true"
          - image: "aio"
-           base_image_tag: "aio"
            build_arg: |
              INSTALL_FFMPEG=true
              INSTALL_ARIA2=true
@@ -254,7 +261,7 @@
        if: env.IMAGE_PUSH == 'true'
        uses: docker/login-action@v3
        with:
-         username: ${{ vars.DOCKERHUB_ORG_NAME_BACKUP || env.DOCKERHUB_ORG_NAME }}
+         username: ${{ env.DOCKERHUB_ORG_NAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Docker meta
@@ -265,11 +272,13 @@
            ${{ env.REGISTRY }}/${{ env.GHCR_ORG_NAME }}/${{ env.IMAGE_NAME }}
            ${{ env.DOCKERHUB_ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }}
          tags: >
-           ${{ github.event_name == 'workflow_dispatch'
+           ${{ env.IMAGE_IS_PROD == 'true' && (
+             github.event_name == 'workflow_dispatch'
              && format('type=raw,value={0}', github.event.inputs.manual_tag)
-             || format('type=raw,value={0}', github.ref_name) }}
+             || format('type=raw,value={0}', github.ref_name)
+           ) || env.IMAGE_TAGS_BETA }}
          flavor: |
-           latest=${{ github.event_name == 'push' || github.event.inputs.as_latest == 'true' }}
+           latest=${{ env.IMAGE_IS_PROD }}
            ${{ matrix.tag_favor }}
      - name: Build and push
@@ -279,9 +288,7 @@
          context: .
          file: Dockerfile.ci
          push: ${{ env.IMAGE_PUSH == 'true' }}
-         build-args: |
-           BASE_IMAGE_TAG=${{ matrix.base_image_tag }}
-           ${{ matrix.build_arg }}
+         build-args: ${{ matrix.build_arg }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: ${{ env.RELEASE_PLATFORMS }}
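GitHub Actions expressions have no if/else, so the `tags:` blocks above emulate one with `&&`/`||` chains; the usual caveat is that `a && b || c` falls through to `c` whenever `b` evaluates to an empty string. Roughly, the right-hand version resolves like this hypothetical Go helper (function and parameter names invented for illustration):

```go
package main

import "fmt"

// chooseDockerTags mirrors the workflow expression
//   ${{ env.IMAGE_IS_PROD == 'true' && ( github.event_name == 'workflow_dispatch'
//       && format('type=raw,value={0}', github.event.inputs.manual_tag)
//       || format('type=raw,value={0}', github.ref_name) ) || env.IMAGE_TAGS_BETA }}
func chooseDockerTags(isProd bool, eventName, manualTag, refName, betaTags string) string {
	if !isProd {
		return betaTags // e.g. "type=raw,value=beta,enable={{is_default_branch}}"
	}
	if eventName == "workflow_dispatch" {
		return "type=raw,value=" + manualTag // operator-supplied tag
	}
	return "type=raw,value=" + refName // tag name of the release ref
}

func main() {
	fmt.Println(chooseDockerTags(true, "push", "", "v4.0.0", "type=raw,value=beta"))
	// Output: type=raw,value=v4.0.0
}
```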

.github/workflows/release_freebsd.yml (new file, 69 lines)

@@ -0,0 +1,69 @@
name: Release builds (Freebsd)

on:
  release:
    types: [ published ]

permissions:
  contents: write

jobs:
  release_freebsd:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release freebsd
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*

  release_freebsd_lite:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release lite freebsd
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*


@@ -0,0 +1,69 @@
name: Release builds (linux_musl)

on:
  release:
    types: [ published ]

permissions:
  contents: write

jobs:
  release_linux_musl:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release linux_musl
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*

  release_linux_musl_lite:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release lite linux_musl
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*


@@ -0,0 +1,70 @@
name: Release builds (linux_musl_arm)

on:
  release:
    types: [ published ]

permissions:
  contents: write

jobs:
  release_linux_musl_arm:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release linux_musl_arm
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*

  release_linux_musl_arm_lite:
    strategy:
      matrix:
        platform: [ ubuntu-latest ]
        go-version: [ '1.21' ]
    name: Release
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build
        run: |
          bash build.sh release lite linux_musl_arm
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload assets
        uses: softprops/action-gh-release@v2
        with:
          files: build/compress/*


@@ -1,38 +0,0 @@
name: Sync to Gitee

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  sync:
    runs-on: ubuntu-latest
    name: Sync GitHub to Gitee
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.GITEE_SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan gitee.com >> ~/.ssh/known_hosts

      - name: Create single commit and push
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          # Create a new branch
          git checkout --orphan new-main
          git add .
          git commit -m "Sync from GitHub: $(date)"
          # Add Gitee remote and force push
          git remote add gitee ${{ vars.GITEE_REPO_URL }}
          git push --force gitee new-main:main


@@ -1,4 +1,4 @@
-name: Beta Release (Docker)
+name: Docker Beta Release
on:
  workflow_dispatch:
@@ -20,7 +20,8 @@ env:
  IMAGE_NAME_DOCKERHUB: openlist
  REGISTRY: ghcr.io
  ARTIFACT_NAME: 'binaries_docker_release'
- RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/ppc64le,linux/riscv64,linux/loong64' ### Temporarily disable Docker builds for linux/s390x architectures for unknown reasons.
+ ARTIFACT_NAME_LITE: 'binaries_docker_release_lite'
+ RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
  IMAGE_PUSH: ${{ github.event_name == 'push' }}
  IMAGE_TAGS_BETA: |
    type=ref,event=pr
@@ -28,7 +29,7 @@ env:
jobs:
  build_binary:
-   name: Build Binaries for Docker Release (Beta)
+   name: Build Binaries for Docker Release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
@@ -55,7 +56,6 @@
        run: bash build.sh beta docker-multiplatform
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-         FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
@@ -69,7 +69,7 @@
  release_docker:
    needs: build_binary
-   name: Release Docker image (Beta)
+   name: Release Docker image
    runs-on: ubuntu-latest
    permissions:
      packages: write
@@ -78,19 +78,15 @@
        image: ["latest", "ffmpeg", "aria2", "aio"]
        include:
          - image: "latest"
-           base_image_tag: "base"
            build_arg: ""
            tag_favor: ""
          - image: "ffmpeg"
-           base_image_tag: "ffmpeg"
            build_arg: INSTALL_FFMPEG=true
            tag_favor: "suffix=-ffmpeg,onlatest=true"
          - image: "aria2"
-           base_image_tag: "aria2"
            build_arg: INSTALL_ARIA2=true
            tag_favor: "suffix=-aria2,onlatest=true"
          - image: "aio"
-           base_image_tag: "aio"
            build_arg: |
              INSTALL_FFMPEG=true
              INSTALL_ARIA2=true
@@ -121,7 +117,7 @@
        if: env.IMAGE_PUSH == 'true'
        uses: docker/login-action@v3
        with:
-         username: ${{ vars.DOCKERHUB_ORG_NAME_BACKUP || env.DOCKERHUB_ORG_NAME }}
+         username: ${{ env.DOCKERHUB_ORG_NAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Docker meta
@@ -142,9 +138,7 @@
          context: .
          file: Dockerfile.ci
          push: ${{ env.IMAGE_PUSH == 'true' }}
-         build-args: |
-           BASE_IMAGE_TAG=${{ matrix.base_image_tag }}
-           ${{ matrix.build_arg }}
+         build-args: ${{ matrix.build_arg }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: ${{ env.RELEASE_PLATFORMS }}


@@ -19,7 +19,7 @@ jobs:
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.EXTERNAL_REPO_TOKEN_LUCI_APP_OPENLIST }}
-         repository: ${{ vars.HOOK_REPO || 'OpenListTeam/OpenList-OpenWRT' }}
+         repository: ${{ vars.HOOK_REPO || 'OpenListTeam/luci-app-openlist' }}
          event-type: update-hashes
          client-payload: |
            {


@@ -1,7 +1,4 @@
-### Default image is base. You can add other support by modifying BASE_IMAGE_TAG. The following parameters are supported: base (default), aria2, ffmpeg, aio
-ARG BASE_IMAGE_TAG=base
-FROM alpine:edge AS builder
+FROM docker.io/library/alpine:edge AS builder
LABEL stage=go-builder
WORKDIR /app/
RUN apk add --no-cache bash curl jq gcc git go musl-dev
@@ -10,26 +7,36 @@ RUN go mod download
COPY ./ ./
RUN bash build.sh release docker

-FROM openlistteam/openlist-base-image:${BASE_IMAGE_TAG}
-LABEL MAINTAINER="OpenList"
+FROM alpine:edge
ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false
-ARG USER=openlist
-ARG UID=1001
-ARG GID=1001
+LABEL MAINTAINER="OpenList"

WORKDIR /opt/openlist/

+RUN apk update && \
+    apk upgrade --no-cache && \
+    apk add --no-cache bash ca-certificates su-exec tzdata; \
+    [ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
+    [ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
+    mkdir -p /opt/aria2/.aria2 && \
+    wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
+    tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
+    sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
+    sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
+    sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
+    sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
+    sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
+    touch /opt/aria2/.aria2/aria2.session && \
+    /opt/aria2/.aria2/tracker.sh ; \
+    rm -rf /var/cache/apk/*

COPY --chmod=755 --from=builder /app/bin/openlist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
-RUN adduser -u ${UID} -g ${GID} -h /opt/openlist/data -D -s /bin/sh ${USER} \
-    && chown -R ${UID}:${GID} /opt \
-    && chown -R ${UID}:${GID} /entrypoint.sh
-USER ${USER}
RUN /entrypoint.sh version

-ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
+ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/openlist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]


@@ -1,26 +1,34 @@
-ARG BASE_IMAGE_TAG=base
-FROM ghcr.io/openlistteam/openlist-base-image:${BASE_IMAGE_TAG}
-LABEL MAINTAINER="OpenList"
+FROM docker.io/library/alpine:edge
ARG TARGETPLATFORM
ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false
-ARG USER=openlist
-ARG UID=1001
-ARG GID=1001
+LABEL MAINTAINER="OpenList"

WORKDIR /opt/openlist/

+RUN apk update && \
+    apk upgrade --no-cache && \
+    apk add --no-cache bash ca-certificates su-exec tzdata; \
+    [ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
+    [ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
+    mkdir -p /opt/aria2/.aria2 && \
+    wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
+    tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
+    sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
+    sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
+    sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
+    sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
+    sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
+    touch /opt/aria2/.aria2/aria2.session && \
+    /opt/aria2/.aria2/tracker.sh ; \
+    rm -rf /var/cache/apk/*

COPY --chmod=755 /build/${TARGETPLATFORM}/openlist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
-RUN adduser -u ${UID} -g ${GID} -h /opt/openlist/data -D -s /bin/sh ${USER} \
-    && chown -R ${UID}:${GID} /opt \
-    && chown -R ${UID}:${GID} /entrypoint.sh
-USER ${USER}
RUN /entrypoint.sh version

-ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
+ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/openlist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]


@@ -20,34 +20,6 @@
- [CODE OF CONDUCT](./CODE_OF_CONDUCT.md)
- [LICENSE](./LICENSE)
-## Disclaimer
-OpenList is an open-source project independently maintained by the OpenList Team, following the AGPL-3.0 license and committed to maintaining complete code openness and modification transparency.
-We have noticed the emergence of some third-party projects in the community with names similar to this project, such as OpenListApp/OpenListApp, as well as some paid proprietary software using the same or similar naming. To avoid user confusion, we hereby declare:
-- OpenList has no official association with any third-party derivative projects.
-- All software, code, and services of this project are maintained by the OpenList Team and are freely available on GitHub.
-- Project documentation and API services primarily rely on charitable resources provided by Cloudflare. There are currently no paid plans or commercial deployments, and the use of existing features does not involve any costs.
-We respect the community's rights to free use and derivative development, but we also strongly urge downstream projects:
-- Should not use the "OpenList" name for impersonation promotion or commercial gain;
-- Must not distribute OpenList-based code in a closed-source manner or violate AGPL license terms.
-To better maintain healthy ecosystem development, we recommend:
-- Clearly indicate the project source and choose appropriate open-source licenses in accordance with the open-source spirit;
-- If involving commercial use, please avoid using "OpenList" or any confusing naming as the project name;
-- If you need to use materials located under OpenListTeam/Logo, you may modify and use them under compliance with the agreement.
-Thank you for your support and understanding of the OpenList project.
## Features
- [x] Multiple storages
@@ -106,9 +78,10 @@ Thank you for your support and understanding of the OpenList project.
## Document
-- 📘 [Global Site](https://doc.oplist.org)
-- 📚 [Backup Site](https://doc.openlist.team)
-- 🌏 [CN Site](https://doc.oplist.org.cn)
+- 📘 [Docs & Install Guide](https://docs.oplist.org)
+- 📚 [Backup Docs Site](https://docs.openlist.team)
+- ⚖️ [Terms of Use](https://docs.oplist.org/terms)
+- 🔒 [Privacy Policy](https://docs.oplist.org/privacy)
## Demo


@@ -20,34 +20,6 @@
- [行为准则](./CODE_OF_CONDUCT.md)
- [许可证](./LICENSE)
-## 免责声明
-OpenList 是一个由 OpenList 团队独立维护的开源项目,遵循 AGPL-3.0 许可证,致力于保持完整的代码开放性和修改透明性。
-我们注意到社区中出现了一些与本项目名称相似的第三方项目,如 OpenListApp/OpenListApp以及部分采用相同或近似命名的收费专有软件。为避免用户误解现声明如下
-- OpenList 与任何第三方衍生项目无官方关联。
-- 本项目的全部软件、代码与服务由 OpenList 团队维护,可在 GitHub 免费获取。
-- 项目文档与 API 服务均主要依托于 Cloudflare 提供的公益资源,目前无任何收费计划或商业部署,现有功能使用不涉及任何支出。
-我们尊重社区的自由使用与衍生开发权利,但也强烈呼吁下游项目:
-- 不应以“OpenList”名义进行冒名宣传或获取商业利益
-- 不得将基于 OpenList 的代码进行闭源分发或违反 AGPL 许可证条款。
-为了更好地维护生态健康发展,我们建议:
-- 明确注明项目来源,并以符合开源精神的方式选择适当的开源许可证;
-- 如涉及商业用途请避免使用“OpenList”或任何会产生混淆的方式作为项目名称
-- 若需使用本项目位于 OpenListTeam/Logo 下的素材,可在遵守协议的前提下进行修改后使用。
-感谢您对 OpenList 项目的支持与理解。
## 功能
- [x] 多种存储
@@ -106,9 +78,10 @@ OpenList 是一个由 OpenList 团队独立维护的开源项目,遵循 AGPL-3
## 文档
-- 🌏 [国内站点](https://doc.oplist.org.cn)
-- 📘 [海外站点](https://doc.oplist.org)
-- 📚 [备用站点](https://doc.openlist.team)
+- 📘 [文档与安装指南](https://docs.oplist.org)
+- 📚 [备用文档站点](https://docs.openlist.team)
+- ⚖️ [使用条款](https://docs.oplist.org/terms)
+- 🔒 [隐私政策](https://docs.oplist.org/privacy)
## 演示


@@ -20,34 +20,6 @@
- [行動規範](./CODE_OF_CONDUCT.md)
- [ライセンス](./LICENSE)
-## 免責事項
-OpenListは、OpenListチームが独立して維持するオープンソースプロジェクトであり、AGPL-3.0ライセンスに従い、完全なコードの開放性と変更の透明性を維持することに専念しています。
-コミュニティ内で、OpenListApp/OpenListAppなど、本プロジェクトと類似した名称を持つサードパーティプロジェクトや、同一または類似した命名を採用する有料専有ソフトウェアが出現していることを確認しています。ユーザーの誤解を避けるため、以下のように宣言いたします
-- OpenListは、いかなるサードパーティ派生プロジェクトとも公式な関連性はありません。
-- 本プロジェクトのすべてのソフトウェア、コード、サービスはOpenListチームによって維持され、GitHubで無料で取得できます。
-- プロジェクトドキュメントとAPIサービスは主にCloudflareが提供する公益リソースに依存しており、現在有料プランや商業展開はなく、既存機能の使用に費用は発生しません。
-私たちはコミュニティの自由な使用と派生開発の権利を尊重しますが、下流プロジェクトに強く呼びかけます:
-- 「OpenList」の名前で偽装宣伝や商業利益を得るべきではありません
-- OpenListベースのコードをクローズドソースで配布したり、AGPLライセンス条項に違反してはいけません。
-エコシステムの健全な発展をより良く維持するため、以下を推奨します:
-- プロジェクトの出典を明確に示し、オープンソース精神に合致する適切なオープンソースライセンスを選択する;
-- 商業用途が関わる場合は、「OpenList」や混乱を招く可能性のある名前をプロジェクト名として使用することを避ける
-- OpenListTeam/Logo下の素材を使用する必要がある場合は、協定を遵守した上で修正して使用できます。
-OpenListプロジェクトへのご支援とご理解をありがとうございます。
## 特徴
- [x] 複数ストレージ
@@ -106,9 +78,10 @@ OpenListプロジェクトへのご支援とご理解をありがとうござい
## ドキュメント
-- 📘 [グローバルサイト](https://doc.oplist.org)
-- 📚 [バックアップサイト](https://doc.openlist.team)
-- 🌏 [CNサイト](https://doc.oplist.org.cn)
+- 📘 [ドキュメント・インストールガイド](https://docs.oplist.org)
+- 📚 [バックアップドキュメントサイト](https://docs.openlist.team)
+- ⚖️ [利用規約](https://docs.oplist.org/terms)
+- 🔒 [プライバシーポリシー](https://docs.oplist.org/privacy)
## デモ


@@ -20,34 +20,6 @@
- [Gedragscode](./CODE_OF_CONDUCT.md)
- [Licentie](./LICENSE)
-## Disclaimer
-OpenList is een open-source project dat onafhankelijk wordt onderhouden door het OpenList Team, volgend op de AGPL-3.0 licentie en toegewijd aan het behouden van volledige code openheid en transparantie van wijzigingen.
-We hebben gemerkt dat er in de gemeenschap enkele derde partij projecten zijn verschenen met namen vergelijkbaar met dit project, zoals OpenListApp/OpenListApp, evenals enkele betaalde eigendomssoftware die dezelfde of soortgelijke naamgeving gebruikt. Om verwarring bij gebruikers te voorkomen, verklaren we hierbij:
-- OpenList heeft geen officiële associatie met enige derde partij afgeleide projecten.
-- Alle software, code en diensten van dit project worden onderhouden door het OpenList Team en zijn gratis beschikbaar op GitHub.
-- Projectdocumentatie en API diensten zijn voornamelijk afhankelijk van liefdadigheidsbronnen verstrekt door Cloudflare. Er zijn momenteel geen betaalplannen of commerciële implementaties, en het gebruik van bestaande functies brengt geen kosten met zich mee.
-We respecteren de rechten van de gemeenschap voor vrij gebruik en afgeleide ontwikkeling, maar we roepen downstream projecten ook ten zeerste op:
-- Mogen niet de "OpenList" naam gebruiken voor namaakpromotie of commercieel gewin;
-- Mogen OpenList-gebaseerde code niet distribueren op een closed-source manier of AGPL licentievoorwaarden schenden.
-Om een gezonde ecosysteemontwikkeling beter te onderhouden, bevelen we aan:
-- Duidelijk de projectbron aangeven en passende open-source licenties kiezen in overeenstemming met de open-source geest;
-- Bij commercieel gebruik, vermijd het gebruik van "OpenList" of enige verwarrende naamgeving als projectnaam;
-- Als u materialen onder OpenListTeam/Logo moet gebruiken, kunt u deze wijzigen en gebruiken onder naleving van de overeenkomst.
-Dank u voor uw ondersteuning en begrip
## Functies
- [x] Meerdere opslagmogelijkheden
@@ -106,9 +78,10 @@ Dank u voor uw ondersteuning en begrip
## Documentatie
-- 📘 [Global Site](https://doc.oplist.org)
-- 📚 [Backup Site](https://doc.openlist.team)
-- 🌏 [CN Site](https://doc.oplist.org.cn)
+- 📘 [Documentatie & Installatiegids](https://docs.oplist.org)
+- 📚 [Back-up documentatiesite](https://docs.openlist.team)
+- ⚖️ [Gebruiksvoorwaarden](https://docs.oplist.org/terms)
+- 🔒 [Privacybeleid](https://docs.oplist.org/privacy)
## Demo

build.sh (265 lines changed)

@@ -4,9 +4,6 @@ builtAt="$(date +'%F %T %z')"
gitAuthor="The OpenList Projects Contributors <noreply@openlist.team>"
gitCommit=$(git log --pretty=format:"%h" -1)

-# Set frontend repository, default to OpenListTeam/OpenList-Frontend
-frontendRepo="${FRONTEND_REPO:-OpenListTeam/OpenList-Frontend}"
-
githubAuthArgs=""
if [ -n "$GITHUB_TOKEN" ]; then
  githubAuthArgs="--header \"Authorization: Bearer $GITHUB_TOKEN\""
@@ -20,15 +17,15 @@ fi
if [ "$1" = "dev" ]; then
  version="dev"
- webVersion="rolling"
+ webVersion="dev"
elif [ "$1" = "beta" ]; then
  version="beta"
- webVersion="rolling"
+ webVersion="dev"
else
  git tag -d beta || true
  # Always true if there's no tag
  version=$(git describe --abbrev=0 --tags 2>/dev/null || echo "v0.0.0")
- webVersion=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/$frontendRepo/releases/latest\"" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
+ webVersion=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
fi

echo "backend version: $version"
@@ -41,28 +38,37 @@ fi
ldflags="\
-w -s \
- -X 'github.com/OpenListTeam/OpenList/v4/internal/conf.BuiltAt=$builtAt' \
- -X 'github.com/OpenListTeam/OpenList/v4/internal/conf.GitAuthor=$gitAuthor' \
- -X 'github.com/OpenListTeam/OpenList/v4/internal/conf.GitCommit=$gitCommit' \
- -X 'github.com/OpenListTeam/OpenList/v4/internal/conf.Version=$version' \
- -X 'github.com/OpenListTeam/OpenList/v4/internal/conf.WebVersion=$webVersion' \
+ -X 'github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$builtAt' \
+ -X 'github.com/OpenListTeam/OpenList/internal/conf.GitAuthor=$gitAuthor' \
+ -X 'github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$gitCommit' \
+ -X 'github.com/OpenListTeam/OpenList/internal/conf.Version=$version' \
+ -X 'github.com/OpenListTeam/OpenList/internal/conf.WebVersion=$webVersion' \
"

-FetchWebRolling() {
+FetchWebDev() {
- pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/$frontendRepo/releases/tags/rolling\"")
+ pre_release_tag=$(eval "curl -fsSL --max-time 2 $githubAuthArgs https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases" | jq -r 'map(select(.prerelease)) | first | .tag_name')
+ if [ -z "$pre_release_tag" ] || [ "$pre_release_tag" == "null" ]; then
+   # fall back to latest release
+   pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"")
+ else
+   pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/tags/$pre_release_tag\"")
+ fi
  pre_release_assets=$(echo "$pre_release_json" | jq -r '.assets[].browser_download_url')
- # There is no lite for rolling
+ if [ "$useLite" = true ]; then
+   pre_release_tar_url=$(echo "$pre_release_assets" | grep "openlist-frontend-dist-lite" | grep "\.tar\.gz$")
+ else
  pre_release_tar_url=$(echo "$pre_release_assets" | grep "openlist-frontend-dist" | grep -v "lite" | grep "\.tar\.gz$")
+ fi
- curl -fsSL "$pre_release_tar_url" -o dist.tar.gz
+ curl -fsSL "$pre_release_tar_url" -o web-dist-dev.tar.gz
  rm -rf public/dist && mkdir -p public/dist
- tar -zxvf dist.tar.gz -C public/dist
+ tar -zxvf web-dist-dev.tar.gz -C public/dist
- rm -rf dist.tar.gz
+ rm -rf web-dist-dev.tar.gz
}
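The right-hand `FetchWebDev` encodes a two-step lookup: take the newest prerelease tag of the frontend repository, and fall back to `releases/latest` when none exists. A compact Go sketch of the same selection logic against the public GitHub REST API (illustration only; the script itself does this with curl and jq):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type release struct {
	TagName    string `json:"tag_name"`
	Prerelease bool   `json:"prerelease"`
}

// devFrontendTag returns the newest prerelease tag of repo, or the latest
// stable tag when no prerelease exists -- the same fallback FetchWebDev
// implements. The /releases endpoint lists releases newest-first, which
// is what jq's `first` relies on.
func devFrontendTag(repo string) (string, error) {
	resp, err := http.Get("https://api.github.com/repos/" + repo + "/releases")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var releases []release
	if err := json.NewDecoder(resp.Body).Decode(&releases); err != nil {
		return "", err
	}
	for _, r := range releases {
		if r.Prerelease {
			return r.TagName, nil
		}
	}
	// no prerelease found: fall back to the latest stable release
	resp, err = http.Get("https://api.github.com/repos/" + repo + "/releases/latest")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var latest release
	if err := json.NewDecoder(resp.Body).Decode(&latest); err != nil {
		return "", err
	}
	return latest.TagName, nil
}

func main() {
	tag, err := devFrontendTag("OpenListTeam/OpenList-Frontend")
	fmt.Println(tag, err)
}
```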
FetchWebRelease() {
- release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/$frontendRepo/releases/latest\"")
+ release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"")
  release_assets=$(echo "$release_json" | jq -r '.assets[].browser_download_url')

  if [ "$useLite" = true ]; then
@@ -89,45 +95,6 @@ BuildWinArm64() {
  go build -o "$1" -ldflags="$ldflags" -tags=jsoniter .
}

-BuildWin7() {
-  # Setup Win7 Go compiler (patched version that supports Windows 7)
-  go_version=$(go version | grep -o 'go[0-9]\+\.[0-9]\+\.[0-9]\+' | sed 's/go//')
-  echo "Detected Go version: $go_version"
-
-  curl -fsSL --retry 3 -o go-win7.zip -H "Authorization: Bearer $GITHUB_TOKEN" \
-    "https://github.com/XTLS/go-win7/releases/download/patched-${go_version}/go-for-win7-linux-amd64.zip"
-
-  rm -rf go-win7
-  unzip go-win7.zip -d go-win7
-  rm go-win7.zip
-
-  # Set permissions for all wrapper files
-  chmod +x ./wrapper/zcc-win7
-  chmod +x ./wrapper/zcxx-win7
-  chmod +x ./wrapper/zcc-win7-386
-  chmod +x ./wrapper/zcxx-win7-386
-
-  # Build for both 386 and amd64 architectures
-  for arch in "386" "amd64"; do
-    echo "building for windows7-${arch}"
-    export GOOS=windows
-    export GOARCH=${arch}
-    export CGO_ENABLED=1
-
-    # Use architecture-specific wrapper files
-    if [ "$arch" = "386" ]; then
-      export CC=$(pwd)/wrapper/zcc-win7-386
-      export CXX=$(pwd)/wrapper/zcxx-win7-386
-    else
-      export CC=$(pwd)/wrapper/zcc-win7
-      export CXX=$(pwd)/wrapper/zcxx-win7
-    fi
-
-    # Use the patched Go compiler for Win7 compatibility
-    $(pwd)/go-win7/bin/go build -o "${1}-${arch}.exe" -ldflags="$ldflags" -tags=jsoniter .
-  done
-}
-
BuildDev() {
  rm -rf .git/
  mkdir -p "dist"
@@ -154,8 +121,8 @@ BuildDev() {
  xgo -targets=windows/amd64,darwin/amd64,darwin/arm64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
  mv "$appName"-* dist
  cd dist
- # cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe
- # upx -9 ./"$appName"-windows-amd64-upx.exe
+ cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe
+ upx -9 ./"$appName"-windows-amd64-upx.exe
  find . -type f -print0 | xargs -0 md5sum >md5.txt
  cat md5.txt
}
@@ -167,7 +134,7 @@ BuildDocker() {
PrepareBuildDockerMusl() {
  mkdir -p build/musl-libs
  BASE="https://github.com/OpenListTeam/musl-compilers/releases/latest/download/"
- FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross i486-linux-musl-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross riscv64-linux-musl-cross powerpc64le-linux-musl-cross loongarch64-linux-musl-cross) ## Disable s390x-linux-musl-cross builds
+ FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross i486-linux-musl-cross s390x-linux-musl-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross riscv64-linux-musl-cross powerpc64le-linux-musl-cross)
  for i in "${FILES[@]}"; do
    url="${BASE}${i}.tgz"
    lib_tgz="build/${i}.tgz"
@@ -186,8 +153,8 @@ BuildDockerMultiplatform() {
  docker_lflags="--extldflags '-static -fpic' $ldflags"
  export CGO_ENABLED=1
- OS_ARCHES=(linux-amd64 linux-arm64 linux-386 linux-riscv64 linux-ppc64le linux-loong64) ## Disable linux-s390x builds
+ OS_ARCHES=(linux-amd64 linux-arm64 linux-386 linux-s390x linux-riscv64 linux-ppc64le)
- CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc i486-linux-musl-gcc riscv64-linux-musl-gcc powerpc64le-linux-musl-gcc loongarch64-linux-musl-gcc) ## Disable s390x-linux-musl-gcc builds
+ CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc i486-linux-musl-gcc s390x-linux-musl-gcc riscv64-linux-musl-gcc powerpc64le-linux-musl-gcc)
  for i in "${!OS_ARCHES[@]}"; do
    os_arch=${OS_ARCHES[$i]}
    cgo_cc=${CGO_ARGS[$i]}
@@ -219,171 +186,12 @@ BuildRelease() {
  rm -rf .git/
  mkdir -p "build"
  BuildWinArm64 ./build/"$appName"-windows-arm64.exe
- BuildWin7 ./build/"$appName"-windows7
  xgo -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
  # why? Because some target platforms seem to have issues with upx compression
- # upx -9 ./"$appName"-linux-amd64
- # cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe
+ upx -9 ./"$appName"-linux-amd64
+ cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe
# upx -9 ./"$appName"-windows-amd64-upx.exe upx -9 ./"$appName"-windows-amd64-upx.exe
mv "$appName"-* build mv "$appName"-* build
# Build LoongArch with glibc (both old world abi1.0 and new world abi2.0)
# Separate from musl builds to avoid cache conflicts
BuildLoongGLIBC ./build/$appName-linux-loong64-abi1.0 abi1.0
BuildLoongGLIBC ./build/$appName-linux-loong64 abi2.0
}
BuildLoongGLIBC() {
local target_abi="$2"
local output_file="$1"
local oldWorldGoVersion="1.24.3"
if [ "$target_abi" = "abi1.0" ]; then
echo building for linux-loong64-abi1.0
else
echo building for linux-loong64-abi2.0
target_abi="abi2.0" # Default to abi2.0 if not specified
fi
# Note: No longer need global cache cleanup since ABI1.0 uses isolated cache directory
echo "Using optimized cache strategy: ABI1.0 has isolated cache, ABI2.0 uses standard cache"
if [ "$target_abi" = "abi1.0" ]; then
# Setup abi1.0 toolchain and patched Go compiler similar to cgo-action implementation
echo "Setting up Loongson old-world ABI1.0 toolchain and patched Go compiler..."
# Download and setup patched Go compiler for old-world
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
-o go-loong64-abi1.0.tar.gz; then
echo "Error: Failed to download patched Go compiler for old-world ABI1.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
-o go-loong64-abi1.0.tar.gz || true
fi
return 1
fi
rm -rf go-loong64-abi1.0
mkdir go-loong64-abi1.0
if ! tar -xzf go-loong64-abi1.0.tar.gz -C go-loong64-abi1.0 --strip-components=1; then
echo "Error: Failed to extract patched Go compiler"
return 1
fi
rm go-loong64-abi1.0.tar.gz
# Download and setup GCC toolchain for old-world
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/loongson-gnu-toolchain-8.3.novec-x86_64-loongarch64-linux-gnu-rc1.1.tar.xz" \
-o gcc8-loong64-abi1.0.tar.xz; then
echo "Error: Failed to download GCC toolchain for old-world ABI1.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/loongson-gnu-toolchain-8.3.novec-x86_64-loongarch64-linux-gnu-rc1.1.tar.xz" \
-o gcc8-loong64-abi1.0.tar.xz || true
fi
return 1
fi
rm -rf gcc8-loong64-abi1.0
mkdir gcc8-loong64-abi1.0
if ! tar -Jxf gcc8-loong64-abi1.0.tar.xz -C gcc8-loong64-abi1.0 --strip-components=1; then
echo "Error: Failed to extract GCC toolchain"
return 1
fi
rm gcc8-loong64-abi1.0.tar.xz
# Setup separate cache directory for ABI1.0 to avoid cache pollution
abi1_cache_dir="$(pwd)/go-loong64-abi1.0-cache"
mkdir -p "$abi1_cache_dir"
echo "Using separate cache directory for ABI1.0: $abi1_cache_dir"
# Use patched Go compiler for old-world build (critical for ABI1.0 compatibility)
echo "Building with patched Go compiler for old-world ABI1.0..."
echo "Using isolated cache directory: $abi1_cache_dir"
# Use env command to set environment variables locally without affecting global environment
if ! env GOOS=linux GOARCH=loong64 \
CC="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc" \
CXX="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-g++" \
CGO_ENABLED=1 \
GOCACHE="$abi1_cache_dir" \
$(pwd)/go-loong64-abi1.0/bin/go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed with patched Go compiler"
echo "Attempting retry with cache cleanup..."
env GOCACHE="$abi1_cache_dir" $(pwd)/go-loong64-abi1.0/bin/go clean -cache
if ! env GOOS=linux GOARCH=loong64 \
CC="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc" \
CXX="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-g++" \
CGO_ENABLED=1 \
GOCACHE="$abi1_cache_dir" \
$(pwd)/go-loong64-abi1.0/bin/go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed again after cache cleanup"
echo "Build environment details:"
echo "GOOS=linux"
echo "GOARCH=loong64"
echo "CC=$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc"
echo "CXX=$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-g++"
echo "CGO_ENABLED=1"
echo "GOCACHE=$abi1_cache_dir"
echo "Go version: $($(pwd)/go-loong64-abi1.0/bin/go version)"
echo "GCC version: $($(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc --version | head -1)"
return 1
fi
fi
else
# Setup abi2.0 toolchain for new world glibc build
echo "Setting up new-world ABI2.0 toolchain..."
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/cross-tools/releases/download/20250507/x86_64-cross-tools-loongarch64-unknown-linux-gnu-legacy.tar.xz" \
-o gcc12-loong64-abi2.0.tar.xz; then
echo "Error: Failed to download GCC toolchain for new-world ABI2.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/cross-tools/releases/download/20250507/x86_64-cross-tools-loongarch64-unknown-linux-gnu-legacy.tar.xz" \
-o gcc12-loong64-abi2.0.tar.xz || true
fi
return 1
fi
rm -rf gcc12-loong64-abi2.0
mkdir gcc12-loong64-abi2.0
if ! tar -Jxf gcc12-loong64-abi2.0.tar.xz -C gcc12-loong64-abi2.0 --strip-components=1; then
echo "Error: Failed to extract GCC toolchain"
return 1
fi
rm gcc12-loong64-abi2.0.tar.xz
export GOOS=linux
export GOARCH=loong64
export CC=$(pwd)/gcc12-loong64-abi2.0/bin/loongarch64-unknown-linux-gnu-gcc
export CXX=$(pwd)/gcc12-loong64-abi2.0/bin/loongarch64-unknown-linux-gnu-g++
export CGO_ENABLED=1
# Use standard Go compiler for new-world build
echo "Building with standard Go compiler for new-world ABI2.0..."
if ! go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed with standard Go compiler"
echo "Attempting retry with cache cleanup..."
go clean -cache
if ! go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed again after cache cleanup"
echo "Build environment details:"
echo "GOOS=$GOOS"
echo "GOARCH=$GOARCH"
echo "CC=$CC"
echo "CXX=$CXX"
echo "CGO_ENABLED=$CGO_ENABLED"
echo "Go version: $(go version)"
echo "GCC version: $($CC --version | head -1)"
return 1
fi
fi
fi
} }
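BuildLoongGLIBC keeps a private GOCACHE for the ABI1.0 toolchain so its objects never mix with the standard build cache. The same isolation technique, sketched in Go with os/exec (paths are illustrative assumptions):

package main

import (
	"os"
	"os/exec"
)

// buildWithIsolatedCache runs `go build` under a private GOCACHE so objects
// from an incompatible toolchain never enter the shared build cache.
func buildWithIsolatedCache(goBin, output, cacheDir string) error {
	cmd := exec.Command(goBin, "build", "-a", "-o", output, ".")
	cmd.Env = append(os.Environ(),
		"GOOS=linux",
		"GOARCH=loong64",
		"CGO_ENABLED=1",
		"GOCACHE="+cacheDir, // the isolation: a per-toolchain cache directory
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	_ = buildWithIsolatedCache("./go-loong64-abi1.0/bin/go", "./build/app", "./go-loong64-abi1.0-cache")
}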
BuildReleaseLinuxMusl() { BuildReleaseLinuxMusl() {
@ -441,7 +249,6 @@ BuildReleaseLinuxMuslArm() {
done done
} }
BuildReleaseAndroid() { BuildReleaseAndroid() {
rm -rf .git/ rm -rf .git/
mkdir -p "build" mkdir -p "build"
@ -471,7 +278,6 @@ BuildReleaseFreeBSD() {
freebsd_version=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/freebsd/freebsd-src/tags\"" | \ freebsd_version=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/freebsd/freebsd-src/tags\"" | \
jq -r '.[].name' | \ jq -r '.[].name' | \
grep '^release/14\.' | \ grep '^release/14\.' | \
grep -v -- '-p[0-9]*$' | \
sort -V | \ sort -V | \
tail -1 | \ tail -1 | \
sed 's/release\///' | \ sed 's/release\///' | \
@ -537,7 +343,7 @@ MakeRelease() {
tar -czvf compress/"$i$liteSuffix".tar.gz "$appName" tar -czvf compress/"$i$liteSuffix".tar.gz "$appName"
rm -f "$appName" rm -f "$appName"
done done
for i in $(find . -type f \( -name "$appName-windows-*" -o -name "$appName-windows7-*" \)); do for i in $(find . -type f -name "$appName-windows-*"); do
cp "$i" "$appName".exe cp "$i" "$appName".exe
zip compress/$(echo $i | sed 's/\.[^.]*$//')$liteSuffix.zip "$appName".exe zip compress/$(echo $i | sed 's/\.[^.]*$//')$liteSuffix.zip "$appName".exe
rm -f "$appName".exe rm -f "$appName".exe
@ -584,7 +390,7 @@ for arg in "$@"; do
done done
if [ "$buildType" = "dev" ]; then if [ "$buildType" = "dev" ]; then
FetchWebRolling FetchWebDev
if [ "$dockerType" = "docker" ]; then if [ "$dockerType" = "docker" ]; then
BuildDocker BuildDocker
elif [ "$dockerType" = "docker-multiplatform" ]; then elif [ "$dockerType" = "docker-multiplatform" ]; then
@ -596,7 +402,7 @@ if [ "$buildType" = "dev" ]; then
fi fi
elif [ "$buildType" = "release" -o "$buildType" = "beta" ]; then elif [ "$buildType" = "release" -o "$buildType" = "beta" ]; then
if [ "$buildType" = "beta" ]; then if [ "$buildType" = "beta" ]; then
FetchWebRolling FetchWebDev
else else
FetchWebRelease FetchWebRelease
fi fi
@ -677,5 +483,4 @@ else
echo -e " $0 release" echo -e " $0 release"
echo -e " $0 release lite" echo -e " $0 release lite"
echo -e " $0 release docker lite" echo -e " $0 release docker lite"
echo -e " $0 release linux_musl"
fi fi


@ -4,8 +4,6 @@ Copyright © 2022 NAME HERE <EMAIL ADDRESS>
package cmd package cmd
import ( import (
"fmt"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting" "github.com/OpenListTeam/OpenList/v4/internal/setting"
@ -26,11 +24,10 @@ var AdminCmd = &cobra.Command{
if err != nil { if err != nil {
utils.Log.Errorf("failed get admin user: %+v", err) utils.Log.Errorf("failed get admin user: %+v", err)
} else { } else {
utils.Log.Infof("get admin user from CLI") utils.Log.Infof("Admin user's username: %s", admin.Username)
fmt.Println("Admin user's username:", admin.Username) utils.Log.Infof("The password can only be output at the first startup, and then stored as a hash value, which cannot be reversed")
fmt.Println("The password can only be output at the first startup, and then stored as a hash value, which cannot be reversed") utils.Log.Infof("You can reset the password with a random string by running [openlist admin random]")
fmt.Println("You can reset the password with a random string by running [openlist admin random]") utils.Log.Infof("You can also set a new password by running [openlist admin set NEW_PASSWORD]")
fmt.Println("You can also set a new password by running [openlist admin set NEW_PASSWORD]")
} }
}, },
} }
@ -39,7 +36,6 @@ var RandomPasswordCmd = &cobra.Command{
Use: "random", Use: "random",
Short: "Reset admin user's password to a random string", Short: "Reset admin user's password to a random string",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
utils.Log.Infof("reset admin user's password to a random string from CLI")
newPwd := random.String(8) newPwd := random.String(8)
setAdminPassword(newPwd) setAdminPassword(newPwd)
}, },
@ -48,12 +44,12 @@ var RandomPasswordCmd = &cobra.Command{
var SetPasswordCmd = &cobra.Command{ var SetPasswordCmd = &cobra.Command{
Use: "set", Use: "set",
Short: "Set admin user's password", Short: "Set admin user's password",
RunE: func(cmd *cobra.Command, args []string) error { Run: func(cmd *cobra.Command, args []string) {
if len(args) == 0 { if len(args) == 0 {
return fmt.Errorf("Please enter the new password") utils.Log.Errorf("Please enter the new password")
return
} }
setAdminPassword(args[0]) setAdminPassword(args[0])
return nil
}, },
} }
@ -64,8 +60,7 @@ var ShowTokenCmd = &cobra.Command{
Init() Init()
defer Release() defer Release()
token := setting.GetStr(conf.Token) token := setting.GetStr(conf.Token)
utils.Log.Infof("show admin token from CLI") utils.Log.Infof("Admin token: %s", token)
fmt.Println("Admin token:", token)
}, },
} }
@ -82,10 +77,9 @@ func setAdminPassword(pwd string) {
utils.Log.Errorf("failed update admin user: %+v", err) utils.Log.Errorf("failed update admin user: %+v", err)
return return
} }
utils.Log.Infof("admin user has been update from CLI") utils.Log.Infof("admin user has been updated:")
fmt.Println("admin user has been updated:") utils.Log.Infof("username: %s", admin.Username)
fmt.Println("username:", admin.Username) utils.Log.Infof("password: %s", pwd)
fmt.Println("password:", pwd)
DelAdminCacheOnline() DelAdminCacheOnline()
} }


@ -4,8 +4,6 @@ Copyright © 2022 NAME HERE <EMAIL ADDRESS>
package cmd package cmd
import ( import (
"fmt"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@ -26,8 +24,7 @@ var Cancel2FACmd = &cobra.Command{
if err != nil { if err != nil {
utils.Log.Errorf("failed to cancel 2FA: %+v", err) utils.Log.Errorf("failed to cancel 2FA: %+v", err)
} else { } else {
utils.Log.Infof("2FA is canceled from CLI") utils.Log.Info("2FA canceled")
fmt.Println("2FA canceled")
DelAdminCacheOnline() DelAdminCacheOnline()
} }
} }


@ -16,7 +16,7 @@ var RootCmd = &cobra.Command{
Short: "A file list program that supports multiple storage.", Short: "A file list program that supports multiple storage.",
Long: `A file list program that supports multiple storage, Long: `A file list program that supports multiple storage,
built with love by OpenListTeam. built with love by OpenListTeam.
Complete documentation is available at https://doc.oplist.org/`, Complete documentation is available at https://docs.openlist.team/`,
} }
func Execute() { func Execute() {


@ -19,7 +19,6 @@ import (
"github.com/OpenListTeam/OpenList/v4/internal/fs" "github.com/OpenListTeam/OpenList/v4/internal/fs"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server" "github.com/OpenListTeam/OpenList/v4/server"
"github.com/OpenListTeam/OpenList/v4/server/middlewares"
"github.com/OpenListTeam/sftpd-openlist" "github.com/OpenListTeam/sftpd-openlist"
ftpserver "github.com/fclairamb/ftpserverlib" ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
@ -48,15 +47,7 @@ the address is defined in config file`,
gin.SetMode(gin.ReleaseMode) gin.SetMode(gin.ReleaseMode)
} }
r := gin.New() r := gin.New()
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
// gin log
if conf.Conf.Log.Filter.Enable {
r.Use(middlewares.FilteredLogger())
} else {
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out))
}
r.Use(gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r) server.Init(r)
var httpHandler http.Handler = r var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c { if conf.Conf.Scheme.EnableH2c {
@ -65,7 +56,6 @@ the address is defined in config file`,
var httpSrv, httpsSrv, unixSrv *http.Server var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 { if conf.Conf.Scheme.HttpPort != -1 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort) httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
fmt.Printf("start HTTP server @ %s\n", httpBase)
utils.Log.Infof("start HTTP server @ %s", httpBase) utils.Log.Infof("start HTTP server @ %s", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler} httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() { go func() {
@ -77,7 +67,6 @@ the address is defined in config file`,
} }
if conf.Conf.Scheme.HttpsPort != -1 { if conf.Conf.Scheme.HttpsPort != -1 {
httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort) httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort)
fmt.Printf("start HTTPS server @ %s\n", httpsBase)
utils.Log.Infof("start HTTPS server @ %s", httpsBase) utils.Log.Infof("start HTTPS server @ %s", httpsBase)
httpsSrv = &http.Server{Addr: httpsBase, Handler: r} httpsSrv = &http.Server{Addr: httpsBase, Handler: r}
go func() { go func() {
@ -88,7 +77,6 @@ the address is defined in config file`,
}() }()
} }
if conf.Conf.Scheme.UnixFile != "" { if conf.Conf.Scheme.UnixFile != "" {
fmt.Printf("start unix server @ %s\n", conf.Conf.Scheme.UnixFile)
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile) utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: httpHandler} unixSrv = &http.Server{Handler: httpHandler}
go func() { go func() {
@ -117,7 +105,6 @@ the address is defined in config file`,
s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out)) s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.InitS3(s3r) server.InitS3(s3r)
s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port) s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port)
fmt.Printf("start S3 server @ %s\n", s3Base)
utils.Log.Infof("start S3 server @ %s", s3Base) utils.Log.Infof("start S3 server @ %s", s3Base)
go func() { go func() {
var err error var err error
@ -142,7 +129,6 @@ the address is defined in config file`,
if err != nil { if err != nil {
utils.Log.Fatalf("failed to start ftp driver: %s", err.Error()) utils.Log.Fatalf("failed to start ftp driver: %s", err.Error())
} else { } else {
fmt.Printf("start ftp server on %s\n", conf.Conf.FTP.Listen)
utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen) utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen)
go func() { go func() {
ftpServer = ftpserver.NewFtpServer(ftpDriver) ftpServer = ftpserver.NewFtpServer(ftpDriver)
@ -161,7 +147,6 @@ the address is defined in config file`,
if err != nil { if err != nil {
utils.Log.Fatalf("failed to start sftp driver: %s", err.Error()) utils.Log.Fatalf("failed to start sftp driver: %s", err.Error())
} else { } else {
fmt.Printf("start sftp server on %s", conf.Conf.SFTP.Listen)
utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen) utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen)
go func() { go func() {
sftpServer = sftpd.NewSftpServer(sftpDriver) sftpServer = sftpd.NewSftpServer(sftpDriver)
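The middlewares.FilteredLogger referenced on the left is not shown in this diff; a minimal sketch of what a path-filtering gin logger can look like (the skip-list mechanism is an assumption, not the project's actual implementation):

package middlewares

import (
	"github.com/gin-gonic/gin"
	log "github.com/sirupsen/logrus"
)

// FilteredLogger logs every request except those whose path is in the skip list.
func FilteredLogger(skip ...string) gin.HandlerFunc {
	skipped := make(map[string]struct{}, len(skip))
	for _, p := range skip {
		skipped[p] = struct{}{}
	}
	logger := gin.LoggerWithWriter(log.StandardLogger().Out)
	return func(c *gin.Context) {
		if _, ok := skipped[c.Request.URL.Path]; ok {
			c.Next() // pass through silently
			return
		}
		logger(c) // gin's logger calls c.Next() itself, then writes the log line
	}
}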


@ -4,7 +4,6 @@ Copyright © 2023 NAME HERE <EMAIL ADDRESS>
package cmd package cmd
import ( import (
"fmt"
"os" "os"
"strconv" "strconv"
@ -23,61 +22,28 @@ var storageCmd = &cobra.Command{
} }
var disableStorageCmd = &cobra.Command{ var disableStorageCmd = &cobra.Command{
Use: "disable [mount path]", Use: "disable",
Short: "Disable a storage by mount path", Short: "Disable a storage",
RunE: func(cmd *cobra.Command, args []string) error { Run: func(cmd *cobra.Command, args []string) {
if len(args) < 1 { if len(args) < 1 {
return fmt.Errorf("mount path is required") utils.Log.Errorf("mount path is required")
return
} }
mountPath := args[0] mountPath := args[0]
Init() Init()
defer Release() defer Release()
storage, err := db.GetStorageByMountPath(mountPath) storage, err := db.GetStorageByMountPath(mountPath)
if err != nil { if err != nil {
return fmt.Errorf("failed to query storage: %+v", err) utils.Log.Errorf("failed to query storage: %+v", err)
} } else {
storage.Disabled = true storage.Disabled = true
err = db.UpdateStorage(storage) err = db.UpdateStorage(storage)
if err != nil { if err != nil {
return fmt.Errorf("failed to update storage: %+v", err) utils.Log.Errorf("failed to update storage: %+v", err)
} } else {
utils.Log.Infof("Storage with mount path [%s] has been disabled from CLI", mountPath) utils.Log.Infof("Storage with mount path [%s] have been disabled", mountPath)
fmt.Printf("Storage with mount path [%s] has been disabled\n", mountPath)
return nil
},
}
var deleteStorageCmd = &cobra.Command{
Use: "delete [id]",
Short: "Delete a storage by id",
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("id is required")
}
id, err := strconv.Atoi(args[0])
if err != nil {
return fmt.Errorf("id must be a number")
}
if force, _ := cmd.Flags().GetBool("force"); force {
fmt.Printf("Are you sure you want to delete storage with id [%d]? [y/N]: ", id)
var confirm string
fmt.Scanln(&confirm)
if confirm != "y" && confirm != "Y" {
fmt.Println("Delete operation cancelled.")
return nil
} }
} }
Init()
defer Release()
err = db.DeleteStorageById(uint(id))
if err != nil {
return fmt.Errorf("failed to delete storage by id: %+v", err)
}
utils.Log.Infof("Storage with id [%d] have been deleted from CLI", id)
fmt.Printf("Storage with id [%d] have been deleted\n", id)
return nil
}, },
} }
@ -122,14 +88,14 @@ var storageTableHeight int
var listStorageCmd = &cobra.Command{ var listStorageCmd = &cobra.Command{
Use: "list", Use: "list",
Short: "List all storages", Short: "List all storages",
RunE: func(cmd *cobra.Command, args []string) error { Run: func(cmd *cobra.Command, args []string) {
Init() Init()
defer Release() defer Release()
storages, _, err := db.GetStorages(1, -1) storages, _, err := db.GetStorages(1, -1)
if err != nil { if err != nil {
return fmt.Errorf("failed to query storages: %+v", err) utils.Log.Errorf("failed to query storages: %+v", err)
} else { } else {
fmt.Printf("Found %d storages\n", len(storages)) utils.Log.Infof("Found %d storages", len(storages))
columns := []table.Column{ columns := []table.Column{
{Title: "ID", Width: 4}, {Title: "ID", Width: 4},
{Title: "Driver", Width: 16}, {Title: "Driver", Width: 16},
@ -172,11 +138,10 @@ var listStorageCmd = &cobra.Command{
m := model{t} m := model{t}
if _, err := tea.NewProgram(m).Run(); err != nil { if _, err := tea.NewProgram(m).Run(); err != nil {
fmt.Printf("failed to run program: %+v\n", err) utils.Log.Errorf("failed to run program: %+v", err)
os.Exit(1) os.Exit(1)
} }
} }
return nil
}, },
} }
@ -186,8 +151,6 @@ func init() {
storageCmd.AddCommand(disableStorageCmd) storageCmd.AddCommand(disableStorageCmd)
storageCmd.AddCommand(listStorageCmd) storageCmd.AddCommand(listStorageCmd)
storageCmd.PersistentFlags().IntVarP(&storageTableHeight, "height", "H", 10, "Table height") storageCmd.PersistentFlags().IntVarP(&storageTableHeight, "height", "H", 10, "Table height")
storageCmd.AddCommand(deleteStorageCmd)
deleteStorageCmd.Flags().BoolP("force", "f", false, "Force delete without confirmation")
// Here you will define your flags and configuration settings. // Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command // Cobra supports Persistent Flags which will work for this command

View File

@ -6,9 +6,10 @@ services:
ports: ports:
- '5244:5244' - '5244:5244'
- '5245:5245' - '5245:5245'
user: '0:0'
environment: environment:
- PUID=0
- PGID=0
- UMASK=022 - UMASK=022
- TZ=Asia/Shanghai - TZ=UTC
container_name: openlist container_name: openlist
image: 'openlistteam/openlist:latest' image: 'openlistteam/openlist:latest'


@ -186,7 +186,7 @@ func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
preHash = strings.ToUpper(preHash) preHash = strings.ToUpper(preHash)
fullHash := stream.GetHash().GetHash(utils.SHA1) fullHash := stream.GetHash().GetHash(utils.SHA1)
if len(fullHash) != utils.SHA1.Width { if len(fullHash) != utils.SHA1.Width {
_, fullHash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA1) _, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA1)
if err != nil { if err != nil {
return nil, err return nil, err
} }


@ -18,6 +18,7 @@ var config = driver.Config{
Name: "115 Cloud", Name: "115 Cloud",
DefaultRoot: "0", DefaultRoot: "0",
// OnlyProxy: true, // OnlyProxy: true,
// OnlyLocal: true,
// NoOverwriteUpload: true, // NoOverwriteUpload: true,
} }


@ -321,7 +321,7 @@ func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.Upload
err error err error
) )
tmpF, err := s.CacheFullAndWriter(&up, nil) tmpF, err := s.CacheFullInTempFile()
if err != nil { if err != nil {
return nil, err return nil, err
} }


@ -8,7 +8,6 @@ import (
"strings" "strings"
"time" "time"
sdk "github.com/OpenListTeam/115-sdk-go"
"github.com/OpenListTeam/OpenList/v4/cmd/flags" "github.com/OpenListTeam/OpenList/v4/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/v4/internal/driver"
@ -17,6 +16,7 @@ import (
"github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
sdk "github.com/OpenListTeam/115-sdk-go"
"golang.org/x/time/rate" "golang.org/x/time/rate"
) )
@ -131,23 +131,6 @@ func (d *Open115) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}, nil }, nil
} }
func (d *Open115) GetObjInfo(ctx context.Context, path string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.GetFolderInfoByPath(ctx, path)
if err != nil {
return nil, err
}
return &Obj{
Fid: resp.FileID,
Fn: resp.FileName,
Fc: resp.FileCategory,
Sha1: resp.Sha1,
Pc: resp.PickCode,
}, nil
}
func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) { func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil { if err := d.WaitLimit(ctx); err != nil {
return nil, err return nil, err
@ -239,7 +222,7 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
} }
sha1 := file.GetHash().GetHash(utils.SHA1) sha1 := file.GetHash().GetHash(utils.SHA1)
if len(sha1) != utils.SHA1.Width { if len(sha1) != utils.SHA1.Width {
_, sha1, err = stream.CacheFullAndHash(file, &up, utils.SHA1) _, sha1, err = stream.CacheFullInTempFileAndHash(file, utils.SHA1)
if err != nil { if err != nil {
return err return err
} }
@ -269,7 +252,6 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
return err return err
} }
if resp.Status == 2 { if resp.Status == 2 {
up(100)
return nil return nil
} }
// 2. two way verify // 2. two way verify
@ -304,7 +286,6 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
return err return err
} }
if resp.Status == 2 { if resp.Status == 2 {
up(100)
return nil return nil
} }
} }
@ -321,22 +302,6 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
return nil return nil
} }
func (d *Open115) OfflineDownload(ctx context.Context, uris []string, dstDir model.Obj) ([]string, error) {
return d.client.AddOfflineTaskURIs(ctx, uris, dstDir.GetID())
}
func (d *Open115) DeleteOfflineTask(ctx context.Context, infoHash string, deleteFiles bool) error {
return d.client.DeleteOfflineTask(ctx, infoHash, deleteFiles)
}
func (d *Open115) OfflineList(ctx context.Context) (*sdk.OfflineTaskListResp, error) {
resp, err := d.client.OfflineTaskList(ctx, 1)
if err != nil {
return nil, err
}
return resp, nil
}
// func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) { // func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional // // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement // return nil, errs.NotImplement


@ -11,14 +11,23 @@ type Addition struct {
// define other // define other
OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"` OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"` OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
LimitRate float64 `json:"limit_rate" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"` LimitRate float64 `json:"limit_rate,string" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"`
AccessToken string `json:"access_token" required:"true"` AccessToken string `json:"access_token" required:"true"`
RefreshToken string `json:"refresh_token" required:"true"` RefreshToken string `json:"refresh_token" required:"true"`
} }
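The limit_rate field above is documented as [limit] requests per second; with golang.org/x/time/rate (already in this driver's imports) the mapping is typically the following sketch (type and method names here are illustrative, not the driver's exact code):

package example

import (
	"context"

	"golang.org/x/time/rate"
)

type limitedClient struct {
	limiter *rate.Limiter
}

// newLimitedClient: limitRate requests per second with a burst of 1.
func newLimitedClient(limitRate float64) *limitedClient {
	return &limitedClient{limiter: rate.NewLimiter(rate.Limit(limitRate), 1)}
}

// waitLimit blocks until the limiter grants a token or ctx is cancelled.
func (c *limitedClient) waitLimit(ctx context.Context) error {
	return c.limiter.Wait(ctx)
}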
var config = driver.Config{ var config = driver.Config{
Name: "115 Open", Name: "115 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0", DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
} }
func init() { func init() {


@ -6,13 +6,12 @@ import (
"io" "io"
"time" "time"
sdk "github.com/OpenListTeam/115-sdk-go"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss" "github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/avast/retry-go" "github.com/avast/retry-go"
sdk "github.com/OpenListTeam/115-sdk-go"
) )
func calPartSize(fileSize int64) int64 { func calPartSize(fileSize int64) int64 {
@ -70,6 +69,9 @@ func (d *Open115) singleUpload(ctx context.Context, tempF model.File, tokenResp
// } // }
func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error { func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken)) ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil { if err != nil {
return err return err
@ -84,13 +86,6 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
return err return err
} }
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ss, err := streamPkg.NewStreamSectionReader(stream, int(chunkSize), &up)
if err != nil {
return err
}
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
parts := make([]oss.UploadPart, partNum) parts := make([]oss.UploadPart, partNum)
offset := int64(0) offset := int64(0)
@ -103,13 +98,10 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
if i == partNum { if i == partNum {
partSize = fileSize - (i-1)*chunkSize partSize = fileSize - (i-1)*chunkSize
} }
rd, err := ss.GetSectionReader(offset, partSize) rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
if err != nil {
return err
}
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
err = retry.Do(func() error { err = retry.Do(func() error {
rd.Seek(0, io.SeekStart) _ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i)) part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
if err != nil { if err != nil {
return err return err
@ -120,7 +112,6 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
retry.Attempts(3), retry.Attempts(3),
retry.DelayType(retry.BackOffDelay), retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second)) retry.Delay(time.Second))
ss.FreeSectionReader(rd)
if err != nil { if err != nil {
return err return err
} }
@ -130,7 +121,7 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
} else { } else {
offset += partSize offset += partSize
} }
up(float64(offset) * 100 / float64(fileSize)) up(float64(offset) / float64(fileSize))
} }
// callbackRespBytes := make([]byte, 1024) // callbackRespBytes := make([]byte, 1024)
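Both versions above rewind the part reader before every retry, since a failed attempt may already have consumed some of it. The pattern in isolation, sketched with retry-go over any io.ReadSeeker (function names are illustrative):

package example

import (
	"io"
	"time"

	"github.com/avast/retry-go"
)

// uploadWithRetry re-seeks to the start of the section before each attempt,
// so a partially consumed reader never corrupts a retried upload.
func uploadWithRetry(rd io.ReadSeeker, doUpload func(io.Reader) error) error {
	return retry.Do(func() error {
		if _, err := rd.Seek(0, io.SeekStart); err != nil {
			return err
		}
		return doUpload(rd)
	},
		retry.Attempts(3),
		retry.Delay(time.Second),
		retry.DelayType(retry.BackOffDelay))
}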


@ -19,6 +19,11 @@ type Addition struct {
var config = driver.Config{ var config = driver.Config{
Name: "115 Share", Name: "115 Share",
DefaultRoot: "0", DefaultRoot: "0",
// OnlyProxy: true,
// OnlyLocal: true,
CheckStatus: false,
Alert: "",
NoOverwriteUpload: true,
NoUpload: true, NoUpload: true,
} }


@ -64,6 +64,14 @@ func (d *Pan123) List(ctx context.Context, dir model.Obj, args model.ListArgs) (
func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if f, ok := file.(File); ok { if f, ok := file.(File); ok {
//var resp DownResp
var headers map[string]string
if !utils.IsLocalIPAddr(args.IP) {
headers = map[string]string{
//"X-Real-IP": "1.1.1.1",
"X-Forwarded-For": args.IP,
}
}
data := base.Json{ data := base.Json{
"driveId": 0, "driveId": 0,
"etag": f.Etag, "etag": f.Etag,
@ -75,27 +83,25 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
} }
resp, err := d.Request(DownloadInfo, http.MethodPost, func(req *resty.Request) { resp, err := d.Request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data).SetHeaders(headers)
}, nil) }, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
downloadUrl := utils.Json.Get(resp, "data", "DownloadUrl").ToString() downloadUrl := utils.Json.Get(resp, "data", "DownloadUrl").ToString()
ou, err := url.Parse(downloadUrl) u, err := url.Parse(downloadUrl)
if err != nil { if err != nil {
return nil, err return nil, err
} }
u_ := ou.String() nu := u.Query().Get("params")
nu := ou.Query().Get("params")
if nu != "" { if nu != "" {
du, _ := base64.StdEncoding.DecodeString(nu) du, _ := base64.StdEncoding.DecodeString(nu)
u, err := url.Parse(string(du)) u, err = url.Parse(string(du))
if err != nil { if err != nil {
return nil, err return nil, err
} }
u_ = u.String()
} }
u_ := u.String()
log.Debug("download url: ", u_) log.Debug("download url: ", u_)
res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_) res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_)
if err != nil { if err != nil {
@ -112,7 +118,7 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString() link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString()
} }
link.Header = http.Header{ link.Header = http.Header{
"Referer": []string{fmt.Sprintf("%s://%s/", ou.Scheme, ou.Host)}, "Referer": []string{"https://www.123pan.com/"},
} }
return &link, nil return &link, nil
} else { } else {
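The Link code above unwraps a real download target that the API hides inside a base64-encoded params query value; the decoding step in isolation (the sample URL is made up):

package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
)

// resolveDownloadURL returns the inner URL carried in the `params` query
// value, or the outer URL unchanged when no params value is present.
func resolveDownloadURL(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	p := u.Query().Get("params")
	if p == "" {
		return u.String(), nil
	}
	decoded, err := base64.StdEncoding.DecodeString(p)
	if err != nil {
		return "", err
	}
	inner, err := url.Parse(string(decoded))
	if err != nil {
		return "", err
	}
	return inner.String(), nil
}

func main() {
	// hypothetical value: "aHR0cHM6Ly9leGFtcGxlLm9yZy9maWxl" decodes to https://example.org/file
	fmt.Println(resolveDownloadURL("https://example.com/dl?params=aHR0cHM6Ly9leGFtcGxlLm9yZy9maWxl"))
}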
@ -182,7 +188,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
etag := file.GetHash().GetHash(utils.MD5) etag := file.GetHash().GetHash(utils.MD5)
var err error var err error
if len(etag) < utils.MD5.Width { if len(etag) < utils.MD5.Width {
_, etag, err = stream.CacheFullAndHash(file, &up, utils.MD5) _, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil { if err != nil {
return err return err
} }


@ -12,7 +12,6 @@ type Addition struct {
//OrderBy string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"` //OrderBy string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"`
//OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"` //OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
AccessToken string AccessToken string
UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
} }
var config = driver.Config{ var config = driver.Config{
@ -23,11 +22,6 @@ var config = driver.Config{
func init() { func init() {
op.RegisterDriver(func() driver.Driver { op.RegisterDriver(func() driver.Driver {
// Newly added default options must be set when the driver is registered in RegisterDriver so they take effect for users already using it return &Pan123{}
return &Pan123{
Addition: Addition{
UploadThread: 3,
},
}
}) })
} }


@ -6,16 +6,11 @@ import (
"io" "io"
"net/http" "net/http"
"strconv" "strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
) )
@ -74,21 +69,18 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
} }
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error { func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
// fetch s3 pre signed urls tmpF, err := file.CacheFullInTempFile()
size := file.GetSize()
chunkSize := int64(16 * utils.MB)
chunkCount := 1
if size > chunkSize {
chunkCount = int((size + chunkSize - 1) / chunkSize)
}
ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
if err != nil { if err != nil {
return err return err
} }
// fetch s3 pre signed urls
size := file.GetSize()
chunkSize := min(size, 16*utils.MB)
chunkCount := int(size / chunkSize)
lastChunkSize := size % chunkSize lastChunkSize := size % chunkSize
if lastChunkSize == 0 { if lastChunkSize > 0 {
chunkCount++
} else {
lastChunkSize = chunkSize lastChunkSize = chunkSize
} }
// only 1 batch is allowed // only 1 batch is allowed
@ -98,57 +90,46 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
batchSize = 10 batchSize = 10
getS3UploadUrl = d.getS3PreSignedUrls getS3UploadUrl = d.getS3PreSignedUrls
} }
thread := min(int(chunkCount), d.UploadThread)
threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
for i := 1; i <= chunkCount; i += batchSize { for i := 1; i <= chunkCount; i += batchSize {
if utils.IsCanceled(uploadCtx) { if utils.IsCanceled(ctx) {
break return ctx.Err()
} }
start := i start := i
end := min(i+batchSize, chunkCount+1) end := min(i+batchSize, chunkCount+1)
s3PreSignedUrls, err := getS3UploadUrl(uploadCtx, upReq, start, end) s3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, start, end)
if err != nil { if err != nil {
return err return err
} }
// upload each chunk // upload each chunk
for cur := start; cur < end; cur++ { for j := start; j < end; j++ {
if utils.IsCanceled(uploadCtx) { if utils.IsCanceled(ctx) {
break return ctx.Err()
} }
offset := int64(cur-1) * chunkSize
curSize := chunkSize curSize := chunkSize
if cur == chunkCount { if j == chunkCount {
curSize = lastChunkSize curSize = lastChunkSize
} }
var reader *stream.SectionReader err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.NewSectionReader(tmpF, chunkSize*int64(j-1), curSize), curSize, false, getS3UploadUrl)
var rateLimitedRd io.Reader
threadG.GoWithLifecycle(errgroup.Lifecycle{
Before: func(ctx context.Context) error {
if reader == nil {
var err error
reader, err = ss.GetSectionReader(offset, curSize)
if err != nil { if err != nil {
return err return err
} }
rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader) up(float64(j) * 100 / float64(chunkCount))
} }
return nil }
}, // complete s3 upload
Do: func(ctx context.Context) error { return d.completeS3(ctx, upReq, file, chunkCount > 1)
reader.Seek(0, io.SeekStart) }
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader *io.SectionReader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)] uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
if uploadUrl == "" { if uploadUrl == "" {
return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls) return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
} }
reader.Seek(0, io.SeekStart) req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, reader))
req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, rateLimitedRd)
if err != nil { if err != nil {
return err return err
} }
req = req.WithContext(ctx)
req.ContentLength = curSize req.ContentLength = curSize
//req.Header.Set("Content-Length", strconv.FormatInt(curSize, 10)) //req.Header.Set("Content-Length", strconv.FormatInt(curSize, 10))
res, err := base.HttpClient.Do(req) res, err := base.HttpClient.Do(req)
@ -157,18 +138,18 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
} }
defer res.Body.Close() defer res.Body.Close()
if res.StatusCode == http.StatusForbidden { if res.StatusCode == http.StatusForbidden {
singleflight.AnyGroup.Do(fmt.Sprintf("Pan123.newUpload_%p", threadG), func() (any, error) { if retry {
newS3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, cur, end) return fmt.Errorf("upload s3 chunk %d failed, status code: %d", cur, res.StatusCode)
if err != nil {
return nil, err
} }
s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls // refresh s3 pre signed urls
return nil, nil newS3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, cur, end)
})
if err != nil { if err != nil {
return err return err
} }
return fmt.Errorf("upload s3 chunk %d failed, status code: %d", cur, res.StatusCode) s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
// retry
reader.Seek(0, io.SeekStart)
return d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, cur, end, reader, curSize, true, getS3UploadUrl)
} }
if res.StatusCode != http.StatusOK { if res.StatusCode != http.StatusOK {
body, err := io.ReadAll(res.Body) body, err := io.ReadAll(res.Body)
@ -177,20 +158,5 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
} }
return fmt.Errorf("upload s3 chunk %d failed, status code: %d, body: %s", cur, res.StatusCode, body) return fmt.Errorf("upload s3 chunk %d failed, status code: %d, body: %s", cur, res.StatusCode, body)
} }
progress := 10.0 + 85.0*float64(threadG.Success())/float64(chunkCount)
up(progress)
return nil return nil
},
After: func(err error) {
ss.FreeSectionReader(reader)
},
})
}
}
if err := threadG.Wait(); err != nil {
return err
}
defer up(100)
// complete s3 upload
return d.completeS3(ctx, upReq, file, chunkCount > 1)
} }
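The two newUpload variants above compute the same chunk layout in different ways: one takes a ceiling division up front, the other floors and then corrects with the remainder. A worked check with a 40 MB file and the 16 MB chunk size used in the code:

package main

import "fmt"

func main() {
	const chunkSize = int64(16 * 1024 * 1024)
	size := int64(40 * 1024 * 1024) // 40 MB sample file

	// variant 1: ceiling division in one expression
	count1 := (size + chunkSize - 1) / chunkSize // 3 chunks

	// variant 2: floor division plus remainder handling
	count2 := size / chunkSize // 2
	lastChunk := size % chunkSize
	if lastChunk > 0 {
		count2++ // 3 chunks; the last one is 8 MB
	} else {
		lastChunk = chunkSize
	}

	fmt.Println(count1, count2, lastChunk) // 3 3 8388608
}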


@ -2,9 +2,7 @@ package _123_open
import ( import (
"context" "context"
"fmt"
"strconv" "strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/v4/internal/errs"
@ -97,22 +95,6 @@ func (d *Open123) Rename(ctx context.Context, srcObj model.Obj, newName string)
} }
func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) error { func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
// Try to implement copy via upload plus MD5 instant transfer (rapid upload)
// 1. Create the file
// parentFileID is the parent directory id; use 0 when uploading to the root directory
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return fmt.Errorf("parse parentFileID error: %v", err)
}
etag := srcObj.(File).Etag
createResp, err := d.create(parentFileId, srcObj.GetName(), etag, srcObj.GetSize(), 2, false)
if err != nil {
return err
}
// Whether the file was instant-uploaded (server already has it)
if createResp.Data.Reuse {
return nil
}
return errs.NotSupport return errs.NotSupport
} }
@ -122,64 +104,26 @@ func (d *Open123) Remove(ctx context.Context, obj model.Obj) error {
return d.trash(fileId) return d.trash(fileId)
} }
func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) { func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
// 1. Create the file
// parentFileID is the parent directory id; use 0 when uploading to the root directory
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64) parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("parse parentFileID error: %v", err)
}
// etag is the file's MD5
etag := file.GetHash().GetHash(utils.MD5) etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width { if len(etag) < utils.MD5.Width {
_, etag, err = stream.CacheFullAndHash(file, &up, utils.MD5) _, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil { if err != nil {
return nil, err return err
} }
} }
createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false) createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false)
if err != nil { if err != nil {
return nil, err return err
} }
// Whether the file was instant-uploaded (server already has it)
if createResp.Data.Reuse { if createResp.Data.Reuse {
// A valid FileID is returned only on a successful instant upload; otherwise it is 0 return nil
if createResp.Data.FileID != 0 {
return File{
FileName: file.GetName(),
Size: file.GetSize(),
FileId: createResp.Data.FileID,
Type: 2,
Etag: etag,
}, nil
}
} }
up(10)
// 2. Upload the slices return d.Upload(ctx, file, createResp, up)
err = d.Upload(ctx, file, createResp, up)
if err != nil {
return nil, err
}
// 3. Finish the upload
for range 60 {
uploadCompleteResp, err := d.complete(createResp.Data.PreuploadID)
// An unknown error code 20103 may be returned; the docs do not explain it
if err == nil && uploadCompleteResp.Data.Completed && uploadCompleteResp.Data.FileID != 0 {
up(100)
return File{
FileName: file.GetName(),
Size: file.GetSize(),
FileId: uploadCompleteResp.Data.FileID,
Type: 2,
Etag: etag,
}, nil
}
// If the API returns completed as false, keep polling this endpoint at 1-second intervals to get the final upload result.
time.Sleep(time.Second)
}
return nil, fmt.Errorf("upload complete timeout")
} }
var _ driver.Driver = (*Open123)(nil) var _ driver.Driver = (*Open123)(nil)
var _ driver.PutResult = (*Open123)(nil)


@ -73,9 +73,7 @@ func (f File) GetName() string {
} }
func (f File) CreateTime() time.Time { func (f File) CreateTime() time.Time {
// The returned time has no timezone info and defaults to UTC+8 parsedTime, err := time.Parse("2006-01-02 15:04:05", f.CreateAt)
loc := time.FixedZone("UTC+8", 8*60*60)
parsedTime, err := time.ParseInLocation("2006-01-02 15:04:05", f.CreateAt, loc)
if err != nil { if err != nil {
return time.Now() return time.Now()
} }
@ -83,9 +81,7 @@ func (f File) CreateTime() time.Time {
} }
func (f File) ModTime() time.Time { func (f File) ModTime() time.Time {
// The returned time has no timezone info and defaults to UTC+8 parsedTime, err := time.Parse("2006-01-02 15:04:05", f.UpdateAt)
loc := time.FixedZone("UTC+8", 8*60*60)
parsedTime, err := time.ParseInLocation("2006-01-02 15:04:05", f.UpdateAt, loc)
if err != nil { if err != nil {
return time.Now() return time.Now()
} }
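The left-hand change matters because time.Parse reads a zone-less timestamp as UTC, while this API reports wall-clock time in UTC+8; a small demonstration of the 8-hour skew the fix removes:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05"
	const stamp = "2024-01-02 08:00:00" // sample value; the API sends no zone info

	asUTC, _ := time.Parse(layout, stamp) // interpreted as 08:00 UTC
	loc := time.FixedZone("UTC+8", 8*60*60)
	asCST, _ := time.ParseInLocation(layout, stamp, loc) // interpreted as 08:00 UTC+8

	fmt.Println(asUTC.Unix() - asCST.Unix()) // 28800: the readings are 8 hours apart
}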
@ -158,7 +154,6 @@ type DownloadInfoResp struct {
} `json:"data"` } `json:"data"`
} }
// Create File V2 response
type UploadCreateResp struct { type UploadCreateResp struct {
BaseResp BaseResp
Data struct { Data struct {
@ -166,15 +161,45 @@ type UploadCreateResp struct {
PreuploadID string `json:"preuploadID"` PreuploadID string `json:"preuploadID"`
Reuse bool `json:"reuse"` Reuse bool `json:"reuse"`
SliceSize int64 `json:"sliceSize"` SliceSize int64 `json:"sliceSize"`
Servers []string `json:"servers"`
} `json:"data"` } `json:"data"`
} }
// Upload Complete V2 response type UploadUrlResp struct {
BaseResp
Data struct {
PresignedURL string `json:"presignedURL"`
}
}
type UploadCompleteResp struct { type UploadCompleteResp struct {
BaseResp
Data struct {
Async bool `json:"async"`
Completed bool `json:"completed"`
FileID int64 `json:"fileID"`
} `json:"data"`
}
type UploadAsyncResp struct {
BaseResp BaseResp
Data struct { Data struct {
Completed bool `json:"completed"` Completed bool `json:"completed"`
FileID int64 `json:"fileID"` FileID int64 `json:"fileID"`
} `json:"data"` } `json:"data"`
} }
type UploadResp struct {
BaseResp
Data struct {
AccessKeyId string `json:"AccessKeyId"`
Bucket string `json:"Bucket"`
Key string `json:"Key"`
SecretAccessKey string `json:"SecretAccessKey"`
SessionToken string `json:"SessionToken"`
FileId int64 `json:"FileId"`
Reuse bool `json:"Reuse"`
EndPoint string `json:"EndPoint"`
StorageNode string `json:"StorageNode"`
UploadId string `json:"UploadId"`
} `json:"data"`
}


@ -1,28 +1,21 @@
package _123_open package _123_open
import ( import (
"bytes"
"context" "context"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http" "net/http"
"strconv"
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup" "github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go" "github.com/avast/retry-go"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
) )
// Create file V2
func (d *Open123) create(parentFileID int64, filename string, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) { func (d *Open123) create(parentFileID int64, filename string, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) {
var resp UploadCreateResp var resp UploadCreateResp
_, err := d.Request(UploadCreate, http.MethodPost, func(req *resty.Request) { _, err := d.Request(UploadCreate, http.MethodPost, func(req *resty.Request) {
@ -41,136 +34,21 @@ func (d *Open123) create(parentFileID int64, filename string, etag string, size
return &resp, nil return &resp, nil
} }
// Upload slices V2 func (d *Open123) url(preuploadID string, sliceNo int64) (string, error) {
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createResp *UploadCreateResp, up driver.UpdateProgress) error { // get upload url
uploadDomain := createResp.Data.Servers[0] var resp UploadUrlResp
size := file.GetSize() _, err := d.Request(UploadUrl, http.MethodPost, func(req *resty.Request) {
chunkSize := createResp.Data.SliceSize req.SetBody(base.Json{
"preuploadId": preuploadID,
ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up) "sliceNo": sliceNo,
if err != nil {
return err
}
uploadNums := (size + chunkSize - 1) / chunkSize
thread := min(int(uploadNums), d.UploadThread)
threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
for partIndex := range uploadNums {
if utils.IsCanceled(uploadCtx) {
break
}
partIndex := partIndex
partNumber := partIndex + 1 // slice numbers start at 1
offset := partIndex * chunkSize
size := min(chunkSize, size-offset)
var reader *stream.SectionReader
var rateLimitedRd io.Reader
sliceMD5 := ""
// multipart form
b := bytes.NewBuffer(make([]byte, 0, 2048))
threadG.GoWithLifecycle(errgroup.Lifecycle{
Before: func(ctx context.Context) error {
if reader == nil {
var err error
// one reader per slice
reader, err = ss.GetSectionReader(offset, size)
if err != nil {
return err
}
// compute the MD5 of the current slice
sliceMD5, err = utils.HashReader(utils.MD5, reader)
if err != nil {
return err
}
}
return nil
},
Do: func(ctx context.Context) error {
// reset the slice reader position, since HashReader or a previously failed attempt has already read to the slice's EOF
reader.Seek(0, io.SeekStart)
b.Reset()
w := multipart.NewWriter(b)
// add the form fields
err = w.WriteField("preuploadID", createResp.Data.PreuploadID)
if err != nil {
return err
}
err = w.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
if err != nil {
return err
}
err = w.WriteField("sliceMD5", sliceMD5)
if err != nil {
return err
}
// write the file content
_, err = w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", file.GetName(), partNumber))
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd = driver.NewLimitedUploadStream(ctx, io.MultiReader(head, reader, tail))
// create the request and set the headers
req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadDomain+"/upload/v2/file/slice", rateLimitedRd)
if err != nil {
return err
}
// set the request headers
req.Header.Add("Authorization", "Bearer "+d.AccessToken)
req.Header.Add("Content-Type", w.FormDataContentType())
req.Header.Add("Platform", "open_platform")
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return fmt.Errorf("slice %d upload failed, status code: %d", partNumber, res.StatusCode)
}
var resp BaseResp
respBody, err := io.ReadAll(res.Body)
if err != nil {
return err
}
err = json.Unmarshal(respBody, &resp)
if err != nil {
return err
}
if resp.Code != 0 {
return fmt.Errorf("slice %d upload failed: %s", partNumber, resp.Message)
}
progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
up(progress)
return nil
},
After: func(err error) {
ss.FreeSectionReader(reader)
},
}) })
}, &resp)
if err != nil {
return "", err
}
return resp.Data.PresignedURL, nil
} }
if err := threadG.Wait(); err != nil {
return err
}
return nil
}
// 上传完毕
func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) { func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) {
var resp UploadCompleteResp var resp UploadCompleteResp
_, err := d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) { _, err := d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) {
@ -183,3 +61,91 @@ func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) {
} }
return &resp, nil return &resp, nil
} }
func (d *Open123) async(preuploadID string) (*UploadAsyncResp, error) {
var resp UploadAsyncResp
_, err := d.Request(UploadAsync, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"preuploadID": preuploadID,
})
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createResp *UploadCreateResp, up driver.UpdateProgress) error {
size := file.GetSize()
chunkSize := createResp.Data.SliceSize
uploadNums := (size + chunkSize - 1) / chunkSize
threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.UploadThread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
if utils.IsCanceled(uploadCtx) {
return ctx.Err()
}
partIndex := partIndex
partNumber := partIndex + 1 // 分片号从1开始
offset := partIndex * chunkSize
size := min(chunkSize, size-offset)
limitedReader, err := file.RangeRead(http_range.Range{
Start: offset,
Length: size})
if err != nil {
return err
}
limitedReader = driver.NewLimitedUploadStream(ctx, limitedReader)
threadG.Go(func(ctx context.Context) error {
uploadPartUrl, err := d.url(createResp.Data.PreuploadID, partNumber)
if err != nil {
return err
}
req, err := http.NewRequestWithContext(ctx, "PUT", uploadPartUrl, limitedReader)
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = size
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
up(progress)
return nil
})
}
if err := threadG.Wait(); err != nil {
return err
}
uploadCompleteResp, err := d.complete(createResp.Data.PreuploadID)
if err != nil {
return err
}
if uploadCompleteResp.Data.Async == false || uploadCompleteResp.Data.Completed {
return nil
}
for {
uploadAsyncResp, err := d.async(createResp.Data.PreuploadID)
if err != nil {
return err
}
if uploadAsyncResp.Data.Completed {
break
}
}
up(100)
return nil
}
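Both versions of Upload size their parts with the same ceiling-division arithmetic: `uploadNums = (size + chunkSize - 1) / chunkSize`, with the final part shortened via `min(chunkSize, size-offset)`. A minimal, self-contained sketch of that math (illustration only, not OpenList code):

```go
package main

import "fmt"

// partBounds returns the byte range of part partIndex (0-based).
func partBounds(size, chunkSize, partIndex int64) (offset, length int64) {
	offset = partIndex * chunkSize
	length = chunkSize
	if remain := size - offset; remain < chunkSize {
		length = remain // the last part is usually shorter
	}
	return offset, length
}

func main() {
	size, chunk := int64(10<<20)+3, int64(4<<20) // 10 MiB + 3 bytes, 4 MiB chunks
	parts := (size + chunk - 1) / chunk          // ceiling division, as in uploadNums
	for i := int64(0); i < parts; i++ {
		off, n := partBounds(size, chunk, i)
		fmt.Printf("part %d: offset=%d length=%d\n", i+1, off, n)
	}
}
```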

View File

@@ -19,14 +19,16 @@ var ( // the AccessToken QPS limit differs by how it was obtained; modularized as below to make it easy to
 	AccessToken    = InitApiInfo(Api+"/api/v1/access_token", 1)
 	RefreshToken   = InitApiInfo(Api+"/api/v1/oauth2/access_token", 1)
 	UserInfo       = InitApiInfo(Api+"/api/v1/user/info", 1)
-	FileList       = InitApiInfo(Api+"/api/v2/file/list", 3)
+	FileList       = InitApiInfo(Api+"/api/v2/file/list", 4)
 	DownloadInfo   = InitApiInfo(Api+"/api/v1/file/download_info", 0)
 	Mkdir          = InitApiInfo(Api+"/upload/v1/file/mkdir", 2)
 	Move           = InitApiInfo(Api+"/api/v1/file/move", 1)
 	Rename         = InitApiInfo(Api+"/api/v1/file/name", 1)
 	Trash          = InitApiInfo(Api+"/api/v1/file/trash", 2)
-	UploadCreate   = InitApiInfo(Api+"/upload/v2/file/create", 2)
-	UploadComplete = InitApiInfo(Api+"/upload/v2/file/upload_complete", 0)
+	UploadCreate   = InitApiInfo(Api+"/upload/v1/file/create", 2)
+	UploadUrl      = InitApiInfo(Api+"/upload/v1/file/get_upload_url", 0)
+	UploadComplete = InitApiInfo(Api+"/upload/v1/file/upload_complete", 0)
+	UploadAsync    = InitApiInfo(Api+"/upload/v1/file/upload_async_result", 1)
 )
 func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
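The QPS numbers in that table (0 meaning unlimited) map naturally onto a token-bucket limiter per endpoint. A hypothetical sketch with golang.org/x/time/rate — the `ApiInfo` shape here is assumed, not OpenList's actual type:

```go
package ratelimit

import (
	"context"

	"golang.org/x/time/rate"
)

type ApiInfo struct {
	URL     string
	limiter *rate.Limiter // nil when the endpoint is unlimited (QPS 0)
}

func InitApiInfo(url string, qps int) *ApiInfo {
	a := &ApiInfo{URL: url}
	if qps > 0 {
		a.limiter = rate.NewLimiter(rate.Limit(qps), 1)
	}
	return a
}

// Wait blocks until the endpoint may be called again, or ctx is cancelled.
func (a *ApiInfo) Wait(ctx context.Context) error {
	if a.limiter == nil {
		return nil
	}
	return a.limiter.Wait(ctx)
}
```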

View File

@@ -70,6 +70,14 @@ func (d *Pan123Share) List(ctx context.Context, dir model.Obj, args model.ListAr
 func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	// TODO return link of file, required
 	if f, ok := file.(File); ok {
+		//var resp DownResp
+		var headers map[string]string
+		if !utils.IsLocalIPAddr(args.IP) {
+			headers = map[string]string{
+				//"X-Real-IP": "1.1.1.1",
+				"X-Forwarded-For": args.IP,
+			}
+		}
 		data := base.Json{
 			"shareKey": d.ShareKey,
 			"SharePwd": d.SharePwd,
@@ -79,27 +87,25 @@ func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkA
 			"size": f.Size,
 		}
 		resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
-			req.SetBody(data)
+			req.SetBody(data).SetHeaders(headers)
 		}, nil)
 		if err != nil {
 			return nil, err
 		}
 		downloadUrl := utils.Json.Get(resp, "data", "DownloadURL").ToString()
-		ou, err := url.Parse(downloadUrl)
+		u, err := url.Parse(downloadUrl)
 		if err != nil {
 			return nil, err
 		}
-		u_ := ou.String()
-		nu := ou.Query().Get("params")
+		nu := u.Query().Get("params")
 		if nu != "" {
 			du, _ := base64.StdEncoding.DecodeString(nu)
-			u, err := url.Parse(string(du))
+			u, err = url.Parse(string(du))
 			if err != nil {
 				return nil, err
 			}
-			u_ = u.String()
 		}
+		u_ := u.String()
 		log.Debug("download url: ", u_)
 		res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_)
 		if err != nil {
@@ -116,7 +122,7 @@ func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkA
 			link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString()
 		}
 		link.Header = http.Header{
-			"Referer": []string{fmt.Sprintf("%s://%s/", ou.Scheme, ou.Host)},
+			"Referer": []string{"https://www.123pan.com/"},
 		}
 		return &link, nil
 	}
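The `params` handling in Link is a redirect trick: when the returned download URL carries a base64-encoded `params` query value, the real URL is nested inside it. A self-contained sketch (example URLs made up):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
)

func realDownloadURL(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	if nu := u.Query().Get("params"); nu != "" {
		du, err := base64.StdEncoding.DecodeString(nu)
		if err != nil {
			return "", err
		}
		if u, err = url.Parse(string(du)); err != nil { // unwrap the inner URL
			return "", err
		}
	}
	return u.String(), nil
}

func main() {
	inner := base64.StdEncoding.EncodeToString([]byte("https://dl.example.com/file?sig=abc"))
	q := url.Values{"params": {inner}}
	out, _ := realDownloadURL("https://www.example.com/redirect?" + q.Encode())
	fmt.Println(out) // https://dl.example.com/file?sig=abc
}
```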

View File

@@ -17,8 +17,15 @@ type Addition struct {
 var config = driver.Config{
 	Name: "123PanShare",
 	LocalSort: true,
+	OnlyLocal: false,
+	OnlyProxy: false,
+	NoCache: false,
 	NoUpload: true,
+	NeedMs: false,
 	DefaultRoot: "0",
+	CheckStatus: false,
+	Alert: "",
+	NoOverwriteUpload: false,
 }
 func init() {

View File

@@ -522,17 +522,19 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	var err error
 	fullHash := stream.GetHash().GetHash(utils.SHA256)
 	if len(fullHash) != utils.SHA256.Width {
-		_, fullHash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA256)
+		_, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA256)
 		if err != nil {
 			return err
 		}
 	}
 	size := stream.GetSize()
-	partSize := d.getPartSize(size)
-	part := int64(1)
-	if size > partSize {
-		part = (size + partSize - 1) / partSize
+	var partSize = d.getPartSize(size)
+	part := size / partSize
+	if size%partSize > 0 {
+		part++
+	} else if part == 0 {
+		part = 1
 	}
 	partInfos := make([]PartInfo, 0, part)
 	for i := int64(0); i < part; i++ {
@@ -634,10 +636,11 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 		// Update Progress
 		r := io.TeeReader(limitReader, p)
-		req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
+		req, err := http.NewRequest("PUT", uploadPartInfo.UploadUrl, r)
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.Header.Set("Content-Type", "application/octet-stream")
 		req.Header.Set("Content-Length", fmt.Sprint(partSize))
 		req.Header.Set("Origin", "https://yun.139.com")
@@ -783,10 +786,12 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	size := stream.GetSize()
 	// Progress
 	p := driver.NewProgress(size, up)
-	partSize := d.getPartSize(size)
-	part := int64(1)
-	if size > partSize {
-		part = (size + partSize - 1) / partSize
+	var partSize = d.getPartSize(size)
+	part := size / partSize
+	if size%partSize > 0 {
+		part++
+	} else if part == 0 {
+		part = 1
 	}
 	rateLimited := driver.NewLimitedUploadStream(ctx, stream)
 	for i := int64(0); i < part; i++ {
@@ -800,10 +805,12 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 		limitReader := io.LimitReader(rateLimited, byteSize)
 		// Update Progress
 		r := io.TeeReader(limitReader, p)
-		req, err := http.NewRequestWithContext(ctx, http.MethodPost, resp.Data.UploadResult.RedirectionURL, r)
+		req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
 		req.Header.Set("contentSize", strconv.FormatInt(size, 10))
 		req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))

View File

@@ -365,10 +365,11 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
 	log.Debugf("uploadData: %+v", uploadData)
 	requestURL := uploadData.RequestURL
 	uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&")
-	req, err := http.NewRequestWithContext(ctx, http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
+	req, err := http.NewRequest(http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
 	if err != nil {
 		return err
 	}
+	req = req.WithContext(ctx)
 	for _, v := range uploadHeaders {
 		i := strings.Index(v, "=")
 		req.Header.Set(v[0:i], v[i+1:])
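The header loop splits each decoded `k=v` pair on the first `=` only, so values that themselves contain `=` (base64 signatures, dates) stay intact. A sketch with made-up header values:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	uploadHeaders := []string{"Signature=abc=def", "Date=Mon, 01 Jan"}
	for _, v := range uploadHeaders {
		i := strings.Index(v, "=") // first '=' only
		if i < 0 {
			continue // defensive; the driver assumes well-formed pairs
		}
		fmt.Printf("%s: %s\n", v[:i], v[i+1:]) // Signature: abc=def
	}
}
```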

View File

@@ -5,19 +5,17 @@ import (
 	"encoding/base64"
 	"encoding/xml"
 	"fmt"
+	"github.com/skip2/go-qrcode"
 	"io"
 	"net/http"
 	"strconv"
 	"strings"
 	"time"
-	"github.com/skip2/go-qrcode"
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
-	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/go-resty/resty/v2"
@@ -313,14 +311,11 @@ func (y *Cloud189TV) RapidUpload(ctx context.Context, dstDir model.Obj, stream m
 // Legacy upload; the family cloud does not support overwriting
 func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
-	fileMd5 := file.GetHash().GetHash(utils.MD5)
-	var tempFile = file.GetFile()
-	var err error
-	if len(fileMd5) != utils.MD5.Width {
-		tempFile, fileMd5, err = stream.CacheFullAndHash(file, &up, utils.MD5)
-	} else if tempFile == nil {
-		tempFile, err = file.CacheFullAndWriter(&up, nil)
+	tempFile, err := file.CacheFullInTempFile()
+	if err != nil {
+		return nil, err
 	}
+	fileMd5, err := utils.HashFile(utils.MD5, tempFile)
 	if err != nil {
 		return nil, err
 	}
@@ -350,7 +345,7 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
 		header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
 	}
-	_, err := y.put(ctx, status.FileUploadUrl, header, true, tempFile, isFamily)
+	_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile), isFamily)
 	if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
 		return nil, err
 	}
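Both sides of this hunk rely on the same cache-then-hash pattern: spool the stream to a temp file, compute the MD5, then rewind and reuse the same handle for the upload. A minimal sketch with assumed helper names (not the driver's own):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func cacheAndMD5(r io.Reader) (*os.File, string, error) {
	f, err := os.CreateTemp("", "upload-*")
	if err != nil {
		return nil, "", err
	}
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), r); err != nil { // hash while spooling
		f.Close()
		return nil, "", err
	}
	if _, err := f.Seek(0, io.SeekStart); err != nil { // rewind for the upload
		f.Close()
		return nil, "", err
	}
	return f, hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	f, sum, err := cacheAndMD5(strings.NewReader("hello"))
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println(sum) // 5d41402abc4b2a76b9719d911017c592
}
```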

View File

@@ -7,7 +7,6 @@ import (
 	"encoding/hex"
 	"encoding/xml"
 	"fmt"
-	"hash"
 	"io"
 	"net/http"
 	"net/http/cookiejar"
@@ -473,7 +472,7 @@ func (y *Cloud189PC) refreshSession() (err error) {
 // Files of size 0 cannot be uploaded
 func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
 	size := file.GetSize()
-	sliceSize := min(size, partSize(size))
+	sliceSize := partSize(size)
 	params := Params{
 		"parentFolderId": dstDir.GetID(),
@@ -501,71 +500,43 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 		return nil, err
 	}
-	ss, err := stream.NewStreamSectionReader(file, int(sliceSize), &up)
-	if err != nil {
-		return nil, err
-	}
-	threadG, upCtx := errgroup.NewOrderedGroupWithContext(ctx, y.uploadThread,
+	threadG, upCtx := errgroup.NewGroupWithContext(ctx, y.uploadThread,
 		retry.Attempts(3),
 		retry.Delay(time.Second),
 		retry.DelayType(retry.BackOffDelay))
-	count := 1
-	if size > sliceSize {
-		count = int((size + sliceSize - 1) / sliceSize)
-	}
+	count := int(size / sliceSize)
 	lastPartSize := size % sliceSize
-	if lastPartSize == 0 {
+	if lastPartSize > 0 {
+		count++
+	} else {
 		lastPartSize = sliceSize
 	}
-	silceMd5Hexs := make([]string, 0, count)
+	fileMd5 := utils.MD5.NewFunc()
 	silceMd5 := utils.MD5.NewFunc()
-	var writers io.Writer = silceMd5
-	fileMd5Hex := file.GetHash().GetHash(utils.MD5)
-	var fileMd5 hash.Hash
-	if len(fileMd5Hex) != utils.MD5.Width {
-		fileMd5 = utils.MD5.NewFunc()
-		writers = io.MultiWriter(silceMd5, fileMd5)
-	}
+	silceMd5Hexs := make([]string, 0, count)
+	teeReader := io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5))
+	byteSize := sliceSize
 	for i := 1; i <= count; i++ {
 		if utils.IsCanceled(upCtx) {
 			break
 		}
-		offset := int64((i)-1) * sliceSize
-		size := sliceSize
 		if i == count {
-			size = lastPartSize
+			byteSize = lastPartSize
 		}
-		partInfo := ""
-		var reader *stream.SectionReader
-		var rateLimitedRd io.Reader
-		threadG.GoWithLifecycle(errgroup.Lifecycle{
-			Before: func(ctx context.Context) error {
-				if reader == nil {
-					var err error
-					reader, err = ss.GetSectionReader(offset, size)
-					if err != nil {
-						return err
-					}
-				}
-				silceMd5.Reset()
-				w, err := utils.CopyWithBuffer(writers, reader)
-				if w != size {
-					return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", size, w, err)
-				}
-				// compute the chunk md5 and hex/base64-encode it
-				md5Bytes := silceMd5.Sum(nil)
-				silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Bytes)))
-				partInfo = fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
-				rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
-				return nil
-			},
-			Do: func(ctx context.Context) error {
-				reader.Seek(0, io.SeekStart)
+		byteData := make([]byte, byteSize)
+		// read the chunk
+		silceMd5.Reset()
+		if _, err := io.ReadFull(teeReader, byteData); err != io.EOF && err != nil {
+			return nil, err
+		}
+		// compute the chunk md5 and hex/base64-encode it
+		md5Bytes := silceMd5.Sum(nil)
+		silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Bytes)))
+		partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
+		threadG.Go(func(ctx context.Context) error {
 			uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
 			if err != nil {
 				return err
@@ -574,26 +545,19 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 			// step.4 upload the slice
 			uploadUrl := uploadUrls[0]
 			_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false,
-				driver.NewLimitedUploadStream(ctx, rateLimitedRd), isFamily)
+				driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)), isFamily)
 			if err != nil {
 				return err
 			}
 			up(float64(threadG.Success()) * 100 / float64(count))
 			return nil
-			},
-			After: func(err error) {
-				ss.FreeSectionReader(reader)
-			},
-		},
-		)
+		})
 	}
 	if err = threadG.Wait(); err != nil {
 		return nil, err
 	}
-	if fileMd5 != nil {
-		fileMd5Hex = strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
-	}
+	fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
 	sliceMd5Hex := fileMd5Hex
 	if file.GetSize() > sliceSize {
 		sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
@@ -656,12 +620,11 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 		cache = tmpF
 	}
 	sliceSize := partSize(size)
-	count := 1
-	if size > sliceSize {
-		count = int((size + sliceSize - 1) / sliceSize)
-	}
+	count := int(size / sliceSize)
 	lastSliceSize := size % sliceSize
-	if lastSliceSize == 0 {
+	if lastSliceSize > 0 {
+		count++
+	} else {
 		lastSliceSize = sliceSize
 	}
@@ -775,8 +738,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 	}
 	// step.4 upload the slice
-	rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize))
-	_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, rateLimitedRd, isFamily)
+	_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(cache, offset, byteSize), isFamily)
 	if err != nil {
 		return err
 	}
@@ -858,7 +820,7 @@ func (y *Cloud189PC) GetMultiUploadUrls(ctx context.Context, isFamily bool, uplo
 // Legacy upload; the family cloud does not support overwriting
 func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
-	tempFile, fileMd5, err := stream.CacheFullAndHash(file, &up, utils.MD5)
+	tempFile, fileMd5, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
 	if err != nil {
 		return nil, err
 	}
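The 189 hashing convention visible in StreamUpload: hash each slice, collect the uppercase hex digests, and for multi-part files derive `sliceMd5Hex` by MD5-hashing the digests joined with `"\n"`. A self-contained sketch of just that convention:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"strings"
)

func md5Upper(b []byte) string {
	sum := md5.Sum(b)
	return strings.ToUpper(hex.EncodeToString(sum[:]))
}

func main() {
	parts := [][]byte{[]byte("part-1"), []byte("part-2")}
	hexs := make([]string, 0, len(parts))
	for _, p := range parts {
		hexs = append(hexs, md5Upper(p)) // per-slice digest, uppercase hex
	}
	// multi-part files: hash the newline-joined digest list
	sliceMd5Hex := md5Upper([]byte(strings.Join(hexs, "\n")))
	fmt.Println(sliceMd5Hex)
}
```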

View File

@@ -3,9 +3,7 @@ package alias
 import (
 	"context"
 	"errors"
-	"fmt"
 	"io"
-	"net/url"
 	stdpath "path"
 	"strings"
@@ -13,11 +11,8 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/fs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
-	"github.com/OpenListTeam/OpenList/v4/internal/op"
-	"github.com/OpenListTeam/OpenList/v4/internal/sign"
 	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
-	"github.com/OpenListTeam/OpenList/v4/server/common"
 )
 type Alias struct {
@@ -80,18 +75,10 @@ func (d *Alias) Get(ctx context.Context, path string) (model.Obj, error) {
 		return nil, errs.ObjectNotFound
 	}
 	for _, dst := range dsts {
-		obj, err := fs.Get(ctx, stdpath.Join(dst, sub), &fs.GetArgs{NoLog: true})
-		if err != nil {
-			continue
-		}
-		return &model.Object{
-			Path:     path,
-			Name:     obj.GetName(),
-			Size:     obj.GetSize(),
-			Modified: obj.ModTime(),
-			IsFolder: obj.IsDir(),
-			HashInfo: obj.GetHash(),
-		}, nil
+		obj, err := d.get(ctx, path, dst, sub)
+		if err == nil {
+			return obj, nil
+		}
 	}
 	return nil, errs.ObjectNotFound
 }
@@ -109,27 +96,7 @@ func (d *Alias) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
 	var objs []model.Obj
 	fsArgs := &fs.ListArgs{NoLog: true, Refresh: args.Refresh}
 	for _, dst := range dsts {
-		tmp, err := fs.List(ctx, stdpath.Join(dst, sub), fsArgs)
-		if err == nil {
-			tmp, err = utils.SliceConvert(tmp, func(obj model.Obj) (model.Obj, error) {
-				thumb, ok := model.GetThumb(obj)
-				objRes := model.Object{
-					Name:     obj.GetName(),
-					Size:     obj.GetSize(),
-					Modified: obj.ModTime(),
-					IsFolder: obj.IsDir(),
-				}
-				if !ok {
-					return &objRes, nil
-				}
-				return &model.ObjThumb{
-					Object: objRes,
-					Thumbnail: model.Thumbnail{
-						Thumbnail: thumb,
-					},
-				}, nil
-			})
-		}
+		tmp, err := d.list(ctx, dst, sub, fsArgs)
 		if err == nil {
 			objs = append(objs, tmp...)
 		}
@@ -143,45 +110,21 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
 	if !ok {
 		return nil, errs.ObjectNotFound
 	}
-	// proxy || ftp,s3
-	if common.GetApiUrl(ctx) == "" {
-		args.Redirect = false
-	}
 	for _, dst := range dsts {
-		reqPath := stdpath.Join(dst, sub)
-		link, fi, err := d.link(ctx, reqPath, args)
-		if err != nil {
-			continue
-		}
-		if link == nil {
-			// redirect, but it has to go through the proxy
-			return &model.Link{
-				URL: fmt.Sprintf("%s/p%s?sign=%s",
-					common.GetApiUrl(ctx),
-					utils.EncodePath(reqPath, true),
-					sign.Sign(reqPath)),
-			}, nil
-		}
-		resultLink := *link
-		resultLink.SyncClosers = utils.NewSyncClosers(link)
-		if args.Redirect {
-			return &resultLink, nil
-		}
-		if resultLink.ContentLength == 0 {
-			resultLink.ContentLength = fi.GetSize()
-		}
-		if resultLink.MFile != nil {
-			return &resultLink, nil
-		}
-		if d.DownloadConcurrency > 0 {
-			resultLink.Concurrency = d.DownloadConcurrency
-		}
-		if d.DownloadPartSize > 0 {
-			resultLink.PartSize = d.DownloadPartSize * utils.KB
-		}
-		return &resultLink, nil
+		link, err := d.link(ctx, dst, sub, args)
+		if err == nil {
+			if !args.Redirect && len(link.URL) > 0 {
+				// normally, concurrency only works for drivers that return a URL;
+				// nesting an alias inside an alias lets drivers that return no URL (crypt, mega, ...) use it too
+				if d.DownloadConcurrency > 0 {
+					link.Concurrency = d.DownloadConcurrency
+				}
+				if d.DownloadPartSize > 0 {
+					link.PartSize = d.DownloadPartSize * utils.KB
+				}
+			}
+			return link, nil
+		}
 	}
 	return nil, errs.ObjectNotFound
 }
@@ -223,8 +166,7 @@ func (d *Alias) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
 	}
 	if len(srcPath) == len(dstPath) {
 		for i := range srcPath {
-			_, e := fs.Move(ctx, *srcPath[i], *dstPath[i])
-			err = errors.Join(err, e)
+			err = errors.Join(err, fs.Move(ctx, *srcPath[i], *dstPath[i]))
 		}
 		return err
 	} else {
@@ -308,29 +250,20 @@ func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer,
 	reqPath, err := d.getReqPath(ctx, dstDir, true)
 	if err == nil {
 		if len(reqPath) == 1 {
-			storage, reqActualPath, err := op.GetStorageAndActualPath(*reqPath[0])
-			if err != nil {
-				return err
-			}
-			return op.Put(ctx, storage, reqActualPath, &stream.FileStream{
-				Obj:      s,
-				Mimetype: s.GetMimetype(),
-				Reader:   s,
-			}, up)
+			return fs.PutDirectly(ctx, *reqPath[0], s)
 		} else {
-			file, err := s.CacheFullAndWriter(nil, nil)
+			defer s.Close()
+			file, err := s.CacheFullInTempFile()
 			if err != nil {
 				return err
 			}
-			count := float64(len(reqPath) + 1)
-			up(100 / count)
-			for i, path := range reqPath {
+			for _, path := range reqPath {
 				err = errors.Join(err, fs.PutDirectly(ctx, *path, &stream.FileStream{
 					Obj:          s,
 					Mimetype:     s.GetMimetype(),
+					WebPutAsTask: s.NeedStore(),
 					Reader:       file,
 				}))
-				up(float64(i+2) / float64(count) * 100)
 				_, e := file.Seek(0, io.SeekStart)
 				if e != nil {
 					return errors.Join(err, e)
@@ -402,24 +335,18 @@ func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveIn
 		return nil, errs.ObjectNotFound
 	}
 	for _, dst := range dsts {
-		reqPath := stdpath.Join(dst, sub)
-		link, err := d.extract(ctx, reqPath, args)
-		if err != nil {
-			continue
-		}
-		if link == nil {
-			return &model.Link{
-				URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
-					common.GetApiUrl(ctx),
-					utils.EncodePath(reqPath, true),
-					utils.EncodePath(args.InnerPath, true),
-					url.QueryEscape(args.Password),
-					sign.SignArchive(reqPath)),
-			}, nil
-		}
-		resultLink := *link
-		resultLink.SyncClosers = utils.NewSyncClosers(link)
-		return &resultLink, nil
+		link, err := d.extract(ctx, dst, sub, args)
+		if err == nil {
+			if !args.Redirect && len(link.URL) > 0 {
+				if d.DownloadConcurrency > 0 {
+					link.Concurrency = d.DownloadConcurrency
+				}
+				if d.DownloadPartSize > 0 {
+					link.PartSize = d.DownloadPartSize * utils.KB
+				}
+			}
+			return link, nil
+		}
 	}
 	return nil, errs.NotImplement
 }
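Get, List, Link, and Extract all share one resolution strategy: try each destination root in order and return on the first success. A sketch of that pattern with a stand-in resolver (the callback replaces fs.Get / d.get):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("object not found")

// firstHit returns the first successful resolution across the destinations.
func firstHit(dsts []string, resolve func(dst string) (string, error)) (string, error) {
	for _, dst := range dsts {
		if obj, err := resolve(dst); err == nil {
			return obj, nil
		}
	}
	return "", errNotFound
}

func main() {
	obj, err := firstHit([]string{"/disk1", "/disk2"}, func(dst string) (string, error) {
		if dst == "/disk2" {
			return dst + "/movie.mkv", nil
		}
		return "", errNotFound
	})
	fmt.Println(obj, err) // /disk2/movie.mkv <nil>
}
```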

View File

@@ -2,6 +2,8 @@ package alias
 import (
 	"context"
+	"fmt"
+	"net/url"
 	stdpath "path"
 	"strings"
@@ -10,6 +12,8 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/fs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
+	"github.com/OpenListTeam/OpenList/v4/internal/sign"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/OpenListTeam/OpenList/v4/server/common"
 )
@@ -50,22 +54,79 @@ func (d *Alias) getRootAndPath(path string) (string, string) {
 	return parts[0], parts[1]
 }
-func (d *Alias) link(ctx context.Context, reqPath string, args model.LinkArgs) (*model.Link, model.Obj, error) {
+func (d *Alias) get(ctx context.Context, path string, dst, sub string) (model.Obj, error) {
+	obj, err := fs.Get(ctx, stdpath.Join(dst, sub), &fs.GetArgs{NoLog: true})
+	if err != nil {
+		return nil, err
+	}
+	return &model.Object{
+		Path:     path,
+		Name:     obj.GetName(),
+		Size:     obj.GetSize(),
+		Modified: obj.ModTime(),
+		IsFolder: obj.IsDir(),
+		HashInfo: obj.GetHash(),
+	}, nil
+}
+func (d *Alias) list(ctx context.Context, dst, sub string, args *fs.ListArgs) ([]model.Obj, error) {
+	objs, err := fs.List(ctx, stdpath.Join(dst, sub), args)
+	// the obj must implement the model.SetPath interface
+	// return objs, err
+	if err != nil {
+		return nil, err
+	}
+	return utils.SliceConvert(objs, func(obj model.Obj) (model.Obj, error) {
+		thumb, ok := model.GetThumb(obj)
+		objRes := model.Object{
+			Name:     obj.GetName(),
+			Size:     obj.GetSize(),
+			Modified: obj.ModTime(),
+			IsFolder: obj.IsDir(),
+		}
+		if !ok {
+			return &objRes, nil
+		}
+		return &model.ObjThumb{
+			Object: objRes,
+			Thumbnail: model.Thumbnail{
+				Thumbnail: thumb,
+			},
+		}, nil
+	})
+}
+func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs) (*model.Link, error) {
+	reqPath := stdpath.Join(dst, sub)
+	// modeled on the crypt driver
 	storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
 	if err != nil {
-		return nil, nil, err
+		return nil, err
 	}
-	if !args.Redirect {
-		return op.Link(ctx, storage, reqActualPath, args)
+	useRawLink := len(common.GetApiUrl(ctx)) == 0 // ftp, s3
+	if !useRawLink {
+		_, ok := storage.(*Alias)
+		useRawLink = !ok && !args.Redirect
 	}
-	obj, err := fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
+	if useRawLink {
+		link, _, err := op.Link(ctx, storage, reqActualPath, args)
+		return link, err
+	}
+	_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
 	if err != nil {
-		return nil, nil, err
+		return nil, err
 	}
-	if common.ShouldProxy(storage, stdpath.Base(reqPath)) {
-		return nil, obj, nil
+	if common.ShouldProxy(storage, stdpath.Base(sub)) {
+		link := &model.Link{
+			URL: fmt.Sprintf("%s/p%s?sign=%s",
				common.GetApiUrl(ctx),
				utils.EncodePath(reqPath, true),
				sign.Sign(reqPath)),
+		}
+		return link, nil
 	}
-	return op.Link(ctx, storage, reqActualPath, args)
+	link, _, err := op.Link(ctx, storage, reqActualPath, args)
+	return link, err
 }
 func (d *Alias) getReqPath(ctx context.Context, obj model.Obj, isParent bool) ([]*string, error) {
@@ -136,7 +197,8 @@ func (d *Alias) listArchive(ctx context.Context, dst, sub string, args model.Arc
 	return nil, errs.NotImplement
 }
-func (d *Alias) extract(ctx context.Context, reqPath string, args model.ArchiveInnerArgs) (*model.Link, error) {
+func (d *Alias) extract(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) (*model.Link, error) {
+	reqPath := stdpath.Join(dst, sub)
 	storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
 	if err != nil {
 		return nil, err
@@ -144,12 +206,20 @@ func (d *Alias) extract(ctx context.Context, reqPath string, args model.ArchiveI
 	if _, ok := storage.(driver.ArchiveReader); !ok {
 		return nil, errs.NotImplement
 	}
-	if args.Redirect && common.ShouldProxy(storage, stdpath.Base(reqPath)) {
-		_, err := fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
-		if err == nil {
+	if args.Redirect && common.ShouldProxy(storage, stdpath.Base(sub)) {
+		_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
+		if err != nil {
 			return nil, err
 		}
-		return nil, nil
+		link := &model.Link{
+			URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
				common.GetApiUrl(ctx),
				utils.EncodePath(reqPath, true),
				utils.EncodePath(args.InnerPath, true),
				url.QueryEscape(args.Password),
				sign.SignArchive(reqPath)),
+		}
+		return link, nil
 	}
 	link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
 	return link, err
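The proxied links above embed a path signature so the `/p` and `/ap` endpoints can verify the request. `sign.Sign` is OpenList's own signer; the sketch below substitutes a plain HMAC so it stays self-contained, and the secret and URL are made up:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"net/url"
)

// signPath is an HMAC stand-in for sign.Sign (assumption, not OpenList's scheme).
func signPath(secret []byte, path string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(path))
	return base64.URLEncoding.EncodeToString(mac.Sum(nil))
}

func proxyLink(apiURL, reqPath string, secret []byte) string {
	// mirrors the fmt.Sprintf("%s/p%s?sign=%s", ...) shape above
	return fmt.Sprintf("%s/p%s?sign=%s", apiURL,
		(&url.URL{Path: reqPath}).EscapedPath(), signPath(secret, reqPath))
}

func main() {
	fmt.Println(proxyLink("https://example.com", "/disk1/a b.txt", []byte("secret")))
}
```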

View File

@@ -165,7 +165,7 @@ func (d *AliDrive) Remove(ctx context.Context, obj model.Obj) error {
 }
 func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.FileStreamer, up driver.UpdateProgress) error {
-	file := &stream.FileStream{
+	file := stream.FileStream{
 		Obj:      streamer,
 		Reader:   streamer,
 		Mimetype: streamer.GetMimetype(),
@@ -209,7 +209,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
 			io.Closer
 		}{
 			Reader: io.MultiReader(buf, file),
-			Closer: file,
+			Closer: &file,
 		}
 	}
 } else {
@@ -297,10 +297,11 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
 	if d.InternalUpload {
 		url = partInfo.InternalUploadUrl
 	}
-	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, io.LimitReader(rateLimited, DEFAULT))
+	req, err := http.NewRequest("PUT", url, io.LimitReader(rateLimited, DEFAULT))
 	if err != nil {
 		return err
 	}
+	req = req.WithContext(ctx)
 	res, err := base.HttpClient.Do(req)
 	if err != nil {
 		return err
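The `file` / `&file` flip in the Closer field follows from Go's method sets: when Close is declared on a pointer receiver, only the address of a value satisfies io.Closer. A toy type stands in for stream.FileStream:

```go
package main

import (
	"fmt"
	"io"
)

type fileStream struct{ closed bool }

func (f *fileStream) Close() error { // pointer receiver
	f.closed = true
	return nil
}

func main() {
	f := fileStream{}
	var c io.Closer = &f // `var c io.Closer = f` would not compile
	_ = c.Close()
	fmt.Println(f.closed) // true
}
```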

View File

@@ -3,6 +3,7 @@ package aliyundrive_open
 import (
 	"context"
 	"errors"
+	"fmt"
 	"net/http"
 	"path/filepath"
 	"time"
@@ -12,6 +13,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+	"github.com/OpenListTeam/rateg"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
 )
@@ -22,7 +24,8 @@ type AliyundriveOpen struct {
 	DriveId string
-	limiter *limiter
+	limitList func(ctx context.Context, data base.Json) (*Files, error)
+	limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
 	ref *AliyundriveOpen
 }
@@ -35,23 +38,25 @@ func (d *AliyundriveOpen) GetAddition() driver.Additional {
 }
 func (d *AliyundriveOpen) Init(ctx context.Context) error {
-	d.limiter = getLimiterForUser(globalLimiterUserID) // First create a globally shared limiter to limit the initial requests.
 	if d.LIVPDownloadFormat == "" {
 		d.LIVPDownloadFormat = "jpeg"
 	}
 	if d.DriveType == "" {
 		d.DriveType = "default"
 	}
-	res, err := d.request(ctx, limiterOther, "/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
+	res, err := d.request("/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
 	if err != nil {
-		d.limiter.free()
-		d.limiter = nil
 		return err
 	}
 	d.DriveId = utils.Json.Get(res, d.DriveType+"_drive_id").ToString()
-	userid := utils.Json.Get(res, "user_id").ToString()
-	d.limiter.free()
-	d.limiter = getLimiterForUser(userid) // Allocate a corresponding limiter for each user.
+	d.limitList = rateg.LimitFnCtx(d.list, rateg.LimitFnOption{
+		Limit:  4,
+		Bucket: 1,
+	})
+	d.limitLink = rateg.LimitFnCtx(d.link, rateg.LimitFnOption{
+		Limit:  1,
+		Bucket: 1,
+	})
 	return nil
 }
@@ -65,8 +70,6 @@ func (d *AliyundriveOpen) InitReference(storage driver.Driver) error {
 }
 func (d *AliyundriveOpen) Drop(ctx context.Context) error {
-	d.limiter.free()
-	d.limiter = nil
 	d.ref = nil
 	return nil
 }
@@ -84,6 +87,9 @@ func (d *AliyundriveOpen) GetRoot(ctx context.Context) (model.Obj, error) {
 }
 func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
+	if d.limitList == nil {
+		return nil, fmt.Errorf("driver not init")
+	}
 	files, err := d.getFiles(ctx, dir.GetID())
 	if err != nil {
 		return nil, err
@@ -101,8 +107,8 @@ func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.Li
 	return objs, err
 }
-func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
-	res, err := d.request(ctx, limiterLink, "/adrive/v1.0/openFile/getDownloadUrl", http.MethodPost, func(req *resty.Request) {
+func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link, error) {
+	res, err := d.request("/adrive/v1.0/openFile/getDownloadUrl", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  file.GetID(),
@@ -126,10 +132,17 @@ func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.L
 	}, nil
 }
+func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
+	if d.limitLink == nil {
+		return nil, fmt.Errorf("driver not init")
+	}
+	return d.limitLink(ctx, file)
+}
 func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
 	nowTime, _ := getNowTime()
 	newDir := File{CreatedAt: nowTime, UpdatedAt: nowTime}
-	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id":       d.DriveId,
 			"parent_file_id": parentDir.GetID(),
@@ -155,7 +168,7 @@ func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirN
 func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
 	var resp MoveOrCopyResp
-	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/move", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/move", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  srcObj.GetID(),
@@ -185,7 +198,7 @@ func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (m
 func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
 	var newFile File
-	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/update", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/update", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  srcObj.GetID(),
@@ -217,7 +230,7 @@ func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName
 func (d *AliyundriveOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
 	var resp MoveOrCopyResp
-	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  srcObj.GetID(),
@@ -243,7 +256,7 @@ func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
 	if d.RemoveWay == "delete" {
 		uri = "/adrive/v1.0/openFile/delete"
 	}
-	_, err := d.request(ctx, limiterOther, uri, http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  obj.GetID(),
@@ -282,7 +295,7 @@ func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (inte
 	default:
 		return nil, errs.NotSupport
 	}
-	_, err := d.request(ctx, limiterOther, uri, http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
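One side of this diff rate-limits by wrapping methods (`rateg.LimitFnCtx`), the other by an explicit limiter object. The real wrapper lives in github.com/OpenListTeam/rateg; its exact signature is not shown here, so the sketch below re-implements the idea generically with golang.org/x/time/rate to illustrate it:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// limitFnCtx wraps fn so every call first waits on a token bucket.
func limitFnCtx[T, R any](fn func(context.Context, T) (R, error), limit rate.Limit, burst int) func(context.Context, T) (R, error) {
	lim := rate.NewLimiter(limit, burst)
	return func(ctx context.Context, arg T) (R, error) {
		if err := lim.Wait(ctx); err != nil {
			var zero R
			return zero, err
		}
		return fn(ctx, arg)
	}
}

func main() {
	list := func(ctx context.Context, page int) (string, error) {
		return fmt.Sprintf("page %d at %s", page, time.Now().Format("15:04:05.000")), nil
	}
	limited := limitFnCtx(list, 4, 1) // ~4 calls/second, like limitList above
	for i := 0; i < 3; i++ {
		s, _ := limited(context.Background(), i)
		fmt.Println(s)
	}
}
```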

View File

@@ -1,96 +0,0 @@
-package aliyundrive_open
-import (
-	"context"
-	"fmt"
-	"sync"
-	"golang.org/x/time/rate"
-)
-// See document https://www.yuque.com/aliyundrive/zpfszx/mqocg38hlxzc5vcd
-// See issue https://github.com/OpenListTeam/OpenList/issues/724
-// We got limit per user per app, so the limiter should be global.
-type limiterType int
-const (
-	limiterList limiterType = iota
-	limiterLink
-	limiterOther
-)
-const (
-	listRateLimit       = 3.9  // 4 per second in document, but we use 3.9 per second to be safe
-	linkRateLimit       = 0.9  // 1 per second in document, but we use 0.9 per second to be safe
-	otherRateLimit      = 14.9 // 15 per second in document, but we use 14.9 per second to be safe
-	globalLimiterUserID = ""   // Global limiter user ID, used to limit the initial requests.
-)
-type limiter struct {
-	usedBy int
-	list   *rate.Limiter
-	link   *rate.Limiter
-	other  *rate.Limiter
-}
-var limiters = make(map[string]*limiter)
-var limitersLock = &sync.Mutex{}
-func getLimiterForUser(userid string) *limiter {
-	limitersLock.Lock()
-	defer limitersLock.Unlock()
-	defer func() {
-		// Clean up limiters that are no longer used.
-		for id, lim := range limiters {
-			if lim.usedBy <= 0 && id != globalLimiterUserID { // Do not delete the global limiter.
-				delete(limiters, id)
-			}
-		}
-	}()
-	if lim, ok := limiters[userid]; ok {
-		lim.usedBy++
-		return lim
-	}
-	lim := &limiter{
-		usedBy: 1,
-		list:   rate.NewLimiter(rate.Limit(listRateLimit), 1),
-		link:   rate.NewLimiter(rate.Limit(linkRateLimit), 1),
-		other:  rate.NewLimiter(rate.Limit(otherRateLimit), 1),
-	}
-	limiters[userid] = lim
-	return lim
-}
-func (l *limiter) wait(ctx context.Context, typ limiterType) error {
-	if l == nil {
-		return fmt.Errorf("driver not init")
-	}
-	switch typ {
-	case limiterList:
-		return l.list.Wait(ctx)
-	case limiterLink:
-		return l.link.Wait(ctx)
-	case limiterOther:
-		return l.other.Wait(ctx)
-	default:
-		return fmt.Errorf("unknown limiter type")
-	}
-}
-func (l *limiter) free() {
-	if l == nil {
-		return
-	}
-	limitersLock.Lock()
-	defer limitersLock.Unlock()
-	l.usedBy--
-}
-func (d *AliyundriveOpen) wait(ctx context.Context, typ limiterType) error {
-	if d == nil {
-		return fmt.Errorf("driver not init")
-	}
-	if d.ref != nil {
-		return d.ref.wait(ctx, typ) // If this is a reference driver, wait on the reference driver.
-	}
-	return d.limiter.wait(ctx, typ)
-}
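The deleted limiter.go shares one limiter per user ID with a `usedBy` refcount (Init acquires, Drop releases, stale entries are swept). A compressed sketch of that lifecycle, with names simplified and the rate limiters themselves omitted:

```go
package main

import (
	"fmt"
	"sync"
)

type userLimiter struct{ usedBy int }

var (
	mu       sync.Mutex
	limiters = map[string]*userLimiter{}
)

func acquire(userID string) *userLimiter {
	mu.Lock()
	defer mu.Unlock()
	if l, ok := limiters[userID]; ok {
		l.usedBy++
		return l
	}
	l := &userLimiter{usedBy: 1}
	limiters[userID] = l
	return l
}

func (l *userLimiter) free() {
	mu.Lock()
	defer mu.Unlock()
	l.usedBy-- // entries with usedBy <= 0 are swept on the next acquire
}

func main() {
	a := acquire("user-1")        // Init
	b := acquire("user-1")        // a second storage on the same account
	fmt.Println(a == b, a.usedBy) // true 2
	b.free()                      // Drop
	a.free()
	fmt.Println(a.usedBy) // 0
}
```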

View File

@@ -12,7 +12,6 @@ type Addition struct {
 	OrderBy        string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
 	OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
 	UseOnlineAPI   bool   `json:"use_online_api" default:"true"`
-	AlipanType     string `json:"alipan_type" required:"true" type:"select" default:"default" options:"default,alipanTV"`
 	APIAddress     string `json:"api_url_address" default:"https://api.oplist.org/alicloud/renewapi"`
 	ClientID       string `json:"client_id" help:"Keep it empty if you don't have one"`
 	ClientSecret   string `json:"client_secret" help:"Keep it empty if you don't have one"`
@@ -25,6 +24,12 @@ type Addition struct {
 var config = driver.Config{
 	Name: "AliyundriveOpen",
+	LocalSort: false,
+	OnlyLocal: false,
+	OnlyProxy: false,
+	NoCache: false,
+	NoUpload: false,
+	NeedMs: false,
 	DefaultRoot: "root",
 	NoOverwriteUpload: true,
 }

View File

@@ -50,10 +50,10 @@ func calPartSize(fileSize int64) int64 {
 	return partSize
 }
-func (d *AliyundriveOpen) getUploadUrl(ctx context.Context, count int, fileId, uploadId string) ([]PartInfo, error) {
+func (d *AliyundriveOpen) getUploadUrl(count int, fileId, uploadId string) ([]PartInfo, error) {
 	partInfoList := makePartInfos(count)
 	var resp CreateResp
-	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  fileId,
@@ -69,7 +69,7 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
 	if d.InternalUpload {
 		uploadUrl = strings.ReplaceAll(uploadUrl, "https://cn-beijing-data.aliyundrive.net/", "http://ccp-bj29-bj-1592982087.oss-cn-beijing-internal.aliyuncs.com/")
 	}
-	req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, r)
+	req, err := http.NewRequestWithContext(ctx, "PUT", uploadUrl, r)
 	if err != nil {
 		return err
 	}
@@ -84,10 +84,10 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
 	return nil
 }
-func (d *AliyundriveOpen) completeUpload(ctx context.Context, fileId, uploadId string) (model.Obj, error) {
+func (d *AliyundriveOpen) completeUpload(fileId, uploadId string) (model.Obj, error) {
 	// 3. complete
 	var newFile File
-	_, err := d.request(ctx, limiterOther, "/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(base.Json{
 			"drive_id": d.DriveId,
 			"file_id":  fileId,
@@ -137,8 +137,11 @@ func (d *AliyundriveOpen) calProofCode(stream model.FileStreamer) (string, error
 	}
 	buf := make([]byte, length)
 	n, err := io.ReadFull(reader, buf)
-	if n != int(length) {
-		return "", fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
+	if err == io.ErrUnexpectedEOF {
+		return "", fmt.Errorf("can't read data, expected=%d, got=%d", len(buf), n)
+	}
+	if err != nil {
+		return "", err
 	}
 	return base64.StdEncoding.EncodeToString(buf), nil
 }
@@ -180,7 +183,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 		createData["pre_hash"] = hash
 	}
 	var createResp CreateResp
-	_, err, e := d.requestReturnErrResp(ctx, limiterOther, "/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
+	_, err, e := d.requestReturnErrResp("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(createData).SetResult(&createResp)
 	})
 	if err != nil {
@@ -191,7 +194,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	hash := stream.GetHash().GetHash(utils.SHA1)
 	if len(hash) != utils.SHA1.Width {
-		_, hash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA1)
+		_, hash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA1)
 		if err != nil {
 			return nil, err
 		}
@@ -205,7 +208,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	if err != nil {
 		return nil, fmt.Errorf("cal proof code error: %s", err.Error())
 	}
-	_, err = d.request(ctx, limiterOther, "/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
+	_, err = d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(createData).SetResult(&createResp)
 	})
 	if err != nil {
@@ -216,20 +219,17 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	if !createResp.RapidUpload {
 		// 2. normal upload
 		log.Debugf("[aliyundive_open] normal upload")
-		ss, err := streamPkg.NewStreamSectionReader(stream, int(partSize), &up)
-		if err != nil {
-			return nil, err
-		}
 		preTime := time.Now()
 		var offset, length int64 = 0, partSize
+		//var length
 		for i := 0; i < len(createResp.PartInfoList); i++ {
 			if utils.IsCanceled(ctx) {
 				return nil, ctx.Err()
 			}
 			// refresh upload url if 50 minutes passed
 			if time.Since(preTime) > 50*time.Minute {
-				createResp.PartInfoList, err = d.getUploadUrl(ctx, count, createResp.FileId, createResp.UploadId)
+				createResp.PartInfoList, err = d.getUploadUrl(count, createResp.FileId, createResp.UploadId)
 				if err != nil {
 					return nil, err
 				}
@@ -238,19 +238,22 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 			if remain := stream.GetSize() - offset; length > remain {
 				length = remain
 			}
-			rd, err := ss.GetSectionReader(offset, length)
-			if err != nil {
-				return nil, err
+			rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
+			if rapidUpload {
+				srd, err := stream.RangeRead(http_range.Range{Start: offset, Length: length})
+				if err != nil {
+					return nil, err
+				}
+				rd = utils.NewMultiReadable(srd)
 			}
-			rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
 			err = retry.Do(func() error {
-				rd.Seek(0, io.SeekStart)
+				_ = rd.Reset()
+				rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
 				return d.uploadPart(ctx, rateLimitedRd, createResp.PartInfoList[i])
 			},
 				retry.Attempts(3),
 				retry.DelayType(retry.BackOffDelay),
 				retry.Delay(time.Second))
-			ss.FreeSectionReader(rd)
 			if err != nil {
 				return nil, err
 			}
@@ -263,5 +266,5 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	log.Debugf("[aliyundrive_open] create file success, resp: %+v", createResp)
 	// 3. complete
-	return d.completeUpload(ctx, createResp.FileId, createResp.UploadId)
+	return d.completeUpload(createResp.FileId, createResp.UploadId)
 }
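Both variants of the part loop share one invariant: because `retry.Do` may run the closure several times, the part body must be rewindable, so each attempt resets the reader before handing it to the uploader. A sketch under that assumption, with bytes.Reader standing in for the section reader / MultiReadable wrapper and a fake transient failure:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"

	"github.com/avast/retry-go"
)

func main() {
	part := bytes.NewReader([]byte("part payload"))
	attempt := 0
	err := retry.Do(func() error {
		part.Seek(0, io.SeekStart) // rewind: a failed attempt may have consumed the body
		body, _ := io.ReadAll(part)
		attempt++
		if attempt < 3 {
			return errors.New("transient upload error")
		}
		fmt.Printf("uploaded %d bytes on attempt %d\n", len(body), attempt)
		return nil
	}, retry.Attempts(3), retry.LastErrorOnly(true))
	if err != nil {
		fmt.Println("upload failed:", err)
	}
}
```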

View File

@ -19,7 +19,7 @@ import (
// do others that not defined in Driver interface // do others that not defined in Driver interface
func (d *AliyundriveOpen) _refreshToken(ctx context.Context) (string, string, error) { func (d *AliyundriveOpen) _refreshToken() (string, string, error) {
if d.UseOnlineAPI && d.APIAddress != "" { if d.UseOnlineAPI && d.APIAddress != "" {
u := d.APIAddress u := d.APIAddress
var resp struct { var resp struct {
@ -27,23 +27,13 @@ func (d *AliyundriveOpen) _refreshToken(ctx context.Context) (string, string, er
AccessToken string `json:"access_token"` AccessToken string `json:"access_token"`
ErrorMessage string `json:"text"` ErrorMessage string `json:"text"`
} }
_, err := base.RestyClient.R().
// 根据AlipanType选项设置driver_txt
driverTxt := "alicloud_qr"
if d.AlipanType == "alipanTV" {
driverTxt = "alicloud_tv"
}
err := d.wait(ctx, limiterOther)
if err != nil {
return "", "", err
}
_, err = base.RestyClient.R().
SetHeader("User-Agent", "Mozilla/5.0 (Macintosh; Apple macOS 15_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36 Chrome/138.0.0.0 Openlist/425.6.30"). SetHeader("User-Agent", "Mozilla/5.0 (Macintosh; Apple macOS 15_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36 Chrome/138.0.0.0 Openlist/425.6.30").
SetResult(&resp). SetResult(&resp).
SetQueryParams(map[string]string{ SetQueryParams(map[string]string{
"refresh_ui": d.RefreshToken, "refresh_ui": d.RefreshToken,
"server_use": "true", "server_use": "true",
"driver_txt": driverTxt, "driver_txt": "alicloud_qr",
}). }).
Get(u) Get(u)
if err != nil { if err != nil {
@ -53,18 +43,15 @@ func (d *AliyundriveOpen) _refreshToken(ctx context.Context) (string, string, er
if resp.ErrorMessage != "" { if resp.ErrorMessage != "" {
return "", "", fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage) return "", "", fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage)
} }
return "", "", fmt.Errorf("empty token returned from official API, a wrong refresh token may have been used") return "", "", fmt.Errorf("empty token returned from official API")
} }
return resp.RefreshToken, resp.AccessToken, nil return resp.RefreshToken, resp.AccessToken, nil
} }
// 本地刷新逻辑,必须要求 client_id 和 client_secret // 本地刷新逻辑,必须要求 client_id 和 client_secret
if d.ClientID == "" || d.ClientSecret == "" { if d.ClientID == "" || d.ClientSecret == "" {
return "", "", fmt.Errorf("empty ClientID or ClientSecret") return "", "", fmt.Errorf("empty ClientID or ClientSecret")
} }
err := d.wait(ctx, limiterOther)
if err != nil {
return "", "", err
}
url := API_URL + "/oauth/access_token" url := API_URL + "/oauth/access_token"
//var resp base.TokenResp //var resp base.TokenResp
var e ErrResp var e ErrResp
@@ -116,18 +103,18 @@ func getSub(token string) (string, error) {
 	return utils.Json.Get(bs, "sub").ToString(), nil
 }
 
-func (d *AliyundriveOpen) refreshToken(ctx context.Context) error {
+func (d *AliyundriveOpen) refreshToken() error {
 	if d.ref != nil {
-		return d.ref.refreshToken(ctx)
+		return d.ref.refreshToken()
 	}
-	refresh, access, err := d._refreshToken(ctx)
+	refresh, access, err := d._refreshToken()
 	for i := 0; i < 3; i++ {
 		if err == nil {
 			break
 		} else {
 			log.Errorf("[ali_open] failed to refresh token: %s", err)
 		}
-		refresh, access, err = d._refreshToken(ctx)
+		refresh, access, err = d._refreshToken()
 	}
 	if err != nil {
 		return err
@@ -138,12 +125,12 @@ func (d *AliyundriveOpen) refreshToken(ctx context.Context) error {
 	return nil
 }
 
-func (d *AliyundriveOpen) request(ctx context.Context, limitTy limiterType, uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
-	b, err, _ := d.requestReturnErrResp(ctx, limitTy, uri, method, callback, retry...)
+func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
+	b, err, _ := d.requestReturnErrResp(uri, method, callback, retry...)
 	return b, err
 }
 
-func (d *AliyundriveOpen) requestReturnErrResp(ctx context.Context, limitTy limiterType, uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
+func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
 	req := base.RestyClient.R()
 	// TODO check whether access_token is expired
 	req.SetHeader("Authorization", "Bearer "+d.getAccessToken())
@@ -155,10 +142,6 @@ func (d *AliyundriveOpen) requestReturnErrResp(ctx context.Context, limitTy limi
 	}
 	var e ErrResp
 	req.SetError(&e)
-	err := d.wait(ctx, limitTy)
-	if err != nil {
-		return nil, err, nil
-	}
 	res, err := req.Execute(method, API_URL+uri)
 	if err != nil {
 		if res != nil {
@@ -169,11 +152,11 @@ func (d *AliyundriveOpen) requestReturnErrResp(ctx context.Context, limitTy limi
 	isRetry := len(retry) > 0 && retry[0]
 	if e.Code != "" {
 		if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.getAccessToken() == "") {
-			err = d.refreshToken(ctx)
+			err = d.refreshToken()
 			if err != nil {
 				return nil, err, nil
 			}
-			return d.requestReturnErrResp(ctx, limitTy, uri, method, callback, true)
+			return d.requestReturnErrResp(uri, method, callback, true)
 		}
 		return nil, fmt.Errorf("%s:%s", e.Code, e.Message), &e
 	}
@@ -182,7 +165,7 @@ func (d *AliyundriveOpen) requestReturnErrResp(ctx context.Context, limitTy limi
 func (d *AliyundriveOpen) list(ctx context.Context, data base.Json) (*Files, error) {
 	var resp Files
-	_, err := d.request(ctx, limiterList, "/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
 		req.SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
@@ -211,7 +194,7 @@ func (d *AliyundriveOpen) getFiles(ctx context.Context, fileId string) ([]File,
 		//"video_thumbnail_width": 480,
 		//"image_thumbnail_width": 480,
 	}
-	resp, err := d.list(ctx, data)
+	resp, err := d.limitList(ctx, data)
 	if err != nil {
 		return nil, err
 	}


@@ -2,6 +2,7 @@ package aliyundrive_share
 
 import (
 	"context"
+	"fmt"
 	"net/http"
 	"time"
@@ -11,6 +12,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/pkg/cron"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
+	"github.com/OpenListTeam/rateg"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
 )
@@ -23,7 +25,8 @@ type AliyundriveShare struct {
 	DriveId string
 
 	cron    *cron.Cron
-	limiter *limiter
+	limitList func(ctx context.Context, dir model.Obj) ([]model.Obj, error)
+	limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
 }
 
 func (d *AliyundriveShare) Config() driver.Config {
@@ -35,26 +38,29 @@ func (d *AliyundriveShare) GetAddition() driver.Additional {
 }
 
 func (d *AliyundriveShare) Init(ctx context.Context) error {
-	d.limiter = getLimiter()
-	err := d.refreshToken(ctx)
+	err := d.refreshToken()
 	if err != nil {
-		d.limiter.free()
-		d.limiter = nil
 		return err
 	}
-	err = d.getShareToken(ctx)
+	err = d.getShareToken()
 	if err != nil {
-		d.limiter.free()
-		d.limiter = nil
 		return err
 	}
 	d.cron = cron.NewCron(time.Hour * 2)
 	d.cron.Do(func() {
-		err := d.refreshToken(ctx)
+		err := d.refreshToken()
 		if err != nil {
 			log.Errorf("%+v", err)
 		}
 	})
+	d.limitList = rateg.LimitFnCtx(d.list, rateg.LimitFnOption{
+		Limit:  4,
+		Bucket: 1,
+	})
+	d.limitLink = rateg.LimitFnCtx(d.link, rateg.LimitFnOption{
+		Limit:  1,
+		Bucket: 1,
+	})
 	return nil
 }
@@ -62,14 +68,19 @@ func (d *AliyundriveShare) Drop(ctx context.Context) error {
 	if d.cron != nil {
 		d.cron.Stop()
 	}
-	d.limiter.free()
-	d.limiter = nil
 	d.DriveId = ""
 	return nil
 }
 
 func (d *AliyundriveShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
-	files, err := d.getFiles(ctx, dir.GetID())
+	if d.limitList == nil {
+		return nil, fmt.Errorf("driver not init")
+	}
+	return d.limitList(ctx, dir)
+}
+
+func (d *AliyundriveShare) list(ctx context.Context, dir model.Obj) ([]model.Obj, error) {
+	files, err := d.getFiles(dir.GetID())
 	if err != nil {
 		return nil, err
 	}
@@ -79,6 +90,13 @@ func (d *AliyundriveShare) List(ctx context.Context, dir model.Obj, args model.L
 }
 
 func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
+	if d.limitLink == nil {
+		return nil, fmt.Errorf("driver not init")
+	}
+	return d.limitLink(ctx, file)
+}
+
+func (d *AliyundriveShare) link(ctx context.Context, file model.Obj) (*model.Link, error) {
 	data := base.Json{
 		"drive_id": d.DriveId,
 		"file_id":  file.GetID(),
@@ -87,7 +105,7 @@ func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.
 		"share_id": d.ShareId,
 	}
 	var resp ShareLinkResp
-	_, err := d.request(ctx, limiterLink, "https://api.alipan.com/v2/file/get_share_link_download_url", http.MethodPost, func(req *resty.Request) {
+	_, err := d.request("https://api.alipan.com/v2/file/get_share_link_download_url", http.MethodPost, func(req *resty.Request) {
 		req.SetHeader(CanaryHeaderKey, CanaryHeaderValue).SetBody(data).SetResult(&resp)
 	})
 	if err != nil {
@@ -117,7 +135,7 @@ func (d *AliyundriveShare) Other(ctx context.Context, args model.OtherArgs) (int
 	default:
 		return nil, errs.NotSupport
 	}
-	_, err := d.request(ctx, limiterOther, url, http.MethodPost, func(req *resty.Request) {
+	_, err := d.request(url, http.MethodPost, func(req *resty.Request) {
 		req.SetBody(data).SetResult(&resp)
 	})
 	if err != nil {


@@ -1,67 +0,0 @@
-package aliyundrive_share
-
-import (
-	"context"
-	"fmt"
-
-	"golang.org/x/time/rate"
-)
-
-// See issue https://github.com/OpenListTeam/OpenList/issues/724
-// Seems there is no limit per user.
-type limiterType int
-
-const (
-	limiterList limiterType = iota
-	limiterLink
-	limiterOther
-)
-
-const (
-	listRateLimit  = 3.9  // 4 per second in document, but we use 3.9 per second to be safe
-	linkRateLimit  = 0.9  // 1 per second in document, but we use 0.9 per second to be safe
-	otherRateLimit = 14.9 // 15 per second in document, but we use 14.9 per second to be safe
-)
-
-type limiter struct {
-	list  *rate.Limiter
-	link  *rate.Limiter
-	other *rate.Limiter
-}
-
-func getLimiter() *limiter {
-	return &limiter{
-		list:  rate.NewLimiter(rate.Limit(listRateLimit), 1),
-		link:  rate.NewLimiter(rate.Limit(linkRateLimit), 1),
-		other: rate.NewLimiter(rate.Limit(otherRateLimit), 1),
-	}
-}
-
-func (l *limiter) wait(ctx context.Context, typ limiterType) error {
-	if l == nil {
-		return fmt.Errorf("driver not init")
-	}
-	switch typ {
-	case limiterList:
-		return l.list.Wait(ctx)
-	case limiterLink:
-		return l.link.Wait(ctx)
-	case limiterOther:
-		return l.other.Wait(ctx)
-	default:
-		return fmt.Errorf("unknown limiter type")
-	}
-}
-
-func (l *limiter) free() {
-}
-
-func (d *AliyundriveShare) wait(ctx context.Context, typ limiterType) error {
-	if d == nil {
-		return fmt.Errorf("driver not init")
-	}
-	//if d.ref != nil {
-	//	return d.ref.wait(ctx, typ) // If this is a reference driver, wait on the reference driver.
-	//}
-	return d.limiter.wait(ctx, typ)
-}


@@ -1,7 +1,6 @@
 package aliyundrive_share
 
 import (
-	"context"
 	"errors"
 	"fmt"
@@ -16,15 +15,11 @@ const (
 	CanaryHeaderValue = "client=web,app=share,version=v2.3.1"
 )
 
-func (d *AliyundriveShare) refreshToken(ctx context.Context) error {
-	err := d.wait(ctx, limiterOther)
-	if err != nil {
-		return err
-	}
+func (d *AliyundriveShare) refreshToken() error {
 	url := "https://auth.alipan.com/v2/account/token"
 	var resp base.TokenResp
 	var e ErrorResp
-	_, err = base.RestyClient.R().
+	_, err := base.RestyClient.R().
 		SetBody(base.Json{"refresh_token": d.RefreshToken, "grant_type": "refresh_token"}).
 		SetResult(&resp).
 		SetError(&e).
@ -41,11 +36,7 @@ func (d *AliyundriveShare) refreshToken(ctx context.Context) error {
} }
// do others that not defined in Driver interface // do others that not defined in Driver interface
func (d *AliyundriveShare) getShareToken(ctx context.Context) error { func (d *AliyundriveShare) getShareToken() error {
err := d.wait(ctx, limiterOther)
if err != nil {
return err
}
data := base.Json{ data := base.Json{
"share_id": d.ShareId, "share_id": d.ShareId,
} }
@@ -54,7 +45,7 @@ func (d *AliyundriveShare) getShareToken(ctx context.Context) error {
 	}
 	var e ErrorResp
 	var resp ShareTokenResp
-	_, err = base.RestyClient.R().
+	_, err := base.RestyClient.R().
 		SetResult(&resp).SetError(&e).SetBody(data).
 		Post("https://api.alipan.com/v2/share_link/get_share_token")
 	if err != nil {
@@ -67,7 +58,7 @@ func (d *AliyundriveShare) getShareToken(ctx context.Context) error {
 	return nil
 }
 
-func (d *AliyundriveShare) request(ctx context.Context, limitTy limiterType, url, method string, callback base.ReqCallback) ([]byte, error) {
+func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback) ([]byte, error) {
 	var e ErrorResp
 	req := base.RestyClient.R().
 		SetError(&e).
@@ -80,10 +71,6 @@ func (d *AliyundriveShare) request(ctx context.Context, limitTy limiterType, url
 	} else {
 		req.SetBody("{}")
 	}
-	err := d.wait(ctx, limitTy)
-	if err != nil {
-		return nil, err
-	}
 	resp, err := req.Execute(method, url)
 	if err != nil {
 		return nil, err
@@ -91,14 +78,14 @@ func (d *AliyundriveShare) request(ctx context.Context, limitTy limiterType, url
 	if e.Code != "" {
 		if e.Code == "AccessTokenInvalid" || e.Code == "ShareLinkTokenInvalid" {
 			if e.Code == "AccessTokenInvalid" {
-				err = d.refreshToken(ctx)
+				err = d.refreshToken()
 			} else {
-				err = d.getShareToken(ctx)
+				err = d.getShareToken()
 			}
 			if err != nil {
 				return nil, err
 			}
-			return d.request(ctx, limitTy, url, method, callback)
+			return d.request(url, method, callback)
 		} else {
 			return nil, errors.New(e.Code + ": " + e.Message)
 		}
@@ -106,7 +93,7 @@ func (d *AliyundriveShare) request(ctx context.Context, limitTy limiterType, url
 	return resp.Body(), nil
 }
 
-func (d *AliyundriveShare) getFiles(ctx context.Context, fileId string) ([]File, error) {
+func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
 	files := make([]File, 0)
 	data := base.Json{
 		"image_thumbnail_process": "image/resize,w_160/format,jpeg",
@@ -123,10 +110,6 @@ func (d *AliyundriveShare) getFiles(ctx context.Context, fileId string) ([]File,
 		if data["marker"] == "first" {
 			data["marker"] = ""
 		}
-		err := d.wait(ctx, limiterList)
-		if err != nil {
-			return nil, err
-		}
 		var e ErrorResp
 		var resp ListResp
 		res, err := base.RestyClient.R().
@@ -140,11 +123,11 @@ func (d *AliyundriveShare) getFiles(ctx context.Context, fileId string) ([]File,
 		log.Debugf("aliyundrive share get files: %s", res.String())
 		if e.Code != "" {
 			if e.Code == "AccessTokenInvalid" || e.Code == "ShareLinkTokenInvalid" {
-				err = d.getShareToken(ctx)
+				err = d.getShareToken()
 				if err != nil {
 					return nil, err
 				}
-				return d.getFiles(ctx, fileId)
+				return d.getFiles(fileId)
 			}
 			return nil, errors.New(e.Message)
 		}


@@ -57,12 +57,12 @@ import (
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/seafile"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/sftp"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/smb"
-	_ "github.com/OpenListTeam/OpenList/v4/drivers/strm"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/teambition"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/terabox"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder_browser"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunderx"
+	_ "github.com/OpenListTeam/OpenList/v4/drivers/trainbit"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/url_tree"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/uss"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/virtual"


@@ -203,12 +203,11 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
 	streamSize := stream.GetSize()
 	sliceSize := d.getSliceSize(streamSize)
-	count := 1
-	if streamSize > sliceSize {
-		count = int((streamSize + sliceSize - 1) / sliceSize)
-	}
+	count := int(streamSize / sliceSize)
 	lastBlockSize := streamSize % sliceSize
-	if lastBlockSize == 0 {
+	if lastBlockSize > 0 {
+		count++
+	} else {
 		lastBlockSize = sliceSize
 	}


@@ -55,7 +55,7 @@ func (d *BaiduNetdisk) _refreshToken() error {
 		if resp.ErrorMessage != "" {
 			return fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage)
 		}
-		return fmt.Errorf("empty token returned from official API, a wrong refresh token may have been used")
+		return fmt.Errorf("empty token returned from official API")
 	}
 	d.AccessToken = resp.AccessToken
 	d.RefreshToken = resp.RefreshToken


@@ -262,12 +262,11 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
 	// compute the required values
 	streamSize := stream.GetSize()
-	count := 1
-	if streamSize > DEFAULT {
-		count = int((streamSize + DEFAULT - 1) / DEFAULT)
-	}
+	count := int(streamSize / DEFAULT)
 	lastBlockSize := streamSize % DEFAULT
-	if lastBlockSize == 0 {
+	if lastBlockSize > 0 {
+		count++
+	} else {
 		lastBlockSize = DEFAULT
 	}


@@ -255,7 +255,7 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, file model.FileStr
 		},
 		UpdateProgress: up,
 	})
-	req, err := http.NewRequestWithContext(ctx, http.MethodPost, "https://pan-yz.chaoxing.com/upload", r)
+	req, err := http.NewRequestWithContext(ctx, "POST", "https://pan-yz.chaoxing.com/upload", r)
 	if err != nil {
 		return err
 	}


@@ -32,6 +32,7 @@ func init() {
 		config: driver.Config{
 			Name:              "ChaoXingGroupDrive",
 			OnlyProxy:         true,
+			OnlyLocal:         false,
 			DefaultRoot:       "-1",
 			NoOverwriteUpload: true,
 		},


@@ -167,7 +167,7 @@ func (d *ChaoXing) Login() (string, error) {
 		return "", err
 	}
 	// Create the request
-	req, err := http.NewRequest(http.MethodPost, "https://passport2.chaoxing.com/fanyalogin", body)
+	req, err := http.NewRequest("POST", "https://passport2.chaoxing.com/fanyalogin", body)
 	if err != nil {
 		return "", err
 	}


@@ -20,7 +20,6 @@ type Addition struct {
 var config = driver.Config{
 	Name:        "Cloudreve",
 	DefaultRoot: "/",
-	LocalSort:   true,
 }
 
 func init() {


@@ -18,10 +18,8 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/setting"
-	streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/cookie"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
-	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
 	jsoniter "github.com/json-iterator/go"
 )
@@ -237,16 +235,13 @@ func (d *Cloudreve) upLocal(ctx context.Context, stream model.FileStreamer, u Up
 }
 
 func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
-	DEFAULT := int64(u.ChunkSize)
-	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
-	if err != nil {
-		return err
-	}
 	uploadUrl := u.UploadURLs[0]
 	credential := u.Credential
 	var finish int64 = 0
 	var chunk int = 0
+	DEFAULT := int64(u.ChunkSize)
+	retryCount := 0
+	maxRetries := 3
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -254,28 +249,30 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
 		utils.Log.Debugf("[Cloudreve-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		rd, err := ss.GetSectionReader(finish, byteSize)
+		byteData := make([]byte, byteSize)
+		n, err := io.ReadFull(stream, byteData)
+		utils.Log.Debug(err, n)
 		if err != nil {
 			return err
 		}
-		err = retry.Do(
-			func() error {
-				rd.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadUrl+"?chunk="+strconv.Itoa(chunk),
-					driver.NewLimitedUploadStream(ctx, rd))
+		req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
+			driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.ContentLength = byteSize
+		// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
 		req.Header.Set("Authorization", fmt.Sprint(credential))
 		req.Header.Set("User-Agent", d.getUA())
+		err = func() error {
 			res, err := base.HttpClient.Do(req)
 			if err != nil {
 				return err
 			}
 			defer res.Body.Close()
 			if res.StatusCode != 200 {
-				return fmt.Errorf("server error: %d", res.StatusCode)
+				return errors.New(res.Status)
 			}
 			body, err := io.ReadAll(res.Body)
 			if err != nil {
@@ -290,31 +287,31 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
 				return errors.New(up.Msg)
 			}
 			return nil
-			},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second),
-		)
-		ss.FreeSectionReader(rd)
-		if err != nil {
-			return err
-		}
+		}()
+		if err == nil {
+			retryCount = 0
 			finish += byteSize
 			up(float64(finish) * 100 / float64(stream.GetSize()))
 			chunk++
+		} else {
+			retryCount++
+			if retryCount > maxRetries {
+				return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
+			}
+			backoff := time.Duration(1<<retryCount) * time.Second
+			utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
+			time.Sleep(backoff)
+		}
 	}
 	return nil
 }
 
 func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
-	DEFAULT := int64(u.ChunkSize)
-	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
-	if err != nil {
-		return err
-	}
 	uploadUrl := u.UploadURLs[0]
 	var finish int64 = 0
+	DEFAULT := int64(u.ChunkSize)
+	retryCount := 0
+	maxRetries := 3
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -322,46 +319,47 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
 		utils.Log.Debugf("[Cloudreve-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		rd, err := ss.GetSectionReader(finish, byteSize)
+		byteData := make([]byte, byteSize)
+		n, err := io.ReadFull(stream, byteData)
+		utils.Log.Debug(err, n)
 		if err != nil {
 			return err
 		}
-		err = retry.Do(
-			func() error {
-				rd.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, rd))
+		req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.ContentLength = byteSize
+		// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
 		req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
 		req.Header.Set("User-Agent", d.getUA())
 		res, err := base.HttpClient.Do(req)
 		if err != nil {
 			return err
 		}
-		defer res.Body.Close()
 		// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
 		switch {
 		case res.StatusCode >= 500 && res.StatusCode <= 504:
-			return fmt.Errorf("server error: %d", res.StatusCode)
+			retryCount++
+			if retryCount > maxRetries {
+				res.Body.Close()
+				return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
+			}
+			backoff := time.Duration(1<<retryCount) * time.Second
+			utils.Log.Warnf("[Cloudreve-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
+			time.Sleep(backoff)
 		case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
 			data, _ := io.ReadAll(res.Body)
+			res.Body.Close()
 			return errors.New(string(data))
 		default:
-			return nil
-		}
-			},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second),
-		)
-		ss.FreeSectionReader(rd)
-		if err != nil {
-			return err
-		}
+			res.Body.Close()
+			retryCount = 0
 			finish += byteSize
 			up(float64(finish) * 100 / float64(stream.GetSize()))
 		}
+	}
 	// send the callback request after a successful upload
 	return d.request(http.MethodPost, "/callback/onedrive/finish/"+u.SessionID, func(req *resty.Request) {
 		req.SetBody("{}")
@@ -369,15 +367,12 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
 }
 
 func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
-	DEFAULT := int64(u.ChunkSize)
-	ss, err := streamPkg.NewStreamSectionReader(stream, int(DEFAULT), &up)
-	if err != nil {
-		return err
-	}
 	var finish int64 = 0
 	var chunk int = 0
 	var etags []string
+	DEFAULT := int64(u.ChunkSize)
+	retryCount := 0
+	maxRetries := 3
 	for finish < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -385,20 +380,19 @@ func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u Uploa
 		left := stream.GetSize() - finish
 		byteSize := min(left, DEFAULT)
 		utils.Log.Debugf("[Cloudreve-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
-		rd, err := ss.GetSectionReader(finish, byteSize)
+		byteData := make([]byte, byteSize)
+		n, err := io.ReadFull(stream, byteData)
+		utils.Log.Debug(err, n)
 		if err != nil {
 			return err
 		}
-		err = retry.Do(
-			func() error {
-				rd.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPut, u.UploadURLs[chunk],
-					driver.NewLimitedUploadStream(ctx, rd))
+		req, err := http.NewRequest("PUT", u.UploadURLs[chunk],
+			driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.ContentLength = byteSize
-		req.Header.Set("User-Agent", d.getUA())
 		res, err := base.HttpClient.Do(req)
 		if err != nil {
 			return err
@@ -407,25 +401,24 @@ func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u Uploa
 		res.Body.Close()
 		switch {
 		case res.StatusCode != 200:
-			return fmt.Errorf("server error: %d", res.StatusCode)
+			retryCount++
+			if retryCount > maxRetries {
+				return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
+			}
+			backoff := time.Duration(1<<retryCount) * time.Second
+			utils.Log.Warnf("[Cloudreve-S3] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
+			time.Sleep(backoff)
 		case etag == "":
 			return errors.New("failed to get ETag from header")
 		default:
+			retryCount = 0
 			etags = append(etags, etag)
-			return nil
-		}
-			},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second),
-		)
-		ss.FreeSectionReader(rd)
-		if err != nil {
-			return err
-		}
 			finish += byteSize
 			up(float64(finish) * 100 / float64(stream.GetSize()))
 			chunk++
 		}
+	}
 
 	// s3LikeFinishUpload
 	// https://github.com/cloudreve/frontend/blob/b485bf297974cbe4834d2e8e744ae7b7e5b2ad39/src/component/Uploader/core/api/index.ts#L204-L252
bodyBuilder := &strings.Builder{} bodyBuilder := &strings.Builder{}
@@ -438,8 +431,8 @@ func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u Uploa
 		))
 	}
 	bodyBuilder.WriteString("</CompleteMultipartUpload>")
-	req, err := http.NewRequestWithContext(ctx,
-		http.MethodPost,
+	req, err := http.NewRequest(
+		"POST",
 		u.CompleteURL,
 		strings.NewReader(bodyBuilder.String()),
 	)


@@ -26,8 +26,15 @@ type Addition struct {
 var config = driver.Config{
 	Name:              "Cloudreve V4",
+	LocalSort:         false,
+	OnlyLocal:         false,
+	OnlyProxy:         false,
+	NoCache:           false,
+	NoUpload:          false,
+	NeedMs:            false,
 	DefaultRoot:       "cloudreve://my",
 	CheckStatus:       true,
+	Alert:             "",
 	NoOverwriteUpload: true,
 }


@@ -47,13 +47,7 @@ type BasicConfigResp struct {
 
 type SiteLoginConfigResp struct {
 	LoginCaptcha bool `json:"login_captcha"`
-	// RegCaptcha       bool   `json:"reg_captcha"`
-	// ForgetCaptcha    bool   `json:"forget_captcha"`
-	// RegisterEnabled  bool   `json:"register_enabled"`
-	// TosURL           string `json:"tos_url"`
-	// PrivacyPolicyURL string `json:"privacy_policy_url"`
-	// SsoDisplayName   string `json:"sso_display_name"`
-	// OidcDisplayName  string `json:"oidc_display_name"`
+	Authn        bool `json:"authn"`
 }
 
 type PrepareLoginResp struct {


@@ -19,9 +19,7 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"github.com/OpenListTeam/OpenList/v4/internal/setting"
-	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
-	"github.com/avast/retry-go"
 	"github.com/go-resty/resty/v2"
 	jsoniter "github.com/json-iterator/go"
 )
@@ -97,6 +95,9 @@ func (d *CloudreveV4) login() error {
 	if err != nil {
 		return err
 	}
+	if !siteConfig.Authn {
+		return errors.New("authn not support")
+	}
 	var prepareLogin PrepareLoginResp
 	err = d.request(http.MethodGet, "/session/prepare?email="+d.Addition.Username, nil, &prepareLogin)
 	if err != nil {
@@ -252,16 +253,13 @@ func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u Fi
 }
 
 func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
-	DEFAULT := int64(u.ChunkSize)
-	ss, err := stream.NewStreamSectionReader(file, int(DEFAULT), &up)
-	if err != nil {
-		return err
-	}
 	uploadUrl := u.UploadUrls[0]
 	credential := u.Credential
 	var finish int64 = 0
 	var chunk int = 0
+	DEFAULT := int64(u.ChunkSize)
+	retryCount := 0
+	maxRetries := 3
 	for finish < file.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -269,29 +267,30 @@ func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u F
 		left := file.GetSize() - finish
 		byteSize := min(left, DEFAULT)
 		utils.Log.Debugf("[CloudreveV4-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
-		rd, err := ss.GetSectionReader(finish, byteSize)
+		byteData := make([]byte, byteSize)
+		n, err := io.ReadFull(file, byteData)
+		utils.Log.Debug(err, n)
 		if err != nil {
 			return err
 		}
-		err = retry.Do(
-			func() error {
-				rd.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadUrl+"?chunk="+strconv.Itoa(chunk),
-					driver.NewLimitedUploadStream(ctx, rd))
+		req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
+			driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.ContentLength = byteSize
+		// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
 		req.Header.Set("Authorization", fmt.Sprint(credential))
 		req.Header.Set("User-Agent", d.getUA())
+		err = func() error {
 			res, err := base.HttpClient.Do(req)
 			if err != nil {
 				return err
 			}
 			defer res.Body.Close()
 			if res.StatusCode != 200 {
-				return fmt.Errorf("server error: %d", res.StatusCode)
+				return errors.New(res.Status)
 			}
 			body, err := io.ReadAll(res.Body)
 			if err != nil {
@@ -306,30 +305,31 @@ func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u F
 				return errors.New(up.Msg)
 			}
 			return nil
-			},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second),
-		)
-		ss.FreeSectionReader(rd)
-		if err != nil {
-			return err
-		}
+		}()
+		if err == nil {
+			retryCount = 0
 			finish += byteSize
 			up(float64(finish) * 100 / float64(file.GetSize()))
 			chunk++
+		} else {
+			retryCount++
+			if retryCount > maxRetries {
+				return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
+			}
+			backoff := time.Duration(1<<retryCount) * time.Second
+			utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
+			time.Sleep(backoff)
+		}
 	}
 	return nil
 }
 
 func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
-	DEFAULT := int64(u.ChunkSize)
-	ss, err := stream.NewStreamSectionReader(file, int(DEFAULT), &up)
-	if err != nil {
-		return err
-	}
 	uploadUrl := u.UploadUrls[0]
 	var finish int64 = 0
+	DEFAULT := int64(u.ChunkSize)
+	retryCount := 0
+	maxRetries := 3
 	for finish < file.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -337,47 +337,47 @@ func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u
 		left := file.GetSize() - finish
 		byteSize := min(left, DEFAULT)
 		utils.Log.Debugf("[CloudreveV4-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
-		rd, err := ss.GetSectionReader(finish, byteSize)
+		byteData := make([]byte, byteSize)
+		n, err := io.ReadFull(file, byteData)
+		utils.Log.Debug(err, n)
 		if err != nil {
 			return err
 		}
-		err = retry.Do(
-			func() error {
-				rd.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, rd))
+		req, err := http.NewRequest(http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.ContentLength = byteSize
+		// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
 		req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, file.GetSize()))
 		req.Header.Set("User-Agent", d.getUA())
 		res, err := base.HttpClient.Do(req)
 		if err != nil {
 			return err
 		}
-		defer res.Body.Close()
 		// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
 		switch {
 		case res.StatusCode >= 500 && res.StatusCode <= 504:
-			return fmt.Errorf("server error: %d", res.StatusCode)
+			retryCount++
+			if retryCount > maxRetries {
+				res.Body.Close()
+				return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
+			}
+			backoff := time.Duration(1<<retryCount) * time.Second
+			utils.Log.Warnf("[CloudreveV4-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
+			time.Sleep(backoff)
 		case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
 			data, _ := io.ReadAll(res.Body)
+			res.Body.Close()
 			return errors.New(string(data))
 		default:
-			return nil
-		}
-			},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second),
-		)
-		ss.FreeSectionReader(rd)
-		if err != nil {
-			return err
-		}
+			res.Body.Close()
+			retryCount = 0
 			finish += byteSize
 			up(float64(finish) * 100 / float64(file.GetSize()))
 		}
+	}
 	// send the callback request after a successful upload
 	return d.request(http.MethodPost, "/callback/onedrive/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
 		req.SetBody("{}")
@@ -385,15 +385,12 @@ func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u
 }
 
 func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
-	DEFAULT := int64(u.ChunkSize)
-	ss, err := stream.NewStreamSectionReader(file, int(DEFAULT), &up)
-	if err != nil {
-		return err
-	}
 	var finish int64 = 0
 	var chunk int = 0
 	var etags []string
+	DEFAULT := int64(u.ChunkSize)
+	retryCount := 0
+	maxRetries := 3
 	for finish < file.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -401,20 +398,19 @@ func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileU
 		left := file.GetSize() - finish
 		byteSize := min(left, DEFAULT)
 		utils.Log.Debugf("[CloudreveV4-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
-		rd, err := ss.GetSectionReader(finish, byteSize)
+		byteData := make([]byte, byteSize)
+		n, err := io.ReadFull(file, byteData)
+		utils.Log.Debug(err, n)
 		if err != nil {
 			return err
 		}
-		err = retry.Do(
-			func() error {
-				rd.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPut, u.UploadUrls[chunk],
-					driver.NewLimitedUploadStream(ctx, rd))
+		req, err := http.NewRequest(http.MethodPut, u.UploadUrls[chunk],
+			driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
 		if err != nil {
 			return err
 		}
+		req = req.WithContext(ctx)
 		req.ContentLength = byteSize
-		req.Header.Set("User-Agent", d.getUA())
 		res, err := base.HttpClient.Do(req)
 		if err != nil {
 			return err
@@ -423,26 +419,23 @@ func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileU
 		res.Body.Close()
 		switch {
 		case res.StatusCode != 200:
-			return fmt.Errorf("server error: %d", res.StatusCode)
+			retryCount++
+			if retryCount > maxRetries {
+				return fmt.Errorf("upload failed after %d retries due to server errors", maxRetries)
+			}
+			backoff := time.Duration(1<<retryCount) * time.Second
+			utils.Log.Warnf("server error %d, retrying after %v...", res.StatusCode, backoff)
+			time.Sleep(backoff)
 		case etag == "":
 			return errors.New("failed to get ETag from header")
 		default:
+			retryCount = 0
 			etags = append(etags, etag)
-			return nil
-		}
-			},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second),
-		)
-		ss.FreeSectionReader(rd)
-		if err != nil {
-			return err
-		}
 			finish += byteSize
 			up(float64(finish) * 100 / float64(file.GetSize()))
 			chunk++
 		}
+	}
 
 	// s3LikeFinishUpload
 	bodyBuilder := &strings.Builder{}
@@ -455,8 +448,8 @@ func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileU
 		))
 	}
 	bodyBuilder.WriteString("</CompleteMultipartUpload>")
-	req, err := http.NewRequestWithContext(ctx,
-		http.MethodPost,
+	req, err := http.NewRequest(
+		"POST",
 		u.CompleteURL,
 		strings.NewReader(bodyBuilder.String()),
 	)


@@ -1,14 +1,12 @@
 package crypt
 
 import (
-	"bytes"
 	"context"
 	"fmt"
 	"io"
 	stdpath "path"
 	"regexp"
 	"strings"
-	"sync"
 
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
@@ -112,7 +110,7 @@ func (d *Crypt) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
 	//return d.list(ctx, d.RemotePath, path)
 	//remoteFull
-	objs, err := fs.List(ctx, d.getPathForRemote(path, true), &fs.ListArgs{NoLog: true, Refresh: args.Refresh})
+	objs, err := fs.List(ctx, d.getPathForRemote(path, true), &fs.ListArgs{NoLog: true})
 	// the obj must implement the model.SetPath interface
 	// return objs, err
 	if err != nil {
@@ -243,9 +241,6 @@ func (d *Crypt) Get(ctx context.Context, path string) (model.Obj, error) {
 	//return nil, errs.ObjectNotFound
 }
 
-// https://github.com/rclone/rclone/blob/v1.67.0/backend/crypt/cipher.go#L37
-const fileHeaderSize = 32
-
 func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	dstDirActualPath, err := d.getActualPathForRemote(file.GetPath(), false)
 	if err != nil {
@@ -256,69 +251,61 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
 		return nil, err
 	}
-	remoteSize := remoteLink.ContentLength
-	if remoteSize <= 0 {
-		remoteSize = remoteFile.GetSize()
-	}
-	rrf, err := stream.GetRangeReaderFromLink(remoteSize, remoteLink)
-	if err != nil {
-		_ = remoteLink.Close()
+	if remoteLink.RangeReadCloser == nil && remoteLink.MFile == nil && len(remoteLink.URL) == 0 {
 		return nil, fmt.Errorf("the remote storage driver need to be enhanced to support encrytion")
 	}
-	mu := &sync.Mutex{}
-	var fileHeader []byte
-	rangeReaderFunc := func(ctx context.Context, offset, limit int64) (io.ReadCloser, error) {
-		length := limit
-		if offset == 0 && limit > 0 {
-			mu.Lock()
-			if limit <= fileHeaderSize {
-				defer mu.Unlock()
-				if fileHeader != nil {
-					return io.NopCloser(bytes.NewReader(fileHeader[:limit])), nil
-				}
-				length = fileHeaderSize
-			} else if fileHeader == nil {
-				defer mu.Unlock()
-			} else {
-				mu.Unlock()
-			}
-		}
-		remoteReader, err := rrf.RangeRead(ctx, http_range.Range{Start: offset, Length: length})
-		if err != nil {
-			return nil, err
-		}
-		if offset == 0 && limit > 0 {
-			fileHeader = make([]byte, fileHeaderSize)
-			n, err := io.ReadFull(remoteReader, fileHeader)
-			if n != fileHeaderSize {
-				fileHeader = nil
-				return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", fileHeaderSize, n, err)
-			}
-			if limit <= fileHeaderSize {
-				remoteReader.Close()
-				return io.NopCloser(bytes.NewReader(fileHeader[:limit])), nil
-			} else {
-				remoteReader = utils.ReadCloser{
-					Reader: io.MultiReader(bytes.NewReader(fileHeader), remoteReader),
-					Closer: remoteReader,
-				}
-			}
-		}
-		return remoteReader, nil
-	}
-	return &model.Link{
-		RangeReader: stream.RangeReaderFunc(func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
-			readSeeker, err := d.cipher.DecryptDataSeek(ctx, rangeReaderFunc, httpRange.Start, httpRange.Length)
-			if err != nil {
-				return nil, err
-			}
-			return readSeeker, nil
-		}),
-		SyncClosers: utils.NewSyncClosers(remoteLink),
-	}, nil
+	remoteFileSize := remoteFile.GetSize()
+	remoteClosers := utils.EmptyClosers()
+	rangeReaderFunc := func(ctx context.Context, underlyingOffset, underlyingLength int64) (io.ReadCloser, error) {
+		length := underlyingLength
+		if underlyingLength >= 0 && underlyingOffset+underlyingLength >= remoteFileSize {
+			length = -1
+		}
+		rrc := remoteLink.RangeReadCloser
+		if len(remoteLink.URL) > 0 {
+			var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, remoteLink)
+			if err != nil {
+				return nil, err
+			}
+			rrc = converted
+		}
+		if rrc != nil {
+			remoteReader, err := rrc.RangeRead(ctx, http_range.Range{Start: underlyingOffset, Length: length})
+			remoteClosers.AddClosers(rrc.GetClosers())
+			if err != nil {
+				return nil, err
+			}
+			return remoteReader, nil
+		}
+		if remoteLink.MFile != nil {
+			_, err := remoteLink.MFile.Seek(underlyingOffset, io.SeekStart)
+			if err != nil {
+				return nil, err
+			}
+			//keep reuse same MFile and close at last.
+			remoteClosers.Add(remoteLink.MFile)
+			return io.NopCloser(remoteLink.MFile), nil
+		}
+		return nil, errs.NotSupport
+	}
+
+	resultRangeReader := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
+		readSeeker, err := d.cipher.DecryptDataSeek(ctx, rangeReaderFunc, httpRange.Start, httpRange.Length)
+		if err != nil {
+			return nil, err
+		}
+		return readSeeker, nil
+	}
+
+	resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: remoteClosers}
+	resultLink := &model.Link{
+		RangeReadCloser: resultRangeReadCloser,
+		Expiration:      remoteLink.Expiration,
+	}
+	return resultLink, nil
 }
 
 func (d *Crypt) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
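The minus side of the Link hunk adds a 32-byte header cache: rclone-crypt files begin with a fixed-size header (per the referenced cipher.go), so serving offset-0 reads from memory avoids re-fetching the header from the remote on every small range request. A reduced, standalone sketch of that caching idea; fetch is a hypothetical stand-in for opening the remote object at offset 0, not an API from the diff:

// Sketch: fetch the fixed-size header once, guard it with a mutex,
// and answer later header reads from the in-memory copy.
package main

import (
	"bytes"
	"io"
	"sync"
)

const fileHeaderSize = 32 // rclone crypt header size

type headerCache struct {
	mu     sync.Mutex
	header []byte
}

// readHeader returns the first fileHeaderSize bytes, fetching them at most once.
func (c *headerCache) readHeader(fetch func() (io.ReadCloser, error)) ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.header != nil {
		return c.header, nil
	}
	rc, err := fetch()
	if err != nil {
		return nil, err
	}
	defer rc.Close()
	buf := make([]byte, fileHeaderSize)
	if _, err := io.ReadFull(rc, buf); err != nil {
		return nil, err
	}
	c.header = buf
	return buf, nil
}

func main() {
	var c headerCache
	fetch := func() (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewReader(make([]byte, 64))), nil // stand-in remote
	}
	h, _ := c.readHeader(fetch)
	_ = h // a second readHeader call would hit the cached copy
}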
@@ -401,6 +388,7 @@ func (d *Crypt) Put(ctx context.Context, dstDir model.Obj, streamer model.FileSt
 		},
 		Reader:            wrappedIn,
 		Mimetype:          "application/octet-stream",
+		WebPutAsTask:      streamer.NeedStore(),
 		ForceStreamUpload: true,
 		Exist:             streamer.GetExist(),
 	}


@@ -28,10 +28,15 @@ type Addition struct {
 var config = driver.Config{
 	Name:        "Crypt",
 	LocalSort:   true,
+	OnlyLocal:   true,
 	OnlyProxy:   true,
 	NoCache:     true,
+	NoUpload:    false,
+	NeedMs:      false,
 	DefaultRoot: "/",
-	NoLinkURL:   true,
+	CheckStatus: false,
+	Alert:       "",
+	NoOverwriteUpload: false,
 }
 
 func init() {


@@ -236,7 +236,7 @@ func (d *Doubao) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
 	// choose the upload method by file size
 	if file.GetSize() <= 1*utils.MB { // files of 1MB or less use the plain upload mode
-		return d.Upload(ctx, &uploadConfig, dstDir, file, up, dataType)
+		return d.Upload(&uploadConfig, dstDir, file, up, dataType)
 	}
 	// larger files use multipart upload
 	return d.UploadByMultipart(ctx, &uploadConfig, file.GetSize(), dstDir, file, up, dataType)


@@ -18,7 +18,15 @@ type Addition struct {
 var config = driver.Config{
 	Name:              "Doubao",
 	LocalSort:         true,
+	OnlyLocal:         false,
+	OnlyProxy:         false,
+	NoCache:           false,
+	NoUpload:          false,
+	NeedMs:            false,
 	DefaultRoot:       "0",
+	CheckStatus:       false,
+	Alert:             "",
+	NoOverwriteUpload: false,
 }
 
 func init() {


@@ -129,7 +129,7 @@ type BuiAuditInfo struct {
 	AuditInfo      AuditInfo `json:"audit_info"`
 	IsAuditing     bool      `json:"is_auditing"`
 	AuditStatus    int       `json:"audit_status"`
-	LastUpdateTime int64     `json:"last_update_time"`
+	LastUpdateTime int       `json:"last_update_time"`
 	UnpassReason   string    `json:"unpass_reason"`
 	Details        Details   `json:"details"`
 }
@@ -184,7 +184,7 @@ type UserInfo struct {
 	SecUserID      string `json:"sec_user_id"`
 	SessionKey     string `json:"session_key"`
 	UseHmRegion    bool   `json:"use_hm_region"`
-	UserCreateTime int64  `json:"user_create_time"`
+	UserCreateTime int    `json:"user_create_time"`
 	UserID         int64  `json:"user_id"`
 	UserIDStr      string `json:"user_id_str"`
 	UserVerified   bool   `json:"user_verified"`


@@ -14,7 +14,7 @@ import (
 	"math/rand"
 	"net/http"
 	"net/url"
-	stdpath "path"
+	"path/filepath"
 	"sort"
 	"strconv"
 	"strings"
@@ -24,7 +24,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
-	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/avast/retry-go"
@@ -354,7 +353,7 @@ func (d *Doubao) getUploadConfig(upConfig *UploadConfig, dataType string, file m
 		"ServiceId":     d.UploadToken.Alice[dataType].ServiceID,
 		"NeedFallback":  "true",
 		"FileSize":      strconv.FormatInt(file.GetSize(), 10),
-		"FileExtension": stdpath.Ext(file.GetName()),
+		"FileExtension": filepath.Ext(file.GetName()),
 		"s":             randomString(),
 	}
 }
@@ -448,67 +447,41 @@ func (d *Doubao) uploadNode(uploadConfig *UploadConfig, dir model.Obj, file mode
 }
 // Upload implements the plain (single-shot) upload
-func (d *Doubao) Upload(ctx context.Context, config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
-	ss, err := stream.NewStreamSectionReader(file, int(file.GetSize()), &up)
-	if err != nil {
-		return nil, err
-	}
-	reader, err := ss.GetSectionReader(0, file.GetSize())
+func (d *Doubao) Upload(config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
+	data, err := io.ReadAll(file)
 	if err != nil {
 		return nil, err
 	}
 	// compute CRC32
 	crc32Hash := crc32.NewIEEE()
-	w, err := utils.CopyWithBuffer(crc32Hash, reader)
-	if w != file.GetSize() {
-		return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", file.GetSize(), w, err)
-	}
+	crc32Hash.Write(data)
 	crc32Value := hex.EncodeToString(crc32Hash.Sum(nil))
 	// build the request path
 	uploadNode := config.InnerUploadAddress.UploadNodes[0]
 	storeInfo := uploadNode.StoreInfos[0]
 	uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
-	rateLimitedRd := driver.NewLimitedUploadStream(ctx, reader)
-	err = d._retryOperation("Upload", func() error {
-		reader.Seek(0, io.SeekStart)
-		req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadUrl, rateLimitedRd)
-		if err != nil {
-			return err
-		}
-		req.Header = map[string][]string{
-			"Referer":             {BaseURL + "/"},
-			"Origin":              {BaseURL},
-			"User-Agent":          {UserAgent},
-			"X-Storage-U":         {d.UserId},
-			"Authorization":       {storeInfo.Auth},
-			"Content-Type":        {"application/octet-stream"},
-			"Content-Crc32":       {crc32Value},
-			"Content-Length":      {fmt.Sprintf("%d", file.GetSize())},
-			"Content-Disposition": {fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI))},
-		}
-		res, err := base.HttpClient.Do(req)
-		if err != nil {
-			return err
-		}
-		defer res.Body.Close()
-		bytes, _ := io.ReadAll(res.Body)
-		resp := UploadResp{}
-		utils.Json.Unmarshal(bytes, &resp)
-		if resp.Code != 2000 {
-			return fmt.Errorf("upload part failed: %s", resp.Message)
-		} else if resp.Data.Crc32 != crc32Value {
-			return fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, resp.Data.Crc32)
-		}
-		return nil
-	})
-	ss.FreeSectionReader(reader)
-	if err != nil {
+	uploadResp := UploadResp{}
+	if _, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
+		req.SetHeaders(map[string]string{
+			"Content-Type":        "application/octet-stream",
+			"Content-Crc32":       crc32Value,
+			"Content-Length":      fmt.Sprintf("%d", len(data)),
+			"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
+		})
+		req.SetBody(data)
+	}, &uploadResp); err != nil {
 		return nil, err
 	}
+	if uploadResp.Code != 2000 {
+		return nil, fmt.Errorf("upload failed: %s", uploadResp.Message)
+	}
 	uploadNodeResp, err := d.uploadNode(config, dstDir, file, dataType)
 	if err != nil {
 		return nil, err
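Note on the checksum both versions send in Content-Crc32: it is the IEEE CRC32 of the payload, hex-encoded. A minimal, runnable sketch (the value for "hello" is included as a sanity check):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"hash/crc32"
)

// crc32Hex mirrors the Content-Crc32 computation above:
// IEEE-polynomial CRC32, serialized as 8 hex digits.
func crc32Hex(data []byte) string {
	h := crc32.NewIEEE()
	h.Write(data) // hash.Hash.Write never returns an error
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(crc32Hex([]byte("hello"))) // 3610a686
}
```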
@@ -543,107 +516,69 @@ func (d *Doubao) UploadByMultipart(ctx context.Context, config *UploadConfig, fi
 	if config.InnerUploadAddress.AdvanceOption.SliceSize > 0 {
 		chunkSize = int64(config.InnerUploadAddress.AdvanceOption.SliceSize)
 	}
-	ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
-	if err != nil {
-		return nil, err
-	}
 	totalParts := (fileSize + chunkSize - 1) / chunkSize
 	// build the part info list
 	parts := make([]UploadPart, totalParts)
+	// cache the file
+	tempFile, err := file.CacheFullInTempFile()
+	if err != nil {
+		return nil, fmt.Errorf("failed to cache file: %w", err)
+	}
+	defer tempFile.Close()
 	up(10.0) // update progress
 	// set up parallel uploading
-	thread := min(int(totalParts), d.uploadThread)
-	threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
-		retry.Attempts(MaxRetryAttempts),
-		retry.Delay(time.Second),
-		retry.DelayType(retry.BackOffDelay),
-		retry.MaxJitter(200*time.Millisecond),
-	)
+	threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
+		retry.Attempts(1),
+		retry.Delay(time.Second),
+		retry.DelayType(retry.BackOffDelay))
 	var partsMutex sync.Mutex
 	// upload all parts in parallel
-	hash := crc32.NewIEEE()
-	for partIndex := range totalParts {
+	for partIndex := int64(0); partIndex < totalParts; partIndex++ {
 		if utils.IsCanceled(uploadCtx) {
 			break
 		}
+		partIndex := partIndex
 		partNumber := partIndex + 1 // part numbers start at 1
-		// compute this part's size and offset
-		offset := partIndex * chunkSize
-		size := chunkSize
-		if partIndex == totalParts-1 {
-			size = fileSize - offset
-		}
-		var reader *stream.SectionReader
-		var rateLimitedRd io.Reader
-		crc32Value := ""
-		threadG.GoWithLifecycle(errgroup.Lifecycle{
-			Before: func(ctx context.Context) error {
-				if reader == nil {
-					var err error
-					reader, err = ss.GetSectionReader(offset, size)
-					if err != nil {
-						return err
-					}
-					hash.Reset()
-					w, err := utils.CopyWithBuffer(hash, reader)
-					if w != size {
-						return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", size, w, err)
-					}
-					crc32Value = hex.EncodeToString(hash.Sum(nil))
-					rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
-				}
-				return nil
-			},
-			Do: func(ctx context.Context) error {
-				reader.Seek(0, io.SeekStart)
-				req, err := http.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("%s?uploadid=%s&part_number=%d&phase=transfer", uploadUrl, uploadID, partNumber), rateLimitedRd)
-				if err != nil {
-					return err
-				}
-				req.Header = map[string][]string{
-					"Referer":             {BaseURL + "/"},
-					"Origin":              {BaseURL},
-					"User-Agent":          {UserAgent},
-					"X-Storage-U":         {d.UserId},
-					"Authorization":       {storeInfo.Auth},
-					"Content-Type":        {"application/octet-stream"},
-					"Content-Crc32":       {crc32Value},
-					"Content-Length":      {fmt.Sprintf("%d", size)},
-					"Content-Disposition": {fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI))},
-				}
-				res, err := base.HttpClient.Do(req)
-				if err != nil {
-					return err
-				}
-				defer res.Body.Close()
-				bytes, _ := io.ReadAll(res.Body)
-				uploadResp := UploadResp{}
-				utils.Json.Unmarshal(bytes, &uploadResp)
-				if uploadResp.Code != 2000 {
-					return fmt.Errorf("upload part failed: %s", uploadResp.Message)
-				} else if uploadResp.Data.Crc32 != crc32Value {
-					return fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
-				}
-				// record the successfully uploaded part
-				partsMutex.Lock()
-				parts[partIndex] = UploadPart{
-					PartNumber: strconv.FormatInt(partNumber, 10),
-					Etag:       uploadResp.Data.Etag,
-					Crc32:      crc32Value,
-				}
-				partsMutex.Unlock()
-				// update progress
-				progress := 10.0 + 90.0*float64(threadG.Success()+1)/float64(totalParts)
-				up(math.Min(progress, 95.0))
-				return nil
-			},
-			After: func(err error) {
-				ss.FreeSectionReader(reader)
-			},
+		threadG.Go(func(ctx context.Context) error {
+			// compute this part's size and offset
+			offset := partIndex * chunkSize
+			size := chunkSize
+			if partIndex == totalParts-1 {
+				size = fileSize - offset
+			}
+			limitedReader := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, size))
+			// read the part into memory
+			data, err := io.ReadAll(limitedReader)
+			if err != nil {
+				return fmt.Errorf("failed to read part %d: %w", partNumber, err)
+			}
+			// compute CRC32
+			crc32Value := calculateCRC32(data)
+			// upload the part via _retryOperation
+			var uploadPart UploadPart
+			if err = d._retryOperation(fmt.Sprintf("Upload part %d", partNumber), func() error {
+				var err error
+				uploadPart, err = d.uploadPart(config, uploadUrl, uploadID, partNumber, data, crc32Value)
+				return err
+			}); err != nil {
+				return fmt.Errorf("part %d upload failed: %w", partNumber, err)
+			}
+			// record the successfully uploaded part
+			partsMutex.Lock()
+			parts[partIndex] = UploadPart{
+				PartNumber: strconv.FormatInt(partNumber, 10),
+				Etag:       uploadPart.Etag,
+				Crc32:      crc32Value,
+			}
+			partsMutex.Unlock()
+			// update progress
+			progress := 10.0 + 90.0*float64(threadG.Success()+1)/float64(totalParts)
+			up(math.Min(progress, 95.0))
+			return nil
 		})
 	}
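The part arithmetic is identical on both sides of this hunk: a ceiling division for the part count, fixed-size parts except the last. A standalone sketch of that math:

```go
package main

import "fmt"

// partRange mirrors the arithmetic in UploadByMultipart above: parts are
// chunkSize bytes each except the last, and totalParts is (fileSize +
// chunkSize - 1) / chunkSize, i.e. ceiling division.
func partRange(i, fileSize, chunkSize int64) (offset, size int64) {
	totalParts := (fileSize + chunkSize - 1) / chunkSize
	offset = i * chunkSize
	size = chunkSize
	if i == totalParts-1 {
		size = fileSize - offset // the last part may be short
	}
	return offset, size
}

func main() {
	// a 10 MiB + 1 byte file in 5 MiB chunks yields three parts
	const fileSize, chunkSize = 10*1024*1024 + 1, 5 * 1024 * 1024
	for i := int64(0); i < 3; i++ {
		fmt.Println(partRange(i, fileSize, chunkSize)) // 0 5242880 / 5242880 5242880 / 10485760 1
	}
}
```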
@@ -746,6 +681,42 @@ func (d *Doubao) initMultipartUpload(config *UploadConfig, uploadUrl string, sto
 	return uploadResp.Data.UploadId, nil
 }
+// uploadPart uploads a single part
+func (d *Doubao) uploadPart(config *UploadConfig, uploadUrl, uploadID string, partNumber int64, data []byte, crc32Value string) (resp UploadPart, err error) {
+	uploadResp := UploadResp{}
+	storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
+	_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
+		req.SetHeaders(map[string]string{
+			"Content-Type":        "application/octet-stream",
+			"Content-Crc32":       crc32Value,
+			"Content-Length":      fmt.Sprintf("%d", len(data)),
+			"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
+		})
+		req.SetQueryParams(map[string]string{
+			"uploadid":    uploadID,
+			"part_number": strconv.FormatInt(partNumber, 10),
+			"phase":       "transfer",
+		})
+		req.SetBody(data)
+		req.SetContentLength(true)
+	}, &uploadResp)
+	if err != nil {
+		return resp, err
+	}
+	if uploadResp.Code != 2000 {
+		return resp, fmt.Errorf("upload part failed: %s", uploadResp.Message)
+	} else if uploadResp.Data.Crc32 != crc32Value {
+		return resp, fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
+	}
+	return uploadResp.Data, nil
+}
 // complete the multipart upload
 func (d *Doubao) completeMultipartUpload(config *UploadConfig, uploadUrl, uploadID string, parts []UploadPart) error {
 	uploadResp := UploadResp{}
@@ -814,6 +785,13 @@ func (d *Doubao) commitMultipartUpload(uploadConfig *UploadConfig) error {
 	return nil
 }
+// calculateCRC32 computes the hex-encoded IEEE CRC32 of data
+func calculateCRC32(data []byte) string {
+	hash := crc32.NewIEEE()
+	hash.Write(data)
+	return hex.EncodeToString(hash.Sum(nil))
+}
 // _retryOperation retries an operation
 func (d *Doubao) _retryOperation(operation string, fn func() error) error {
 	return retry.Do(
──────── next file ────────
@@ -14,8 +14,15 @@ type Addition struct {
 var config = driver.Config{
 	Name:      "DoubaoShare",
 	LocalSort: true,
+	OnlyLocal:         false,
+	OnlyProxy:         false,
+	NoCache:           false,
 	NoUpload:  true,
+	NeedMs:            false,
 	DefaultRoot: "/",
+	CheckStatus:       false,
+	Alert:             "",
+	NoOverwriteUpload: false,
 }
 func init() {
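The fields added in this and the similar meta.go hunks below are all Go zero values (false and ""), so spelling them out is cosmetic rather than behavioral. A sketch with a cut-down stand-in for driver.Config (the real type lives in internal/driver):

```go
package main

import (
	"fmt"
	"reflect"
)

// Config is a cut-down stand-in with only fields this hunk touches.
type Config struct {
	Name              string
	LocalSort         bool
	NoUpload          bool
	NeedMs            bool
	DefaultRoot       string
	CheckStatus       bool
	Alert             string
	NoOverwriteUpload bool
}

func main() {
	implicit := Config{Name: "DoubaoShare", LocalSort: true, NoUpload: true, DefaultRoot: "/"}
	explicit := Config{
		Name: "DoubaoShare", LocalSort: true, NoUpload: true, DefaultRoot: "/",
		NeedMs: false, CheckStatus: false, Alert: "", NoOverwriteUpload: false,
	}
	fmt.Println(reflect.DeepEqual(implicit, explicit)) // true: omitted fields are zero-valued
}
```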
──────── next file ────────
@@ -79,11 +79,11 @@ type ShareInfo struct {
 		RiskReviewStatus int    `json:"risk_review_status"`
 		ConversationID   string `json:"conversation_id"`
 		ParentID         string `json:"parent_id"`
-		CreateTime       int64  `json:"create_time"`
-		UpdateTime       int64  `json:"update_time"`
+		CreateTime       int    `json:"create_time"`
+		UpdateTime       int    `json:"update_time"`
 	} `json:"first_node"`
 	NodeCount      int    `json:"node_count"`
-	CreateTime     int64  `json:"create_time"`
+	CreateTime     int    `json:"create_time"`
 	Channel        string `json:"channel"`
 	InfluencerType int    `json:"influencer_type"`
 }
@@ -111,8 +111,8 @@ type FilePath []struct {
 	RiskReviewStatus int    `json:"risk_review_status"`
 	ConversationID   string `json:"conversation_id"`
 	ParentID         string `json:"parent_id"`
-	CreateTime       int64  `json:"create_time"`
-	UpdateTime       int64  `json:"update_time"`
+	CreateTime       int    `json:"create_time"`
+	UpdateTime       int    `json:"update_time"`
 }
 type GetFileUrlResp struct {
──────── next file ────────
@@ -192,11 +192,12 @@ func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
 	url := d.contentBase + "/2/files/upload_session/append_v2"
 	reader := driver.NewLimitedUploadStream(ctx, io.LimitReader(stream, PartSize))
-	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, reader)
+	req, err := http.NewRequest(http.MethodPost, url, reader)
 	if err != nil {
 		log.Errorf("failed to update file when append to upload session, err: %+v", err)
 		return err
 	}
+	req = req.WithContext(ctx)
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
──────── next file ────────
@@ -13,11 +13,18 @@ type Addition struct {
 	ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
 	AccessToken  string
 	RefreshToken string `json:"refresh_token" required:"true"`
-	RootNamespaceId string `json:"RootNamespaceId" required:"false"`
+	RootNamespaceId string
 }
 var config = driver.Config{
 	Name: "Dropbox",
+	LocalSort:         false,
+	OnlyLocal:         false,
+	OnlyProxy:         false,
+	NoCache:           false,
+	NoUpload:          false,
+	NeedMs:            false,
+	DefaultRoot:       "",
 	NoOverwriteUpload: true,
 }
──────── next file ────────
@@ -39,7 +39,7 @@ func (d *Dropbox) refreshToken() error {
 	if resp.ErrorMessage != "" {
 		return fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage)
 	}
-	return fmt.Errorf("empty token returned from official API, a wrong refresh token may have been used")
+	return fmt.Errorf("empty token returned from official API")
 }
 	d.AccessToken = resp.AccessToken
 	d.RefreshToken = resp.RefreshToken
@@ -169,19 +169,13 @@ func (d *Dropbox) getFiles(ctx context.Context, path string) ([]File, error) {
 func (d *Dropbox) finishUploadSession(ctx context.Context, toPath string, offset int64, sessionId string) error {
 	url := d.contentBase + "/2/files/upload_session/finish"
-	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
+	req, err := http.NewRequest(http.MethodPost, url, nil)
 	if err != nil {
 		return err
 	}
+	req = req.WithContext(ctx)
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
-	if d.RootNamespaceId != "" {
-		apiPathRootJson, err := d.buildPathRootHeader()
-		if err != nil {
-			return err
-		}
-		req.Header.Set("Dropbox-API-Path-Root", apiPathRootJson)
-	}
 	uploadFinishArgs := UploadFinishArgs{
 		Commit: struct {
@@ -220,19 +214,13 @@ func (d *Dropbox) finishUploadSession(ctx context.Context, toPath string, offset
 func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
 	url := d.contentBase + "/2/files/upload_session/start"
-	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
+	req, err := http.NewRequest(http.MethodPost, url, nil)
 	if err != nil {
 		return "", err
 	}
+	req = req.WithContext(ctx)
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
-	if d.RootNamespaceId != "" {
-		apiPathRootJson, err := d.buildPathRootHeader()
-		if err != nil {
-			return "", err
-		}
-		req.Header.Set("Dropbox-API-Path-Root", apiPathRootJson)
-	}
 	req.Header.Set("Dropbox-API-Arg", "{\"close\":false}")
 	res, err := base.HttpClient.Do(req)
@@ -247,11 +235,3 @@ func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
 	_ = res.Body.Close()
 	return sessionId, nil
 }
-func (d *Dropbox) buildPathRootHeader() (string, error) {
-	return utils.Json.MarshalToString(map[string]interface{}{
-		".tag": "root",
-		"root": d.RootNamespaceId,
-	})
-}
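The removed buildPathRootHeader produced the Dropbox-API-Path-Root header value via utils.Json.MarshalToString. A self-contained sketch using stdlib encoding/json instead (the namespace ID shown is a made-up placeholder):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pathRootHeader reproduces the JSON shape built by the removed helper:
// {".tag":"root","root":"<namespace id>"}
func pathRootHeader(namespaceID string) (string, error) {
	b, err := json.Marshal(map[string]interface{}{
		".tag": "root",
		"root": namespaceID,
	})
	return string(b), err
}

func main() {
	h, _ := pathRootHeader("ns:12345") // hypothetical namespace ID
	fmt.Println(h)                     // {".tag":"root","root":"ns:12345"}
}
```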
──────── next file ────────
@@ -17,8 +17,16 @@ type Addition struct {
 var config = driver.Config{
 	Name: "FebBox",
+	LocalSort:         false,
+	OnlyLocal:         false,
+	OnlyProxy:         false,
+	NoCache:           false,
 	NoUpload: true,
+	NeedMs:            false,
 	DefaultRoot: "0",
+	CheckStatus:       false,
+	Alert:             "",
+	NoOverwriteUpload: false,
 }
 func init() {
──────── next file ────────
@@ -31,13 +31,13 @@ func (c *customTokenSource) Token() (*oauth2.Token, error) {
 	v.Set("client_id", c.config.ClientID)
 	v.Set("client_secret", c.config.ClientSecret)
-	req, err := http.NewRequestWithContext(c.ctx, http.MethodPost, c.config.TokenURL, strings.NewReader(v.Encode()))
+	req, err := http.NewRequest("POST", c.config.TokenURL, strings.NewReader(v.Encode()))
 	if err != nil {
 		return nil, err
 	}
 	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
-	resp, err := http.DefaultClient.Do(req)
+	resp, err := http.DefaultClient.Do(req.WithContext(c.ctx))
 	if err != nil {
 		return nil, err
 	}
──────── next file ────────
@@ -2,16 +2,11 @@ package ftp
 import (
 	"context"
-	"errors"
-	"io"
 	stdpath "path"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/errs"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
-	"github.com/OpenListTeam/OpenList/v4/internal/stream"
-	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
-	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/jlaffaye/ftp"
 )
@@ -19,9 +14,6 @@ type FTP struct {
 	model.Storage
 	Addition
 	conn *ftp.ServerConn
-	ctx    context.Context
-	cancel context.CancelFunc
 }
 func (d *FTP) Config() driver.Config {
@@ -33,16 +25,12 @@ func (d *FTP) GetAddition() driver.Additional {
 }
 func (d *FTP) Init(ctx context.Context) error {
-	d.ctx, d.cancel = context.WithCancel(context.Background())
-	var err error
-	d.conn, err = d._login(ctx)
-	return err
+	return d.login()
 }
 func (d *FTP) Drop(ctx context.Context) error {
 	if d.conn != nil {
-		_ = d.conn.Quit()
-		d.cancel()
+		_ = d.conn.Logout()
 	}
 	return nil
 }
@@ -72,52 +60,15 @@ func (d *FTP) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]m
 }
 func (d *FTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
-	conn, err := d._login(ctx)
-	if err != nil {
+	if err := d.login(); err != nil {
 		return nil, err
 	}
-	path := encode(file.GetPath(), d.Encoding)
-	size := file.GetSize()
-	resultRangeReader := func(context context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
-		length := httpRange.Length
-		if length < 0 || httpRange.Start+length > size {
-			length = size - httpRange.Start
-		}
-		var c *ftp.ServerConn
-		if ctx == context {
-			c = conn
-		} else {
-			var err error
-			c, err = d._login(context)
-			if err != nil {
-				return nil, err
-			}
-		}
-		resp, err := c.RetrFrom(path, uint64(httpRange.Start))
-		if err != nil {
-			return nil, err
-		}
-		var close utils.CloseFunc
-		if context == ctx {
-			close = resp.Close
-		} else {
-			close = func() error {
-				return errors.Join(resp.Close(), c.Quit())
-			}
-		}
-		return utils.ReadCloser{
-			Reader: io.LimitReader(resp, length),
-			Closer: close,
-		}, nil
-	}
-	return &model.Link{
-		RangeReader: &model.FileRangeReader{
-			RangeReaderIF: stream.RateLimitRangeReaderFunc(resultRangeReader),
-		},
-		SyncClosers: utils.NewSyncClosers(utils.CloseFunc(conn.Quit)),
-	}, nil
+	r := NewFileReader(d.conn, encode(file.GetPath(), d.Encoding), file.GetSize())
+	link := &model.Link{
+		MFile: r,
+	}
+	return link, nil
 }
 func (d *FTP) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
──────── next file ────────
@@ -33,9 +33,8 @@ type Addition struct {
 var config = driver.Config{
 	Name:      "FTP",
 	LocalSort: true,
-	OnlyLinkMFile: false,
+	OnlyLocal: true,
 	DefaultRoot: "/",
-	NoLinkURL:   true,
 }
 func init() {
──────── next file ────────
@@ -1,43 +1,116 @@
 package ftp
 import (
-	"context"
-	"fmt"
+	"io"
+	"os"
+	"sync"
+	"sync/atomic"
 	"time"
-	"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
 	"github.com/jlaffaye/ftp"
 )
 // do others that are not defined in the Driver interface
 func (d *FTP) login() error {
-	_, err, _ := singleflight.AnyGroup.Do(fmt.Sprintf("FTP.login:%p", d), func() (any, error) {
-		var err error
-		if d.conn != nil {
-			err = d.conn.NoOp()
-			if err != nil {
-				d.conn.Quit()
-				d.conn = nil
-			}
-		}
-		if d.conn == nil {
-			d.conn, err = d._login(d.ctx)
-		}
-		return nil, err
-	})
-	return err
-}
-func (d *FTP) _login(ctx context.Context) (*ftp.ServerConn, error) {
-	conn, err := ftp.Dial(d.Address, ftp.DialWithShutTimeout(10*time.Second), ftp.DialWithContext(ctx))
-	if err != nil {
-		return nil, err
-	}
-	err = conn.Login(d.Username, d.Password)
-	if err != nil {
-		conn.Quit()
-		return nil, err
-	}
-	return conn, nil
+	if d.conn != nil {
+		_, err := d.conn.CurrentDir()
+		if err == nil {
+			return nil
+		}
+	}
+	conn, err := ftp.Dial(d.Address, ftp.DialWithShutTimeout(10*time.Second))
+	if err != nil {
+		return err
+	}
+	err = conn.Login(d.Username, d.Password)
+	if err != nil {
+		return err
+	}
+	d.conn = conn
+	return nil
+}
+// FileReader is an FTP file reader that implements io.MFile for seeking.
+type FileReader struct {
+	conn         *ftp.ServerConn
+	resp         *ftp.Response
+	offset       atomic.Int64
+	readAtOffset int64
+	mu           sync.Mutex
+	path         string
+	size         int64
+}
+func NewFileReader(conn *ftp.ServerConn, path string, size int64) *FileReader {
+	return &FileReader{
+		conn: conn,
+		path: path,
+		size: size,
+	}
+}
+func (r *FileReader) Read(buf []byte) (n int, err error) {
+	n, err = r.ReadAt(buf, r.offset.Load())
+	r.offset.Add(int64(n))
+	return
+}
+func (r *FileReader) ReadAt(buf []byte, off int64) (n int, err error) {
+	if off < 0 {
+		return -1, os.ErrInvalid
+	}
+	r.mu.Lock()
+	defer r.mu.Unlock()
+	if off != r.readAtOffset {
+		// have to restart the connection to correct the offset
+		_ = r.resp.Close()
+		r.resp = nil
+	}
+	if r.resp == nil {
+		r.resp, err = r.conn.RetrFrom(r.path, uint64(off))
+		r.readAtOffset = off
+		if err != nil {
+			return 0, err
+		}
+	}
+	n, err = r.resp.Read(buf)
+	r.readAtOffset += int64(n)
+	return
+}
+func (r *FileReader) Seek(offset int64, whence int) (int64, error) {
+	oldOffset := r.offset.Load()
+	var newOffset int64
+	switch whence {
+	case io.SeekStart:
+		newOffset = offset
+	case io.SeekCurrent:
+		newOffset = oldOffset + offset
+	case io.SeekEnd:
+		return r.size, nil
+	default:
+		return -1, os.ErrInvalid
+	}
+	if newOffset < 0 {
+		// offset out of range
+		return oldOffset, os.ErrInvalid
+	}
+	if newOffset == oldOffset {
+		// offset not changed, so return directly
+		return oldOffset, nil
+	}
+	r.offset.Store(newOffset)
+	return newOffset, nil
+}
+func (r *FileReader) Close() error {
+	if r.resp != nil {
+		return r.resp.Close()
+	}
+	return nil
+}
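A hypothetical usage sketch of the FileReader added above (assumed to sit in the same package): sequential Reads share one RetrFrom transfer, while a Seek to a different offset makes the next Read reopen the transfer there. Note the quirk that Seek with io.SeekEnd reports the total size and ignores the offset argument.

```go
package ftp

import (
	"io"

	"github.com/jlaffaye/ftp"
)

// readHeadThenMiddle is an illustrative helper, not driver code: the
// second Read reopens the FTP transfer because the Seek moved the
// offset away from readAtOffset.
func readHeadThenMiddle(conn *ftp.ServerConn, path string, size int64) error {
	r := NewFileReader(conn, path, size)
	defer r.Close()

	buf := make([]byte, 512)
	if _, err := r.Read(buf); err != nil && err != io.EOF { // RetrFrom(path, 0)
		return err
	}
	if _, err := r.Seek(size/2, io.SeekStart); err != nil {
		return err
	}
	if _, err := r.Read(buf); err != nil && err != io.EOF { // restarts at size/2
		return err
	}
	return nil
}
```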
──────── next file ────────
@@ -9,7 +9,6 @@ import (
 	"text/template"
 	"time"
-	"github.com/OpenListTeam/OpenList/v4/internal/conf"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/ProtonMail/go-crypto/openpgp"
@@ -97,7 +96,7 @@ func getPathCommonAncestor(a, b string) (ancestor, aChildName, bChildName, aRest
 }
 func getUsername(ctx context.Context) string {
-	user, ok := ctx.Value(conf.UserKey).(*model.User)
+	user, ok := ctx.Value("user").(*model.User)
 	if !ok {
 		return "<system>"
 	}
──────── next file ────────
@@ -16,7 +16,16 @@ type Addition struct {
 var config = driver.Config{
 	Name:     "GitHub Releases",
-	NoUpload: true,
+	LocalSort:         false,
+	OnlyLocal:         false,
+	OnlyProxy:         false,
+	NoCache:           false,
+	NoUpload:          false,
+	NeedMs:            false,
+	DefaultRoot:       "",
+	CheckStatus:       false,
+	Alert:             "",
+	NoOverwriteUpload: false,
 }
 func init() {
──────── next file ────────
@@ -162,7 +162,7 @@ func (d *GoogleDrive) Put(ctx context.Context, dstDir model.Obj, stream model.Fi
 			SetBody(driver.NewLimitedUploadStream(ctx, stream))
 		}, nil)
 	} else {
-		err = d.chunkUpload(ctx, stream, putUrl, up)
+		err = d.chunkUpload(ctx, stream, putUrl)
 	}
 	return err
 }
──────── next file ────────
@@ -5,20 +5,17 @@ import (
 	"crypto/x509"
 	"encoding/pem"
 	"fmt"
-	"io"
+	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"net/http"
 	"os"
 	"regexp"
 	"strconv"
 	"time"
-	"github.com/OpenListTeam/OpenList/v4/internal/op"
-	"github.com/OpenListTeam/OpenList/v4/internal/stream"
-	"github.com/avast/retry-go"
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
+	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	"github.com/go-resty/resty/v2"
 	"github.com/golang-jwt/jwt/v4"
@@ -65,7 +62,7 @@ func (d *GoogleDrive) refreshToken() error {
 	if resp.ErrorMessage != "" {
 		return fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage)
 	}
-	return fmt.Errorf("empty token returned from official API, a wrong refresh token may have been used")
+	return fmt.Errorf("empty token returned from official API")
 }
 	d.AccessToken = resp.AccessToken
 	d.RefreshToken = resp.RefreshToken
@@ -254,60 +251,28 @@ func (d *GoogleDrive) getFiles(id string) ([]File, error) {
 	return res, nil
 }
-func (d *GoogleDrive) chunkUpload(ctx context.Context, file model.FileStreamer, url string, up driver.UpdateProgress) error {
+func (d *GoogleDrive) chunkUpload(ctx context.Context, stream model.FileStreamer, url string) error {
 	var defaultChunkSize = d.ChunkSize * 1024 * 1024
-	ss, err := stream.NewStreamSectionReader(file, int(defaultChunkSize), &up)
-	if err != nil {
-		return err
-	}
 	var offset int64 = 0
-	url += "?includeItemsFromAllDrives=true&supportsAllDrives=true"
-	for offset < file.GetSize() {
+	for offset < stream.GetSize() {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
-		chunkSize := min(file.GetSize()-offset, defaultChunkSize)
-		reader, err := ss.GetSectionReader(offset, chunkSize)
+		chunkSize := stream.GetSize() - offset
+		if chunkSize > defaultChunkSize {
+			chunkSize = defaultChunkSize
+		}
+		reader, err := stream.RangeRead(http_range.Range{Start: offset, Length: chunkSize})
 		if err != nil {
 			return err
 		}
-		limitedReader := driver.NewLimitedUploadStream(ctx, reader)
-		err = retry.Do(func() error {
-			reader.Seek(0, io.SeekStart)
-			req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, limitedReader)
-			if err != nil {
-				return err
-			}
-			req.Header = map[string][]string{
-				"Authorization":  {"Bearer " + d.AccessToken},
-				"Content-Length": {strconv.FormatInt(chunkSize, 10)},
-				"Content-Range":  {fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, file.GetSize())},
-			}
-			res, err := base.HttpClient.Do(req)
-			if err != nil {
-				return err
-			}
-			defer res.Body.Close()
-			bytes, _ := io.ReadAll(res.Body)
-			var e Error
-			utils.Json.Unmarshal(bytes, &e)
-			if e.Error.Code != 0 {
-				if e.Error.Code == 401 {
-					err = d.refreshToken()
-					if err != nil {
-						return err
-					}
-				}
-				return fmt.Errorf("%s: %v", e.Error.Message, e.Error.Errors)
-			}
-			up(float64(offset+chunkSize) / float64(file.GetSize()) * 100)
-			return nil
-		},
-			retry.Attempts(3),
-			retry.DelayType(retry.BackOffDelay),
-			retry.Delay(time.Second))
-		ss.FreeSectionReader(reader)
+		reader = driver.NewLimitedUploadStream(ctx, reader)
+		_, err = d.request(url, http.MethodPut, func(req *resty.Request) {
+			req.SetHeaders(map[string]string{
+				"Content-Length": strconv.FormatInt(chunkSize, 10),
+				"Content-Range":  fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, stream.GetSize()),
+			}).SetBody(reader).SetContext(ctx)
+		}, nil)
 		if err != nil {
 			return err
 		}
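Both versions of chunkUpload build the same resumable-upload header; HTTP ranges are inclusive, so the last byte of a chunk is offset+chunkSize-1. A runnable check of the math:

```go
package main

import "fmt"

// contentRange reproduces the header format above:
// "bytes <first>-<last>/<total>", with last = offset+chunkSize-1.
func contentRange(offset, chunkSize, total int64) string {
	return fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, total)
}

func main() {
	// a 20 MiB file uploaded in 8 MiB chunks
	fmt.Println(contentRange(0, 8<<20, 20<<20))      // bytes 0-8388607/20971520
	fmt.Println(contentRange(16<<20, 4<<20, 20<<20)) // bytes 16777216-20971519/20971520
}
```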
──────── next file ────────
@@ -14,7 +14,6 @@ import (
 	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
-	"github.com/OpenListTeam/OpenList/v4/internal/stream"
 	"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/aws/credentials"
@@ -254,8 +253,11 @@ func (d *HalalCloud) getLink(ctx context.Context, file model.Obj, args model.Lin
 	chunks := getChunkSizes(result.Sizes)
 	resultRangeReader := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
 		length := httpRange.Length
-		if httpRange.Length < 0 || httpRange.Start+httpRange.Length >= size {
-			length = size - httpRange.Start
+		if httpRange.Length >= 0 && httpRange.Start+httpRange.Length >= size {
+			length = -1
+		}
+		if err != nil {
+			return nil, fmt.Errorf("open download file failed: %w", err)
 		}
 		oo := &openObject{
 			ctx: ctx,
@@ -277,8 +279,9 @@ func (d *HalalCloud) getLink(ctx context.Context, file model.Obj, args model.Lin
 		duration = time.Until(time.Now().Add(time.Hour))
 	}
+	resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader}
 	return &model.Link{
-		RangeReader: stream.RateLimitRangeReaderFunc(resultRangeReader),
+		RangeReadCloser: resultRangeReadCloser,
 		Expiration: &duration,
 	}, nil
 }
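The two range-clamping strategies in this hunk differ in how they report "read to the end": the old code computed the exact remaining byte count, the new code returns -1, presumably treated as an until-EOF sentinel downstream (an assumption, not verified here). Side by side:

```go
package main

import "fmt"

// clampOld returns the remaining byte count when the request overruns
// the file, matching the removed branch above.
func clampOld(start, length, size int64) int64 {
	if length < 0 || start+length >= size {
		return size - start
	}
	return length
}

// clampNew returns -1 as an assumed "read until EOF" sentinel, matching
// the added branch above.
func clampNew(start, length, size int64) int64 {
	if length >= 0 && start+length >= size {
		return -1
	}
	return length
}

func main() {
	fmt.Println(clampOld(100, 1000, 500)) // 400
	fmt.Println(clampNew(100, 1000, 500)) // -1
}
```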
──────── next file ────────
@@ -19,9 +19,16 @@ type Addition struct {
 var config = driver.Config{
 	Name:      "HalalCloud",
+	LocalSort:         false,
+	OnlyLocal:         true,
 	OnlyProxy: true,
+	NoCache:           false,
+	NoUpload:          false,
+	NeedMs:            false,
 	DefaultRoot: "/",
-	NoLinkURL:   true,
+	CheckStatus:       false,
+	Alert:             "",
+	NoOverwriteUpload: false,
 }
 func init() {
──────── next file ────────
@@ -96,3 +96,7 @@ type SteamFile struct {
 func (s *SteamFile) Read(p []byte) (n int, err error) {
 	return s.file.Read(p)
 }
+
+func (s *SteamFile) Close() error {
+	return s.file.Close()
+}
──────── next file ────────
@@ -276,7 +276,7 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreame
 	etag := s.GetHash().GetHash(utils.MD5)
 	var err error
 	if len(etag) != utils.MD5.Width {
-		_, etag, err = stream.CacheFullAndHash(s, &up, utils.MD5)
+		_, etag, err = stream.CacheFullInTempFileAndHash(s, utils.MD5)
 		if err != nil {
 			return nil, err
 		}
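The len(etag) != utils.MD5.Width guard works because a hex-encoded MD5 digest is always 32 characters (16 bytes, 2 hex digits each). A quick check:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func main() {
	sum := md5.Sum([]byte("example"))
	fmt.Println(len(hex.EncodeToString(sum[:]))) // 32: the width the guard compares against
}
```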
──────── next file ────────
@@ -30,8 +30,16 @@ func init() {
 		return &ILanZou{
 			config: driver.Config{
 				Name: "ILanZou",
+				LocalSort:         false,
+				OnlyLocal:         false,
+				OnlyProxy:         false,
+				NoCache:           false,
+				NoUpload:          false,
+				NeedMs:            false,
 				DefaultRoot: "0",
-				LocalSort:   true,
+				CheckStatus:       false,
+				Alert:             "",
+				NoOverwriteUpload: false,
 			},
 			conf: Conf{
 				base: "https://api.ilanzou.com",
@@ -48,8 +56,16 @@ func init() {
 		return &ILanZou{
 			config: driver.Config{
 				Name: "FeijiPan",
+				LocalSort:         false,
+				OnlyLocal:         false,
+				OnlyProxy:         false,
+				NoCache:           false,
+				NoUpload:          false,
+				NeedMs:            false,
 				DefaultRoot: "0",
-				LocalSort:   true,
+				CheckStatus:       false,
+				Alert:             "",
+				NoOverwriteUpload: false,
 			},
 			conf: Conf{
 				base: "https://api.feijipan.com",
Some files were not shown because too many files have changed in this diff.