Compare commits

..

13 Commits

Author SHA1 Message Date
c0a8321461 Update release_docker.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:36:35 +08:00
680501a8a8 Update release_android.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:36:22 +08:00
83913a8031 Update release.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:36:11 +08:00
929d4e65b9 Update release_freebsd.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:35:51 +08:00
6717d02f94 Update release_linux_musl.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:35:34 +08:00
9d2a71e3eb Update release_linux_musl_arm.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:35:21 +08:00
62de731d37 Update release_freebsd.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:34:35 +08:00
7277163b0a Update release_linux_musl.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:34:04 +08:00
98f65d5478 Update release_linux_musl_arm.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:33:52 +08:00
312e04ea69 chore(ci):Fixed CI bugs 2025-06-20 20:13:40 +08:00
6ddb4359d3 Update release.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:07:56 +08:00
cf444c2f63 Update release.yml
Signed-off-by: Pikachu Ren <40362270+PIKACHUIM@users.noreply.github.com>
2025-06-20 20:05:04 +08:00
0176cfb0c9 chore:fixed ci build 2025-06-20 19:32:10 +08:00
589 changed files with 8977 additions and 19029 deletions


@ -1,81 +0,0 @@
name: "错误报告"
description: 错误报告 / 问题
title: "[BUG] 请修改标题为您遇到的问题"
labels: [bug]
body:
- type: markdown
attributes:
value: |
感谢您花时间填写此错误报告。
请**务必**确认您的问题无重复,且不是因为您的操作、网络或第三方软件问题。
- type: checkboxes
attributes:
label: 请确认以下事项
description: |
您必须勾选以下内容,否则您的问题可能会被直接关闭。
或者您可以去[讨论区](https://github.com/OpenListTeam/OpenList/discussions)。
options:
- label: |
我已确认阅读并同意 [AGPL-3.0 第15条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.) 。
本程序不提供任何明示或暗示的担保,使用风险由您自行承担。
- label: |
我已确认阅读并同意 [AGPL-3.0 第16条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.) 。
无论何种情况,版权持有人或其他分发者均不对使用本程序所造成的任何损失承担责任。
- label: |
我确认我的描述清晰,语法礼貌,能帮助开发者快速定位问题,并符合社区规则。
- label: |
我已确认阅读了[OpenList文档](https://doc.oplist.org)。
- label: |
我已确认没有重复的问题或讨论。
- label: |
我已确认是`OpenList`的问题,而不是其他原因(例如 [网络](https://doc.oplist.org/faq/howto#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host-1) `依赖`或`操作`)。
- label: |
我认为此问题必须由`OpenList`处理,而非第三方。
- label: |
我已确认这个问题在最新版本中没有被修复。
- type: input
id: version
attributes:
label: OpenList 版本(必填)
description: |
您使用的是哪个版本的软件?请不要使用`latest`或`master`作为答案。
placeholder: v4.xx.xx
validations:
required: true
- type: input
id: driver
attributes:
label: 使用的存储驱动(必填)
description: |
您使用的是哪个存储驱动?
placeholder: "例如: OneDrive"
validations:
required: true
- type: textarea
id: bug-description
attributes:
label: 问题描述(必填)
validations:
required: true
- type: textarea
id: config
attributes:
label: 配置文件内容(必填)
description: |
请提供您的`OpenList`应用的配置文件,并截图相关存储配置。(可隐藏隐私字段)
validations:
required: true
- type: textarea
id: logs
attributes:
label: 日志(可选)
description: |
请复制粘贴错误日志,或者截图。(可隐藏隐私字段) [查看方法](https://doc.oplist.org/faq/howto#%E5%A6%82%E4%BD%95%E5%BF%AB%E9%80%9F%E5%AE%9A%E4%BD%8Dbug)
- type: textarea
id: reproduction
attributes:
label: 复现链接(可选)
description: |
请提供能复现此问题的链接。


@ -1,81 +0,0 @@
name: "Bug Report"
description: Bug Report / Issue
title: "[BUG] Please modify the title to describe the issue you are facing"
labels: [bug]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to fill out this bug report.
Please **make sure** your issue is not a duplicate and is not caused by your own operation, network, or third-party software.
- type: checkboxes
attributes:
label: Please confirm the following
description: |
You must check all the following, otherwise your issue may be closed directly.
Or you can go to the [discussions](https://github.com/OpenListTeam/OpenList/discussions).
options:
- label: |
I have read and agree to [AGPL-3.0 Section 15](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.) .
The program is provided "as is" without any warranties; you bear all risks of using it.
- label: |
I have read and agree to [AGPL-3.0 Section 16](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.) .
The copyright holders and distributors are not liable for any damages resulting from the use or inability to use the program.
- label: |
I confirm my description is clear, polite, helps developers quickly locate the issue, and complies with community rules.
- label: |
I have read the [OpenList documentation](https://doc.oplist.org).
- label: |
I confirm there are no duplicate issues or discussions.
- label: |
I confirm this is an `OpenList` issue, not caused by other reasons (such as [network](https://doc.oplist.org/faq/howto#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host-1), dependencies, or operation).
- label: |
I believe this issue must be handled by `OpenList` and not by a third party.
- label: |
I confirm this issue is not fixed in the latest version.
- type: input
id: version
attributes:
label: OpenList Version (required)
description: |
What version of the software are you using? Please do not use `latest` or `master` as the answer.
placeholder: v4.xx.xx
validations:
required: true
- type: input
id: driver
attributes:
label: Storage Driver Used (required)
description: |
Which storage driver are you using?
placeholder: "e.g. OneDrive"
validations:
required: true
- type: textarea
id: bug-description
attributes:
label: Bug Description (required)
validations:
required: true
- type: textarea
id: config
attributes:
label: Configuration File Content (required)
description: |
Please provide your `OpenList` application's configuration file and a screenshot of the relevant storage configuration. (You may mask sensitive fields)
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs (optional)
description: |
Please copy and paste any relevant log output or screenshots. (You may mask sensitive fields) [Guide](https://doc.oplist.org/faq/howto#how-to-quickly-locate-bugs)
- type: textarea
id: reproduction
attributes:
label: Reproduction Link (optional)
description: |
Please provide a link to a repo or page that can reproduce this issue.


@ -1,48 +0,0 @@
name: "功能请求"
description: 功能请求 / 增强
title: "[Feature] 请修改标题为您的功能名称"
labels: [enhancement]
body:
- type: checkboxes
attributes:
label: 请确认以下事项
description: |
您必须勾选以下内容,否则您的问题可能会被直接关闭。
或者您可以去[讨论区](https://github.com/OpenListTeam/OpenList/discussions)。
options:
- label: |
我已确认阅读并同意 [AGPL-3.0 第15条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.) 。
本程序不提供任何明示或暗示的担保,使用风险由您自行承担。
- label: |
我已确认阅读并同意 [AGPL-3.0 第16条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.) 。
无论何种情况,版权持有人或其他分发者均不对使用本程序所造成的任何损失承担责任。
- label: |
我确认我的描述清晰,语法礼貌,能帮助开发者快速定位问题,并符合社区规则。
- label: |
我已确认阅读了[OpenList文档](https://doc.oplist.org)。
- label: |
我已确认没有重复的问题或讨论。
- label: |
我认为此问题必须由`OpenList`处理,而非第三方。
- label: |
我已确认此功能尚未被实现。
- label: |
我已确认此功能是合理的,且有普遍需求,并非我个人需要。
- type: textarea
id: feature-description
attributes:
label: 需求描述
validations:
required: true
- type: textarea
id: suggested-solution
attributes:
label: 实现思路
description: |
实现此需求的解决思路。
- type: textarea
id: additional-context
attributes:
label: 附加信息
description: |
相关的任何其他上下文或截图,或者你觉得有帮助的信息


@ -1,48 +0,0 @@
name: "Feature Request"
description: Feature Request / Enhancement
title: "[Feature] Please change the title to your feature name"
labels: [enhancement]
body:
- type: checkboxes
attributes:
label: Please confirm the following
description: |
You must check all the following, otherwise your request may be closed directly.
Or you can go to the [discussions](https://github.com/OpenListTeam/OpenList/discussions).
options:
- label: |
I have read and agree to [AGPL-3.0 Section 15](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.).
The program is provided "as is" without any warranties; you bear all risks of using it.
- label: |
I have read and agree to [AGPL-3.0 Section 16](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.).
The copyright holders and distributors are not liable for any damages resulting from the use or inability to use the program.
- label: |
I confirm my description is clear, polite, helps developers quickly locate the issue, and complies with community rules.
- label: |
I have read the [OpenList documentation](https://doc.oplist.org).
- label: |
I confirm there are no duplicate issues or discussions.
- label: |
I believe this issue must be handled by `OpenList` and not by a third party.
- label: |
I confirm this feature has not been implemented yet.
- label: |
I confirm this feature is reasonable and has general demand, not just my personal need.
- type: textarea
id: feature-description
attributes:
label: Feature Description
validations:
required: true
- type: textarea
id: suggested-solution
attributes:
label: Suggested Solution
description: |
Solution or approach to achieve this feature.
- type: textarea
id: additional-context
attributes:
label: Additional Information
description: |
Any other context or screenshots related to this feature request, or information you find helpful.

.github/ISSUE_TEMPLATE/bug_report.yml

@ -0,0 +1,81 @@
name: "Bug report"
description: Bug report
labels: [bug]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report, please **confirm that your issue is not a duplicate issue and not because of your operation or version issues**
感谢您花时间填写此错误报告,请**务必确认您的issue不是重复的且不是因为您的操作或版本问题**
- type: checkboxes
attributes:
label: Please make sure of the following things
description: |
You must check all the following, otherwise your issue may be closed directly. Or you can go to the [discussions](https://github.com/OpenListTeam/OpenList/discussions)
您必须勾选以下所有内容否则您的issue可能会被直接关闭。或者您可以去[讨论区](https://github.com/OpenListTeam/OpenList/discussions)
options:
- label: |
I have read the [documentation](https://openlistteam.github.io/docs).
我已经阅读了[文档](https://openlistteam.github.io/docs)。
- label: |
I'm sure there are no duplicate issues or discussions.
我确定没有重复的issue或讨论。
- label: |
I'm sure it's due to `OpenList` and not something else(such as [Network](https://openlistteam.github.io/docs/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`Dependencies` or `Operational`).
我确定是`OpenList`的问题,而不是其他原因(例如[网络](https://openlistteam.github.io/docs/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host)`依赖`或`操作`)。
- label: |
I'm sure this issue is not fixed in the latest version.
我确定这个问题在最新版本中没有被修复。
- type: input
id: version
attributes:
label: OpenList Version / OpenList 版本
description: |
What version of our software are you running? Do not use `latest` or `master` as an answer.
您使用的是哪个版本的软件?请不要使用`latest`或`master`作为答案。
placeholder: v3.xx.xx
validations:
required: true
- type: input
id: driver
attributes:
label: Driver used / 使用的存储驱动
description: |
What storage driver are you using?
您使用的是哪个存储驱动?
placeholder: "for example: Onedrive"
validations:
required: true
- type: textarea
id: bug-description
attributes:
label: Describe the bug / 问题描述
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: Reproduction / 复现链接
description: |
Please provide a link to a repo that can reproduce the problem you ran into. Please be aware that your issue may be closed directly if you don't provide it.
请提供能复现此问题的链接请知悉如果不提供它你的issue可能会被直接关闭。
validations:
required: true
- type: textarea
id: config
attributes:
label: Config / 配置
description: |
Please provide the configuration file of your `OpenList` application and take a screenshot of the relevant storage configuration. (hide privacy field)
请提供您的`OpenList`应用的配置文件,并截图相关存储配置。(隐藏隐私字段)
validations:
required: true
- type: textarea
id: logs
attributes:
label: Logs / 日志
description: |
Please copy and paste any relevant log output.
请复制粘贴错误日志,或者截图


@ -1,14 +1,5 @@
 blank_issues_enabled: true
 contact_links:
-  - name: 问题和讨论
-    url: https://github.com/OpenListTeam/OpenList/discussions
-    about: 讨论、问题、想法等
   - name: Questions & Discussions
     url: https://github.com/OpenListTeam/OpenList/discussions
-    about: Discuss issues, ideas, etc.
+    about: Use GitHub discussions for message-board style questions and discussions.
-  - name: 即时聊天
-    url: https://t.me/OpenListTeam
-    about: 与我们聊天
-  - name: Chat
-    url: https://t.me/OpenListTeam
-    about: Chat with us


@ -0,0 +1,33 @@
name: "Feature request"
description: Feature request
labels: [enhancement]
body:
- type: checkboxes
attributes:
label: Please make sure of the following things
description: You may select more than one, even select all.
options:
- label: I have read the [documentation](https://openlistteam.github.io/docs).
- label: I'm sure there are no duplicate issues or discussions.
- label: I'm sure this feature is not implemented.
- label: I'm sure it's a reasonable and popular requirement.
- type: textarea
id: feature-description
attributes:
label: Description of the feature / 需求描述
validations:
required: true
- type: textarea
id: suggested-solution
attributes:
label: Suggested solution / 实现思路
description: |
Solutions to achieve this requirement.
实现此需求的解决思路。
- type: textarea
id: additional-context
attributes:
label: Additional context / 附件
description: |
Any other context or screenshots about the feature request here, or information you find helpful.
相关的任何其他上下文或截图,或者你觉得有帮助的信息


@ -1,56 +0,0 @@
<!--
Provide a general summary of your changes in the Title above.
The PR title must start with `feat(): `, `docs(): `, `fix(): `, `style(): `, or `refactor(): `, `chore(): `. For example: `feat(component): add new feature`.
If it spans multiple components, use the main component as the prefix and enumerate in the title, describe in the body.
-->
<!--
在上方标题中提供您更改的总体摘要。
PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`
如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。
-->
## Description / 描述
<!-- Describe your changes in detail -->
<!-- 详细描述您的更改 -->
## Motivation and Context / 背景
<!-- Why is this change required? What problem does it solve? -->
<!-- 为什么需要此更改?它解决了什么问题? -->
<!-- If it fixes an open issue, please link to the issue here. -->
<!-- 如果修复了一个打开的issue请在此处链接到该issue -->
Closes #XXXX
<!-- or -->
<!-- 或者 -->
Relates to #XXXX
## How Has This Been Tested? / 测试
<!-- Please describe in detail how you tested your changes. -->
<!-- 请详细描述您如何测试更改 -->
## Checklist / 检查清单
<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!-- 检查以下所有要点,并在所有适用的框中打`x` -->
<!-- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
<!-- 如果您对其中任何一项不确定,请不要犹豫提问。我们会帮助您! -->
- [ ] I have read the [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) document.
我已阅读 [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) 文档。
- [ ] I have formatted my code with `go fmt` or [prettier](https://prettier.io/).
我已使用 `go fmt` 或 [prettier](https://prettier.io/) 格式化提交的代码。
- [ ] I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
我已为此 PR 添加了适当的标签(如无权限或需要的标签不存在,请在描述中说明,管理员将后续处理)。
- [ ] I have requested review from relevant code authors using the "Request review" feature when applicable.
我已在适当情况下使用"Request review"功能请求相关代码作者进行审查。
- [ ] I have updated the repository accordingly (If its needed).
我已相应更新了相关仓库(若适用)。
- [ ] [OpenList-Frontend](https://github.com/OpenListTeam/OpenList-Frontend) #XXXX
- [ ] [OpenList-Docs](https://github.com/OpenListTeam/OpenList-Docs) #XXXX

.github/config.yml

@ -0,0 +1,21 @@
# Configuration for welcome - https://github.com/behaviorbot/welcome
# Configuration for new-issue-welcome - https://github.com/behaviorbot/new-issue-welcome
# Comment to be posted to on first time issues
newIssueWelcomeComment: >
Thanks for opening your first issue here! Be sure to follow the issue template!
# Configuration for new-pr-welcome - https://github.com/behaviorbot/new-pr-welcome
# Comment to be posted to on PRs from first time contributors in your repository
newPRWelcomeComment: >
Thanks for opening this pull request! Please check out our contributing guidelines.
# Configuration for first-pr-merge - https://github.com/behaviorbot/first-pr-merge
# Comment to be posted to on pull requests merged by a first time user
firstPRMergeComment: >
Congrats on merging your first pull request! We here at behavior bot are proud of you!
# It is recommend to include as many gifs and emojis as possible

.github/stale.yml

@ -0,0 +1,21 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 44
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 20
# Issues with these labels will never be considered stale
exemptLabels:
- accepted
- security
- working
- pr-welcome
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
This issue was closed due to inactive more than 52 days. You can reopen or
recreate it if you think it should continue. Thank you for your contributions again.


@ -1,8 +1,8 @@
name: Beta Release builds name: beta release
on: on:
push: push:
branches: ["main"] branches: [ 'main' ]
workflow_dispatch: workflow_dispatch:
concurrency: concurrency:
@ -14,8 +14,12 @@ permissions:
jobs: jobs:
changelog: changelog:
strategy:
matrix:
platform: [ ubuntu-latest ]
go-version: [ '1.21' ]
name: Beta Release Changelog name: Beta Release Changelog
runs-on: ubuntu-latest runs-on: ${{ matrix.platform }}
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v4
@ -61,19 +65,17 @@ jobs:
strategy: strategy:
matrix: matrix:
include: include:
- target: "!(*musl*|*windows-arm64*|*windows7-*|*android*|*freebsd*)" # xgo and loongarch - target: '!(*musl*|*windows-arm64*|*android*|*freebsd*)' # xgo
hash: "md5" hash: "md5"
- target: "linux-!(arm*)-musl*" #musl-not-arm - target: 'linux-!(arm*)-musl*' #musl-not-arm
hash: "md5-linux-musl" hash: "md5-linux-musl"
- target: "linux-arm*-musl*" #musl-arm - target: 'linux-arm*-musl*' #musl-arm
hash: "md5-linux-musl-arm" hash: "md5-linux-musl-arm"
- target: "windows-arm64" #win-arm64 - target: 'windows-arm64' #win-arm64
hash: "md5-windows-arm64" hash: "md5-windows-arm64"
- target: "windows7-*" #win7 - target: 'android-*' #android
hash: "md5-windows7"
- target: "android-*" #android
hash: "md5-android" hash: "md5-android"
- target: "freebsd-*" #freebsd - target: 'freebsd-*' #freebsd
hash: "md5-freebsd" hash: "md5-freebsd"
name: Beta Release name: Beta Release
@ -87,29 +89,27 @@ jobs:
- name: Setup Go - name: Setup Go
uses: actions/setup-go@v5 uses: actions/setup-go@v5
with: with:
go-version: "1.24.5" go-version: '1.22'
- name: Setup web - name: Setup web
run: bash build.sh dev web run: bash build.sh dev web
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
- name: Build - name: Build
uses: OpenListTeam/cgo-actions@v1.2.2 uses: OpenListTeam/cgo-actions@v1.1.2
with: with:
targets: ${{ matrix.target }} targets: ${{ matrix.target }}
musl-target-format: $os-$musl-$arch musl-target-format: $os-$musl-$arch
github-token: ${{ secrets.GITHUB_TOKEN }}
out-dir: build out-dir: build
output: openlist-$target$ext output: openlist-$target$ext
musl-base-url: "https://github.com/OpenListTeam/musl-compilers/releases/latest/download/" musl-base-url: "https://github.com/OpenListTeam/musl-compilers/releases/latest/download/"
x-flags: | x-flags: |
github.com/OpenListTeam/OpenList/v4/internal/conf.BuiltAt=$built_at github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$built_at
github.com/OpenListTeam/OpenList/v4/internal/conf.GitAuthor=The OpenList Projects Contributors <noreply@openlist.team> github.com/OpenListTeam/OpenList/internal/conf.GitAuthor=OpenList
github.com/OpenListTeam/OpenList/v4/internal/conf.GitCommit=$git_commit github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$git_commit
github.com/OpenListTeam/OpenList/v4/internal/conf.Version=$tag github.com/OpenListTeam/OpenList/internal/conf.Version=$tag
github.com/OpenListTeam/OpenList/v4/internal/conf.WebVersion=rolling github.com/OpenListTeam/OpenList/internal/conf.WebVersion=dev
- name: Compress - name: Compress
run: | run: |
@ -117,7 +117,7 @@ jobs:
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# See above # See above
- name: Upload assets to beta release - name: Upload assets to beta release
uses: softprops/action-gh-release@v2 uses: softprops/action-gh-release@v2
with: with:
@ -141,3 +141,33 @@ jobs:
path: ${{ github.workspace }}/build/compress/* path: ${{ github.workspace }}/build/compress/*
compression-level: 0 compression-level: 0
if-no-files-found: error # 'warn' or 'ignore' are also available, defaults to `warn` if-no-files-found: error # 'warn' or 'ignore' are also available, defaults to `warn`
# TODO: We do not have desktop clients right now. We may need a better way to
# trigger the build of desktop client when we actually have it.
# desktop:
# needs:
# - release
# name: Beta Release Desktop
# runs-on: ubuntu-latest
# steps:
# - name: Checkout repo
# uses: actions/checkout@v4
# with:
# repository: openlistteam/desktop-release
# ref: main
# persist-credentials: false
# fetch-depth: 0
# - name: Commit
# run: |
# git config --local user.email "bot@nn.ci"
# git config --local user.name "IlaBot"
# git commit --allow-empty -m "Trigger build for ${{ github.sha }}"
# - name: Push commit
# uses: ad-m/github-push-action@master
# with:
# github_token: ${{ secrets.MY_TOKEN }}
# branch: main
# repository: openlistteam/desktop-release


@ -1,8 +1,10 @@
-name: Test Build
+name: build
 on:
+  push:
+    branches: [ 'main' ]
   pull_request:
-    branches: ["main"]
+    branches: [ 'main' ]
   workflow_dispatch:
 concurrency:
@ -13,7 +15,8 @@ jobs:
   build:
     strategy:
       matrix:
-        target:
+        platform: [ubuntu-latest]
+        target:
           - darwin-amd64
           - darwin-arm64
           - windows-amd64
@ -21,9 +24,10 @@
           - linux-amd64-musl
           - windows-arm64
           - android-arm64
-    name: Build ${{ matrix.target }}
-    runs-on: ubuntu-latest
+    name: Build
+    runs-on: ${{ matrix.platform }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@ -33,31 +37,28 @@
       - name: Setup Go
         uses: actions/setup-go@v5
         with:
-          go-version: "1.24.5"
+          go-version: '1.22'
       - name: Setup web
         run: bash build.sh dev web
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
       - name: Build
-        uses: OpenListTeam/cgo-actions@v1.2.2
+        uses: OpenListTeam/cgo-actions@v1.1.2
         with:
           targets: ${{ matrix.target }}
           musl-target-format: $os-$musl-$arch
-          github-token: ${{ secrets.GITHUB_TOKEN }}
           out-dir: build
           x-flags: |
-            github.com/OpenListTeam/OpenList/v4/internal/conf.BuiltAt=$built_at
-            github.com/OpenListTeam/OpenList/v4/internal/conf.GitAuthor=The OpenList Projects Contributors <noreply@openlist.team>
-            github.com/OpenListTeam/OpenList/v4/internal/conf.GitCommit=$git_commit
-            github.com/OpenListTeam/OpenList/v4/internal/conf.Version=$tag
-            github.com/OpenListTeam/OpenList/v4/internal/conf.WebVersion=rolling
-          output: openlist$ext
+            github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$built_at
+            github.com/OpenListTeam/OpenList/internal/conf.GitAuthor=OpenList
+            github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$git_commit
+            github.com/OpenListTeam/OpenList/internal/conf.Version=$tag
+            github.com/OpenListTeam/OpenList/internal/conf.WebVersion=dev
       - name: Upload artifact
         uses: actions/upload-artifact@v4
         with:
-          name: openlist_${{ steps.short-sha.outputs.sha }}_${{ matrix.target }}
+          name: openlist_${{ env.SHA }}_${{ matrix.target }}
           path: build/*
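
For context, the `x-flags` values above (on both sides of the diff) are version strings injected into the binary at link time; they correspond to Go's standard `-X importpath.name=value` linker flags. A minimal local sketch, assuming the module path used on the head side of this diff and placeholder values:

```shell
# Hypothetical manual equivalent of the workflow's x-flags; not the exact CI invocation.
built_at=$(date +'%F %T %z')
git_commit=$(git rev-parse --short HEAD)
go build -o openlist -ldflags "\
  -X 'github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$built_at' \
  -X 'github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$git_commit' \
  -X 'github.com/OpenListTeam/OpenList/internal/conf.Version=dev'"
```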


@ -1,13 +1,10 @@
-name: Release Automatic changelog
+name: auto changelog
 on:
   push:
     tags:
       - 'v*'
-permissions:
-  contents: write
 jobs:
   changelog:
     name: Create Release
@ -24,4 +21,4 @@ jobs:
       - run: npx changelogithub # or changelogithub@0.12 if ensure the stable result
         env:
-          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
+          GITHUB_TOKEN: ${{secrets.MY_TOKEN}}


@ -0,0 +1,22 @@
name: Close need info
on:
schedule:
- cron: "0 0 */1 * *"
workflow_dispatch:
jobs:
close-need-info:
runs-on: ubuntu-latest
steps:
- name: close-issues
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issues'
token: ${{ secrets.GITHUB_TOKEN }}
labels: 'question'
inactive-day: 3
close-reason: 'not_planned'
body: |
Hello @${{ github.event.issue.user.login }}, this issue was closed due to no activities in 3 days.
你好 @${{ github.event.issue.user.login }}此issue因超过3天未回复被关闭。

.github/workflows/issue_close_stale.yml

@ -0,0 +1,21 @@
name: Close inactive
on:
schedule:
- cron: "0 0 */7 * *"
workflow_dispatch:
jobs:
close-inactive:
runs-on: ubuntu-latest
steps:
- name: close-issues
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issues'
token: ${{ secrets.GITHUB_TOKEN }}
labels: 'stale'
inactive-day: 8
close-reason: 'not_planned'
body: |
Hello @${{ github.event.issue.user.login }}, this issue was closed due to inactive more than 52 days. You can reopen or recreate it if you think it should continue. Thank you for your contributions again.

.github/workflows/issue_duplicate.yml

@ -0,0 +1,25 @@
name: Issue Duplicate
on:
issues:
types: [labeled]
jobs:
create-comment:
runs-on: ubuntu-latest
if: github.event.label.name == 'duplicate'
steps:
- name: Create comment
uses: actions-cool/issues-helper@v3
with:
actions: 'create-comment'
token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }}
body: |
Hello @${{ github.event.issue.user.login }}, your issue is a duplicate and will be closed.
你好 @${{ github.event.issue.user.login }}你的issue是重复的将被关闭。
- name: Close issue
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issue'
token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/issue_invalid.yml

@ -0,0 +1,25 @@
name: Issue Invalid
on:
issues:
types: [labeled]
jobs:
create-comment:
runs-on: ubuntu-latest
if: github.event.label.name == 'invalid'
steps:
- name: Create comment
uses: actions-cool/issues-helper@v3
with:
actions: 'create-comment'
token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }}
body: |
Hello @${{ github.event.issue.user.login }}, your issue is invalid and will be closed.
你好 @${{ github.event.issue.user.login }}你的issue无效将被关闭。
- name: Close issue
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issue'
token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/issue_on_close.yml

@ -0,0 +1,17 @@
name: Remove working label when issue closed
on:
issues:
types: [closed]
jobs:
rm-working:
runs-on: ubuntu-latest
steps:
- name: Remove working label
uses: actions-cool/issues-helper@v3
with:
actions: 'remove-labels'
token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }}
labels: 'working,pr-welcome'


@ -1,61 +0,0 @@
name: Issue or PR Auto Reply
on:
issues:
types: [opened]
pull_request:
types: [opened]
permissions:
issues: write
pull-requests: write
jobs:
auto-reply:
runs-on: ubuntu-latest
if: github.event_name == 'issues'
steps:
- name: Check issue for unchecked tasks and reply
uses: actions/github-script@v7
with:
script: |
const issueBody = context.payload.issue.body || "";
const unchecked = /- \[ \] /.test(issueBody);
let comment = "感谢您联系OpenList。我们会尽快回复您。\n";
comment += "Thanks for contacting OpenList. We will reply to you as soon as possible.\n\n";
if (unchecked) {
comment += "由于您提出的 Issue 中包含部分未确认的项目,为了更好地管理项目,在人工审核后可能会直接关闭此问题。\n";
comment += "如果您能确认并补充相关未确认项目的信息,欢迎随时重新提交。我们会及时关注并处理。感谢您的理解与支持!\n";
comment += "Since your issue contains some unchecked tasks, it may be closed after manual review.\n";
comment += "If you can confirm and provide information for the unchecked tasks, feel free to resubmit.\n";
comment += "We will pay attention and handle it in a timely manner.\n\n";
comment += "感谢您的理解与支持!\n";
comment += "Thank you for your understanding and support!\n";
}
await github.rest.issues.createComment({
...context.repo,
issue_number: context.issue.number,
body: comment
});
pr-title-check:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
steps:
- name: Check PR title for required prefix and comment
uses: actions/github-script@v7
with:
script: |
const title = context.payload.pull_request.title || "";
const ok = /^(feat|docs|fix|style|refactor|chore)\(.+?\): /i.test(title);
if (!ok) {
let comment = "⚠️ PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`。\n";
comment += "⚠️ The PR title must start with `feat(): `, `docs(): `, `fix(): `, `style(): `, or `refactor(): `, `chore(): `. For example: `feat(component): add new feature`.\n\n";
comment += "如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。\n";
comment += "If it spans multiple components, use the main component as the prefix and enumerate in the title, describe in the body.\n\n";
await github.rest.issues.createComment({
...context.repo,
issue_number: context.issue.number,
body: comment
});
}

.github/workflows/issue_question.yml

@ -0,0 +1,20 @@
name: Issue Question
on:
issues:
types: [labeled]
jobs:
create-comment:
runs-on: ubuntu-latest
if: github.event.label.name == 'question'
steps:
- name: Create comment
uses: actions-cool/issues-helper@v3.6.0
with:
actions: 'create-comment'
token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }}
body: |
Hello @${{ github.event.issue.user.login }}, please input issue by template and add detail. Issues labeled by `question` will be closed if no activities in 3 days.
你好 @${{ github.event.issue.user.login }}请按照issue模板填写, 并详细说明问题/日志记录/复现步骤/复现链接/实现思路或提供更多信息等, 3天内未回复issue自动关闭。

.github/workflows/issue_similarity.yml

@ -0,0 +1,19 @@
name: Issues Similarity Analysis
on:
issues:
types: [opened, edited]
jobs:
similarity-analysis:
runs-on: ubuntu-latest
steps:
- name: analysis
uses: actions-cool/issues-similarity-analysis@v1
with:
filter-threshold: 0.5
comment-title: '### See'
comment-body: '${index}. ${similarity} #${number}'
show-footer: false
show-mentioned: true
since-days: 730

.github/workflows/issue_translate.yml

@ -0,0 +1,13 @@
name: Translation Helper
on:
pull_request_target:
types: [opened]
issues:
types: [opened]
jobs:
translate:
runs-on: ubuntu-latest
steps:
- uses: actions-cool/translation-helper@v1.2.0

.github/workflows/issue_wontfix.yml

@ -0,0 +1,25 @@
name: Issue Wontfix
on:
issues:
types: [labeled]
jobs:
lock-issue:
runs-on: ubuntu-latest
if: github.event.label.name == 'wontfix'
steps:
- name: Create comment
uses: actions-cool/issues-helper@v3
with:
actions: 'create-comment'
token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }}
body: |
Hello @${{ github.event.issue.user.login }}, this issue will not be worked on and will be closed.
你好 @${{ github.event.issue.user.login }},这不会被处理,将被关闭。
- name: Close issue
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issue'
token: ${{ secrets.GITHUB_TOKEN }}


@ -1,41 +1,29 @@
name: Release builds name: release
on: on:
release: release:
types: [ published ] types: [ published ]
permissions: write-all
permissions:
contents: write
jobs: jobs:
# Set release to prerelease first
prerelease:
name: Set Prerelease
runs-on: ubuntu-latest
steps:
- name: Prerelease
uses: irongut/EditRelease@v1.2.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
id: ${{ github.event.release.id }}
prerelease: true
# Main release job for all platforms
release: release:
needs: prerelease
strategy: strategy:
matrix: matrix:
build-type: [ 'standard', 'lite' ] platform: [ ubuntu-latest ]
target-platform: [ '', 'android', 'freebsd', 'linux_musl', 'linux_musl_arm' ] go-version: [ '1.21' ]
name: Release ${{ matrix.target-platform && format('{0} ', matrix.target-platform) || '' }}${{ matrix.build-type == 'lite' && 'Lite' || '' }} name: Release
runs-on: ubuntu-latest runs-on: ${{ matrix.platform }}
steps: steps:
- name: Free Disk Space (Ubuntu) - name: Free Disk Space (Ubuntu)
if: matrix.target-platform == ''
uses: jlumbroso/free-disk-space@main uses: jlumbroso/free-disk-space@main
with: with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: false tool-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true android: true
dotnet: true dotnet: true
haskell: true haskell: true
@ -43,10 +31,17 @@ jobs:
docker-images: true docker-images: true
swap-storage: true swap-storage: true
- name: Prerelease
uses: irongut/EditRelease@v1.2.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
id: ${{ github.event.release.id }}
prerelease: true
- name: Setup Go - name: Setup Go
uses: actions/setup-go@v5 uses: actions/setup-go@v5
with: with:
go-version: '1.24' go-version: ${{ matrix.go-version }}
- name: Checkout - name: Checkout
uses: actions/checkout@v4 uses: actions/checkout@v4
@ -54,7 +49,6 @@ jobs:
fetch-depth: 0 fetch-depth: 0
- name: Install dependencies - name: Install dependencies
if: matrix.target-platform == ''
run: | run: |
sudo snap install zig --classic --beta sudo snap install zig --classic --beta
docker pull crazymax/xgo:latest docker pull crazymax/xgo:latest
@ -63,10 +57,9 @@ jobs:
- name: Build - name: Build
run: | run: |
bash build.sh release ${{ matrix.build-type == 'lite' && 'lite' || '' }} ${{ matrix.target-platform }} bash build.sh release
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
- name: Upload assets - name: Upload assets
uses: softprops/action-gh-release@v2 uses: softprops/action-gh-release@v2
@ -74,3 +67,32 @@ jobs:
files: build/compress/* files: build/compress/*
prerelease: false prerelease: false
# TODO: We do not have desktop clients right now. We may need a better way to
# trigger the build of desktop client when we actually have it.
# release_desktop:
# needs: release
# name: Release desktop
# runs-on: ubuntu-latest
# steps:
# - name: Checkout repo
# uses: actions/checkout@v4
# with:
# repository: openlistteam/desktop-release
# ref: main
# persist-credentials: false
# fetch-depth: 0
# - name: Add tag
# run: |
# git config --local user.email "bot@nn.ci"
# git config --local user.name "IlaBot"
# version=$(wget -qO- -t1 -T2 "https://api.github.com/repos/openlistteam/openlist/releases/latest" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
# git tag -a $version -m "release $version"
# - name: Push tags
# uses: ad-m/github-push-action@master
# with:
# github_token: ${{ secrets.MY_TOKEN }}
# branch: main
# repository: openlistteam/desktop-release

.github/workflows/release_android.yml

@ -0,0 +1,38 @@
name: release_android
on:
release:
types: [ published ]
permissions: write-all
jobs:
release_android:
strategy:
matrix:
platform: [ ubuntu-latest ]
go-version: [ '1.21' ]
name: Release
runs-on: ${{ matrix.platform }}
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build
run: |
bash build.sh release android
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload assets
uses: softprops/action-gh-release@v2
with:
files: build/compress/*


@ -1,20 +1,7 @@
name: Release builds (Docker) name: release_docker
on: on:
workflow_dispatch: workflow_dispatch:
inputs:
manual_tag:
description: 'Tag name (like v0.1.0). Required if as_latest is true.'
required: false
type: string
as_latest:
description: 'Tag as latest?'
required: true
default: 'false'
type: choice
options:
- 'true'
- 'false'
push: push:
tags: tags:
- 'v*' - 'v*'
@ -24,18 +11,18 @@ concurrency:
cancel-in-progress: true cancel-in-progress: true
env: env:
DOCKERHUB_ORG_NAME: ${{ vars.DOCKERHUB_ORG_NAME || 'openlistteam' }} ORG_NAME: openlistteam
GHCR_ORG_NAME: ${{ vars.GHCR_ORG_NAME || 'openlistteam' }}
IMAGE_NAME: openlist-git IMAGE_NAME: openlist-git
IMAGE_NAME_DOCKERHUB: openlist IMAGE_NAME_DOCKERHUB: openlist
REGISTRY: ghcr.io REGISTRY: ghcr.io
ARTIFACT_NAME: 'binaries_docker_release' ARTIFACT_NAME: 'binaries_docker_release'
ARTIFACT_NAME_LITE: 'binaries_docker_release_lite' RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/ppc64le,linux/riscv64,linux/loong64' ### Temporarily disable Docker builds for linux/s390x architectures for unknown reasons. IMAGE_PUSH: ${{ github.event_name == 'push' }}
IMAGE_PUSH: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }} IMAGE_IS_PROD: ${{ github.ref_type == 'tag' }}
IMAGE_TAGS_BETA: |
permissions: type=raw,value=beta,enable={{is_default_branch}}
packages: write
permissions: write-all
jobs: jobs:
build_binary: build_binary:
@ -62,11 +49,17 @@ jobs:
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build go binary (beta)
if: env.IMAGE_IS_PROD != 'true'
run: bash build.sh beta docker-multiplatform
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build go binary (release) - name: Build go binary (release)
if: env.IMAGE_IS_PROD == 'true'
run: bash build.sh release docker-multiplatform run: bash build.sh release docker-multiplatform
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
@ -78,46 +71,6 @@ jobs:
!build/*.tgz !build/*.tgz
!build/musl-libs/** !build/musl-libs/**
build_binary_lite:
name: Build Binaries for Docker Release (Lite)
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: Cache Musl
id: cache-musl
uses: actions/cache@v4
with:
path: build/musl-libs
key: docker-musl-libs-v2
- name: Download Musl Library
if: steps.cache-musl.outputs.cache-hit != 'true'
run: bash build.sh prepare lite docker-multiplatform
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build go binary (release)
run: bash build.sh release lite docker-multiplatform
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ env.ARTIFACT_NAME_LITE }}
overwrite: true
path: |
build/
!build/*.tgz
!build/musl-libs/**
release_docker: release_docker:
needs: build_binary needs: build_binary
name: Release Docker image name: Release Docker image
@ -127,19 +80,15 @@ jobs:
image: ["latest", "ffmpeg", "aria2", "aio"] image: ["latest", "ffmpeg", "aria2", "aio"]
include: include:
- image: "latest" - image: "latest"
base_image_tag: "base"
build_arg: "" build_arg: ""
tag_favor: "" tag_favor: ""
- image: "ffmpeg" - image: "ffmpeg"
base_image_tag: "ffmpeg"
build_arg: INSTALL_FFMPEG=true build_arg: INSTALL_FFMPEG=true
tag_favor: "suffix=-ffmpeg,onlatest=true" tag_favor: "suffix=-ffmpeg,onlatest=true"
- image: "aria2" - image: "aria2"
base_image_tag: "aria2"
build_arg: INSTALL_ARIA2=true build_arg: INSTALL_ARIA2=true
tag_favor: "suffix=-aria2,onlatest=true" tag_favor: "suffix=-aria2,onlatest=true"
- image: "aio" - image: "aio"
base_image_tag: "aio"
build_arg: | build_arg: |
INSTALL_FFMPEG=true INSTALL_FFMPEG=true
INSTALL_ARIA2=true INSTALL_ARIA2=true
@ -170,7 +119,7 @@ jobs:
if: env.IMAGE_PUSH == 'true' if: env.IMAGE_PUSH == 'true'
uses: docker/login-action@v3 uses: docker/login-action@v3
with: with:
username: ${{ vars.DOCKERHUB_ORG_NAME_BACKUP || env.DOCKERHUB_ORG_NAME }} username: ${{ env.ORG_NAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }} password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Docker meta - name: Docker meta
@ -178,14 +127,11 @@ jobs:
uses: docker/metadata-action@v5 uses: docker/metadata-action@v5
with: with:
images: | images: |
${{ env.REGISTRY }}/${{ env.GHCR_ORG_NAME }}/${{ env.IMAGE_NAME }} ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}
${{ env.DOCKERHUB_ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }} ${{ env.ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }}
tags: > tags: ${{ env.IMAGE_IS_PROD == 'true' && '' || env.IMAGE_TAGS_BETA }}
${{ github.event_name == 'workflow_dispatch'
&& format('type=raw,value={0}', github.event.inputs.manual_tag)
|| format('type=raw,value={0}', github.ref_name) }}
flavor: | flavor: |
latest=${{ github.event_name == 'push' || github.event.inputs.as_latest == 'true' }} ${{ env.IMAGE_IS_PROD == 'true' && 'latest=true' || '' }}
${{ matrix.tag_favor }} ${{ matrix.tag_favor }}
- name: Build and push - name: Build and push
@ -195,93 +141,7 @@ jobs:
context: . context: .
file: Dockerfile.ci file: Dockerfile.ci
push: ${{ env.IMAGE_PUSH == 'true' }} push: ${{ env.IMAGE_PUSH == 'true' }}
build-args: | build-args: ${{ matrix.build_arg }}
BASE_IMAGE_TAG=${{ matrix.base_image_tag }}
${{ matrix.build_arg }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: ${{ env.RELEASE_PLATFORMS }}
release_docker_lite:
needs: build_binary_lite
name: Release Docker image (Lite)
runs-on: ubuntu-latest
strategy:
matrix:
image: ["latest", "ffmpeg", "aria2", "aio"]
include:
- image: "latest"
base_image_tag: "base"
build_arg: ""
tag_favor: "suffix=-lite,onlatest=true"
- image: "ffmpeg"
base_image_tag: "ffmpeg"
build_arg: INSTALL_FFMPEG=true
tag_favor: "suffix=-lite-ffmpeg,onlatest=true"
- image: "aria2"
base_image_tag: "aria2"
build_arg: INSTALL_ARIA2=true
tag_favor: "suffix=-lite-aria2,onlatest=true"
- image: "aio"
base_image_tag: "aio"
build_arg: |
INSTALL_FFMPEG=true
INSTALL_ARIA2=true
tag_favor: "suffix=-lite-aio,onlatest=true"
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: ${{ env.ARTIFACT_NAME_LITE }}
path: 'build/'
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
if: env.IMAGE_PUSH == 'true'
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to DockerHub Container Registry
if: env.IMAGE_PUSH == 'true'
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_ORG_NAME_BACKUP || env.DOCKERHUB_ORG_NAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY }}/${{ env.GHCR_ORG_NAME }}/${{ env.IMAGE_NAME }}
${{ env.DOCKERHUB_ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }}
tags: >
${{ github.event_name == 'workflow_dispatch'
&& format('type=raw,value={0}', github.event.inputs.manual_tag)
|| format('type=raw,value={0}', github.ref_name) }}
flavor: |
latest=${{ github.event_name == 'push' || github.event.inputs.as_latest == 'true' }}
${{ matrix.tag_favor }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: ${{ env.IMAGE_PUSH == 'true' }}
build-args: |
BASE_IMAGE_TAG=${{ matrix.base_image_tag }}
${{ matrix.build_arg }}
tags: ${{ steps.meta.outputs.tags }} tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }} labels: ${{ steps.meta.outputs.labels }}
platforms: ${{ env.RELEASE_PLATFORMS }} platforms: ${{ env.RELEASE_PLATFORMS }}

.github/workflows/release_freebsd.yml

@ -0,0 +1,38 @@
name: release_freebsd
on:
release:
types: [ published ]
permissions: write-all
jobs:
release_freebsd:
strategy:
matrix:
platform: [ ubuntu-latest ]
go-version: [ '1.21' ]
name: Release
runs-on: ${{ matrix.platform }}
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build
run: |
bash build.sh release freebsd
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload assets
uses: softprops/action-gh-release@v2
with:
files: build/compress/*


@ -0,0 +1,36 @@
name: release_linux_musl
on:
release:
types: [ published ]
permissions: write-all
jobs:
release_linux_musl:
strategy:
matrix:
platform: [ ubuntu-latest ]
go-version: [ '1.21' ]
name: Release
runs-on: ${{ matrix.platform }}
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build
run: |
bash build.sh release linux_musl
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload assets
uses: softprops/action-gh-release@v2
with:
files: build/compress/*


@ -0,0 +1,37 @@
name: release_linux_musl_arm
on:
release:
types: [ published ]
permissions: write-all
jobs:
release_linux_musl_arm:
strategy:
matrix:
platform: [ ubuntu-latest ]
go-version: [ '1.21' ]
name: Release
runs-on: ${{ matrix.platform }}
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build
run: |
bash build.sh release linux_musl_arm
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload assets
uses: softprops/action-gh-release@v2
with:
files: build/compress/*


@ -1,38 +0,0 @@
name: Sync to Gitee
on:
push:
branches:
- main
workflow_dispatch:
jobs:
sync:
runs-on: ubuntu-latest
name: Sync GitHub to Gitee
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup SSH
run: |
mkdir -p ~/.ssh
echo "${{ secrets.GITEE_SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan gitee.com >> ~/.ssh/known_hosts
- name: Create single commit and push
run: |
git config user.name "GitHub Actions"
git config user.email "actions@github.com"
# Create a new branch
git checkout --orphan new-main
git add .
git commit -m "Sync from GitHub: $(date)"
# Add Gitee remote and force push
git remote add gitee ${{ vars.GITEE_REPO_URL }}
git push --force gitee new-main:main


@ -1,4 +1,4 @@
name: Beta Release (Docker) name: test_docker
on: on:
workflow_dispatch: workflow_dispatch:
@ -14,13 +14,12 @@ concurrency:
cancel-in-progress: true cancel-in-progress: true
env: env:
DOCKERHUB_ORG_NAME: ${{ vars.DOCKERHUB_ORG_NAME || 'openlistteam' }} ORG_NAME: openlistteam
GHCR_ORG_NAME: ${{ vars.GHCR_ORG_NAME || 'openlistteam' }}
IMAGE_NAME: openlist-git IMAGE_NAME: openlist-git
IMAGE_NAME_DOCKERHUB: openlist IMAGE_NAME_DOCKERHUB: openlist
REGISTRY: ghcr.io REGISTRY: ghcr.io
ARTIFACT_NAME: 'binaries_docker_release' ARTIFACT_NAME: 'binaries_docker_release'
RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/ppc64le,linux/riscv64,linux/loong64' ### Temporarily disable Docker builds for linux/s390x architectures for unknown reasons. RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
IMAGE_PUSH: ${{ github.event_name == 'push' }} IMAGE_PUSH: ${{ github.event_name == 'push' }}
IMAGE_TAGS_BETA: | IMAGE_TAGS_BETA: |
type=ref,event=pr type=ref,event=pr
@ -28,7 +27,7 @@ env:
jobs: jobs:
build_binary: build_binary:
name: Build Binaries for Docker Release (Beta) name: Build Binaries for Docker Release
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
@ -55,7 +54,6 @@ jobs:
run: bash build.sh beta docker-multiplatform run: bash build.sh beta docker-multiplatform
env: env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
FRONTEND_REPO: ${{ vars.FRONTEND_REPO }}
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
@ -69,28 +67,25 @@ jobs:
release_docker: release_docker:
needs: build_binary needs: build_binary
name: Release Docker image (Beta) name: Release Docker image
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions: permissions:
contents: read
packages: write packages: write
strategy: strategy:
matrix: matrix:
image: ["latest", "ffmpeg", "aria2", "aio"] image: ["latest", "ffmpeg", "aria2", "aio"]
include: include:
- image: "latest" - image: "latest"
base_image_tag: "base"
build_arg: "" build_arg: ""
tag_favor: "" tag_favor: ""
- image: "ffmpeg" - image: "ffmpeg"
base_image_tag: "ffmpeg"
build_arg: INSTALL_FFMPEG=true build_arg: INSTALL_FFMPEG=true
tag_favor: "suffix=-ffmpeg,onlatest=true" tag_favor: "suffix=-ffmpeg,onlatest=true"
- image: "aria2" - image: "aria2"
base_image_tag: "aria2"
build_arg: INSTALL_ARIA2=true build_arg: INSTALL_ARIA2=true
tag_favor: "suffix=-aria2,onlatest=true" tag_favor: "suffix=-aria2,onlatest=true"
- image: "aio" - image: "aio"
base_image_tag: "aio"
build_arg: | build_arg: |
INSTALL_FFMPEG=true INSTALL_FFMPEG=true
INSTALL_ARIA2=true INSTALL_ARIA2=true
@ -121,7 +116,7 @@ jobs:
if: env.IMAGE_PUSH == 'true' if: env.IMAGE_PUSH == 'true'
uses: docker/login-action@v3 uses: docker/login-action@v3
with: with:
username: ${{ vars.DOCKERHUB_ORG_NAME_BACKUP || env.DOCKERHUB_ORG_NAME }} username: ${{ env.ORG_NAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }} password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Docker meta - name: Docker meta
@ -129,8 +124,8 @@ jobs:
uses: docker/metadata-action@v5 uses: docker/metadata-action@v5
with: with:
images: | images: |
${{ env.REGISTRY }}/${{ env.GHCR_ORG_NAME }}/${{ env.IMAGE_NAME }} ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}
${{ env.DOCKERHUB_ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }} ${{ env.ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }}
tags: ${{ env.IMAGE_TAGS_BETA }} tags: ${{ env.IMAGE_TAGS_BETA }}
flavor: | flavor: |
${{ matrix.tag_favor }} ${{ matrix.tag_favor }}
@ -142,9 +137,7 @@ jobs:
context: . context: .
file: Dockerfile.ci file: Dockerfile.ci
push: ${{ env.IMAGE_PUSH == 'true' }} push: ${{ env.IMAGE_PUSH == 'true' }}
build-args: | build-args: ${{ matrix.build_arg }}
BASE_IMAGE_TAG=${{ matrix.base_image_tag }}
${{ matrix.build_arg }}
tags: ${{ steps.meta.outputs.tags }} tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }} labels: ${{ steps.meta.outputs.labels }}
platforms: ${{ env.RELEASE_PLATFORMS }} platforms: ${{ env.RELEASE_PLATFORMS }}


@ -1,40 +0,0 @@
name: Trigger OpenWRT Update
on:
push:
tags:
- 'v*'
workflow_dispatch:
inputs:
tag:
description: 'Release tag to trigger update for'
required: true
type: string
jobs:
trigger-makefile-update:
runs-on: ubuntu-latest
steps:
- name: Trigger Makefile hash update
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.EXTERNAL_REPO_TOKEN_LUCI_APP_OPENLIST }}
repository: ${{ vars.HOOK_REPO || 'OpenListTeam/OpenList-OpenWRT' }}
event-type: update-hashes
client-payload: |
{
"source_repository": "${{ github.repository }}",
"release_tag": "${{ inputs.tag || github.ref_name }}",
"release_name": "${{ inputs.tag || github.ref_name }}",
"release_url": "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ inputs.tag || github.ref_name }}",
"triggered_by": "${{ github.actor }}",
"trigger_reason": "${{ github.event_name }}"
}
- name: Log trigger information
run: |
echo "🚀 Successfully triggered Makefile hash update"
echo "📦 Target repository: OpenListTeam/luci-app-openlist"
echo "🏷️ Tag: ${{ inputs.tag || github.ref_name }}"
echo "👤 Triggered by: ${{ github.actor }}"
echo "📅 Trigger time: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"


@ -2,76 +2,106 @@
 ## Setup your machine
-`OpenList` is written in [Go](https://golang.org/) and [SolidJS](https://www.solidjs.com/).
+`OpenList` is written in [Go](https://golang.org/) and [React](https://reactjs.org/).
 Prerequisites:
 - [git](https://git-scm.com)
-- [Go 1.24+](https://golang.org/doc/install)
+- [Go 1.20+](https://golang.org/doc/install)
 - [gcc](https://gcc.gnu.org/)
 - [nodejs](https://nodejs.org/)
-## Cloning a fork
-Fork and clone `OpenList` and `OpenList-Frontend` anywhere:
+Clone `OpenList` and `OpenList-Frontend` anywhere:
 ```shell
-$ git clone https://github.com/<your-username>/OpenList.git
-$ git clone --recurse-submodules https://github.com/<your-username>/OpenList-Frontend.git
+$ git clone https://github.com/OpenListTeam/OpenList.git
+$ git clone --recurse-submodules https://github.com/OpenListTeam/OpenList-Frontend.git
 ```
-## Creating a branch
-Create a new branch from the `main` branch, with an appropriate name.
-```shell
-$ git checkout -b <branch-name>
-```
+You should switch to the `main` branch for development.
 ## Preview your change
 ### backend
 ```shell
 $ go run main.go
 ```
 ### frontend
 ```shell
 $ pnpm dev
 ```
 ## Add a new driver
 Copy `drivers/template` folder and rename it, and follow the comments in it.
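
A minimal sketch of that step, assuming a hypothetical driver name `mydriver`:

```shell
$ cp -r drivers/template drivers/mydriver
$ grep -rl template drivers/mydriver   # lists the files whose package name and driver config still need renaming
```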
 ## Create a commit
 Commit messages should be well formatted, and to make that "standardized".
-Submit your pull request. For PR titles, follow [Conventional Commits](https://www.conventionalcommits.org).
-https://github.com/OpenListTeam/OpenList/issues/376
-It's suggested to sign your commits. See: [How to sign commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
+### Commit Message Format
+Each commit message consists of a **header**, a **body** and a **footer**. The header has a special
+format that includes a **type**, a **scope** and a **subject**:
+```
+<type>(<scope>): <subject>
+<BLANK LINE>
+<body>
+<BLANK LINE>
+<footer>
+```
+The **header** is mandatory and the **scope** of the header is optional.
Any line of the commit message cannot be longer than 100 characters! This allows the message to be easier
to read on GitHub as well as in various git tools.
### Revert
If the commit reverts a previous commit, it should begin with `revert: `, followed by the header
of the reverted commit.
In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit
being reverted.
### Type
Must be one of the following:
* **feat**: A new feature
* **fix**: A bug fix
* **docs**: Documentation only changes
* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing
semi-colons, etc)
* **refactor**: A code change that neither fixes a bug nor adds a feature
* **perf**: A code change that improves performance
* **test**: Adding missing or correcting existing tests
* **build**: Affects project builds or dependency modifications
* **revert**: Restore the previous commit
* **ci**: Continuous integration of related file modifications
* **chore**: Changes to the build process or auxiliary tools and libraries such as documentation
generation
* **release**: Release a new version
### Scope
The scope could be anything specifying place of the commit change. For example `$location`,
`$browser`, `$compile`, `$rootScope`, `ngHref`, `ngClick`, `ngView`, etc...
You can use `*` when the change affects more than a single scope.
### Subject
The subject contains succinct description of the change:
* use the imperative, present tense: "change" not "changed" nor "changes"
* don't capitalize first letter
* no dot (.) at the end
### Body
Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes".
The body should include the motivation for the change and contrast this with previous behavior.
### Footer
The footer should contain any information about **Breaking Changes** and is also the place to
[reference GitHub issues that this commit closes](https://help.github.com/articles/closing-issues-via-commit-messages/).
**Breaking Changes** should start with the word `BREAKING CHANGE:` with a space or two newlines.
The rest of the commit message is then used for this.
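Taken together, a commit that follows the convention above might look like the sketch below; the scope, wording, and issue number are purely illustrative.

```shell
git commit -F - <<'EOF'
fix(driver): handle empty upload responses

Retry the request when the remote API returns an empty body instead of
aborting the whole transfer, and report the final error to the caller.

Closes #123
EOF
```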
## Submit a pull request ## Submit a pull request
Please make sure your code has been formatted with `go fmt` or [prettier](https://prettier.io/) before submitting. Push your branch to your `openlist` fork and open a pull request against the
`main` branch.
Push your branch to your `openlist` fork and open a pull request against the `main` branch.
## Merge your pull request
Your pull request will be merged after review. Please wait for the maintainer to merge your pull request after review.
At least 1 approving review is required by reviewers with write access. You can also request a review from maintainers.
## Delete your branch
(Optional) After your pull request is merged, you can delete your branch.
---
Thank you for your contribution! Let's make OpenList better together!


@ -1,7 +1,4 @@
### Default image is base. You can add other support by modifying BASE_IMAGE_TAG. The following parameters are supported: base (default), aria2, ffmpeg, aio FROM docker.io/library/alpine:edge AS builder
ARG BASE_IMAGE_TAG=base
FROM alpine:edge AS builder
LABEL stage=go-builder LABEL stage=go-builder
WORKDIR /app/ WORKDIR /app/
RUN apk add --no-cache bash curl jq gcc git go musl-dev RUN apk add --no-cache bash curl jq gcc git go musl-dev
@@ -10,27 +7,36 @@ RUN go mod download RUN go mod download
COPY ./ ./ COPY ./ ./
RUN bash build.sh release docker RUN bash build.sh release docker
FROM openlistteam/openlist-base-image:${BASE_IMAGE_TAG} FROM alpine:edge
LABEL MAINTAINER="OpenList"
ARG INSTALL_FFMPEG=false ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false ARG INSTALL_ARIA2=false
ARG USER=openlist LABEL MAINTAINER="OpenList"
ARG UID=1001
ARG GID=1001
WORKDIR /opt/openlist/ WORKDIR /opt/openlist/
RUN addgroup -g ${GID} ${USER} && \ RUN apk update && \
adduser -D -u ${UID} -G ${USER} ${USER} && \ apk upgrade --no-cache && \
mkdir -p /opt/openlist/data apk add --no-cache bash ca-certificates su-exec tzdata; \
[ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
[ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
mkdir -p /opt/aria2/.aria2 && \
wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
touch /opt/aria2/.aria2/aria2.session && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY --from=builder --chmod=755 --chown=${UID}:${GID} /app/bin/openlist ./ COPY --chmod=755 --from=builder /app/bin/openlist ./
COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh COPY --chmod=755 entrypoint.sh /entrypoint.sh
USER ${USER}
RUN /entrypoint.sh version RUN /entrypoint.sh version
ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2} ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/openlist/data/ VOLUME /opt/openlist/data/
EXPOSE 5244 5245 EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ] CMD [ "/entrypoint.sh" ]
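As a rough usage sketch for the source-building Dockerfile above: the image name, host paths, and the choice of the `aria2` base-image tag (taken from the comment at the top of the file) are all arbitrary.

```shell
docker build \
  --build-arg BASE_IMAGE_TAG=aria2 \
  --build-arg INSTALL_ARIA2=true \
  -t openlist:local .

# Persist data and publish both exposed ports.
docker run -d --name openlist \
  -p 5244:5244 -p 5245:5245 \
  -v "$PWD/openlist-data:/opt/openlist/data" \
  openlist:local
```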


@@ -1,26 +1,34 @@
ARG BASE_IMAGE_TAG=base FROM docker.io/library/alpine:edge
FROM ghcr.io/openlistteam/openlist-base-image:${BASE_IMAGE_TAG}
LABEL MAINTAINER="OpenList"
ARG TARGETPLATFORM ARG TARGETPLATFORM
ARG INSTALL_FFMPEG=false ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false ARG INSTALL_ARIA2=false
ARG USER=openlist LABEL MAINTAINER="OpenList"
ARG UID=1001
ARG GID=1001
WORKDIR /opt/openlist/ WORKDIR /opt/openlist/
RUN addgroup -g ${GID} ${USER} && \ RUN apk update && \
adduser -D -u ${UID} -G ${USER} ${USER} && \ apk upgrade --no-cache && \
mkdir -p /opt/openlist/data apk add --no-cache bash ca-certificates su-exec tzdata; \
[ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
[ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
mkdir -p /opt/aria2/.aria2 && \
wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
touch /opt/aria2/.aria2/aria2.session && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY --chmod=755 --chown=${UID}:${GID} /build/${TARGETPLATFORM}/openlist ./ COPY --chmod=755 /build/${TARGETPLATFORM}/openlist ./
COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh COPY --chmod=755 entrypoint.sh /entrypoint.sh
USER ${USER}
RUN /entrypoint.sh version RUN /entrypoint.sh version
ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2} ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/openlist/data/ VOLUME /opt/openlist/data/
EXPOSE 5244 5245 EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ] CMD [ "/entrypoint.sh" ]
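For the variant above that copies prebuilt binaries, a multi-arch build could be driven roughly as follows. The file name `Dockerfile.ci`, the tag, and the platform list are assumptions; the binaries must already exist at `build/linux/amd64/openlist`, `build/linux/arm64/openlist`, and so on to satisfy the `COPY /build/${TARGETPLATFORM}/openlist` line.

```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg INSTALL_ARIA2=true \
  -f Dockerfile.ci \
  -t openlistteam/openlist:test \
  --push .   # --push assumes registry credentials; use --load with a single platform instead
```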

README.md

@@ -1,90 +1,79 @@
<div align="center"> <div align="center">
<img style="width: 128px; height: 128px;" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg" alt="logo" /> <img width="100px" alt="logo" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg"/></a>
<p><em>🗂A file list program that supports multiple storages, powered by Gin and SolidJS, fork of AList.</em></p>
<p><em>OpenList is a resilient, long-term governance, community-driven fork of AList — built to defend open source against trust-based attacks.</em></p> <div>
<a href="https://goreportcard.com/report/github.com/OpenListTeam/OpenList/v3">
<img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" /> <img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" />
<a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE"><img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" /></a> </a>
<a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" /></a> <a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE">
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" /></a> <img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/discussions"><img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" /></a> <a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild">
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" /></a> <img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/releases">
<img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" />
</a>
</div>
<div>
<a href="https://github.com/OpenListTeam/OpenList/discussions">
<img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/releases">
<img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" />
</a>
</div>
</div> </div>
--- ---
- English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Dutch](./README_nl.md) > [!IMPORTANT]
>
> Drop-in replacement for AList with long-term governance, no hidden risks, and full transparency, built to defend open source against trust-based attacks.
>
> We sincerely thank the author [Xhofe](https://github.com/Xhofe) of the original project [AlistGo/alist](https://github.com/AlistGo/alist) and all other contributors.
>
> This fork is not yet stable, specific migration progress can be viewed in [OpenList Migration Work Summary](https://github.com/OpenListTeam/OpenList/issues/6).
- [Contributing](./CONTRIBUTING.md) English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing](./CONTRIBUTING.md) | [CODE OF CONDUCT](./CODE_OF_CONDUCT.md)
- [CODE OF CONDUCT](./CODE_OF_CONDUCT.md)
- [LICENSE](./LICENSE)
## Disclaimer
OpenList is an open-source project independently maintained by the OpenList Team, following the AGPL-3.0 license and committed to maintaining complete code openness and modification transparency.
We have noticed the emergence of some third-party projects in the community with names similar to this project, such as OpenListApp/OpenListApp, as well as some paid proprietary software using the same or similar naming. To avoid user confusion, we hereby declare:
- OpenList has no official association with any third-party derivative projects.
- All software, code, and services of this project are maintained by the OpenList Team and are freely available on GitHub.
- Project documentation and API services primarily rely on charitable resources provided by Cloudflare. There are currently no paid plans or commercial deployments, and the use of existing features does not involve any costs.
We respect the community's rights to free use and derivative development, but we also strongly urge downstream projects:
- Should not use the "OpenList" name for impersonation promotion or commercial gain;
- Must not distribute OpenList-based code in a closed-source manner or violate AGPL license terms.
To better maintain healthy ecosystem development, we recommend:
- Clearly indicate the project source and choose appropriate open-source licenses in accordance with the open-source spirit;
- If involving commercial use, please avoid using "OpenList" or any confusing naming as the project name;
- If you need to use materials located under OpenListTeam/Logo, you may modify and use them under compliance with the agreement.
Thank you for your support and understanding of the OpenList project.
## Features ## Features
- [x] Multiple storages - [x] Multiple storages
- [x] Local storage - [x] Local storage
- [x] [Aliyundrive](https://www.alipan.com) - [x] [Aliyundrive](https://www.alipan.com/)
- [x] OneDrive / Sharepoint ([Global](https://www.microsoft.com/en-us/microsoft-365/onedrive/online-cloud-storage), [CN](https://portal.partner.microsoftonline.cn), DE, US) - [x] OneDrive / Sharepoint ([global](https://www.office.com/), [cn](https://portal.partner.microsoftonline.cn),de,us)
- [x] [189cloud](https://cloud.189.cn) (Personal, Family) - [x] [189cloud](https://cloud.189.cn) (Personal, Family)
- [x] [GoogleDrive](https://drive.google.com) - [x] [GoogleDrive](https://drive.google.com/)
- [x] [123pan](https://www.123pan.com) - [x] [123pan](https://www.123pan.com/)
- [x] [FTP / SFTP](https://en.wikipedia.org/wiki/File_Transfer_Protocol) - [x] FTP / SFTP
- [x] [PikPak](https://www.mypikpak.com) - [x] [PikPak](https://www.mypikpak.com/)
- [x] [S3](https://aws.amazon.com/s3) - [x] [S3](https://aws.amazon.com/s3/)
- [x] [Seafile](https://seafile.com) - [x] [Seafile](https://seafile.com/)
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage) - [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] [WebDAV](https://en.wikipedia.org/wiki/WebDAV) - [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([China](https://www.teambition.com), [International](https://us.teambition.com)) - [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
- [x] [Mediatrack](https://www.mediatrack.cn) - [x] [Mediatrack](https://www.mediatrack.cn/)
- [x] [139yun](https://yun.139.com) (Personal, Family, Group) - [x] [139yun](https://yun.139.com/) (Personal, Family, Group)
- [x] [YandexDisk](https://disk.yandex.com) - [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com) - [x] [BaiduNetdisk](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main) - [x] [Terabox](https://www.terabox.com/main)
- [x] [UC](https://drive.uc.cn) - [x] [UC](https://drive.uc.cn)
- [x] [Quark](https://pan.quark.cn) - [x] [Quark](https://pan.quark.cn)
- [x] [Thunder](https://pan.xunlei.com) - [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com) - [x] [Lanzou](https://www.lanzou.com/)
- [x] [ILanzou](https://www.ilanzou.com) - [x] [ILanzou](https://www.ilanzou.com/)
- [x] [Aliyundrive share](https://www.alipan.com) - [x] [Aliyundrive share](https://www.alipan.com/)
- [x] [Google photo](https://photos.google.com) - [x] [Google photo](https://photos.google.com/)
- [x] [Mega.nz](https://mega.nz) - [x] [Mega.nz](https://mega.nz)
- [x] [Baidu photo](https://photo.baidu.com) - [x] [Baidu photo](https://photo.baidu.com/)
- [x] [SMB](https://en.wikipedia.org/wiki/Server_Message_Block) - [x] SMB
- [x] [115](https://115.com) - [x] [115](https://115.com/)
- [X] [Cloudreve](https://cloudreve.org) - [X] Cloudreve
- [x] [Dropbox](https://www.dropbox.com) - [x] [Dropbox](https://www.dropbox.com/)
- [x] [FeijiPan](https://www.feijipan.com) - [x] [FeijiPan](https://www.feijipan.com/)
- [x] [dogecloud](https://www.dogecloud.com/product/oss) - [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs) - [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] Easy to deploy and out-of-the-box - [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...) - [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode - [x] Image preview in gallery mode
@@ -95,8 +84,8 @@ Thank you for your support and understanding of the OpenList project.
- [x] Dark mode - [x] Dark mode
- [x] I18n - [x] I18n
- [x] Protected routes (password protection and authentication) - [x] Protected routes (password protection and authentication)
- [x] WebDAV - [x] WebDav (waiting for detail documents)
- [x] Docker Deploy - [ ] Docker Deploy (rebuilding)
- [x] Cloudflare Workers proxy - [x] Cloudflare Workers proxy
- [x] File/Folder package download - [x] File/Folder package download
- [x] Web upload(Can allow visitors to upload), delete, mkdir, rename, move and copy - [x] Web upload(Can allow visitors to upload), delete, mkdir, rename, move and copy
@@ -106,9 +95,7 @@ Thank you for your support and understanding of the OpenList project.
## Document ## Document
- 📘 [Global Site](https://doc.oplist.org) <https://docs.openlist.team>
- 📚 [Backup Site](https://doc.openlist.team)
- 🌏 [CN Site](https://doc.oplist.org.cn)
## Demo ## Demo
@@ -118,32 +105,24 @@ N/A (to be rebuilt)
Please refer to [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions) for raising general questions, ***Issues* is for bug reports and feature requests only.** Please refer to [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions) for raising general questions, ***Issues* is for bug reports and feature requests only.**
## License
The `OpenList` is open-source software licensed under the [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) license.
## Disclaimer
- This project is a free and open-source software designed to facilitate file sharing via net disks, primarily intended to support the downloading and learning of the Go programming language.
- Please comply with all applicable laws and regulations when using this software. Any form of misuse is strictly prohibited.
- The software is based on official SDKs or APIs without any modification, disruption, or interference with their behavior.
- It only performs HTTP 302 redirects or traffic forwarding, and does not intercept, store, or tamper with any user data.
- This project is not affiliated with any official platform or service provider.
- The software is provided "as is", without any warranties of any kind, either express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose.
- The maintainers are not liable for any direct or indirect damages arising from the use of, or inability to use, this software.
- You are solely responsible for any risks associated with using this software, including but not limited to account bans or download speed limitations.
- This project is licensed under the [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) License. Please see the [LICENSE](./LICENSE) file for details.
## Contact Us
- [@GitHub](https://github.com/OpenListTeam)
- [Telegram Group](https://t.me/OpenListTeam)
- [Telegram Channel](https://t.me/OpenListOfficial)
## Contributors ## Contributors
We sincerely thank the author [Xhofe](https://github.com/Xhofe) of the original project [AlistGo/alist](https://github.com/AlistGo/alist) and all other contributors.
Thanks goes to these wonderful people: Thanks goes to these wonderful people:
[![Contributors](https://contrib.rocks/image?repo=OpenListTeam/OpenList)](https://github.com/OpenListTeam/OpenList/graphs/contributors) [![Contributors](https://contrib.rocks/image?repo=OpenListTeam/OpenList)](https://github.com/OpenListTeam/OpenList/graphs/contributors)
## License
The `OpenList` is open-source software licensed under the AGPL-3.0 license.
## Disclaimer
- This program is a free and open source project. It is designed to share files on the network disk, which is convenient for downloading and learning Golang. Please abide by relevant laws and regulations when using it, and do not abuse it;
- This program is implemented by calling the official sdk/interface, without destroying the official interface behavior;
- This program only does 302 redirect/traffic forwarding, and does not intercept, store, or tamper with any user data;
- Before using this program, you should understand and bear the corresponding risks, including but not limited to account ban, download speed limit, etc., which is none of this program's business;
- If there is any infringement, please contact [OpenListTeam](https://github.com/OpenListTeam), and it will be dealt with in time.
---
> [@GitHub](https://github.com/OpenListTeam) · [Telegram Group](https://t.me/OpenListTeam)


@@ -1,149 +1,126 @@
<div align="center"> <div align="center">
<img style="width: 128px; height: 128px;" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg" alt="logo" /> <img width="100px" alt="logo" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg"/></a>
<p><em>🗂一个支持多存储的文件列表程序,使用 Gin 和 SolidJS基于 AList 项目 fork 开发</em></p>
<p><em>OpenList 是一个有韧性、长期治理、社区驱动的 AList 分支,旨在防御基于信任的开源攻击。</em></p> <div>
<a href="https://goreportcard.com/report/github.com/OpenListTeam/OpenList/v3">
<img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" /> <img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" />
<a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE"><img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" /></a> </a>
<a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" /></a> <a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE">
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" /></a> <img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/discussions"><img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" /></a> <a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild">
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" /></a> <img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/releases">
<img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" />
</a>
</div>
<div>
<a href="https://github.com/OpenListTeam/OpenList/discussions">
<img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/releases">
<img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" />
</a>
</div>
</div> </div>
--- ---
- [English](./README.md) | 中文 | [日本語](./README_ja.md) | [Dutch](./README_nl.md) > [!IMPORTANT]
>
> 一个更可信、可持续的 AList 开源替代方案,防范未来可能的闭源、黑箱或不可信变更。
>
> 我们诚挚地感谢原项目 [AlistGo/alist](https://github.com/AlistGo/alist) 的作者 [Xhofe](https://github.com/Xhofe) 以及其他所有贡献者。
>
> 本 Fork 尚未稳定, 具体迁移进度可在 [OpenList 迁移工作总结](https://github.com/OpenListTeam/OpenList/issues/6) 中查看。
- [贡献指南](./CONTRIBUTING.md) [English](./README.md) | 中文 | [日本語](./README_ja.md) | [Contributing](./CONTRIBUTING.md) | [CODE OF CONDUCT](./CODE_OF_CONDUCT.md)
- [行为准则](./CODE_OF_CONDUCT.md)
- [许可证](./LICENSE)
## 免责声明
OpenList 是一个由 OpenList 团队独立维护的开源项目,遵循 AGPL-3.0 许可证,致力于保持完整的代码开放性和修改透明性。
我们注意到社区中出现了一些与本项目名称相似的第三方项目,如 OpenListApp/OpenListApp以及部分采用相同或近似命名的收费专有软件。为避免用户误解现声明如下
- OpenList 与任何第三方衍生项目无官方关联。
- 本项目的全部软件、代码与服务由 OpenList 团队维护,可在 GitHub 免费获取。
- 项目文档与 API 服务均主要依托于 Cloudflare 提供的公益资源,目前无任何收费计划或商业部署,现有功能使用不涉及任何支出。
我们尊重社区的自由使用与衍生开发权利,但也强烈呼吁下游项目:
- 不应以“OpenList”名义进行冒名宣传或获取商业利益
- 不得将基于 OpenList 的代码进行闭源分发或违反 AGPL 许可证条款。
为了更好地维护生态健康发展,我们建议:
- 明确注明项目来源,并以符合开源精神的方式选择适当的开源许可证;
- 如涉及商业用途请避免使用“OpenList”或任何会产生混淆的方式作为项目名称
- 若需使用本项目位于 OpenListTeam/Logo 下的素材,可在遵守协议的前提下进行修改后使用。
感谢您对 OpenList 项目的支持与理解。
## 功能 ## 功能
- [x] 多种存储 - [x] 多种存储
- [x] 本地存储 - [x] 本地存储
- [x] [阿里云盘](https://www.alipan.com) - [x] [阿里云盘](https://www.alipan.com/)
- [x] OneDrive / Sharepoint ([国际版](https://www.microsoft.com/en-us/microsoft-365/onedrive/online-cloud-storage), [中国](https://portal.partner.microsoftonline.cn), DE, US) - [x] OneDrive / Sharepoint[国际版](https://www.office.com/), [世纪互联](https://portal.partner.microsoftonline.cn),de,us
- [x] [天翼云盘](https://cloud.189.cn)(个人、家庭) - [x] [天翼云盘](https://cloud.189.cn) (个人云, 家庭云)
- [x] [GoogleDrive](https://drive.google.com) - [x] [GoogleDrive](https://drive.google.com/)
- [x] [123云盘](https://www.123pan.com) - [x] [123云盘](https://www.123pan.com/)
- [x] [FTP / SFTP](https://en.wikipedia.org/wiki/File_Transfer_Protocol) - [x] FTP / SFTP
- [x] [PikPak](https://www.mypikpak.com) - [x] [PikPak](https://www.mypikpak.com/)
- [x] [S3](https://aws.amazon.com/s3) - [x] [S3](https://aws.amazon.com/cn/s3/)
- [x] [Seafile](https://seafile.com) - [x] [Seafile](https://seafile.com/)
- [x] [又拍云对象存储](https://www.upyun.com/products/file-storage) - [x] [又拍云对象存储](https://www.upyun.com/products/file-storage)
- [x] [WebDAV](https://en.wikipedia.org/wiki/WebDAV) - [x] WebDav(支持无API的OneDrive/SharePoint)
- [x] Teambition([中国](https://www.teambition.com), [国际](https://us.teambition.com)) - [x] Teambition[中国](https://www.teambition.com/ )[国际](https://us.teambition.com/ )
- [x] [分秒帧](https://www.mediatrack.cn) - [x] [分秒帧](https://www.mediatrack.cn/)
- [x] [和彩云](https://yun.139.com)(个人、家庭、群组 - [x] [和彩云](https://yun.139.com/) (个人云, 家庭云,共享群组)
- [x] [YandexDisk](https://disk.yandex.com) - [x] [Yandex.Disk](https://disk.yandex.com/)
- [x] [百度网盘](http://pan.baidu.com) - [x] [百度网盘](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main) - [x] [UC网盘](https://drive.uc.cn)
- [x] [UC网盘](https://drive.uc.cn) - [x] [夸克网盘](https://pan.quark.cn)
- [x] [夸克网盘](https://pan.quark.cn) - [x] [迅雷网盘](https://pan.xunlei.com)
- [x] [迅雷网盘](https://pan.xunlei.com) - [x] [蓝奏云](https://www.lanzou.com/)
- [x] [蓝奏云](https://www.lanzou.com) - [x] [蓝奏云优享版](https://www.ilanzou.com/)
- [x] [蓝奏云优享版](https://www.ilanzou.com) - [x] [阿里云盘分享](https://www.alipan.com/)
- [x] [阿里云盘分享](https://www.alipan.com) - [x] [谷歌相册](https://photos.google.com/)
- [x] [Google 相册](https://photos.google.com) - [x] [Mega.nz](https://mega.nz)
- [x] [Mega.nz](https://mega.nz) - [x] [一刻相册](https://photo.baidu.com/)
- [x] [百度相册](https://photo.baidu.com) - [x] SMB
- [x] [SMB](https://en.wikipedia.org/wiki/Server_Message_Block) - [x] [115](https://115.com/)
- [x] [115](https://115.com) - [X] Cloudreve
- [x] [Cloudreve](https://cloudreve.org) - [x] [Dropbox](https://www.dropbox.com/)
- [x] [Dropbox](https://www.dropbox.com) - [x] [飞机盘](https://www.feijipan.com/)
- [x] [飞机盘](https://www.feijipan.com) - [x] [多吉云](https://www.dogecloud.com/product/oss)
- [x] [多吉云](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] 部署方便,开箱即用 - [x] 部署方便,开箱即用
- [x] 文件预览PDF、markdown、代码、纯文本 - [x] 文件预览PDF、markdown、代码、纯文本……
- [x] 画廊模式下的图预览 - [x] 画廊模式下的图预览
- [x] 视频和音频预览,支持歌词和字幕 - [x] 视频和音频预览,支持歌词和字幕
- [x] Office 文档预览docx、pptx、xlsx - [x] Office 文档预览docx、pptx、xlsx、...
- [x] `README.md` 预览渲染 - [x] `README.md` 预览渲染
- [x] 文件永久链接复制和直接文件下载 - [x] 文件永久链接复制和直接文件下载
- [x] 黑暗模式 - [x] 黑暗模式
- [x] 国际化 - [x] 国际化
- [x] 受保护的路由(密码保护和证) - [x] 受保护的路由(密码保护和身份验证)
- [x] WebDAV - [x] WebDav (详细文档待补充)
- [x] Docker 部署 - [ ] Docker 部署(重建中)
- [x] Cloudflare Workers 代理 - [x] Cloudflare workers 中转
- [x] 文件/文件夹打包下载 - [x] 文件/文件夹打包下载
- [x] 网页上传(可允许访客上传)、删除新建文件夹重命名移动复制 - [x] 网页上传(可以允许访客上传)删除新建文件夹重命名移动复制
- [x] 离线下载 - [x] 离线下载
- [x] 跨存储复制文件 - [x] 跨存储复制文件
- [x]文件多线程下载/流式加速 - [x] 单线程下载/串流的多线程下载加速
## 文档 ## 文档
- 🌏 [国内站点](https://doc.oplist.org.cn) <https://docs.openlist.team>
- 📘 [海外站点](https://doc.oplist.org)
- 📚 [备用站点](https://doc.openlist.team)
## 演示 ## Demo
N/A重建) N/A重建
## 讨论 ## 讨论
如有一般问题请前往 [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions) 讨论***Issues* 仅用于错误报告和功能请求。** 一般问题请 [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions) 讨论,***Issues* 仅针对错误报告和功能请求。**
## 许可证
`OpenList` 是基于 [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) 许可证的开源软件。
## 免责声明
- 本项目为免费开源软件,旨在通过网盘便捷分享文件,主要用于 Go 语言的下载与学习。
- 使用本软件时请遵守相关法律法规,严禁任何形式的滥用。
- 本软件基于官方 SDK 或 API 实现,未对其行为进行任何修改、破坏或干扰。
- 仅进行 HTTP 302 跳转或流量转发,不拦截、存储或篡改任何用户数据。
- 本项目与任何官方平台或服务提供商无关。
- 本软件按“原样”提供,不附带任何明示或暗示的担保,包括但不限于适销性或特定用途的适用性。
- 维护者不对因使用或无法使用本软件而导致的任何直接或间接损失负责。
- 您需自行承担使用本软件的所有风险,包括但不限于账号被封、下载限速等。
- 本项目遵循 [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) 许可证,详情请参见 [LICENSE](./LICENSE) 文件。
## 联系我们
- [@GitHub](https://github.com/OpenListTeam)
- [Telegram 交流群](https://t.me/OpenListTeam)
- [Telegram 频道](https://t.me/OpenListOfficial)
## 贡献者 ## 贡献者
我们衷心感谢原项目 [AlistGo/alist](https://github.com/AlistGo/alist) 的作者 [Xhofe](https://github.com/Xhofe) 及所有其他贡献者。 感谢这些开源作者们:
感谢这些优秀的人:
[![Contributors](https://contrib.rocks/image?repo=OpenListTeam/OpenList)](https://github.com/OpenListTeam/OpenList/graphs/contributors) [![Contributors](https://contrib.rocks/image?repo=OpenListTeam/OpenList)](https://github.com/OpenListTeam/OpenList/graphs/contributors)
## 许可
`OpenList` 是按 AGPL-3.0 许可证许可的开源软件。
## 免责声明
- 本程序为免费开源项目旨在分享网盘文件方便下载以及学习golang使用时请遵守相关法律法规请勿滥用
- 本程序通过调用官方sdk/接口实现,无破坏官方接口行为;
- 本程序仅做302重定向/流量转发,不拦截、存储、篡改任何用户数据;
- 在使用本程序之前你应了解并承担相应的风险包括但不限于账号被ban下载限速等与本程序无关
- 如有侵权,请联系[OpenListTeam](https://github.com/OpenListTeam),团队会及时处理。
---
> [@GitHub](https://github.com/OpenListTeam) · [Telegram 交流群](https://t.me/OpenListTeam)


@@ -1,149 +1,127 @@
<div align="center"> <div align="center">
<img style="width: 128px; height: 128px;" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg" alt="logo" /> <img width="100px" alt="logo" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg"/></a>
<p><em>🗂複数のストレージをサポートするファイルリストプログラムで、Gin と SolidJS を使用し、AList プロジェクトをフォークして開発されました。</em></p>
<p><em>OpenList は、信頼ベースの攻撃からオープンソースを守るために構築された、レジリエントで長期ガバナンス、コミュニティ主導の AList フォークです。</em></p> <div>
<a href="https://goreportcard.com/report/github.com/OpenListTeam/OpenList/v3">
<img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" /> <img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" />
<a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE"><img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" /></a> </a>
<a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" /></a> <a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE">
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" /></a> <img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/discussions"><img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" /></a> <a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild">
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" /></a> <img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/releases">
<img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" />
</a>
</div>
<div>
<a href="https://github.com/OpenListTeam/OpenList/discussions">
<img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" />
</a>
<a href="https://github.com/OpenListTeam/OpenList/releases">
<img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" />
</a>
</div>
</div> </div>
--- ---
- [English](./README.md) | [中文](./README_cn.md) | 日本語 | [Dutch](./README_nl.md) > [!IMPORTANT]
>
> より信頼性が高く、持続可能なAListのオープンソース代替案で、将来起こりうる非公開化、ブラックボックス化、または信頼できない変更から保護します。
>
> 元のプロジェクト [AlistGo/alist](https://github.com/AlistGo/alist) の作者 [Xhofe](https://github.com/Xhofe) および他のすべての貢献者に心から感謝いたします。
>
> このForkはまだ安定していません。具体的な移行の進捗状況は [OpenList 移行作業のまとめ](https://github.com/OpenListTeam/OpenList/issues/6) でご確認いただけます。
- [コントリビュート](./CONTRIBUTING.md) [English](./README.md) | [中文](./README_cn.md) | 日本語 | [Contributing](./CONTRIBUTING.md) | [CODE OF CONDUCT](./CODE_OF_CONDUCT.md)
- [行動規範](./CODE_OF_CONDUCT.md)
- [ライセンス](./LICENSE)
## 免責事項
OpenListは、OpenListチームが独立して維持するオープンソースプロジェクトであり、AGPL-3.0ライセンスに従い、完全なコードの開放性と変更の透明性を維持することに専念しています。
コミュニティ内で、OpenListApp/OpenListAppなど、本プロジェクトと類似した名称を持つサードパーティプロジェクトや、同一または類似した命名を採用する有料専有ソフトウェアが出現していることを確認しています。ユーザーの誤解を避けるため、以下のように宣言いたします
- OpenListは、いかなるサードパーティ派生プロジェクトとも公式な関連性はありません。
- 本プロジェクトのすべてのソフトウェア、コード、サービスはOpenListチームによって維持され、GitHubで無料で取得できます。
- プロジェクトドキュメントとAPIサービスは主にCloudflareが提供する公益リソースに依存しており、現在有料プランや商業展開はなく、既存機能の使用に費用は発生しません。
私たちはコミュニティの自由な使用と派生開発の権利を尊重しますが、下流プロジェクトに強く呼びかけます:
- 「OpenList」の名前で偽装宣伝や商業利益を得るべきではありません
- OpenListベースのコードをクローズドソースで配布したり、AGPLライセンス条項に違反してはいけません。
エコシステムの健全な発展をより良く維持するため、以下を推奨します:
- プロジェクトの出典を明確に示し、オープンソース精神に合致する適切なオープンソースライセンスを選択する;
- 商業用途が関わる場合は、「OpenList」や混乱を招く可能性のある名前をプロジェクト名として使用することを避ける
- OpenListTeam/Logo下の素材を使用する必要がある場合は、協定を遵守した上で修正して使用できます。
OpenListプロジェクトへのご支援とご理解をありがとうございます。
## 特徴 ## 特徴
- [x] 複数ストレージ - [x] マルチストレージ
- [x] ローカルストレージ - [x] ローカルストレージ
- [x] [Aliyundrive](https://www.alipan.com) - [x] [Aliyundrive](https://www.alipan.com/)
- [x] OneDrive / Sharepoint ([グローバル](https://www.microsoft.com/en-us/microsoft-365/onedrive/online-cloud-storage), [中国](https://portal.partner.microsoftonline.cn), DE, US) - [x] OneDrive / Sharepoint ([グローバル](https://www.office.com/), [cn](https://portal.partner.microsoftonline.cn),de,us)
- [x] [189cloud](https://cloud.189.cn)(個人、家族) - [x] [189cloud](https://cloud.189.cn) (Personal, Family)
- [x] [GoogleDrive](https://drive.google.com) - [x] [GoogleDrive](https://drive.google.com/)
- [x] [123pan](https://www.123pan.com) - [x] [123pan](https://www.123pan.com/)
- [x] [FTP / SFTP](https://en.wikipedia.org/wiki/File_Transfer_Protocol) - [x] FTP / SFTP
- [x] [PikPak](https://www.mypikpak.com) - [x] [PikPak](https://www.mypikpak.com/)
- [x] [S3](https://aws.amazon.com/s3) - [x] [S3](https://aws.amazon.com/s3/)
- [x] [Seafile](https://seafile.com) - [x] [Seafile](https://seafile.com/)
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage) - [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] [WebDAV](https://en.wikipedia.org/wiki/WebDAV) - [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([中国](https://www.teambition.com), [国際](https://us.teambition.com)) - [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
- [x] [Mediatrack](https://www.mediatrack.cn) - [x] [Mediatrack](https://www.mediatrack.cn/)
- [x] [139yun](https://yun.139.com)(個人、家族、グループ) - [x] [139yun](https://yun.139.com/) (Personal, Family, Group)
- [x] [YandexDisk](https://disk.yandex.com) - [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com) - [x] [BaiduNetdisk](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main) - [x] [Terabox](https://www.terabox.com/main)
- [x] [UC](https://drive.uc.cn) - [x] [UC](https://drive.uc.cn)
- [x] [Quark](https://pan.quark.cn) - [x] [Quark](https://pan.quark.cn)
- [x] [Thunder](https://pan.xunlei.com) - [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com) - [x] [Lanzou](https://www.lanzou.com/)
- [x] [ILanzou](https://www.ilanzou.com) - [x] [ILanzou](https://www.ilanzou.com/)
- [x] [Aliyundrive share](https://www.alipan.com) - [x] [Aliyundrive share](https://www.alipan.com/)
- [x] [Google photo](https://photos.google.com) - [x] [Google photo](https://photos.google.com/)
- [x] [Mega.nz](https://mega.nz) - [x] [Mega.nz](https://mega.nz)
- [x] [Baidu photo](https://photo.baidu.com) - [x] [Baidu photo](https://photo.baidu.com/)
- [x] [SMB](https://en.wikipedia.org/wiki/Server_Message_Block) - [x] SMB
- [x] [115](https://115.com) - [x] [115](https://115.com/)
- [x] [Cloudreve](https://cloudreve.org) - [X] Cloudreve
- [x] [Dropbox](https://www.dropbox.com) - [x] [Dropbox](https://www.dropbox.com/)
- [x] [FeijiPan](https://www.feijipan.com) - [x] [FeijiPan](https://www.feijipan.com/)
- [x] [dogecloud](https://www.dogecloud.com/product/oss) - [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs) - [x] デプロイが簡単で、すぐに使える
- [x] 簡単にデプロイでき、すぐに使える - [x] ファイルプレビュー (PDF, マークダウン, コード, プレーンテキスト, ...)
- [x] ファイルプレビューPDF、markdown、コード、テキストなど
- [x] ギャラリーモードでの画像プレビュー - [x] ギャラリーモードでの画像プレビュー
- [x] ビデオオーディオプレビュー、歌詞字幕対応 - [x] ビデオオーディオプレビュー、歌詞字幕のサポート
- [x] Officeドキュメントプレビューdocxpptxxlsxなど) - [x] Office ドキュメントプレビュー (docx, pptx, xlsx, ...)
- [x] `README.md` プレビュー表示 - [x] `README.md` プレビューレンダリング
- [x] ファイルのパーマリンクコピーと直接ダウンロード - [x] ファイルのパーマリンクコピーと直接ダウンロード
- [x] ダークモード - [x] ダークモード
- [x] 国際化対応 - [x] 国際化
- [x] 保護されたルートパスワード保護と認証 - [x] 保護されたルート (パスワード保護と認証)
- [x] WebDAV - [x] WebDav詳細なドキュメントは今後追加予定
- [x] Dockerデプロイ - [ ] Docker デプロイ(再構築中)
- [x] Cloudflare Workersプロキシ - [x] Cloudflare ワーカープロキシ
- [x] ファイル/フォルダパッケージダウンロード - [x] ファイル/フォルダパッケージダウンロード
- [x] Webアップロード訪問者アップロード許可可)、削除、フォルダ作成、リネーム、移動、コピー - [x] ウェブアップロード(訪問者アップロード許可できる), 削除, mkdir, 名前変更, 移動, コピー
- [x] オフラインダウンロード - [x] オフラインダウンロード
- [x] ストレージ間ファイルコピー - [x] 二つのストレージ間ファイルコピー
- [x] 単一ファイルのマルチスレッドダウンロード/ストリーム加速 - [x] シングルスレッドダウンロード/ストリーム向けのマルチスレッド ダウンロード アクセラレーション
## ドキュメント ## ドキュメント
- 📘 [グローバルサイト](https://doc.oplist.org) <https://docs.openlist.team>
- 📚 [バックアップサイト](https://doc.openlist.team)
- 🌏 [CNサイト](https://doc.oplist.org.cn)
## デモ ## デモ
N/A再構築中 N/A (再構築中)
## ディスカッション ## ディスカッション
一般的な質問は [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions) をご利用ください。***Issues* はバグ報告と機能リクエスト専用です。** 一般的な質問は [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions) をご利用ください。***Issues* はバグ報告と機能リクエストに限定されています。**
## ライセンス
「OpenList」は [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) ライセンスの下で公開されているオープンソースソフトウェアです。
## 免責事項
- 本プロジェクトは無料のオープンソースソフトウェアであり、ネットワークディスクを通じたファイル共有を容易にすることを目的とし、主に Go 言語のダウンロードと学習をサポートします。
- 本ソフトウェアの利用にあたっては、関連する法令を遵守し、不正利用を固く禁じます。
- 本ソフトウェアは公式 SDK または API に基づいており、その動作を一切改変・破壊・妨害しません。
- 302 リダイレクトまたはトラフィック転送のみを行い、ユーザーデータの傍受・保存・改ざんは一切行いません。
- 本プロジェクトは、いかなる公式プラットフォームやサービスプロバイダーとも関係ありません。
- 本ソフトウェアは「現状有姿」で提供されており、商品性や特定目的への適合性を含むいかなる保証もありません。
- 本ソフトウェアの使用または使用不能によるいかなる直接的・間接的損害についても、メンテナは責任を負いません。
- 本ソフトウェアの利用に伴うすべてのリスク(アカウントの凍結やダウンロード速度制限などを含む)は、利用者自身が負うものとします。
- 本プロジェクトは [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) ライセンスに従います。詳細は [LICENSE](./LICENSE) ファイルをご覧ください。
## お問い合わせ
- [@GitHub](https://github.com/OpenListTeam)
- [Telegram グループ](https://t.me/OpenListTeam)
- [Telegram チャンネル](https://t.me/OpenListOfficial)
## コントリビューター ## コントリビューター
オリジナルプロジェクト [AlistGo/alist](https://github.com/AlistGo/alist) の作者 [Xhofe](https://github.com/Xhofe) およびその他すべての貢献者に心より感謝いたします これらの素晴らしい人々に感謝します:
素晴らしい皆様に感謝します: [![Contributors](https://contrib.rocks/image?repo=openlistteam/openlist)](https://github.com/OpenListTeam/OpenList/graphs/contributors)
[![Contributors](https://contrib.rocks/image?repo=OpenListTeam/OpenList)](https://github.com/OpenListTeam/OpenList/graphs/contributors) ## ライセンス
`OpenList`」は AGPL-3.0 ライセンスの下で公開されているオープンソースソフトウェアです。
## 免責事項
- このプログラムはフリーでオープンソースのプロジェクトです。ネットワークディスク上でファイルを共有するように設計されており、golang のダウンロードや学習に便利です。利用にあたっては関連法規を遵守し、悪用しないようお願いします;
- このプログラムは、公式インターフェースの動作を破壊することなく、公式 sdk/インターフェースを呼び出すことで実装されています;
- このプログラムは、302リダイレクト/トラフィック転送のみを行い、いかなるユーザーデータも傍受、保存、改ざんしません;
- このプログラムを使用する前に、アカウントの禁止、ダウンロード速度の制限など、対応するリスクを理解し、負担する必要があります;
- 権利侵害がある場合は、[OpenListTeam](https://github.com/OpenListTeam) までご連絡ください。チームが迅速に対応いたします。
---
> [@GitHub](https://github.com/OpenListTeam) · [Telegram Group](https://t.me/OpenListTeam)


@@ -1,149 +0,0 @@
<div align="center">
<img style="width: 128px; height: 128px;" src="https://raw.githubusercontent.com/OpenListTeam/Logo/main/logo.svg" alt="logo" />
<p><em>OpenList is een veerkrachtige, langetermijn, door de gemeenschap geleide fork van AList — gebouwd om open source te beschermen tegen op vertrouwen gebaseerde aanvallen.</em></p>
<img src="https://goreportcard.com/badge/github.com/OpenListTeam/OpenList/v3" alt="latest version" />
<a href="https://github.com/OpenListTeam/OpenList/blob/main/LICENSE"><img src="https://img.shields.io/github/license/OpenListTeam/OpenList" alt="License" /></a>
<a href="https://github.com/OpenListTeam/OpenList/actions?query=workflow%3ABuild"><img src="https://img.shields.io/github/actions/workflow/status/OpenListTeam/OpenList/build.yml?branch=main" alt="Build status" /></a>
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/release/OpenListTeam/OpenList" alt="latest version" /></a>
<a href="https://github.com/OpenListTeam/OpenList/discussions"><img src="https://img.shields.io/github/discussions/OpenListTeam/OpenList?color=%23ED8936" alt="discussions" /></a>
<a href="https://github.com/OpenListTeam/OpenList/releases"><img src="https://img.shields.io/github/downloads/OpenListTeam/OpenList/total?color=%239F7AEA&logo=github" alt="Downloads" /></a>
</div>
---
- [English](./README.md) | [中文](./README_cn.md) | [日本語](./README_ja.md) | Dutch
- [Bijdragen](./CONTRIBUTING.md)
- [Gedragscode](./CODE_OF_CONDUCT.md)
- [Licentie](./LICENSE)
## Disclaimer
OpenList is een open-source project dat onafhankelijk wordt onderhouden door het OpenList Team, volgend op de AGPL-3.0 licentie en toegewijd aan het behouden van volledige code openheid en transparantie van wijzigingen.
We hebben gemerkt dat er in de gemeenschap enkele derde partij projecten zijn verschenen met namen vergelijkbaar met dit project, zoals OpenListApp/OpenListApp, evenals enkele betaalde eigendomssoftware die dezelfde of soortgelijke naamgeving gebruikt. Om verwarring bij gebruikers te voorkomen, verklaren we hierbij:
- OpenList heeft geen officiële associatie met enige derde partij afgeleide projecten.
- Alle software, code en diensten van dit project worden onderhouden door het OpenList Team en zijn gratis beschikbaar op GitHub.
- Projectdocumentatie en API diensten zijn voornamelijk afhankelijk van liefdadigheidsbronnen verstrekt door Cloudflare. Er zijn momenteel geen betaalplannen of commerciële implementaties, en het gebruik van bestaande functies brengt geen kosten met zich mee.
We respecteren de rechten van de gemeenschap voor vrij gebruik en afgeleide ontwikkeling, maar we roepen downstream projecten ook ten zeerste op:
- Mogen niet de "OpenList" naam gebruiken voor namaakpromotie of commercieel gewin;
- Mogen OpenList-gebaseerde code niet distribueren op een closed-source manier of AGPL licentievoorwaarden schenden.
Om een gezonde ecosysteemontwikkeling beter te onderhouden, bevelen we aan:
- Duidelijk de projectbron aangeven en passende open-source licenties kiezen in overeenstemming met de open-source geest;
- Bij commercieel gebruik, vermijd het gebruik van "OpenList" of enige verwarrende naamgeving als projectnaam;
- Als u materialen onder OpenListTeam/Logo moet gebruiken, kunt u deze wijzigen en gebruiken onder naleving van de overeenkomst.
Dank u voor uw ondersteuning en begrip
## Functies
- [x] Meerdere opslagmogelijkheden
- [x] Lokale opslag
- [x] [Aliyundrive](https://www.alipan.com)
- [x] OneDrive / Sharepoint ([Global](https://www.microsoft.com/en-us/microsoft-365/onedrive/online-cloud-storage), [CN](https://portal.partner.microsoftonline.cn), DE, US)
- [x] [189cloud](https://cloud.189.cn) (Persoonlijk, Familie)
- [x] [GoogleDrive](https://drive.google.com)
- [x] [123pan](https://www.123pan.com)
- [x] [FTP / SFTP](https://en.wikipedia.org/wiki/File_Transfer_Protocol)
- [x] [PikPak](https://www.mypikpak.com)
- [x] [S3](https://aws.amazon.com/s3)
- [x] [Seafile](https://seafile.com)
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] [WebDAV](https://en.wikipedia.org/wiki/WebDAV)
- [x] Teambition([China](https://www.teambition.com), [Internationaal](https://us.teambition.com))
- [x] [Mediatrack](https://www.mediatrack.cn)
- [x] [139yun](https://yun.139.com) (Persoonlijk, Familie, Groep)
- [x] [YandexDisk](https://disk.yandex.com)
- [x] [BaiduNetdisk](http://pan.baidu.com)
- [x] [Terabox](https://www.terabox.com/main)
- [x] [UC](https://drive.uc.cn)
- [x] [Quark](https://pan.quark.cn)
- [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com)
- [x] [ILanzou](https://www.ilanzou.com)
- [x] [Aliyundrive share](https://www.alipan.com)
- [x] [Google photo](https://photos.google.com)
- [x] [Mega.nz](https://mega.nz)
- [x] [Baidu photo](https://photo.baidu.com)
- [x] [SMB](https://en.wikipedia.org/wiki/Server_Message_Block)
- [x] [115](https://115.com)
- [x] [Cloudreve](https://cloudreve.org)
- [x] [Dropbox](https://www.dropbox.com)
- [x] [FeijiPan](https://www.feijipan.com)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] Eenvoudig te implementeren en direct te gebruiken
- [x] Bestandsvoorbeeld (PDF, markdown, code, platte tekst, ...)
- [x] Afbeeldingsvoorbeeld in galerijweergave
- [x] Video- en audiovoorbeeld, ondersteuning voor songteksten en ondertitels
- [x] Office-documenten voorbeeld (docx, pptx, xlsx, ...)
- [x] `README.md` voorbeeldweergave
- [x] Permalink kopiëren en direct downloaden van bestanden
- [x] Donkere modus
- [x] I18n
- [x] Beschermde routes (wachtwoordbeveiliging en authenticatie)
- [x] WebDAV
- [x] Docker implementatie
- [x] Cloudflare Workers proxy
- [x] Bestands-/map-pakket download
- [x] Webupload (bezoekers kunnen uploaden toestaan), verwijderen, map aanmaken, hernoemen, verplaatsen en kopiëren
- [x] Offline download
- [x] Bestanden kopiëren tussen twee opslaglocaties
- [x] Multi-thread downloadversnelling voor enkelvoudige download/stream
## Documentatie
- 📘 [Global Site](https://doc.oplist.org)
- 📚 [Backup Site](https://doc.openlist.team)
- 🌏 [CN Site](https://doc.oplist.org.cn)
## Demo
N.v.t. (wordt opnieuw opgebouwd)
## Discussie
Stel algemene vragen in [*Discussions*](https://github.com/OpenListTeam/OpenList/discussions), ***Issues* zijn alleen voor bugmeldingen en feature requests.**
## Licentie
`OpenList` is open-source software onder de [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) licentie.
## Disclaimer
- Dit project is gratis en open-source software, ontworpen om het delen van bestanden via netdisks te vergemakkelijken, voornamelijk bedoeld ter ondersteuning van het downloaden en leren van de programmeertaal Go.
- Houd u bij het gebruik van deze software aan alle toepasselijke wetten en voorschriften. Elk misbruik is ten strengste verboden.
- De software is gebaseerd op officiële SDK's of API's zonder enige wijziging, verstoring of beïnvloeding van hun gedrag.
- Het voert alleen HTTP 302-omleidingen of verkeersdoorsturing uit en onderschept, slaat of wijzigt geen gebruikersgegevens.
- Dit project is niet gelieerd aan enig officieel platform of dienstverlener.
- De software wordt geleverd "zoals deze is", zonder enige vorm van garantie, expliciet of impliciet, inclusief maar niet beperkt tot garanties van verkoopbaarheid of geschiktheid voor een bepaald doel.
- De beheerders zijn niet aansprakelijk voor enige directe of indirecte schade die voortvloeit uit het gebruik van of het onvermogen om deze software te gebruiken.
- U bent zelf verantwoordelijk voor alle risico's die gepaard gaan met het gebruik van deze software, inclusief maar niet beperkt tot accountblokkades of downloadbeperkingen.
- Dit project valt onder de [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.txt) licentie. Zie het [LICENSE](./LICENSE) bestand voor details.
## Contact
- [@GitHub](https://github.com/OpenListTeam)
- [Telegram Groep](https://t.me/OpenListTeam)
- [Telegram Kanaal](https://t.me/OpenListOfficial)
## Bijdragers
Wij danken de auteur [Xhofe](https://github.com/Xhofe) van het originele project [AlistGo/alist](https://github.com/AlistGo/alist) en alle andere bijdragers.
Dank aan deze geweldige mensen:
[![Contributors](https://contrib.rocks/image?repo=OpenListTeam/OpenList)](https://github.com/OpenListTeam/OpenList/graphs/contributors)

build.sh

@@ -4,73 +4,58 @@ builtAt="$(date +'%F %T %z')"
gitAuthor="The OpenList Projects Contributors <noreply@openlist.team>" gitAuthor="The OpenList Projects Contributors <noreply@openlist.team>"
gitCommit=$(git log --pretty=format:"%h" -1) gitCommit=$(git log --pretty=format:"%h" -1)
# Set frontend repository, default to OpenListTeam/OpenList-Frontend githubAuthHeader=""
frontendRepo="${FRONTEND_REPO:-OpenListTeam/OpenList-Frontend}" githubAuthValue=""
githubAuthArgs=""
if [ -n "$GITHUB_TOKEN" ]; then if [ -n "$GITHUB_TOKEN" ]; then
githubAuthArgs="--header \"Authorization: Bearer $GITHUB_TOKEN\"" githubAuthHeader="--header"
fi githubAuthValue="Authorization: Bearer $GITHUB_TOKEN"
# Check for lite parameter
useLite=false
if [[ "$*" == *"lite"* ]]; then
useLite=true
fi fi
if [ "$1" = "dev" ]; then if [ "$1" = "dev" ]; then
version="dev" version="dev"
webVersion="rolling" webVersion="dev"
elif [ "$1" = "beta" ]; then elif [ "$1" = "beta" ]; then
version="beta" version="beta"
webVersion="rolling" webVersion="dev"
else else
git tag -d beta || true git tag -d beta || true
# Always true if there's no tag # Always true if there's no tag
version=$(git describe --abbrev=0 --tags 2>/dev/null || echo "v0.0.0") version=$(git describe --abbrev=0 --tags 2>/dev/null || echo "v0.0.0")
webVersion=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/$frontendRepo/releases/latest\"" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g') webVersion=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
fi fi
echo "backend version: $version" echo "backend version: $version"
echo "frontend version: $webVersion" echo "frontend version: $webVersion"
if [ "$useLite" = true ]; then
echo "using lite frontend"
else
echo "using standard frontend"
fi
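For orientation, the argument handling visible above (newer script on the left) supports invocations along these lines; the token is a placeholder and the dispatch to the Build* functions appears further down the script.

```shell
GITHUB_TOKEN=ghp_placeholder bash build.sh dev                     # dev version, rolling frontend
bash build.sh beta                                                 # beta version, rolling frontend
bash build.sh release                                              # latest tag, standard frontend
bash build.sh release lite                                         # latest tag, lite frontend bundle
FRONTEND_REPO=your-fork/OpenList-Frontend bash build.sh release    # point at a frontend fork
```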
ldflags="\ ldflags="\
-w -s \ -w -s \
-X 'github.com/OpenListTeam/OpenList/v4/internal/conf.BuiltAt=$builtAt' \ -X 'github.com/OpenListTeam/OpenList/internal/conf.BuiltAt=$builtAt' \
-X 'github.com/OpenListTeam/OpenList/v4/internal/conf.GitAuthor=$gitAuthor' \ -X 'github.com/OpenListTeam/OpenList/internal/conf.GitAuthor=$gitAuthor' \
-X 'github.com/OpenListTeam/OpenList/v4/internal/conf.GitCommit=$gitCommit' \ -X 'github.com/OpenListTeam/OpenList/internal/conf.GitCommit=$gitCommit' \
-X 'github.com/OpenListTeam/OpenList/v4/internal/conf.Version=$version' \ -X 'github.com/OpenListTeam/OpenList/internal/conf.Version=$version' \
-X 'github.com/OpenListTeam/OpenList/v4/internal/conf.WebVersion=$webVersion' \ -X 'github.com/OpenListTeam/OpenList/internal/conf.WebVersion=$webVersion' \
" "
FetchWebRolling() { FetchWebDev() {
pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/$frontendRepo/releases/tags/rolling\"") pre_release_tag=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases | jq -r 'map(select(.prerelease)) | first | .tag_name')
if [ -z "$pre_release_tag" ] || [ "$pre_release_tag" == "null" ]; then
# fall back to latest release
pre_release_json=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest")
else
pre_release_json=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/tags/$pre_release_tag")
fi
pre_release_assets=$(echo "$pre_release_json" | jq -r '.assets[].browser_download_url') pre_release_assets=$(echo "$pre_release_json" | jq -r '.assets[].browser_download_url')
pre_release_tar_url=$(echo "$pre_release_assets" | grep "openlist-frontend-dist" | grep "\.tar\.gz$")
# There is no lite for rolling curl -fsSL "$pre_release_tar_url" -o web-dist-dev.tar.gz
pre_release_tar_url=$(echo "$pre_release_assets" | grep "openlist-frontend-dist" | grep -v "lite" | grep "\.tar\.gz$")
curl -fsSL "$pre_release_tar_url" -o dist.tar.gz
rm -rf public/dist && mkdir -p public/dist rm -rf public/dist && mkdir -p public/dist
tar -zxvf dist.tar.gz -C public/dist tar -zxvf web-dist-dev.tar.gz -C public/dist
rm -rf dist.tar.gz rm -rf web-dist-dev.tar.gz
} }
FetchWebRelease() { FetchWebRelease() {
release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/$frontendRepo/releases/latest\"") release_json=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest")
release_assets=$(echo "$release_json" | jq -r '.assets[].browser_download_url') release_assets=$(echo "$release_json" | jq -r '.assets[].browser_download_url')
release_tar_url=$(echo "$release_assets" | grep "openlist-frontend-dist" | grep "\.tar\.gz$")
if [ "$useLite" = true ]; then
release_tar_url=$(echo "$release_assets" | grep "openlist-frontend-dist-lite" | grep "\.tar\.gz$")
else
release_tar_url=$(echo "$release_assets" | grep "openlist-frontend-dist" | grep -v "lite" | grep "\.tar\.gz$")
fi
curl -fsSL "$release_tar_url" -o dist.tar.gz curl -fsSL "$release_tar_url" -o dist.tar.gz
rm -rf public/dist && mkdir -p public/dist rm -rf public/dist && mkdir -p public/dist
tar -zxvf dist.tar.gz -C public/dist tar -zxvf dist.tar.gz -C public/dist
@@ -89,45 +74,6 @@ BuildWinArm64() {
go build -o "$1" -ldflags="$ldflags" -tags=jsoniter . go build -o "$1" -ldflags="$ldflags" -tags=jsoniter .
} }
BuildWin7() {
# Setup Win7 Go compiler (patched version that supports Windows 7)
go_version=$(go version | grep -o 'go[0-9]\+\.[0-9]\+\.[0-9]\+' | sed 's/go//')
echo "Detected Go version: $go_version"
curl -fsSL --retry 3 -o go-win7.zip -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/XTLS/go-win7/releases/download/patched-${go_version}/go-for-win7-linux-amd64.zip"
rm -rf go-win7
unzip go-win7.zip -d go-win7
rm go-win7.zip
# Set permissions for all wrapper files
chmod +x ./wrapper/zcc-win7
chmod +x ./wrapper/zcxx-win7
chmod +x ./wrapper/zcc-win7-386
chmod +x ./wrapper/zcxx-win7-386
# Build for both 386 and amd64 architectures
for arch in "386" "amd64"; do
echo "building for windows7-${arch}"
export GOOS=windows
export GOARCH=${arch}
export CGO_ENABLED=1
# Use architecture-specific wrapper files
if [ "$arch" = "386" ]; then
export CC=$(pwd)/wrapper/zcc-win7-386
export CXX=$(pwd)/wrapper/zcxx-win7-386
else
export CC=$(pwd)/wrapper/zcc-win7
export CXX=$(pwd)/wrapper/zcxx-win7
fi
# Use the patched Go compiler for Win7 compatibility
$(pwd)/go-win7/bin/go build -o "${1}-${arch}.exe" -ldflags="$ldflags" -tags=jsoniter .
done
}
BuildDev() { BuildDev() {
rm -rf .git/ rm -rf .git/
mkdir -p "dist" mkdir -p "dist"
@@ -154,8 +100,8 @@ BuildDev() {
xgo -targets=windows/amd64,darwin/amd64,darwin/arm64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter . xgo -targets=windows/amd64,darwin/amd64,darwin/arm64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
mv "$appName"-* dist mv "$appName"-* dist
cd dist cd dist
# cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe
# upx -9 ./"$appName"-windows-amd64-upx.exe upx -9 ./"$appName"-windows-amd64-upx.exe
find . -type f -print0 | xargs -0 md5sum >md5.txt find . -type f -print0 | xargs -0 md5sum >md5.txt
cat md5.txt cat md5.txt
} }
@@ -167,7 +113,7 @@ BuildDocker() {
PrepareBuildDockerMusl() { PrepareBuildDockerMusl() {
mkdir -p build/musl-libs mkdir -p build/musl-libs
BASE="https://github.com/OpenListTeam/musl-compilers/releases/latest/download/" BASE="https://github.com/OpenListTeam/musl-compilers/releases/latest/download/"
FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross i486-linux-musl-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross riscv64-linux-musl-cross powerpc64le-linux-musl-cross loongarch64-linux-musl-cross) ## Disable s390x-linux-musl-cross builds FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross i486-linux-musl-cross s390x-linux-musl-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross riscv64-linux-musl-cross powerpc64le-linux-musl-cross)
for i in "${FILES[@]}"; do for i in "${FILES[@]}"; do
url="${BASE}${i}.tgz" url="${BASE}${i}.tgz"
lib_tgz="build/${i}.tgz" lib_tgz="build/${i}.tgz"
@@ -186,8 +132,8 @@ BuildDockerMultiplatform() {
docker_lflags="--extldflags '-static -fpic' $ldflags"
export CGO_ENABLED=1
OS_ARCHES=(linux-amd64 linux-arm64 linux-386 linux-riscv64 linux-ppc64le linux-loong64) ## Disable linux-s390x builds OS_ARCHES=(linux-amd64 linux-arm64 linux-386 linux-s390x linux-riscv64 linux-ppc64le)
CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc i486-linux-musl-gcc riscv64-linux-musl-gcc powerpc64le-linux-musl-gcc loongarch64-linux-musl-gcc) ## Disable s390x-linux-musl-gcc builds CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc i486-linux-musl-gcc s390x-linux-musl-gcc riscv64-linux-musl-gcc powerpc64le-linux-musl-gcc)
for i in "${!OS_ARCHES[@]}"; do for i in "${!OS_ARCHES[@]}"; do
os_arch=${OS_ARCHES[$i]} os_arch=${OS_ARCHES[$i]}
cgo_cc=${CGO_ARGS[$i]} cgo_cc=${CGO_ARGS[$i]}
@@ -219,171 +165,12 @@ BuildRelease() {
rm -rf .git/
mkdir -p "build"
BuildWinArm64 ./build/"$appName"-windows-arm64.exe
BuildWin7 ./build/"$appName"-windows7
xgo -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
# why? Because some target platforms seem to have issues with upx compression
# upx -9 ./"$appName"-linux-amd64 upx -9 ./"$appName"-linux-amd64
# cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe cp ./"$appName"-windows-amd64.exe ./"$appName"-windows-amd64-upx.exe
# upx -9 ./"$appName"-windows-amd64-upx.exe upx -9 ./"$appName"-windows-amd64-upx.exe
mv "$appName"-* build mv "$appName"-* build
# Build LoongArch with glibc (both old world abi1.0 and new world abi2.0)
# Separate from musl builds to avoid cache conflicts
BuildLoongGLIBC ./build/$appName-linux-loong64-abi1.0 abi1.0
BuildLoongGLIBC ./build/$appName-linux-loong64 abi2.0
}
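# Sketch of the expected LoongArch artifacts from the two calls above:
#   build/$appName-linux-loong64-abi1.0   (old-world glibc, ABI1.0)
#   build/$appName-linux-loong64          (new-world glibc, ABI2.0)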
BuildLoongGLIBC() {
local target_abi="$2"
local output_file="$1"
local oldWorldGoVersion="1.24.3"
if [ "$target_abi" = "abi1.0" ]; then
echo building for linux-loong64-abi1.0
else
echo building for linux-loong64-abi2.0
target_abi="abi2.0" # Default to abi2.0 if not specified
fi
# Note: No longer need global cache cleanup since ABI1.0 uses isolated cache directory
echo "Using optimized cache strategy: ABI1.0 has isolated cache, ABI2.0 uses standard cache"
if [ "$target_abi" = "abi1.0" ]; then
# Setup abi1.0 toolchain and patched Go compiler similar to cgo-action implementation
echo "Setting up Loongson old-world ABI1.0 toolchain and patched Go compiler..."
# Download and setup patched Go compiler for old-world
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
-o go-loong64-abi1.0.tar.gz; then
echo "Error: Failed to download patched Go compiler for old-world ABI1.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
-o go-loong64-abi1.0.tar.gz || true
fi
return 1
fi
rm -rf go-loong64-abi1.0
mkdir go-loong64-abi1.0
if ! tar -xzf go-loong64-abi1.0.tar.gz -C go-loong64-abi1.0 --strip-components=1; then
echo "Error: Failed to extract patched Go compiler"
return 1
fi
rm go-loong64-abi1.0.tar.gz
# Download and setup GCC toolchain for old-world
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/loongson-gnu-toolchain-8.3.novec-x86_64-loongarch64-linux-gnu-rc1.1.tar.xz" \
-o gcc8-loong64-abi1.0.tar.xz; then
echo "Error: Failed to download GCC toolchain for old-world ABI1.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/loongson-gnu-toolchain-8.3.novec-x86_64-loongarch64-linux-gnu-rc1.1.tar.xz" \
-o gcc8-loong64-abi1.0.tar.xz || true
fi
return 1
fi
rm -rf gcc8-loong64-abi1.0
mkdir gcc8-loong64-abi1.0
if ! tar -Jxf gcc8-loong64-abi1.0.tar.xz -C gcc8-loong64-abi1.0 --strip-components=1; then
echo "Error: Failed to extract GCC toolchain"
return 1
fi
rm gcc8-loong64-abi1.0.tar.xz
# Setup separate cache directory for ABI1.0 to avoid cache pollution
abi1_cache_dir="$(pwd)/go-loong64-abi1.0-cache"
mkdir -p "$abi1_cache_dir"
echo "Using separate cache directory for ABI1.0: $abi1_cache_dir"
# Use patched Go compiler for old-world build (critical for ABI1.0 compatibility)
echo "Building with patched Go compiler for old-world ABI1.0..."
echo "Using isolated cache directory: $abi1_cache_dir"
# Use env command to set environment variables locally without affecting global environment
if ! env GOOS=linux GOARCH=loong64 \
CC="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc" \
CXX="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-g++" \
CGO_ENABLED=1 \
GOCACHE="$abi1_cache_dir" \
$(pwd)/go-loong64-abi1.0/bin/go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed with patched Go compiler"
echo "Attempting retry with cache cleanup..."
env GOCACHE="$abi1_cache_dir" $(pwd)/go-loong64-abi1.0/bin/go clean -cache
if ! env GOOS=linux GOARCH=loong64 \
CC="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc" \
CXX="$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-g++" \
CGO_ENABLED=1 \
GOCACHE="$abi1_cache_dir" \
$(pwd)/go-loong64-abi1.0/bin/go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed again after cache cleanup"
echo "Build environment details:"
echo "GOOS=linux"
echo "GOARCH=loong64"
echo "CC=$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc"
echo "CXX=$(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-g++"
echo "CGO_ENABLED=1"
echo "GOCACHE=$abi1_cache_dir"
echo "Go version: $($(pwd)/go-loong64-abi1.0/bin/go version)"
echo "GCC version: $($(pwd)/gcc8-loong64-abi1.0/bin/loongarch64-linux-gnu-gcc --version | head -1)"
return 1
fi
fi
else
# Setup abi2.0 toolchain for new world glibc build
echo "Setting up new-world ABI2.0 toolchain..."
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/cross-tools/releases/download/20250507/x86_64-cross-tools-loongarch64-unknown-linux-gnu-legacy.tar.xz" \
-o gcc12-loong64-abi2.0.tar.xz; then
echo "Error: Failed to download GCC toolchain for new-world ABI2.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/cross-tools/releases/download/20250507/x86_64-cross-tools-loongarch64-unknown-linux-gnu-legacy.tar.xz" \
-o gcc12-loong64-abi2.0.tar.xz || true
fi
return 1
fi
rm -rf gcc12-loong64-abi2.0
mkdir gcc12-loong64-abi2.0
if ! tar -Jxf gcc12-loong64-abi2.0.tar.xz -C gcc12-loong64-abi2.0 --strip-components=1; then
echo "Error: Failed to extract GCC toolchain"
return 1
fi
rm gcc12-loong64-abi2.0.tar.xz
export GOOS=linux
export GOARCH=loong64
export CC=$(pwd)/gcc12-loong64-abi2.0/bin/loongarch64-unknown-linux-gnu-gcc
export CXX=$(pwd)/gcc12-loong64-abi2.0/bin/loongarch64-unknown-linux-gnu-g++
export CGO_ENABLED=1
# Use standard Go compiler for new-world build
echo "Building with standard Go compiler for new-world ABI2.0..."
if ! go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed with standard Go compiler"
echo "Attempting retry with cache cleanup..."
go clean -cache
if ! go build -a -o "$output_file" -ldflags="$ldflags" -tags=jsoniter .; then
echo "Error: Build failed again after cache cleanup"
echo "Build environment details:"
echo "GOOS=$GOOS"
echo "GOARCH=$GOARCH"
echo "CC=$CC"
echo "CXX=$CXX"
echo "CGO_ENABLED=$CGO_ENABLED"
echo "Go version: $(go version)"
echo "GCC version: $($CC --version | head -1)"
return 1
fi
fi
fi
}
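# Generic shape of the cache-isolation trick used above (illustrative only; CROSS_CC, CROSS_CXX,
# TOOLCHAIN, GO_BIN and OUT are placeholders, not variables defined in this script). Giving each
# toolchain its own GOCACHE keeps objects built by the patched ABI1.0 compiler out of the
# standard Go build cache, and vice versa:
#   env GOOS=linux GOARCH=loong64 CGO_ENABLED=1 \
#       CC="$CROSS_CC" CXX="$CROSS_CXX" GOCACHE="$(pwd)/cache-$TOOLCHAIN" \
#       "$GO_BIN" build -a -o "$OUT" -ldflags="$ldflags" -tags=jsoniter .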
BuildReleaseLinuxMusl() {
@@ -441,7 +228,6 @@ BuildReleaseLinuxMuslArm() {
done
}
BuildReleaseAndroid() {
rm -rf .git/
mkdir -p "build"
@@ -468,10 +254,9 @@ BuildReleaseFreeBSD() {
mkdir -p "build/freebsd"
# Get latest FreeBSD 14.x release version from GitHub
freebsd_version=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/freebsd/freebsd-src/tags\"" | \ freebsd_version=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue "https://api.github.com/repos/freebsd/freebsd-src/tags" | \
jq -r '.[].name' | \
grep '^release/14\.' | \
grep -v -- '-p[0-9]*$' | \
sort -V | \
tail -1 | \
sed 's/release\///' | \
@@ -510,172 +295,82 @@ MakeRelease() {
rm -rv compress
fi
mkdir compress
# Add -lite suffix if useLite is true
liteSuffix=""
if [ "$useLite" = true ]; then
liteSuffix="-lite"
fi
for i in $(find . -type f -name "$appName-linux-*"); do for i in $(find . -type f -name "$appName-linux-*"); do
cp "$i" "$appName" cp "$i" "$appName"
tar -czvf compress/"$i$liteSuffix".tar.gz "$appName" tar -czvf compress/"$i".tar.gz "$appName"
rm -f "$appName"
done
for i in $(find . -type f -name "$appName-android-*"); do
cp "$i" "$appName"
tar -czvf compress/"$i$liteSuffix".tar.gz "$appName" tar -czvf compress/"$i".tar.gz "$appName"
rm -f "$appName"
done
for i in $(find . -type f -name "$appName-darwin-*"); do
cp "$i" "$appName"
tar -czvf compress/"$i$liteSuffix".tar.gz "$appName" tar -czvf compress/"$i".tar.gz "$appName"
rm -f "$appName"
done
for i in $(find . -type f -name "$appName-freebsd-*"); do
cp "$i" "$appName"
tar -czvf compress/"$i$liteSuffix".tar.gz "$appName" tar -czvf compress/"$i".tar.gz "$appName"
rm -f "$appName"
done
for i in $(find . -type f \( -name "$appName-windows-*" -o -name "$appName-windows7-*" \)); do for i in $(find . -type f -name "$appName-windows-*"); do
cp "$i" "$appName".exe cp "$i" "$appName".exe
zip compress/$(echo $i | sed 's/\.[^.]*$//')$liteSuffix.zip "$appName".exe zip compress/$(echo $i | sed 's/\.[^.]*$//').zip "$appName".exe
rm -f "$appName".exe
done
cd compress
find . -type f -print0 | xargs -0 md5sum >"$1"
# Handle MD5 filename - add -lite suffix only if not already present cat "$1"
md5FileName="$1"
if [ "$useLite" = true ] && [[ "$1" != *"-lite.txt" ]]; then
md5FileName=$(echo "$1" | sed 's/\.txt$/-lite.txt/')
fi
find . -type f -print0 | xargs -0 md5sum >"$md5FileName"
cat "$md5FileName"
cd ../..
}
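# Sketch of the resulting names for one artifact, assuming appName=openlist and a linux-amd64
# build, following the suffix handling above:
#   useLite=false -> compress/openlist-linux-amd64.tar.gz       and md5.txt
#   useLite=true  -> compress/openlist-linux-amd64-lite.tar.gz  and md5-lite.txt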
# Parse parameters to handle lite parameter position flexibility if [ "$1" = "dev" ]; then
buildType="" FetchWebDev
dockerType="" if [ "$2" = "docker" ]; then
otherParam=""
for arg in "$@"; do
case $arg in
dev|beta|release|zip|prepare)
if [ -z "$buildType" ]; then
buildType="$arg"
fi
;;
docker|docker-multiplatform|linux_musl_arm|linux_musl|android|freebsd|web)
if [ -z "$dockerType" ]; then
dockerType="$arg"
fi
;;
lite)
# lite parameter is already handled above
;;
*)
if [ -z "$otherParam" ]; then
otherParam="$arg"
fi
;;
esac
done
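# With this loop the argument order no longer matters (useLite itself is detected earlier in
# the script, outside this excerpt). For example, the following invocations are equivalent:
#   ./build.sh release docker lite
#   ./build.sh lite docker release
# both resolve to buildType=release, dockerType=docker, useLite=true.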
if [ "$buildType" = "dev" ]; then
FetchWebRolling
if [ "$dockerType" = "docker" ]; then
BuildDocker BuildDocker
elif [ "$dockerType" = "docker-multiplatform" ]; then elif [ "$2" = "docker-multiplatform" ]; then
BuildDockerMultiplatform BuildDockerMultiplatform
elif [ "$dockerType" = "web" ]; then elif [ "$2" = "web" ]; then
echo "web only" echo "web only"
else else
BuildDev BuildDev
fi fi
elif [ "$buildType" = "release" -o "$buildType" = "beta" ]; then elif [ "$1" = "release" -o "$1" = "beta" ]; then
if [ "$buildType" = "beta" ]; then if [ "$1" = "beta" ]; then
FetchWebRolling FetchWebDev
else else
FetchWebRelease FetchWebRelease
fi fi
if [ "$dockerType" = "docker" ]; then if [ "$2" = "docker" ]; then
BuildDocker BuildDocker
elif [ "$dockerType" = "docker-multiplatform" ]; then elif [ "$2" = "docker-multiplatform" ]; then
BuildDockerMultiplatform BuildDockerMultiplatform
elif [ "$dockerType" = "linux_musl_arm" ]; then elif [ "$2" = "linux_musl_arm" ]; then
BuildReleaseLinuxMuslArm BuildReleaseLinuxMuslArm
if [ "$useLite" = true ]; then MakeRelease "md5-linux-musl-arm.txt"
MakeRelease "md5-linux-musl-arm-lite.txt" elif [ "$2" = "linux_musl" ]; then
else
MakeRelease "md5-linux-musl-arm.txt"
fi
elif [ "$dockerType" = "linux_musl" ]; then
BuildReleaseLinuxMusl BuildReleaseLinuxMusl
if [ "$useLite" = true ]; then MakeRelease "md5-linux-musl.txt"
MakeRelease "md5-linux-musl-lite.txt" elif [ "$2" = "android" ]; then
else
MakeRelease "md5-linux-musl.txt"
fi
elif [ "$dockerType" = "android" ]; then
BuildReleaseAndroid BuildReleaseAndroid
if [ "$useLite" = true ]; then MakeRelease "md5-android.txt"
MakeRelease "md5-android-lite.txt" elif [ "$2" = "freebsd" ]; then
else
MakeRelease "md5-android.txt"
fi
elif [ "$dockerType" = "freebsd" ]; then
BuildReleaseFreeBSD BuildReleaseFreeBSD
if [ "$useLite" = true ]; then MakeRelease "md5-freebsd.txt"
MakeRelease "md5-freebsd-lite.txt" elif [ "$2" = "web" ]; then
else
MakeRelease "md5-freebsd.txt"
fi
elif [ "$dockerType" = "web" ]; then
echo "web only" echo "web only"
else else
BuildRelease BuildRelease
if [ "$useLite" = true ]; then MakeRelease "md5.txt"
MakeRelease "md5-lite.txt"
else
MakeRelease "md5.txt"
fi
fi fi
elif [ "$buildType" = "prepare" ]; then elif [ "$1" = "prepare" ]; then
if [ "$dockerType" = "docker-multiplatform" ]; then if [ "$2" = "docker-multiplatform" ]; then
PrepareBuildDockerMusl PrepareBuildDockerMusl
fi fi
elif [ "$buildType" = "zip" ]; then elif [ "$1" = "zip" ]; then
if [ -n "$otherParam" ]; then MakeRelease "$2".txt
if [ "$useLite" = true ]; then
MakeRelease "$otherParam-lite.txt"
else
MakeRelease "$otherParam.txt"
fi
elif [ -n "$dockerType" ]; then
if [ "$useLite" = true ]; then
MakeRelease "$dockerType-lite.txt"
else
MakeRelease "$dockerType.txt"
fi
else
if [ "$useLite" = true ]; then
MakeRelease "md5-lite.txt"
else
MakeRelease "md5.txt"
fi
fi
else
echo -e "Parameter error"
echo -e "Usage: $0 {dev|beta|release|zip|prepare} [docker|docker-multiplatform|linux_musl_arm|linux_musl|android|freebsd|web] [lite] [other_params]"
echo -e "Examples:"
echo -e " $0 dev"
echo -e " $0 dev lite"
echo -e " $0 dev docker"
echo -e " $0 dev docker lite"
echo -e " $0 release"
echo -e " $0 release lite"
echo -e " $0 release docker lite"
echo -e " $0 release linux_musl"
fi
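# Examples of how the zip branch above resolves the md5 file name (hypothetical invocations):
#   ./build.sh zip md5-custom       -> MakeRelease "md5-custom.txt"
#   ./build.sh zip md5-custom lite  -> MakeRelease "md5-custom-lite.txt"
#   ./build.sh zip linux_musl       -> MakeRelease "linux_musl.txt"
#   ./build.sh zip                  -> MakeRelease "md5.txt"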

View File

@ -4,13 +4,11 @@ Copyright © 2022 NAME HERE <EMAIL ADDRESS>
package cmd package cmd
import ( import (
"fmt" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/setting"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/internal/setting" "github.com/OpenListTeam/OpenList/pkg/utils/random"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@ -26,11 +24,10 @@ var AdminCmd = &cobra.Command{
if err != nil { if err != nil {
utils.Log.Errorf("failed get admin user: %+v", err) utils.Log.Errorf("failed get admin user: %+v", err)
} else { } else {
utils.Log.Infof("get admin user from CLI") utils.Log.Infof("Admin user's username: %s", admin.Username)
fmt.Println("Admin user's username:", admin.Username) utils.Log.Infof("The password can only be output at the first startup, and then stored as a hash value, which cannot be reversed")
fmt.Println("The password can only be output at the first startup, and then stored as a hash value, which cannot be reversed") utils.Log.Infof("You can reset the password with a random string by running [openlist admin random]")
fmt.Println("You can reset the password with a random string by running [openlist admin random]") utils.Log.Infof("You can also set a new password by running [openlist admin set NEW_PASSWORD]")
fmt.Println("You can also set a new password by running [openlist admin set NEW_PASSWORD]")
} }
}, },
} }
@ -39,7 +36,6 @@ var RandomPasswordCmd = &cobra.Command{
Use: "random", Use: "random",
Short: "Reset admin user's password to a random string", Short: "Reset admin user's password to a random string",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
utils.Log.Infof("reset admin user's password to a random string from CLI")
newPwd := random.String(8) newPwd := random.String(8)
setAdminPassword(newPwd) setAdminPassword(newPwd)
}, },
@ -48,12 +44,12 @@ var RandomPasswordCmd = &cobra.Command{
var SetPasswordCmd = &cobra.Command{ var SetPasswordCmd = &cobra.Command{
Use: "set", Use: "set",
Short: "Set admin user's password", Short: "Set admin user's password",
RunE: func(cmd *cobra.Command, args []string) error { Run: func(cmd *cobra.Command, args []string) {
if len(args) == 0 { if len(args) == 0 {
return fmt.Errorf("Please enter the new password") utils.Log.Errorf("Please enter the new password")
return
} }
setAdminPassword(args[0]) setAdminPassword(args[0])
return nil
}, },
} }
@ -64,8 +60,7 @@ var ShowTokenCmd = &cobra.Command{
Init() Init()
defer Release() defer Release()
token := setting.GetStr(conf.Token) token := setting.GetStr(conf.Token)
utils.Log.Infof("show admin token from CLI") utils.Log.Infof("Admin token: %s", token)
fmt.Println("Admin token:", token)
}, },
} }
@ -82,10 +77,9 @@ func setAdminPassword(pwd string) {
utils.Log.Errorf("failed update admin user: %+v", err) utils.Log.Errorf("failed update admin user: %+v", err)
return return
} }
utils.Log.Infof("admin user has been update from CLI") utils.Log.Infof("admin user has been updated:")
fmt.Println("admin user has been updated:") utils.Log.Infof("username: %s", admin.Username)
fmt.Println("username:", admin.Username) utils.Log.Infof("password: %s", pwd)
fmt.Println("password:", pwd)
DelAdminCacheOnline() DelAdminCacheOnline()
} }

View File

@ -4,10 +4,8 @@ Copyright © 2022 NAME HERE <EMAIL ADDRESS>
package cmd package cmd
import ( import (
"fmt" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@ -26,8 +24,7 @@ var Cancel2FACmd = &cobra.Command{
if err != nil { if err != nil {
utils.Log.Errorf("failed to cancel 2FA: %+v", err) utils.Log.Errorf("failed to cancel 2FA: %+v", err)
} else { } else {
utils.Log.Infof("2FA is canceled from CLI") utils.Log.Info("2FA canceled")
fmt.Println("2FA canceled")
DelAdminCacheOnline() DelAdminCacheOnline()
} }
} }

View File

@ -5,10 +5,10 @@ import (
"path/filepath" "path/filepath"
"strconv" "strconv"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap" "github.com/OpenListTeam/OpenList/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap/data" "github.com/OpenListTeam/OpenList/internal/bootstrap/data"
"github.com/OpenListTeam/OpenList/v4/internal/db" "github.com/OpenListTeam/OpenList/internal/db"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )

View File

@ -1,311 +0,0 @@
package cmd
import (
log "github.com/sirupsen/logrus"
"io"
"os"
"path"
"path/filepath"
"strings"
"github.com/spf13/cobra"
rcCrypt "github.com/rclone/rclone/backend/crypt"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/obscure"
)
// encryption and decryption command format for Crypt driver
type options struct {
op string //decrypt or encrypt
src string //source dir or file
dst string //out destination
pwd string //de/encrypt password
salt string
filenameEncryption string //reference drivers\crypt\meta.go Addtion
dirnameEncryption string
filenameEncode string
suffix string
}
var opt options
// CryptCmd represents the crypt command
var CryptCmd = &cobra.Command{
Use: "crypt",
Short: "Encrypt or decrypt local file or dir",
Example: `openlist crypt -s ./src/encrypt/ --op=de --pwd=123456 --salt=345678`,
Run: func(cmd *cobra.Command, args []string) {
opt.validate()
opt.cryptFileDir()
},
}
func init() {
RootCmd.AddCommand(CryptCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// versionCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
CryptCmd.Flags().StringVarP(&opt.src, "src", "s", "", "src file or dir to encrypt/decrypt")
CryptCmd.Flags().StringVarP(&opt.dst, "dst", "d", "", "dst dir to output,if not set,output to src dir")
CryptCmd.Flags().StringVar(&opt.op, "op", "", "de or en which stands for decrypt or encrypt")
CryptCmd.Flags().StringVar(&opt.pwd, "pwd", "", "password used to encrypt/decrypt,if not contain ___Obfuscated___ prefix,will be obfuscated before used")
CryptCmd.Flags().StringVar(&opt.salt, "salt", "", "salt used to encrypt/decrypt,if not contain ___Obfuscated___ prefix,will be obfuscated before used")
CryptCmd.Flags().StringVar(&opt.filenameEncryption, "filename-encrypt", "off", "filename encryption mode: off,standard,obfuscate")
CryptCmd.Flags().StringVar(&opt.dirnameEncryption, "dirname-encrypt", "false", "is dirname encryption enabled:true,false")
CryptCmd.Flags().StringVar(&opt.filenameEncode, "filename-encode", "base64", "filename encoding mode: base64,base32,base32768")
CryptCmd.Flags().StringVar(&opt.suffix, "suffix", ".bin", "suffix for encrypted file,default is .bin")
}
func (o *options) validate() {
if o.src == "" {
log.Fatal("src can not be empty")
}
if o.op != "encrypt" && o.op != "decrypt" && o.op != "en" && o.op != "de" {
log.Fatal("op must be encrypt or decrypt")
}
if o.filenameEncryption != "off" && o.filenameEncryption != "standard" && o.filenameEncryption != "obfuscate" {
log.Fatal("filename_encryption must be off,standard,obfuscate")
}
if o.filenameEncode != "base64" && o.filenameEncode != "base32" && o.filenameEncode != "base32768" {
log.Fatal("filename_encode must be base64,base32,base32768")
}
}
func (o *options) cryptFileDir() {
src, _ := filepath.Abs(o.src)
log.Infof("src abs is %v", src)
fileInfo, err := os.Stat(src)
if err != nil {
log.Fatalf("reading file/dir %v failed,err:%v", src, err)
}
pwd := updateObfusParm(o.pwd)
salt := updateObfusParm(o.salt)
//create cipher
config := configmap.Simple{
"password": pwd,
"password2": salt,
"filename_encryption": o.filenameEncryption,
"directory_name_encryption": o.dirnameEncryption,
"filename_encoding": o.filenameEncode,
"suffix": o.suffix,
"pass_bad_blocks": "",
}
log.Infof("config:%v", config)
cipher, err := rcCrypt.NewCipher(config)
if err != nil {
log.Fatalf("create cipher failed,err:%v", err)
}
dst := ""
//check and create dst dir
if o.dst != "" {
dst, _ = filepath.Abs(o.dst)
checkCreateDir(dst)
}
// src is file
if !fileInfo.IsDir() { //file
if dst == "" {
dst = filepath.Dir(src)
}
o.cryptFile(cipher, src, dst)
return
}
// src is dir
if dst == "" {
//if src is dir and not set dst dir ,create ${src}_crypt dir as dst dir
dst = path.Join(filepath.Dir(src), fileInfo.Name()+"_crypt")
}
log.Infof("dst : %v", dst)
dirnameMap := make(map[string]string)
pathSeparator := string(os.PathSeparator)
filepath.Walk(src, func(p string, info os.FileInfo, err error) error {
if err != nil {
log.Errorf("get file %v info failed, err:%v", p, err)
return err
}
if p == src {
return nil
}
log.Infof("current path %v", p)
// relative path
rp := strings.ReplaceAll(p, src, "")
log.Infof("relative path %v", rp)
rpds := strings.Split(rp, pathSeparator)
if info.IsDir() {
// absolute dst dir for current path
dd := ""
if o.dirnameEncryption == "true" {
if o.op == "encrypt" || o.op == "en" {
for i := range rpds {
oname := rpds[i]
if _, ok := dirnameMap[rpds[i]]; ok {
rpds[i] = dirnameMap[rpds[i]]
} else {
rpds[i] = cipher.EncryptDirName(rpds[i])
dirnameMap[oname] = rpds[i]
}
}
dd = path.Join(dst, strings.Join(rpds, pathSeparator))
} else {
for i := range rpds {
oname := rpds[i]
if _, ok := dirnameMap[rpds[i]]; ok {
rpds[i] = dirnameMap[rpds[i]]
} else {
dnn, err := cipher.DecryptDirName(rpds[i])
if err != nil {
log.Fatalf("decrypt dir name %v failed,err:%v", rpds[i], err)
}
rpds[i] = dnn
dirnameMap[oname] = dnn
}
}
dd = path.Join(dst, strings.Join(rpds, pathSeparator))
}
} else {
dd = path.Join(dst, rp)
}
log.Infof("create output dir %v", dd)
checkCreateDir(dd)
return nil
}
// file dst dir
fdd := dst
if o.dirnameEncryption == "true" {
for i := range rpds {
if i == len(rpds)-1 {
break
}
fdd = path.Join(fdd, dirnameMap[rpds[i]])
}
} else {
fdd = path.Join(fdd, strings.Join(rpds[:len(rpds)-1], pathSeparator))
}
log.Infof("file output dir %v", fdd)
o.cryptFile(cipher, p, fdd)
return nil
})
}
func (o *options) cryptFile(cipher *rcCrypt.Cipher, src string, dst string) {
fileInfo, err := os.Stat(src)
if err != nil {
log.Fatalf("get file %v info failed,err:%v", src, err)
}
fd, err := os.OpenFile(src, os.O_RDWR, 0666)
if err != nil {
log.Fatalf("open file %v failed,err:%v", src, err)
}
defer fd.Close()
var cryptSrcReader io.Reader
var outFile string
if o.op == "encrypt" || o.op == "en" {
filename := fileInfo.Name()
if o.filenameEncryption != "off" {
filename = cipher.EncryptFileName(fileInfo.Name())
log.Infof("encrypt file name %v to %v", fileInfo.Name(), filename)
} else {
filename = fileInfo.Name() + o.suffix
}
cryptSrcReader, err = cipher.EncryptData(fd)
if err != nil {
log.Fatalf("encrypt file %v failed,err:%v", src, err)
}
outFile = path.Join(dst, filename)
} else {
filename := fileInfo.Name()
if o.filenameEncryption != "off" {
filename, err = cipher.DecryptFileName(filename)
if err != nil {
log.Fatalf("decrypt file name %v failed,err:%v", src, err)
}
log.Infof("decrypt file name %v to %v, ", fileInfo.Name(), filename)
} else {
filename = strings.TrimSuffix(filename, o.suffix)
}
cryptSrcReader, err = cipher.DecryptData(fd)
if err != nil {
log.Fatalf("decrypt file %v failed,err:%v", src, err)
}
outFile = path.Join(dst, filename)
}
//write new file
wr, err := os.OpenFile(outFile, os.O_CREATE|os.O_WRONLY, 0755)
if err != nil {
log.Fatalf("create file %v failed,err:%v", outFile, err)
}
defer wr.Close()
_, err = io.Copy(wr, cryptSrcReader)
if err != nil {
log.Fatalf("write file %v failed,err:%v", outFile, err)
}
}
// check dir exist ,if not ,create
func checkCreateDir(dir string) {
_, err := os.Stat(dir)
if os.IsNotExist(err) {
err := os.MkdirAll(dir, 0755)
if err != nil {
log.Fatalf("create dir %v failed,err:%v", dir, err)
}
return
} else if err != nil {
log.Fatalf("read dir %v err: %v", dir, err)
}
}
func updateObfusParm(str string) string {
obfuscatedPrefix := "___Obfuscated___"
if !strings.HasPrefix(str, obfuscatedPrefix) {
str, err := obscure.Obscure(str)
if err != nil {
log.Fatalf("update obfuscated parameter failed,err:%v", str)
}
} else {
str, _ = strings.CutPrefix(str, obfuscatedPrefix)
}
return str
}

View File

@ -11,12 +11,12 @@ import (
"reflect" "reflect"
"strings" "strings"
_ "github.com/OpenListTeam/OpenList/v4/drivers" _ "github.com/OpenListTeam/OpenList/drivers"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap" "github.com/OpenListTeam/OpenList/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap/data" "github.com/OpenListTeam/OpenList/internal/bootstrap/data"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )

View File

@ -4,10 +4,10 @@ import (
"fmt" "fmt"
"os" "os"
"github.com/OpenListTeam/OpenList/v4/cmd/flags" "github.com/OpenListTeam/OpenList/cmd/flags"
_ "github.com/OpenListTeam/OpenList/v4/drivers" _ "github.com/OpenListTeam/OpenList/drivers"
_ "github.com/OpenListTeam/OpenList/v4/internal/archive" _ "github.com/OpenListTeam/OpenList/internal/archive"
_ "github.com/OpenListTeam/OpenList/v4/internal/offline_download" _ "github.com/OpenListTeam/OpenList/internal/offline_download"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@ -15,8 +15,8 @@ var RootCmd = &cobra.Command{
Use: "openlist", Use: "openlist",
Short: "A file list program that supports multiple storage.", Short: "A file list program that supports multiple storage.",
Long: `A file list program that supports multiple storage, Long: `A file list program that supports multiple storage,
built with love by OpenListTeam. built with love by Xhofe and friends in Go/Solid.js.
Complete documentation is available at https://doc.oplist.org/`, Complete documentation is available at https://docs.openlist.team/`,
} }
func Execute() { func Execute() {

View File

@ -13,13 +13,12 @@ import (
"syscall" "syscall"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/cmd/flags" "github.com/OpenListTeam/OpenList/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap" "github.com/OpenListTeam/OpenList/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/fs" "github.com/OpenListTeam/OpenList/internal/fs"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server" "github.com/OpenListTeam/OpenList/server"
"github.com/OpenListTeam/OpenList/v4/server/middlewares"
"github.com/OpenListTeam/sftpd-openlist" "github.com/OpenListTeam/sftpd-openlist"
ftpserver "github.com/fclairamb/ftpserverlib" ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
@ -48,15 +47,7 @@ the address is defined in config file`,
gin.SetMode(gin.ReleaseMode) gin.SetMode(gin.ReleaseMode)
} }
r := gin.New() r := gin.New()
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
// gin log
if conf.Conf.Log.Filter.Enable {
r.Use(middlewares.FilteredLogger())
} else {
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out))
}
r.Use(gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r) server.Init(r)
var httpHandler http.Handler = r var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c { if conf.Conf.Scheme.EnableH2c {
@ -65,7 +56,6 @@ the address is defined in config file`,
var httpSrv, httpsSrv, unixSrv *http.Server var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 { if conf.Conf.Scheme.HttpPort != -1 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort) httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
fmt.Printf("start HTTP server @ %s\n", httpBase)
utils.Log.Infof("start HTTP server @ %s", httpBase) utils.Log.Infof("start HTTP server @ %s", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler} httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() { go func() {
@ -77,7 +67,6 @@ the address is defined in config file`,
} }
if conf.Conf.Scheme.HttpsPort != -1 { if conf.Conf.Scheme.HttpsPort != -1 {
httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort) httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort)
fmt.Printf("start HTTPS server @ %s\n", httpsBase)
utils.Log.Infof("start HTTPS server @ %s", httpsBase) utils.Log.Infof("start HTTPS server @ %s", httpsBase)
httpsSrv = &http.Server{Addr: httpsBase, Handler: r} httpsSrv = &http.Server{Addr: httpsBase, Handler: r}
go func() { go func() {
@ -88,7 +77,6 @@ the address is defined in config file`,
}() }()
} }
if conf.Conf.Scheme.UnixFile != "" { if conf.Conf.Scheme.UnixFile != "" {
fmt.Printf("start unix server @ %s\n", conf.Conf.Scheme.UnixFile)
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile) utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: httpHandler} unixSrv = &http.Server{Handler: httpHandler}
go func() { go func() {
@ -117,7 +105,6 @@ the address is defined in config file`,
s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out)) s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.InitS3(s3r) server.InitS3(s3r)
s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port) s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port)
fmt.Printf("start S3 server @ %s\n", s3Base)
utils.Log.Infof("start S3 server @ %s", s3Base) utils.Log.Infof("start S3 server @ %s", s3Base)
go func() { go func() {
var err error var err error
@ -142,7 +129,6 @@ the address is defined in config file`,
if err != nil { if err != nil {
utils.Log.Fatalf("failed to start ftp driver: %s", err.Error()) utils.Log.Fatalf("failed to start ftp driver: %s", err.Error())
} else { } else {
fmt.Printf("start ftp server on %s\n", conf.Conf.FTP.Listen)
utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen) utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen)
go func() { go func() {
ftpServer = ftpserver.NewFtpServer(ftpDriver) ftpServer = ftpserver.NewFtpServer(ftpDriver)
@ -161,7 +147,6 @@ the address is defined in config file`,
if err != nil { if err != nil {
utils.Log.Fatalf("failed to start sftp driver: %s", err.Error()) utils.Log.Fatalf("failed to start sftp driver: %s", err.Error())
} else { } else {
fmt.Printf("start sftp server on %s", conf.Conf.SFTP.Listen)
utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen) utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen)
go func() { go func() {
sftpServer = sftpd.NewSftpServer(sftpDriver) sftpServer = sftpd.NewSftpServer(sftpDriver)

View File

@ -4,12 +4,11 @@ Copyright © 2023 NAME HERE <EMAIL ADDRESS>
package cmd package cmd
import ( import (
"fmt"
"os" "os"
"strconv" "strconv"
"github.com/OpenListTeam/OpenList/v4/internal/db" "github.com/OpenListTeam/OpenList/internal/db"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/charmbracelet/bubbles/table" "github.com/charmbracelet/bubbles/table"
tea "github.com/charmbracelet/bubbletea" tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss" "github.com/charmbracelet/lipgloss"
@ -23,61 +22,28 @@ var storageCmd = &cobra.Command{
} }
var disableStorageCmd = &cobra.Command{ var disableStorageCmd = &cobra.Command{
Use: "disable [mount path]", Use: "disable",
Short: "Disable a storage by mount path", Short: "Disable a storage",
RunE: func(cmd *cobra.Command, args []string) error { Run: func(cmd *cobra.Command, args []string) {
if len(args) < 1 { if len(args) < 1 {
return fmt.Errorf("mount path is required") utils.Log.Errorf("mount path is required")
return
} }
mountPath := args[0] mountPath := args[0]
Init() Init()
defer Release() defer Release()
storage, err := db.GetStorageByMountPath(mountPath) storage, err := db.GetStorageByMountPath(mountPath)
if err != nil { if err != nil {
return fmt.Errorf("failed to query storage: %+v", err) utils.Log.Errorf("failed to query storage: %+v", err)
} } else {
storage.Disabled = true storage.Disabled = true
err = db.UpdateStorage(storage) err = db.UpdateStorage(storage)
if err != nil { if err != nil {
return fmt.Errorf("failed to update storage: %+v", err) utils.Log.Errorf("failed to update storage: %+v", err)
} } else {
utils.Log.Infof("Storage with mount path [%s] has been disabled from CLI", mountPath) utils.Log.Infof("Storage with mount path [%s] have been disabled", mountPath)
fmt.Printf("Storage with mount path [%s] has been disabled\n", mountPath)
return nil
},
}
var deleteStorageCmd = &cobra.Command{
Use: "delete [id]",
Short: "Delete a storage by id",
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("id is required")
}
id, err := strconv.Atoi(args[0])
if err != nil {
return fmt.Errorf("id must be a number")
}
if force, _ := cmd.Flags().GetBool("force"); force {
fmt.Printf("Are you sure you want to delete storage with id [%d]? [y/N]: ", id)
var confirm string
fmt.Scanln(&confirm)
if confirm != "y" && confirm != "Y" {
fmt.Println("Delete operation cancelled.")
return nil
} }
} }
Init()
defer Release()
err = db.DeleteStorageById(uint(id))
if err != nil {
return fmt.Errorf("failed to delete storage by id: %+v", err)
}
utils.Log.Infof("Storage with id [%d] have been deleted from CLI", id)
fmt.Printf("Storage with id [%d] have been deleted\n", id)
return nil
}, },
} }
@ -122,14 +88,14 @@ var storageTableHeight int
var listStorageCmd = &cobra.Command{ var listStorageCmd = &cobra.Command{
Use: "list", Use: "list",
Short: "List all storages", Short: "List all storages",
RunE: func(cmd *cobra.Command, args []string) error { Run: func(cmd *cobra.Command, args []string) {
Init() Init()
defer Release() defer Release()
storages, _, err := db.GetStorages(1, -1) storages, _, err := db.GetStorages(1, -1)
if err != nil { if err != nil {
return fmt.Errorf("failed to query storages: %+v", err) utils.Log.Errorf("failed to query storages: %+v", err)
} else { } else {
fmt.Printf("Found %d storages\n", len(storages)) utils.Log.Infof("Found %d storages", len(storages))
columns := []table.Column{ columns := []table.Column{
{Title: "ID", Width: 4}, {Title: "ID", Width: 4},
{Title: "Driver", Width: 16}, {Title: "Driver", Width: 16},
@ -172,11 +138,10 @@ var listStorageCmd = &cobra.Command{
m := model{t} m := model{t}
if _, err := tea.NewProgram(m).Run(); err != nil { if _, err := tea.NewProgram(m).Run(); err != nil {
fmt.Printf("failed to run program: %+v\n", err) utils.Log.Errorf("failed to run program: %+v", err)
os.Exit(1) os.Exit(1)
} }
} }
return nil
}, },
} }
@ -186,8 +151,6 @@ func init() {
storageCmd.AddCommand(disableStorageCmd) storageCmd.AddCommand(disableStorageCmd)
storageCmd.AddCommand(listStorageCmd) storageCmd.AddCommand(listStorageCmd)
storageCmd.PersistentFlags().IntVarP(&storageTableHeight, "height", "H", 10, "Table height") storageCmd.PersistentFlags().IntVarP(&storageTableHeight, "height", "H", 10, "Table height")
storageCmd.AddCommand(deleteStorageCmd)
deleteStorageCmd.Flags().BoolP("force", "f", false, "Force delete without confirmation")
// Here you will define your flags and configuration settings. // Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command // Cobra supports Persistent Flags which will work for this command

View File

@ -5,10 +5,10 @@ import (
"fmt" "fmt"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting" "github.com/OpenListTeam/OpenList/internal/setting"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
) )

View File

@ -8,7 +8,7 @@ import (
"os" "os"
"runtime" "runtime"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )

View File

@ -1,3 +1,4 @@
version: '3.3'
services: services:
openlist: openlist:
restart: always restart: always
@ -6,9 +7,10 @@ services:
ports: ports:
- '5244:5244' - '5244:5244'
- '5245:5245' - '5245:5245'
user: '0:0'
environment: environment:
- PUID=0
- PGID=0
- UMASK=022 - UMASK=022
- TZ=Asia/Shanghai - TZ=UTC
container_name: openlist container_name: openlist
image: 'openlistteam/openlist:latest' image: 'ghcr.io/openlistteam/openlist:latest'

View File

@ -1,60 +1,43 @@
package _115 package _115
import ( import (
"errors" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
driver115 "github.com/SheltonZhu/115driver/pkg/driver" driver115 "github.com/SheltonZhu/115driver/pkg/driver"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
var ( var (
md5Salt = "Qclm8MGWUv59TnrR0XPg" md5Salt = "Qclm8MGWUv59TnrR0XPg"
appVer = "35.6.0.3" appVer = "27.0.5.7"
) )
func (d *Pan115) getAppVersion() (string, error) { func (d *Pan115) getAppVersion() ([]driver115.AppVersion, error) {
result := VersionResp{} result := driver115.VersionResp{}
res, err := base.RestyClient.R().Get(driver115.ApiGetVersion) resp, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
err = driver115.CheckErr(err, &result, resp)
if err != nil { if err != nil {
return "", err return nil, err
} }
err = utils.Json.Unmarshal(res.Body(), &result)
if err != nil { return result.Data.GetAppVersions(), nil
return "", err
}
if len(result.Error) > 0 {
return "", errors.New(result.Error)
}
return result.Data.Win.Version, nil
} }
func (d *Pan115) getAppVer() string { func (d *Pan115) getAppVer() string {
ver, err := d.getAppVersion() // todo add some cache
vers, err := d.getAppVersion()
if err != nil { if err != nil {
log.Warnf("[115] get app version failed: %v", err) log.Warnf("[115] get app version failed: %v", err)
return appVer return appVer
} }
if len(ver) > 0 { for _, ver := range vers {
return ver if ver.AppName == "win" {
return ver.Version
}
} }
return appVer return appVer
} }
func (d *Pan115) initAppVer() { func (d *Pan115) initAppVer() {
appVer = d.getAppVer() appVer = d.getAppVer()
log.Debugf("use app version: %v", appVer)
}
type VersionResp struct {
Error string `json:"error,omitempty"`
Data Versions `json:"data"`
}
type Versions struct {
Win Version `json:"win"`
}
type Version struct {
Version string `json:"version_code"`
} }

View File

@ -5,11 +5,10 @@ import (
"strings" "strings"
"sync" "sync"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
driver115 "github.com/SheltonZhu/115driver/pkg/driver" driver115 "github.com/SheltonZhu/115driver/pkg/driver"
"github.com/pkg/errors" "github.com/pkg/errors"
"golang.org/x/time/rate" "golang.org/x/time/rate"
@ -185,8 +184,12 @@ func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
} }
preHash = strings.ToUpper(preHash) preHash = strings.ToUpper(preHash)
fullHash := stream.GetHash().GetHash(utils.SHA1) fullHash := stream.GetHash().GetHash(utils.SHA1)
if len(fullHash) != utils.SHA1.Width { if len(fullHash) <= 0 {
_, fullHash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA1) tmpF, err := stream.CacheFullInTempFile()
if err != nil {
return nil, err
}
fullHash, err = utils.HashFile(utils.SHA1, tmpF)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@ -1,8 +1,8 @@
package _115 package _115
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {
@ -18,6 +18,7 @@ var config = driver.Config{
Name: "115 Cloud", Name: "115 Cloud",
DefaultRoot: "0", DefaultRoot: "0",
// OnlyProxy: true, // OnlyProxy: true,
// OnlyLocal: true,
// NoOverwriteUpload: true, // NoOverwriteUpload: true,
} }

View File

@ -3,8 +3,8 @@ package _115
import ( import (
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/SheltonZhu/115driver/pkg/driver" "github.com/SheltonZhu/115driver/pkg/driver"
) )

View File

@ -17,11 +17,11 @@ import (
"sync/atomic" "sync/atomic"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss" "github.com/aliyun/aliyun-oss-go-sdk/oss"
cipher "github.com/SheltonZhu/115driver/pkg/crypto/ec115" cipher "github.com/SheltonZhu/115driver/pkg/crypto/ec115"
@ -321,7 +321,7 @@ func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.Upload
err error err error
) )
tmpF, err := s.CacheFullAndWriter(&up, nil) tmpF, err := s.CacheFullInTempFile()
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@ -3,20 +3,19 @@ package _115_open
import ( import (
"context" "context"
"fmt" "fmt"
"io"
"net/http" "net/http"
"strconv" "strconv"
"strings" "strings"
"time" "time"
sdk "github.com/OpenListTeam/115-sdk-go" "github.com/OpenListTeam/OpenList/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/cmd/flags" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/internal/stream" sdk "github.com/xhofe/115-sdk-go"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"golang.org/x/time/rate" "golang.org/x/time/rate"
) )
@ -131,23 +130,6 @@ func (d *Open115) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}, nil }, nil
} }
func (d *Open115) GetObjInfo(ctx context.Context, path string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.GetFolderInfoByPath(ctx, path)
if err != nil {
return nil, err
}
return &Obj{
Fid: resp.FileID,
Fn: resp.FileName,
Fc: resp.FileCategory,
Sha1: resp.Sha1,
Pc: resp.PickCode,
}, nil
}
func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) { func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil { if err := d.WaitLimit(ctx); err != nil {
return nil, err return nil, err
@ -233,27 +215,28 @@ func (d *Open115) Remove(ctx context.Context, obj model.Obj) error {
} }
func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error { func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
err := d.WaitLimit(ctx) if err := d.WaitLimit(ctx); err != nil {
return err
}
tempF, err := file.CacheFullInTempFile()
if err != nil { if err != nil {
return err return err
} }
sha1 := file.GetHash().GetHash(utils.SHA1) // cal full sha1
if len(sha1) != utils.SHA1.Width { sha1, err := utils.HashReader(utils.SHA1, tempF)
_, sha1, err = stream.CacheFullAndHash(file, &up, utils.SHA1)
if err != nil {
return err
}
}
const PreHashSize int64 = 128 * utils.KB
hashSize := PreHashSize
if file.GetSize() < PreHashSize {
hashSize = file.GetSize()
}
reader, err := file.RangeRead(http_range.Range{Start: 0, Length: hashSize})
if err != nil { if err != nil {
return err return err
} }
sha1128k, err := utils.HashReader(utils.SHA1, reader) _, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// pre 128k sha1
sha1128k, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, 128*1024))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil { if err != nil {
return err return err
} }
@ -269,7 +252,6 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
return err return err
} }
if resp.Status == 2 { if resp.Status == 2 {
up(100)
return nil return nil
} }
// 2. two way verify // 2. two way verify
@ -283,11 +265,15 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
if err != nil { if err != nil {
return err return err
} }
reader, err = file.RangeRead(http_range.Range{Start: start, Length: end - start + 1}) _, err = tempF.Seek(start, io.SeekStart)
if err != nil { if err != nil {
return err return err
} }
signVal, err := utils.HashReader(utils.SHA1, reader) signVal, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, end-start+1))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil { if err != nil {
return err return err
} }
@ -304,7 +290,6 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
return err return err
} }
if resp.Status == 2 { if resp.Status == 2 {
up(100)
return nil return nil
} }
} }
@ -314,29 +299,13 @@ func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStre
return err return err
} }
// 4. upload // 4. upload
err = d.multpartUpload(ctx, file, up, tokenResp, resp) err = d.multpartUpload(ctx, tempF, file, up, tokenResp, resp)
if err != nil { if err != nil {
return err return err
} }
return nil return nil
} }
func (d *Open115) OfflineDownload(ctx context.Context, uris []string, dstDir model.Obj) ([]string, error) {
return d.client.AddOfflineTaskURIs(ctx, uris, dstDir.GetID())
}
func (d *Open115) DeleteOfflineTask(ctx context.Context, infoHash string, deleteFiles bool) error {
return d.client.DeleteOfflineTask(ctx, infoHash, deleteFiles)
}
func (d *Open115) OfflineList(ctx context.Context) (*sdk.OfflineTaskListResp, error) {
resp, err := d.client.OfflineTaskList(ctx, 1)
if err != nil {
return nil, err
}
return resp, nil
}
// func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) { // func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional // // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement // return nil, errs.NotImplement

View File

@ -1,8 +1,8 @@
package _115_open package _115_open
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {
@ -11,14 +11,23 @@ type Addition struct {
// define other // define other
OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"` OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"` OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
LimitRate float64 `json:"limit_rate" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"` LimitRate float64 `json:"limit_rate,string" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"`
AccessToken string `json:"access_token" required:"true"` AccessToken string `json:"access_token" required:"true"`
RefreshToken string `json:"refresh_token" required:"true"` RefreshToken string `json:"refresh_token" required:"true"`
} }
var config = driver.Config{ var config = driver.Config{
Name: "115 Open", Name: "115 Open",
DefaultRoot: "0", LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
} }
func init() { func init() {

View File

@ -3,9 +3,9 @@ package _115_open
import ( import (
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
sdk "github.com/OpenListTeam/115-sdk-go" sdk "github.com/xhofe/115-sdk-go"
) )
type Obj sdk.GetFilesResp_File type Obj sdk.GetFilesResp_File

View File

@ -6,13 +6,12 @@ import (
"io" "io"
"time" "time"
sdk "github.com/OpenListTeam/115-sdk-go" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/pkg/utils"
streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss" "github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/avast/retry-go" "github.com/avast/retry-go"
sdk "github.com/xhofe/115-sdk-go"
) )
func calPartSize(fileSize int64) int64 { func calPartSize(fileSize int64) int64 {
@ -69,7 +68,10 @@ func (d *Open115) singleUpload(ctx context.Context, tempF model.File, tokenResp
// } `json:"data"` // } `json:"data"`
// } // }
func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error { func (d *Open115) multpartUpload(ctx context.Context, tempF model.File, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken)) ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil { if err != nil {
return err return err
@ -84,13 +86,6 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
return err return err
} }
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ss, err := streamPkg.NewStreamSectionReader(stream, int(chunkSize), &up)
if err != nil {
return err
}
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
parts := make([]oss.UploadPart, partNum) parts := make([]oss.UploadPart, partNum)
offset := int64(0) offset := int64(0)
@ -103,13 +98,10 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
if i == partNum { if i == partNum {
partSize = fileSize - (i-1)*chunkSize partSize = fileSize - (i-1)*chunkSize
} }
rd, err := ss.GetSectionReader(offset, partSize) rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
if err != nil {
return err
}
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
err = retry.Do(func() error { err = retry.Do(func() error {
rd.Seek(0, io.SeekStart) _ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i)) part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
if err != nil { if err != nil {
return err return err
@ -120,7 +112,6 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
retry.Attempts(3), retry.Attempts(3),
retry.DelayType(retry.BackOffDelay), retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second)) retry.Delay(time.Second))
ss.FreeSectionReader(rd)
if err != nil { if err != nil {
return err return err
} }
@ -130,7 +121,7 @@ func (d *Open115) multpartUpload(ctx context.Context, stream model.FileStreamer,
} else { } else {
offset += partSize offset += partSize
} }
up(float64(offset) * 100 / float64(fileSize)) up(float64(offset) / float64(fileSize))
} }
// callbackRespBytes := make([]byte, 1024) // callbackRespBytes := make([]byte, 1024)
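
The hunk above changes how multpartUpload slices the stream and reports progress: one side derives the part count with ceiling division and passes up() a 0-100 percentage, the other a 0-1 fraction. A minimal, self-contained sketch of that arithmetic, with made-up sizes that are not taken from the driver:

package main

import "fmt"

// partCount uses the same ceiling division as the hunk above to decide how
// many parts are needed for fileSize bytes at chunkSize bytes per part.
func partCount(fileSize, chunkSize int64) int64 {
	return (fileSize + chunkSize - 1) / chunkSize
}

func main() {
	const fileSize = int64(25*1024*1024 + 123) // 25 MiB + 123 B, illustrative only
	const chunkSize = int64(10 * 1024 * 1024)  // 10 MiB parts
	n := partCount(fileSize, chunkSize)
	offset := int64(0)
	for i := int64(1); i <= n; i++ {
		partSize := chunkSize
		if i == n {
			partSize = fileSize - (i-1)*chunkSize // last part holds the remainder
		}
		offset += partSize
		// progress reported as a percentage (0-100), matching one side of the hunk
		fmt.Printf("part %d: %d bytes, progress %.1f%%\n", i, partSize, float64(offset)*100/float64(fileSize))
	}
}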

View File

@ -3,10 +3,10 @@ package _115_share
import ( import (
"context" "context"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
driver115 "github.com/SheltonZhu/115driver/pkg/driver" driver115 "github.com/SheltonZhu/115driver/pkg/driver"
"golang.org/x/time/rate" "golang.org/x/time/rate"
) )

View File

@ -1,8 +1,8 @@
package _115_share package _115_share
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {
@ -19,7 +19,12 @@ type Addition struct {
var config = driver.Config{ var config = driver.Config{
Name: "115 Share", Name: "115 Share",
DefaultRoot: "0", DefaultRoot: "0",
NoUpload: true, // OnlyProxy: true,
// OnlyLocal: true,
CheckStatus: false,
Alert: "",
NoOverwriteUpload: true,
NoUpload: true,
} }
func init() { func init() {

View File

@ -5,8 +5,8 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
driver115 "github.com/SheltonZhu/115driver/pkg/driver" driver115 "github.com/SheltonZhu/115driver/pkg/driver"
"github.com/pkg/errors" "github.com/pkg/errors"
) )

View File

@ -6,18 +6,17 @@ import (
"fmt" "fmt"
"net/http" "net/http"
"net/url" "net/url"
"strings"
"sync" "sync"
"time" "time"
"golang.org/x/time/rate" "golang.org/x/time/rate"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/aws/session"
@ -64,6 +63,14 @@ func (d *Pan123) List(ctx context.Context, dir model.Obj, args model.ListArgs) (
func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if f, ok := file.(File); ok { if f, ok := file.(File); ok {
//var resp DownResp
var headers map[string]string
if !utils.IsLocalIPAddr(args.IP) {
headers = map[string]string{
//"X-Real-IP": "1.1.1.1",
"X-Forwarded-For": args.IP,
}
}
data := base.Json{ data := base.Json{
"driveId": 0, "driveId": 0,
"etag": f.Etag, "etag": f.Etag,
@ -75,27 +82,25 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
} }
resp, err := d.Request(DownloadInfo, http.MethodPost, func(req *resty.Request) { resp, err := d.Request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data).SetHeaders(headers)
}, nil) }, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
downloadUrl := utils.Json.Get(resp, "data", "DownloadUrl").ToString() downloadUrl := utils.Json.Get(resp, "data", "DownloadUrl").ToString()
ou, err := url.Parse(downloadUrl) u, err := url.Parse(downloadUrl)
if err != nil { if err != nil {
return nil, err return nil, err
} }
u_ := ou.String() nu := u.Query().Get("params")
nu := ou.Query().Get("params")
if nu != "" { if nu != "" {
du, _ := base64.StdEncoding.DecodeString(nu) du, _ := base64.StdEncoding.DecodeString(nu)
u, err := url.Parse(string(du)) u, err = url.Parse(string(du))
if err != nil { if err != nil {
return nil, err return nil, err
} }
u_ = u.String()
} }
u_ := u.String()
log.Debug("download url: ", u_) log.Debug("download url: ", u_)
res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_) res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_)
if err != nil { if err != nil {
@ -112,7 +117,7 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString() link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString()
} }
link.Header = http.Header{ link.Header = http.Header{
"Referer": []string{fmt.Sprintf("%s://%s/", ou.Scheme, ou.Host)}, "Referer": []string{"https://www.123pan.com/"},
} }
return &link, nil return &link, nil
} else { } else {
@ -182,7 +187,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
etag := file.GetHash().GetHash(utils.MD5) etag := file.GetHash().GetHash(utils.MD5)
var err error var err error
if len(etag) < utils.MD5.Width { if len(etag) < utils.MD5.Width {
_, etag, err = stream.CacheFullAndHash(file, &up, utils.MD5) _, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil { if err != nil {
return err return err
} }
@ -190,7 +195,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
data := base.Json{ data := base.Json{
"driveId": 0, "driveId": 0,
"duplicate": 2, // 2->覆盖 1->重命名 0->默认 "duplicate": 2, // 2->覆盖 1->重命名 0->默认
"etag": strings.ToLower(etag), "etag": etag,
"fileName": file.GetName(), "fileName": file.GetName(),
"parentFileId": dstDir.GetID(), "parentFileId": dstDir.GetID(),
"size": file.GetSize(), "size": file.GetSize(),

View File

@ -1,8 +1,8 @@
package _123 package _123
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {
@ -11,8 +11,7 @@ type Addition struct {
driver.RootID driver.RootID
//OrderBy string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"` //OrderBy string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"`
//OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"` //OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
AccessToken string AccessToken string
UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
} }
var config = driver.Config{ var config = driver.Config{
@ -23,11 +22,6 @@ var config = driver.Config{
func init() { func init() {
op.RegisterDriver(func() driver.Driver { op.RegisterDriver(func() driver.Driver {
// 新增默认选项 要在RegisterDriver初始化设置 才会对正在使用的用户生效 return &Pan123{}
return &Pan123{
Addition: Addition{
UploadThread: 3,
},
}
}) })
} }

View File

@ -7,9 +7,9 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
) )
type File struct { type File struct {

View File

@ -6,16 +6,11 @@ import (
"io" "io"
"net/http" "net/http"
"strconv" "strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
) )
@ -74,21 +69,18 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
} }
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error { func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
// fetch s3 pre signed urls tmpF, err := file.CacheFullInTempFile()
size := file.GetSize()
chunkSize := int64(16 * utils.MB)
chunkCount := 1
if size > chunkSize {
chunkCount = int((size + chunkSize - 1) / chunkSize)
}
ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up)
if err != nil { if err != nil {
return err return err
} }
// fetch s3 pre signed urls
size := file.GetSize()
chunkSize := min(size, 16*utils.MB)
chunkCount := int(size / chunkSize)
lastChunkSize := size % chunkSize lastChunkSize := size % chunkSize
if lastChunkSize == 0 { if lastChunkSize > 0 {
chunkCount++
} else {
lastChunkSize = chunkSize lastChunkSize = chunkSize
} }
// only 1 batch is allowed // only 1 batch is allowed
@ -98,99 +90,73 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
batchSize = 10 batchSize = 10
getS3UploadUrl = d.getS3PreSignedUrls getS3UploadUrl = d.getS3PreSignedUrls
} }
thread := min(int(chunkCount), d.UploadThread)
threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
for i := 1; i <= chunkCount; i += batchSize { for i := 1; i <= chunkCount; i += batchSize {
if utils.IsCanceled(uploadCtx) { if utils.IsCanceled(ctx) {
break return ctx.Err()
} }
start := i start := i
end := min(i+batchSize, chunkCount+1) end := min(i+batchSize, chunkCount+1)
s3PreSignedUrls, err := getS3UploadUrl(uploadCtx, upReq, start, end) s3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, start, end)
if err != nil { if err != nil {
return err return err
} }
// upload each chunk // upload each chunk
for cur := start; cur < end; cur++ { for j := start; j < end; j++ {
if utils.IsCanceled(uploadCtx) { if utils.IsCanceled(ctx) {
break return ctx.Err()
} }
offset := int64(cur-1) * chunkSize
curSize := chunkSize curSize := chunkSize
if cur == chunkCount { if j == chunkCount {
curSize = lastChunkSize curSize = lastChunkSize
} }
var reader *stream.SectionReader err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.NewSectionReader(tmpF, chunkSize*int64(j-1), curSize), curSize, false, getS3UploadUrl)
var rateLimitedRd io.Reader if err != nil {
threadG.GoWithLifecycle(errgroup.Lifecycle{ return err
Before: func(ctx context.Context) error { }
if reader == nil { up(float64(j) * 100 / float64(chunkCount))
var err error
reader, err = ss.GetSectionReader(offset, curSize)
if err != nil {
return err
}
rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
}
return nil
},
Do: func(ctx context.Context) error {
reader.Seek(0, io.SeekStart)
uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
if uploadUrl == "" {
return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
}
reader.Seek(0, io.SeekStart)
req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadUrl, rateLimitedRd)
if err != nil {
return err
}
req.ContentLength = curSize
//req.Header.Set("Content-Length", strconv.FormatInt(curSize, 10))
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode == http.StatusForbidden {
singleflight.AnyGroup.Do(fmt.Sprintf("Pan123.newUpload_%p", threadG), func() (any, error) {
newS3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, cur, end)
if err != nil {
return nil, err
}
s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
return nil, nil
})
if err != nil {
return err
}
return fmt.Errorf("upload s3 chunk %d failed, status code: %d", cur, res.StatusCode)
}
if res.StatusCode != http.StatusOK {
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
return fmt.Errorf("upload s3 chunk %d failed, status code: %d, body: %s", cur, res.StatusCode, body)
}
progress := 10.0 + 85.0*float64(threadG.Success())/float64(chunkCount)
up(progress)
return nil
},
After: func(err error) {
ss.FreeSectionReader(reader)
},
})
} }
} }
if err := threadG.Wait(); err != nil {
return err
}
defer up(100)
// complete s3 upload // complete s3 upload
return d.completeS3(ctx, upReq, file, chunkCount > 1) return d.completeS3(ctx, upReq, file, chunkCount > 1)
} }
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader *io.SectionReader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
if uploadUrl == "" {
return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
}
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, reader))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = curSize
//req.Header.Set("Content-Length", strconv.FormatInt(curSize, 10))
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode == http.StatusForbidden {
if retry {
return fmt.Errorf("upload s3 chunk %d failed, status code: %d", cur, res.StatusCode)
}
// refresh s3 pre signed urls
newS3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, cur, end)
if err != nil {
return err
}
s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
// retry
reader.Seek(0, io.SeekStart)
return d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, cur, end, reader, curSize, true, getS3UploadUrl)
}
if res.StatusCode != http.StatusOK {
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
return fmt.Errorf("upload s3 chunk %d failed, status code: %d, body: %s", cur, res.StatusCode, body)
}
return nil
}
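
Both versions of the chunk-upload code above treat a 403 from the pre-signed URL as "URL expired": they fetch a fresh pre-signed URL and retry the part once. A self-contained sketch of that pattern, assuming a refresh callback that is not part of the driver:

package pan123sketch

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// putWithRefresh PUTs one part to a pre-signed URL; on 403 it refreshes the
// URL once via the assumed refresh callback and retries, otherwise it fails.
func putWithRefresh(ctx context.Context, client *http.Client, uploadURL string,
	body io.ReadSeeker, size int64, refresh func(ctx context.Context) (string, error)) error {
	for attempt := 0; attempt < 2; attempt++ {
		if _, err := body.Seek(0, io.SeekStart); err != nil {
			return err
		}
		req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadURL, body)
		if err != nil {
			return err
		}
		req.ContentLength = size
		res, err := client.Do(req)
		if err != nil {
			return err
		}
		res.Body.Close()
		switch {
		case res.StatusCode == http.StatusOK:
			return nil
		case res.StatusCode == http.StatusForbidden && attempt == 0:
			// pre-signed URL likely expired: fetch a fresh one and retry once
			if uploadURL, err = refresh(ctx); err != nil {
				return err
			}
		default:
			return fmt.Errorf("upload part failed, status code: %d", res.StatusCode)
		}
	}
	return fmt.Errorf("upload part failed after refresh and retry")
}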

View File

@ -13,8 +13,8 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go" jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"

View File

@ -5,10 +5,10 @@ import (
stdpath "path" stdpath "path"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
) )
type Pan123Link struct { type Pan123Link struct {

View File

@ -1,8 +1,8 @@
package _123Link package _123Link
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {

View File

@ -3,8 +3,8 @@ package _123Link
import ( import (
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
) )
// Node is a node in the folder tree // Node is a node in the folder tree

View File

@ -2,22 +2,19 @@ package _123_open
import ( import (
"context" "context"
"fmt"
"strconv" "strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
) )
type Open123 struct { type Open123 struct {
model.Storage model.Storage
Addition Addition
UID uint64
} }
func (d *Open123) Config() driver.Config { func (d *Open123) Config() driver.Config {
@ -70,45 +67,13 @@ func (d *Open123) List(ctx context.Context, dir model.Obj, args model.ListArgs)
func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
fileId, _ := strconv.ParseInt(file.GetID(), 10, 64) fileId, _ := strconv.ParseInt(file.GetID(), 10, 64)
if d.DirectLink {
res, err := d.getDirectLink(fileId)
if err != nil {
return nil, err
}
if d.DirectLinkPrivateKey == "" {
duration := 365 * 24 * time.Hour // 缓存1年
return &model.Link{
URL: res.Data.URL,
Expiration: &duration,
}, nil
}
uid, err := d.getUID()
if err != nil {
return nil, err
}
duration := time.Duration(d.DirectLinkValidDuration) * time.Minute
newURL, err := d.SignURL(res.Data.URL, d.DirectLinkPrivateKey,
uid, duration)
if err != nil {
return nil, err
}
return &model.Link{
URL: newURL,
Expiration: &duration,
}, nil
}
res, err := d.getDownloadInfo(fileId) res, err := d.getDownloadInfo(fileId)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return &model.Link{URL: res.Data.DownloadUrl}, nil link := model.Link{URL: res.Data.DownloadUrl}
return &link, nil
} }
func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error { func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
@ -130,22 +95,6 @@ func (d *Open123) Rename(ctx context.Context, srcObj model.Obj, newName string)
} }
func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) error { func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
// 尝试使用上传+MD5秒传功能实现复制
// 1. 创建文件
// parentFileID 父目录id上传到根目录时填写 0
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return fmt.Errorf("parse parentFileID error: %v", err)
}
etag := srcObj.(File).Etag
createResp, err := d.create(parentFileId, srcObj.GetName(), etag, srcObj.GetSize(), 2, false)
if err != nil {
return err
}
// 是否秒传
if createResp.Data.Reuse {
return nil
}
return errs.NotSupport return errs.NotSupport
} }
@ -155,64 +104,26 @@ func (d *Open123) Remove(ctx context.Context, obj model.Obj) error {
return d.trash(fileId) return d.trash(fileId)
} }
func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) { func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
// 1. 创建文件
// parentFileID 父目录id上传到根目录时填写 0
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64) parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("parse parentFileID error: %v", err)
}
// etag 文件md5
etag := file.GetHash().GetHash(utils.MD5) etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width { if len(etag) < utils.MD5.Width {
_, etag, err = stream.CacheFullAndHash(file, &up, utils.MD5) _, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil { if err != nil {
return nil, err return err
} }
} }
createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false) createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false)
if err != nil { if err != nil {
return nil, err return err
} }
// 是否秒传
if createResp.Data.Reuse { if createResp.Data.Reuse {
// 秒传成功才会返回正确的 FileID否则为 0 return nil
if createResp.Data.FileID != 0 {
return File{
FileName: file.GetName(),
Size: file.GetSize(),
FileId: createResp.Data.FileID,
Type: 2,
Etag: etag,
}, nil
}
} }
up(10)
// 2. 上传分片 return d.Upload(ctx, file, createResp, up)
err = d.Upload(ctx, file, createResp, up)
if err != nil {
return nil, err
}
// 3. 上传完毕
for range 60 {
uploadCompleteResp, err := d.complete(createResp.Data.PreuploadID)
// 返回错误代码未知20103文档也没有具体说
if err == nil && uploadCompleteResp.Data.Completed && uploadCompleteResp.Data.FileID != 0 {
up(100)
return File{
FileName: file.GetName(),
Size: file.GetSize(),
FileId: uploadCompleteResp.Data.FileID,
Type: 2,
Etag: etag,
}, nil
}
// 若接口返回的completed为 false 时则需间隔1秒继续轮询此接口获取上传最终结果。
time.Sleep(time.Second)
}
return nil, fmt.Errorf("upload complete timeout")
} }
var _ driver.Driver = (*Open123)(nil) var _ driver.Driver = (*Open123)(nil)
var _ driver.PutResult = (*Open123)(nil)
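
One side of the Put hunk above finishes an upload by polling the complete endpoint roughly once per second, up to 60 times, until it reports Completed with a non-zero FileID. A sketch of that bounded polling loop; the poll callback and the context-cancellation check are assumptions added here, not the driver's code:

package open123sketch

import (
	"context"
	"fmt"
	"time"
)

// completeStatus stands in for the relevant fields of the driver's
// upload-complete response.
type completeStatus struct {
	Completed bool
	FileID    int64
}

// waitForComplete polls once per second, up to maxTries, until the API says
// the upload is completed and returns a non-zero file ID.
func waitForComplete(ctx context.Context, maxTries int,
	poll func(ctx context.Context) (completeStatus, error)) (int64, error) {
	for i := 0; i < maxTries; i++ {
		st, err := poll(ctx)
		if err == nil && st.Completed && st.FileID != 0 {
			return st.FileID, nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(time.Second):
		}
	}
	return 0, fmt.Errorf("upload complete timeout after %d tries", maxTries)
}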

View File

@ -1,8 +1,8 @@
package _123_open package _123_open
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {
@ -23,11 +23,6 @@ type Addition struct {
// 上传线程数 // 上传线程数
UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"` UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
// 使用直链
DirectLink bool `json:"DirectLink" type:"bool" default:"false" required:"false" help:"use direct link when download file"`
DirectLinkPrivateKey string `json:"DirectLinkPrivateKey" required:"false" help:"private key for direct link, if URL authentication is enabled"`
DirectLinkValidDuration int64 `json:"DirectLinkValidDuration" type:"number" default:"30" required:"false" help:"minutes, if URL authentication is enabled"`
driver.RootID driver.RootID
} }

View File

@ -4,8 +4,8 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
) )
type ApiInfo struct { type ApiInfo struct {
@ -73,9 +73,7 @@ func (f File) GetName() string {
} }
func (f File) CreateTime() time.Time { func (f File) CreateTime() time.Time {
// 返回的时间没有时区信息,默认 UTC+8 parsedTime, err := time.Parse("2006-01-02 15:04:05", f.CreateAt)
loc := time.FixedZone("UTC+8", 8*60*60)
parsedTime, err := time.ParseInLocation("2006-01-02 15:04:05", f.CreateAt, loc)
if err != nil { if err != nil {
return time.Now() return time.Now()
} }
@ -83,9 +81,7 @@ func (f File) CreateTime() time.Time {
} }
func (f File) ModTime() time.Time { func (f File) ModTime() time.Time {
// 返回的时间没有时区信息,默认 UTC+8 parsedTime, err := time.Parse("2006-01-02 15:04:05", f.UpdateAt)
loc := time.FixedZone("UTC+8", 8*60*60)
parsedTime, err := time.ParseInLocation("2006-01-02 15:04:05", f.UpdateAt, loc)
if err != nil { if err != nil {
return time.Now() return time.Now()
} }
@ -127,19 +123,19 @@ type RefreshTokenResp struct {
type UserInfoResp struct { type UserInfoResp struct {
BaseResp BaseResp
Data struct { Data struct {
UID uint64 `json:"uid"` UID int64 `json:"uid"`
// Username string `json:"username"` Username string `json:"username"`
// DisplayName string `json:"displayName"` DisplayName string `json:"displayName"`
// HeadImage string `json:"headImage"` HeadImage string `json:"headImage"`
// Passport string `json:"passport"` Passport string `json:"passport"`
// Mail string `json:"mail"` Mail string `json:"mail"`
// SpaceUsed int64 `json:"spaceUsed"` SpaceUsed int64 `json:"spaceUsed"`
// SpacePermanent int64 `json:"spacePermanent"` SpacePermanent int64 `json:"spacePermanent"`
// SpaceTemp int64 `json:"spaceTemp"` SpaceTemp int64 `json:"spaceTemp"`
// SpaceTempExpr int64 `json:"spaceTempExpr"` SpaceTempExpr string `json:"spaceTempExpr"`
// Vip bool `json:"vip"` Vip bool `json:"vip"`
// DirectTraffic int64 `json:"directTraffic"` DirectTraffic int64 `json:"directTraffic"`
// IsHideUID bool `json:"isHideUID"` IsHideUID bool `json:"isHideUID"`
} `json:"data"` } `json:"data"`
} }
@ -158,30 +154,52 @@ type DownloadInfoResp struct {
} `json:"data"` } `json:"data"`
} }
type DirectLinkResp struct {
BaseResp
Data struct {
URL string `json:"url"`
} `json:"data"`
}
// 创建文件V2返回
type UploadCreateResp struct { type UploadCreateResp struct {
BaseResp BaseResp
Data struct { Data struct {
FileID int64 `json:"fileID"` FileID int64 `json:"fileID"`
PreuploadID string `json:"preuploadID"` PreuploadID string `json:"preuploadID"`
Reuse bool `json:"reuse"` Reuse bool `json:"reuse"`
SliceSize int64 `json:"sliceSize"` SliceSize int64 `json:"sliceSize"`
Servers []string `json:"servers"`
} `json:"data"` } `json:"data"`
} }
// 上传完毕V2返回 type UploadUrlResp struct {
BaseResp
Data struct {
PresignedURL string `json:"presignedURL"`
}
}
type UploadCompleteResp struct { type UploadCompleteResp struct {
BaseResp
Data struct {
Async bool `json:"async"`
Completed bool `json:"completed"`
FileID int64 `json:"fileID"`
} `json:"data"`
}
type UploadAsyncResp struct {
BaseResp BaseResp
Data struct { Data struct {
Completed bool `json:"completed"` Completed bool `json:"completed"`
FileID int64 `json:"fileID"` FileID int64 `json:"fileID"`
} `json:"data"` } `json:"data"`
} }
type UploadResp struct {
BaseResp
Data struct {
AccessKeyId string `json:"AccessKeyId"`
Bucket string `json:"Bucket"`
Key string `json:"Key"`
SecretAccessKey string `json:"SecretAccessKey"`
SessionToken string `json:"SessionToken"`
FileId int64 `json:"FileId"`
Reuse bool `json:"Reuse"`
EndPoint string `json:"EndPoint"`
StorageNode string `json:"StorageNode"`
UploadId string `json:"UploadId"`
} `json:"data"`
}
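
The types hunk above changes how CreateTime/ModTime interpret the API's zone-less timestamps: instead of time.Parse (which assumes UTC), they are parsed in a fixed UTC+8 zone. A minimal sketch of that idiom:

package open123sketch

import "time"

// cstZone reflects the assumption in the hunk above: the API returns
// "2006-01-02 15:04:05"-style timestamps without zone information, and they
// are interpreted as UTC+8 rather than UTC.
var cstZone = time.FixedZone("UTC+8", 8*60*60)

// parseAPITime parses such a timestamp in UTC+8, falling back to time.Now()
// on malformed input, matching the error handling shown in the diff.
func parseAPITime(s string) time.Time {
	t, err := time.ParseInLocation("2006-01-02 15:04:05", s, cstZone)
	if err != nil {
		return time.Now()
	}
	return t
}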

View File

@ -1,35 +1,27 @@
package _123_open package _123_open
import ( import (
"bytes"
"context" "context"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http" "net/http"
"strconv"
"strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup" "github.com/OpenListTeam/OpenList/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/avast/retry-go" "github.com/avast/retry-go"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
) )
// 创建文件 V2
func (d *Open123) create(parentFileID int64, filename string, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) { func (d *Open123) create(parentFileID int64, filename string, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) {
var resp UploadCreateResp var resp UploadCreateResp
_, err := d.Request(UploadCreate, http.MethodPost, func(req *resty.Request) { _, err := d.Request(UploadCreate, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{
"parentFileId": parentFileID, "parentFileId": parentFileID,
"filename": filename, "filename": filename,
"etag": strings.ToLower(etag), "etag": etag,
"size": size, "size": size,
"duplicate": duplicate, "duplicate": duplicate,
"containDir": containDir, "containDir": containDir,
@ -41,136 +33,21 @@ func (d *Open123) create(parentFileID int64, filename string, etag string, size
return &resp, nil return &resp, nil
} }
// 上传分片 V2 func (d *Open123) url(preuploadID string, sliceNo int64) (string, error) {
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createResp *UploadCreateResp, up driver.UpdateProgress) error { // get upload url
uploadDomain := createResp.Data.Servers[0] var resp UploadUrlResp
size := file.GetSize() _, err := d.Request(UploadUrl, http.MethodPost, func(req *resty.Request) {
chunkSize := createResp.Data.SliceSize req.SetBody(base.Json{
"preuploadId": preuploadID,
ss, err := stream.NewStreamSectionReader(file, int(chunkSize), &up) "sliceNo": sliceNo,
if err != nil {
return err
}
uploadNums := (size + chunkSize - 1) / chunkSize
thread := min(int(uploadNums), d.UploadThread)
threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
for partIndex := range uploadNums {
if utils.IsCanceled(uploadCtx) {
break
}
partIndex := partIndex
partNumber := partIndex + 1 // 分片号从1开始
offset := partIndex * chunkSize
size := min(chunkSize, size-offset)
var reader *stream.SectionReader
var rateLimitedRd io.Reader
sliceMD5 := ""
// 表单
b := bytes.NewBuffer(make([]byte, 0, 2048))
threadG.GoWithLifecycle(errgroup.Lifecycle{
Before: func(ctx context.Context) error {
if reader == nil {
var err error
// 每个分片一个reader
reader, err = ss.GetSectionReader(offset, size)
if err != nil {
return err
}
// 计算当前分片的MD5
sliceMD5, err = utils.HashReader(utils.MD5, reader)
if err != nil {
return err
}
}
return nil
},
Do: func(ctx context.Context) error {
// 重置分片reader位置因为HashReader、上一次失败已经读取到分片EOF
reader.Seek(0, io.SeekStart)
b.Reset()
w := multipart.NewWriter(b)
// 添加表单字段
err = w.WriteField("preuploadID", createResp.Data.PreuploadID)
if err != nil {
return err
}
err = w.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
if err != nil {
return err
}
err = w.WriteField("sliceMD5", sliceMD5)
if err != nil {
return err
}
// 写入文件内容
_, err = w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", file.GetName(), partNumber))
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd = driver.NewLimitedUploadStream(ctx, io.MultiReader(head, reader, tail))
// 创建请求并设置header
req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadDomain+"/upload/v2/file/slice", rateLimitedRd)
if err != nil {
return err
}
// 设置请求头
req.Header.Add("Authorization", "Bearer "+d.AccessToken)
req.Header.Add("Content-Type", w.FormDataContentType())
req.Header.Add("Platform", "open_platform")
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return fmt.Errorf("slice %d upload failed, status code: %d", partNumber, res.StatusCode)
}
var resp BaseResp
respBody, err := io.ReadAll(res.Body)
if err != nil {
return err
}
err = json.Unmarshal(respBody, &resp)
if err != nil {
return err
}
if resp.Code != 0 {
return fmt.Errorf("slice %d upload failed: %s", partNumber, resp.Message)
}
progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
up(progress)
return nil
},
After: func(err error) {
ss.FreeSectionReader(reader)
},
}) })
}, &resp)
if err != nil {
return "", err
} }
return resp.Data.PresignedURL, nil
if err := threadG.Wait(); err != nil {
return err
}
return nil
} }
// 上传完毕
func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) { func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) {
var resp UploadCompleteResp var resp UploadCompleteResp
_, err := d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) { _, err := d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) {
@ -183,3 +60,92 @@ func (d *Open123) complete(preuploadID string) (*UploadCompleteResp, error) {
} }
return &resp, nil return &resp, nil
} }
func (d *Open123) async(preuploadID string) (*UploadAsyncResp, error) {
var resp UploadAsyncResp
_, err := d.Request(UploadAsync, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"preuploadID": preuploadID,
})
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createResp *UploadCreateResp, up driver.UpdateProgress) error {
size := file.GetSize()
chunkSize := createResp.Data.SliceSize
uploadNums := (size + chunkSize - 1) / chunkSize
threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.UploadThread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
threadG.SetLimit(3)
for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
if utils.IsCanceled(uploadCtx) {
return ctx.Err()
}
partIndex := partIndex
partNumber := partIndex + 1 // 分片号从1开始
offset := partIndex * chunkSize
size := min(chunkSize, size-offset)
limitedReader, err := file.RangeRead(http_range.Range{
Start: offset,
Length: size})
if err != nil {
return err
}
limitedReader = driver.NewLimitedUploadStream(ctx, limitedReader)
threadG.Go(func(ctx context.Context) error {
uploadPartUrl, err := d.url(createResp.Data.PreuploadID, partNumber)
if err != nil {
return err
}
req, err := http.NewRequestWithContext(ctx, "PUT", uploadPartUrl, limitedReader)
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = size
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
progress := 10.0 + 85.0*float64(threadG.Success())/float64(uploadNums)
up(progress)
return nil
})
}
if err := threadG.Wait(); err != nil {
return err
}
uploadCompleteResp, err := d.complete(createResp.Data.PreuploadID)
if err != nil {
return err
}
if uploadCompleteResp.Data.Async == false || uploadCompleteResp.Data.Completed {
return nil
}
for {
uploadAsyncResp, err := d.async(createResp.Data.PreuploadID)
if err != nil {
return err
}
if uploadAsyncResp.Data.Completed {
break
}
}
up(100)
return nil
}
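
One side of the Upload hunk above streams each slice as a multipart/form-data request without buffering the slice in memory: the form fields and the file-part header are rendered into a buffer, the buffer is split where the file bytes belong, and io.MultiReader splices head, slice data and closing boundary together. A hedged sketch of that trick; the helper name and signature are illustrative:

package open123sketch

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"strconv"
)

// sliceFormBody builds the multipart body for one slice and returns a reader
// that streams the slice content in place of the empty file part.
func sliceFormBody(preuploadID string, sliceNo int64, sliceMD5, fileName string,
	slice io.Reader) (io.Reader, string, error) {
	b := &bytes.Buffer{}
	w := multipart.NewWriter(b)
	if err := w.WriteField("preuploadID", preuploadID); err != nil {
		return nil, "", err
	}
	if err := w.WriteField("sliceNo", strconv.FormatInt(sliceNo, 10)); err != nil {
		return nil, "", err
	}
	if err := w.WriteField("sliceMD5", sliceMD5); err != nil {
		return nil, "", err
	}
	// create the file part header, then remember where the file content starts
	if _, err := w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", fileName, sliceNo)); err != nil {
		return nil, "", err
	}
	headSize := b.Len()
	if err := w.Close(); err != nil {
		return nil, "", err
	}
	head := bytes.NewReader(b.Bytes()[:headSize])
	tail := bytes.NewReader(b.Bytes()[headSize:])
	return io.MultiReader(head, slice, tail), w.FormDataContentType(), nil
}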

View File

@ -1,20 +1,15 @@
package _123_open package _123_open
import ( import (
"crypto/md5"
"encoding/json" "encoding/json"
"errors" "errors"
"fmt"
"net/http" "net/http"
"net/url"
"strconv" "strconv"
"strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
"github.com/google/uuid"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
@ -24,15 +19,16 @@ var ( //不同情况下获取的AccessTokenQPS限制不同 如下模块化易于
AccessToken = InitApiInfo(Api+"/api/v1/access_token", 1) AccessToken = InitApiInfo(Api+"/api/v1/access_token", 1)
RefreshToken = InitApiInfo(Api+"/api/v1/oauth2/access_token", 1) RefreshToken = InitApiInfo(Api+"/api/v1/oauth2/access_token", 1)
UserInfo = InitApiInfo(Api+"/api/v1/user/info", 1) UserInfo = InitApiInfo(Api+"/api/v1/user/info", 1)
FileList = InitApiInfo(Api+"/api/v2/file/list", 3) FileList = InitApiInfo(Api+"/api/v2/file/list", 4)
DownloadInfo = InitApiInfo(Api+"/api/v1/file/download_info", 5) DownloadInfo = InitApiInfo(Api+"/api/v1/file/download_info", 0)
DirectLink = InitApiInfo(Api+"/api/v1/direct-link/url", 5)
Mkdir = InitApiInfo(Api+"/upload/v1/file/mkdir", 2) Mkdir = InitApiInfo(Api+"/upload/v1/file/mkdir", 2)
Move = InitApiInfo(Api+"/api/v1/file/move", 1) Move = InitApiInfo(Api+"/api/v1/file/move", 1)
Rename = InitApiInfo(Api+"/api/v1/file/name", 1) Rename = InitApiInfo(Api+"/api/v1/file/name", 1)
Trash = InitApiInfo(Api+"/api/v1/file/trash", 2) Trash = InitApiInfo(Api+"/api/v1/file/trash", 2)
UploadCreate = InitApiInfo(Api+"/upload/v2/file/create", 2) UploadCreate = InitApiInfo(Api+"/upload/v1/file/create", 2)
UploadComplete = InitApiInfo(Api+"/upload/v2/file/upload_complete", 0) UploadUrl = InitApiInfo(Api+"/upload/v1/file/get_upload_url", 0)
UploadComplete = InitApiInfo(Api+"/upload/v1/file/upload_complete", 0)
UploadAsync = InitApiInfo(Api+"/upload/v1/file/upload_async_result", 1)
) )
func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) { func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
@ -86,24 +82,8 @@ func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCall
} }
func (d *Open123) flushAccessToken() error { func (d *Open123) flushAccessToken() error {
if d.ClientID != "" { if d.Addition.ClientID != "" {
if d.RefreshToken != "" { if d.Addition.ClientSecret != "" {
var resp RefreshTokenResp
_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
req.SetQueryParam("client_id", d.ClientID)
if d.ClientSecret != "" {
req.SetQueryParam("client_secret", d.ClientSecret)
}
req.SetQueryParam("grant_type", "refresh_token")
req.SetQueryParam("refresh_token", d.RefreshToken)
}, &resp)
if err != nil {
return err
}
d.AccessToken = resp.AccessToken
d.RefreshToken = resp.RefreshToken
op.MustSaveDriverStorage(d)
} else if d.ClientSecret != "" {
var resp AccessTokenResp var resp AccessTokenResp
_, err := d.Request(AccessToken, http.MethodPost, func(req *resty.Request) { _, err := d.Request(AccessToken, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{
@ -116,38 +96,24 @@ func (d *Open123) flushAccessToken() error {
} }
d.AccessToken = resp.Data.AccessToken d.AccessToken = resp.Data.AccessToken
op.MustSaveDriverStorage(d) op.MustSaveDriverStorage(d)
} else if d.Addition.RefreshToken != "" {
var resp RefreshTokenResp
_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
req.SetQueryParam("client_id", d.ClientID)
req.SetQueryParam("grant_type", "refresh_token")
req.SetQueryParam("refresh_token", d.Addition.RefreshToken)
}, &resp)
if err != nil {
return err
}
d.AccessToken = resp.AccessToken
d.RefreshToken = resp.RefreshToken
op.MustSaveDriverStorage(d)
} }
} }
return nil return nil
} }
func (d *Open123) SignURL(originURL, privateKey string, uid uint64, validDuration time.Duration) (newURL string, err error) {
// 生成Unix时间戳
ts := time.Now().Add(validDuration).Unix()
// 生成随机数建议使用UUID不能包含中划线-
rand := strings.ReplaceAll(uuid.New().String(), "-", "")
// 解析URL
objURL, err := url.Parse(originURL)
if err != nil {
return "", err
}
// 待签名字符串格式path-timestamp-rand-uid-privateKey
unsignedStr := fmt.Sprintf("%s-%d-%s-%d-%s", objURL.Path, ts, rand, uid, privateKey)
md5Hash := md5.Sum([]byte(unsignedStr))
// 生成鉴权参数格式timestamp-rand-uid-md5hash
authKey := fmt.Sprintf("%d-%s-%d-%x", ts, rand, uid, md5Hash)
// 添加鉴权参数到URL查询参数
v := objURL.Query()
v.Add("auth_key", authKey)
objURL.RawQuery = v.Encode()
return objURL.String(), nil
}
func (d *Open123) getUserInfo() (*UserInfoResp, error) { func (d *Open123) getUserInfo() (*UserInfoResp, error) {
var resp UserInfoResp var resp UserInfoResp
@ -158,18 +124,6 @@ func (d *Open123) getUserInfo() (*UserInfoResp, error) {
return &resp, nil return &resp, nil
} }
func (d *Open123) getUID() (uint64, error) {
if d.UID != 0 {
return d.UID, nil
}
resp, err := d.getUserInfo()
if err != nil {
return 0, err
}
d.UID = resp.Data.UID
return resp.Data.UID, nil
}
func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) { func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) {
var resp FileListResp var resp FileListResp
@ -207,21 +161,6 @@ func (d *Open123) getDownloadInfo(fileId int64) (*DownloadInfoResp, error) {
return &resp, nil return &resp, nil
} }
func (d *Open123) getDirectLink(fileId int64) (*DirectLinkResp, error) {
var resp DirectLinkResp
_, err := d.Request(DirectLink, http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"fileID": strconv.FormatInt(fileId, 10),
})
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) mkdir(parentID int64, name string) error { func (d *Open123) mkdir(parentID int64, name string) error {
_, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) { _, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{
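
The SignURL helper visible in the util hunk above implements CDN-style URL authentication: an MD5 over "path-timestamp-rand-uid-privateKey", appended to the link as an auth_key query parameter in the form "timestamp-rand-uid-md5hex". A compact restatement of just the key computation, with every input a made-up placeholder:

package open123sketch

import (
	"crypto/md5"
	"fmt"
)

// authKey shows the auth_key format produced by the SignURL helper above.
func authKey(path string, ts int64, rand string, uid uint64, privateKey string) string {
	sum := md5.Sum([]byte(fmt.Sprintf("%s-%d-%s-%d-%s", path, ts, rand, uid, privateKey)))
	return fmt.Sprintf("%d-%s-%d-%x", ts, rand, uid, sum)
}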

View File

@ -11,12 +11,12 @@ import (
"golang.org/x/time/rate" "golang.org/x/time/rate"
_123 "github.com/OpenListTeam/OpenList/v4/drivers/123" _123 "github.com/OpenListTeam/OpenList/drivers/123"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
@ -70,6 +70,14 @@ func (d *Pan123Share) List(ctx context.Context, dir model.Obj, args model.ListAr
func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
// TODO return link of file, required // TODO return link of file, required
if f, ok := file.(File); ok { if f, ok := file.(File); ok {
//var resp DownResp
var headers map[string]string
if !utils.IsLocalIPAddr(args.IP) {
headers = map[string]string{
//"X-Real-IP": "1.1.1.1",
"X-Forwarded-For": args.IP,
}
}
data := base.Json{ data := base.Json{
"shareKey": d.ShareKey, "shareKey": d.ShareKey,
"SharePwd": d.SharePwd, "SharePwd": d.SharePwd,
@ -79,27 +87,25 @@ func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkA
"size": f.Size, "size": f.Size,
} }
resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) { resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data).SetHeaders(headers)
}, nil) }, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
downloadUrl := utils.Json.Get(resp, "data", "DownloadURL").ToString() downloadUrl := utils.Json.Get(resp, "data", "DownloadURL").ToString()
ou, err := url.Parse(downloadUrl) u, err := url.Parse(downloadUrl)
if err != nil { if err != nil {
return nil, err return nil, err
} }
u_ := ou.String() nu := u.Query().Get("params")
nu := ou.Query().Get("params")
if nu != "" { if nu != "" {
du, _ := base64.StdEncoding.DecodeString(nu) du, _ := base64.StdEncoding.DecodeString(nu)
u, err := url.Parse(string(du)) u, err = url.Parse(string(du))
if err != nil { if err != nil {
return nil, err return nil, err
} }
u_ = u.String()
} }
u_ := u.String()
log.Debug("download url: ", u_) log.Debug("download url: ", u_)
res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_) res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_)
if err != nil { if err != nil {
@ -116,7 +122,7 @@ func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkA
link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString() link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString()
} }
link.Header = http.Header{ link.Header = http.Header{
"Referer": []string{fmt.Sprintf("%s://%s/", ou.Scheme, ou.Host)}, "Referer": []string{"https://www.123pan.com/"},
} }
return &link, nil return &link, nil
} }

View File

@ -1,8 +1,8 @@
package _123Share package _123Share
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {
@ -15,10 +15,17 @@ type Addition struct {
} }
var config = driver.Config{ var config = driver.Config{
Name: "123PanShare", Name: "123PanShare",
LocalSort: true, LocalSort: true,
NoUpload: true, OnlyLocal: false,
DefaultRoot: "0", OnlyProxy: false,
NoCache: false,
NoUpload: true,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
} }
func init() { func init() {

View File

@ -7,9 +7,9 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
) )
type File struct { type File struct {

View File

@ -13,8 +13,8 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go" jsoniter "github.com/json-iterator/go"
) )
@ -61,7 +61,7 @@ func (d *Pan123Share) request(url string, method string, callback base.ReqCallba
"origin": "https://www.123pan.com", "origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/", "referer": "https://www.123pan.com/",
"authorization": "Bearer " + d.AccessToken, "authorization": "Bearer " + d.AccessToken,
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) openlist-client", "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) opnelist-client",
"platform": "web", "platform": "web",
"app-version": "3", "app-version": "3",
//"user-agent": base.UserAgent, //"user-agent": base.UserAgent,

View File

@ -10,14 +10,14 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
streamPkg "github.com/OpenListTeam/OpenList/v4/internal/stream" streamPkg "github.com/OpenListTeam/OpenList/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/cron" "github.com/OpenListTeam/OpenList/pkg/cron"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random" "github.com/OpenListTeam/OpenList/pkg/utils/random"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
@ -522,27 +522,30 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
var err error var err error
fullHash := stream.GetHash().GetHash(utils.SHA256) fullHash := stream.GetHash().GetHash(utils.SHA256)
if len(fullHash) != utils.SHA256.Width { if len(fullHash) != utils.SHA256.Width {
_, fullHash, err = streamPkg.CacheFullAndHash(stream, &up, utils.SHA256) _, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA256)
if err != nil { if err != nil {
return err return err
} }
} }
size := stream.GetSize() size := stream.GetSize()
partSize := d.getPartSize(size) var partSize = d.getPartSize(size)
part := int64(1) part := size / partSize
if size > partSize { if size%partSize > 0 {
part = (size + partSize - 1) / partSize part++
} else if part == 0 {
part = 1
} }
// 生成所有 partInfos
partInfos := make([]PartInfo, 0, part) partInfos := make([]PartInfo, 0, part)
for i := int64(0); i < part; i++ { for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) { if utils.IsCanceled(ctx) {
return ctx.Err() return ctx.Err()
} }
start := i * partSize start := i * partSize
byteSize := min(size-start, partSize) byteSize := size - start
if byteSize > partSize {
byteSize = partSize
}
partNumber := i + 1 partNumber := i + 1
partInfo := PartInfo{ partInfo := PartInfo{
PartNumber: partNumber, PartNumber: partNumber,
@ -590,20 +593,17 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
// resp.Data.RapidUpload: true 支持快传,但此处直接检测是否返回分片的上传地址 // resp.Data.RapidUpload: true 支持快传,但此处直接检测是否返回分片的上传地址
// 快传的情况下同样需要手动处理冲突 // 快传的情况下同样需要手动处理冲突
if resp.Data.PartInfos != nil { if resp.Data.PartInfos != nil {
// Progress // 读取前100个分片的上传地址
p := driver.NewProgress(size, up) uploadPartInfos := resp.Data.PartInfos
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// 先上传前100个分片 // 获取后续分片的上传地址
err = d.uploadPersonalParts(ctx, partInfos, resp.Data.PartInfos, rateLimited, p) for i := 101; i < len(partInfos); i += 100 {
if err != nil { end := i + 100
return err if end > len(partInfos) {
} end = len(partInfos)
}
// 如果还有剩余分片,分批获取上传地址并上传
for i := 100; i < len(partInfos); i += 100 {
end := min(i+100, len(partInfos))
batchPartInfos := partInfos[i:end] batchPartInfos := partInfos[i:end]
moredata := base.Json{ moredata := base.Json{
"fileId": resp.Data.FileId, "fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId, "uploadId": resp.Data.UploadId,
@ -619,13 +619,45 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil { if err != nil {
return err return err
} }
err = d.uploadPersonalParts(ctx, partInfos, moreresp.Data.PartInfos, rateLimited, p) uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
}
// Progress
p := driver.NewProgress(size, up)
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// 上传所有分片
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
limitReader := io.LimitReader(rateLimited, partSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("PUT", uploadPartInfo.UploadUrl, r)
if err != nil { if err != nil {
return err return err
} }
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(partSize))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = partSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("[139] uploaded: %+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
} }
// 全部分片上传完毕后complete
data = base.Json{ data = base.Json{
"contentHash": fullHash, "contentHash": fullHash,
"contentHashAlgorithm": "SHA256", "contentHashAlgorithm": "SHA256",
@ -754,10 +786,12 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
size := stream.GetSize() size := stream.GetSize()
// Progress // Progress
p := driver.NewProgress(size, up) p := driver.NewProgress(size, up)
partSize := d.getPartSize(size) var partSize = d.getPartSize(size)
part := int64(1) part := size / partSize
if size > partSize { if size%partSize > 0 {
part = (size + partSize - 1) / partSize part++
} else if part == 0 {
part = 1
} }
rateLimited := driver.NewLimitedUploadStream(ctx, stream) rateLimited := driver.NewLimitedUploadStream(ctx, stream)
for i := int64(0); i < part; i++ { for i := int64(0); i < part; i++ {
@ -771,10 +805,12 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
limitReader := io.LimitReader(rateLimited, byteSize) limitReader := io.LimitReader(rateLimited, byteSize)
// Update Progress // Update Progress
r := io.TeeReader(limitReader, p) r := io.TeeReader(limitReader, p)
req, err := http.NewRequestWithContext(ctx, http.MethodPost, resp.Data.UploadResult.RedirectionURL, r) req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
if err != nil { if err != nil {
return err return err
} }
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName())) req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
req.Header.Set("contentSize", strconv.FormatInt(size, 10)) req.Header.Set("contentSize", strconv.FormatInt(size, 10))
req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1)) req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))

View File

@ -1,8 +1,8 @@
package _139 package _139
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {

View File

@ -1,11 +1,9 @@
package _139 package _139
import ( import (
"context"
"encoding/base64" "encoding/base64"
"errors" "errors"
"fmt" "fmt"
"io"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@ -14,12 +12,11 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils/random"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go" jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
@ -626,47 +623,3 @@ func (d *Yun139) getPersonalCloudHost() string {
} }
return d.PersonalCloudHost return d.PersonalCloudHost
} }
func (d *Yun139) uploadPersonalParts(ctx context.Context, partInfos []PartInfo, uploadPartInfos []PersonalPartInfo, rateLimited *driver.RateLimitReader, p *driver.Progress) error {
// 确保数组以 PartNumber 从小到大排序
sort.Slice(uploadPartInfos, func(i, j int) bool {
return uploadPartInfos[i].PartNumber < uploadPartInfos[j].PartNumber
})
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
if index < 0 || index >= len(partInfos) {
return fmt.Errorf("invalid PartNumber %d: index out of bounds (partInfos length: %d)", uploadPartInfo.PartNumber, len(partInfos))
}
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(partInfos))
limitReader := io.LimitReader(rateLimited, partSize)
r := io.TeeReader(limitReader, p)
req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(partSize))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = partSize
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
log.Debugf("[139] uploaded: %+v", res)
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("unexpected status code: %d, body: %s", res.StatusCode, string(body))
}
return nil
}()
if err != nil {
return err
}
}
return nil
}
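
The uploadPersonalParts helper shown on one side of the hunk above also adds two defensive steps before uploading: the server-returned part list is sorted by PartNumber, and each number is bounds-checked against the locally built part list. A small sketch of just those checks; the type and field names are simplified stand-ins for PersonalPartInfo:

package yun139sketch

import (
	"fmt"
	"sort"
)

// serverPart stands in for the PartNumber/UploadUrl fields of the server's
// part-info response.
type serverPart struct {
	PartNumber int
	UploadUrl  string
}

// orderAndCheck sorts the server-returned parts by PartNumber and rejects any
// number that has no counterpart among the localParts computed parts.
func orderAndCheck(parts []serverPart, localParts int) error {
	sort.Slice(parts, func(i, j int) bool { return parts[i].PartNumber < parts[j].PartNumber })
	for _, p := range parts {
		if idx := p.PartNumber - 1; idx < 0 || idx >= localParts {
			return fmt.Errorf("invalid PartNumber %d: index out of bounds (parts: %d)", p.PartNumber, localParts)
		}
	}
	return nil
}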

View File

@ -5,10 +5,10 @@ import (
"net/http" "net/http"
"strings" "strings"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )

View File

@ -18,7 +18,7 @@ import (
"strconv" "strconv"
"strings" "strings"
myrand "github.com/OpenListTeam/OpenList/v4/pkg/utils/random" myrand "github.com/OpenListTeam/OpenList/pkg/utils/random"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )

View File

@ -4,7 +4,7 @@ import (
"errors" "errors"
"strconv" "strconv"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )

View File

@ -1,8 +1,8 @@
package _189 package _189
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {

View File

@ -15,11 +15,11 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
myrand "github.com/OpenListTeam/OpenList/v4/pkg/utils/random" myrand "github.com/OpenListTeam/OpenList/pkg/utils/random"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go" jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
@ -365,10 +365,11 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
log.Debugf("uploadData: %+v", uploadData) log.Debugf("uploadData: %+v", uploadData)
requestURL := uploadData.RequestURL requestURL := uploadData.RequestURL
uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&") uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&")
req, err := http.NewRequestWithContext(ctx, http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData))) req, err := http.NewRequest(http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil { if err != nil {
return err return err
} }
req = req.WithContext(ctx)
for _, v := range uploadHeaders { for _, v := range uploadHeaders {
i := strings.Index(v, "=") i := strings.Index(v, "=")
req.Header.Set(v[0:i], v[i+1:]) req.Header.Set(v[0:i], v[i+1:])

View File

@ -1,274 +0,0 @@
package _189_tv
import (
"container/ring"
"context"
"net/http"
"strconv"
"strings"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/go-resty/resty/v2"
)
type Cloud189TV struct {
model.Storage
Addition
client *resty.Client
tokenInfo *AppSessionResp
uploadThread int
familyTransferFolder *ring.Ring
cleanFamilyTransferFile func()
storageConfig driver.Config
}
func (y *Cloud189TV) Config() driver.Config {
if y.storageConfig.Name == "" {
y.storageConfig = config
}
return y.storageConfig
}
func (y *Cloud189TV) GetAddition() driver.Additional {
return &y.Addition
}
func (y *Cloud189TV) Init(ctx context.Context) (err error) {
// Compatibility with the legacy upload API
y.storageConfig.NoOverwriteUpload = y.isFamily() && y.Addition.RapidUpload
// Normalize personal-cloud and family-cloud parameters
if y.isFamily() && y.RootFolderID == "-11" {
y.RootFolderID = ""
}
if !y.isFamily() && y.RootFolderID == "" {
y.RootFolderID = "-11"
}
// Limit the number of upload threads
y.uploadThread, _ = strconv.Atoi(y.UploadThread)
if y.uploadThread < 1 || y.uploadThread > 32 {
y.uploadThread, y.UploadThread = 3, "3"
}
// Initialize the HTTP client
if y.client == nil {
y.client = base.NewRestyClient().SetHeaders(
map[string]string{
"Accept": "application/json;charset=UTF-8",
"User-Agent": "EcloudTV/6.5.5 (PJX110; unknown; home02) Android/35",
},
)
}
// Avoid logging in repeatedly
if !y.isLogin() || y.Addition.AccessToken == "" {
if err = y.login(); err != nil {
return
}
}
// Resolve the family cloud ID
if y.FamilyID == "" {
if y.FamilyID, err = y.getFamilyID(); err != nil {
return err
}
}
return
}
func (y *Cloud189TV) Drop(ctx context.Context) error {
return nil
}
func (y *Cloud189TV) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
return y.getFiles(ctx, dir.GetID(), y.isFamily())
}
func (y *Cloud189TV) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl struct {
URL string `json:"fileDownloadUrl"`
}
isFamily := y.isFamily()
fullUrl := ApiUrl
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/getFileDownloadUrl.action"
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParam("fileId", file.GetID())
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
})
} else {
r.SetQueryParams(map[string]string{
"dt": "3",
"flag": "1",
})
}
}, &downloadUrl, isFamily)
if err != nil {
return nil, err
}
// Follow the redirect to get the real download link
downloadUrl.URL = strings.Replace(strings.ReplaceAll(downloadUrl.URL, "&amp;", "&"), "http://", "https://", 1)
res, err := base.NoRedirectClient.R().SetContext(ctx).SetDoNotParseResponse(true).Get(downloadUrl.URL)
if err != nil {
return nil, err
}
defer res.RawBody().Close()
if res.StatusCode() == 302 {
downloadUrl.URL = res.Header().Get("location")
}
like := &model.Link{
URL: downloadUrl.URL,
Header: http.Header{
"User-Agent": []string{base.UserAgent},
},
}
return like, nil
}
func (y *Cloud189TV) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
isFamily := y.isFamily()
fullUrl := ApiUrl
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/createFolder.action"
var newFolder Cloud189Folder
_, err := y.post(fullUrl, func(req *resty.Request) {
req.SetContext(ctx)
req.SetQueryParams(map[string]string{
"folderName": dirName,
"relativePath": "",
})
if isFamily {
req.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"parentId": parentDir.GetID(),
})
} else {
req.SetQueryParams(map[string]string{
"parentFolderId": parentDir.GetID(),
})
}
}, &newFolder, isFamily)
if err != nil {
return nil, err
}
return &newFolder, nil
}
func (y *Cloud189TV) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
isFamily := y.isFamily()
other := map[string]string{"targetFileName": dstDir.GetName()}
resp, err := y.CreateBatchTask("MOVE", IF(isFamily, y.FamilyID, ""), dstDir.GetID(), other, BatchTaskInfo{
FileId: srcObj.GetID(),
FileName: srcObj.GetName(),
IsFolder: BoolToNumber(srcObj.IsDir()),
})
if err != nil {
return nil, err
}
if err = y.WaitBatchTask("MOVE", resp.TaskID, time.Millisecond*400); err != nil {
return nil, err
}
return srcObj, nil
}
func (y *Cloud189TV) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
isFamily := y.isFamily()
queryParam := make(map[string]string)
fullUrl := ApiUrl
method := http.MethodPost
if isFamily {
fullUrl += "/family/file"
method = http.MethodGet
queryParam["familyId"] = y.FamilyID
}
var newObj model.Obj
switch f := srcObj.(type) {
case *Cloud189File:
fullUrl += "/renameFile.action"
queryParam["fileId"] = srcObj.GetID()
queryParam["destFileName"] = newName
newObj = &Cloud189File{Icon: f.Icon} // reuse the preview icon
case *Cloud189Folder:
fullUrl += "/renameFolder.action"
queryParam["folderId"] = srcObj.GetID()
queryParam["destFolderName"] = newName
newObj = &Cloud189Folder{}
default:
return nil, errs.NotSupport
}
_, err := y.request(fullUrl, method, func(req *resty.Request) {
req.SetContext(ctx).SetQueryParams(queryParam)
}, nil, newObj, isFamily)
if err != nil {
return nil, err
}
return newObj, nil
}
func (y *Cloud189TV) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
isFamily := y.isFamily()
other := map[string]string{"targetFileName": dstDir.GetName()}
resp, err := y.CreateBatchTask("COPY", IF(isFamily, y.FamilyID, ""), dstDir.GetID(), other, BatchTaskInfo{
FileId: srcObj.GetID(),
FileName: srcObj.GetName(),
IsFolder: BoolToNumber(srcObj.IsDir()),
})
if err != nil {
return err
}
return y.WaitBatchTask("COPY", resp.TaskID, time.Second)
}
func (y *Cloud189TV) Remove(ctx context.Context, obj model.Obj) error {
isFamily := y.isFamily()
resp, err := y.CreateBatchTask("DELETE", IF(isFamily, y.FamilyID, ""), "", nil, BatchTaskInfo{
FileId: obj.GetID(),
FileName: obj.GetName(),
IsFolder: BoolToNumber(obj.IsDir()),
})
if err != nil {
return err
}
// Batch tasks are rate limited; issuing them too quickly makes deletion fail
return y.WaitBatchTask("DELETE", resp.TaskID, time.Millisecond*200)
}
func (y *Cloud189TV) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (newObj model.Obj, err error) {
overwrite := true
isFamily := y.isFamily()
// Slow to respond; enable only when needed
if y.Addition.RapidUpload && !stream.IsForceStreamUpload() {
if newObj, err := y.RapidUpload(ctx, dstDir, stream, isFamily, overwrite); err == nil {
return newObj, nil
}
}
return y.OldUpload(ctx, dstDir, stream, up, isFamily, overwrite)
}

View File

@ -1,166 +0,0 @@
package _189_tv
import (
"bytes"
"crypto/hmac"
"crypto/sha1"
"encoding/hex"
"encoding/xml"
"fmt"
"net/http"
"regexp"
"strings"
"time"
)
func clientSuffix() map[string]string {
return map[string]string{
"clientType": AndroidTV,
"version": TvVersion,
"channelId": TvChannelId,
"clientSn": "unknown",
"model": "PJX110",
"osFamily": "Android",
"osVersion": "35",
"networkAccessMode": "WIFI",
"telecomsOperator": "46011",
}
}
// SessionKeySignatureOfHmac computes the HMAC-SHA1 signature for session requests
func SessionKeySignatureOfHmac(sessionSecret, sessionKey, operate, fullUrl, dateOfGmt string) string {
urlpath := regexp.MustCompile(`://[^/]+((/[^/\s?#]+)*)`).FindStringSubmatch(fullUrl)[1]
mac := hmac.New(sha1.New, []byte(sessionSecret))
data := fmt.Sprintf("SessionKey=%s&Operate=%s&RequestURI=%s&Date=%s", sessionKey, operate, urlpath, dateOfGmt)
mac.Write([]byte(data))
return strings.ToUpper(hex.EncodeToString(mac.Sum(nil)))
}
// AppKeySignatureOfHmac computes the HMAC-SHA1 signature for app-key requests
func AppKeySignatureOfHmac(sessionSecret, appKey, operate, fullUrl string, timestamp int64) string {
urlpath := regexp.MustCompile(`://[^/]+((/[^/\s?#]+)*)`).FindStringSubmatch(fullUrl)[1]
mac := hmac.New(sha1.New, []byte(sessionSecret))
data := fmt.Sprintf("AppKey=%s&Operate=%s&RequestURI=%s&Timestamp=%d", appKey, operate, urlpath, timestamp)
mac.Write([]byte(data))
return strings.ToUpper(hex.EncodeToString(mac.Sum(nil)))
}
// Current time formatted per the HTTP date specification
func getHttpDateStr() string {
return time.Now().UTC().Format(http.TimeFormat)
}
// Millisecond timestamp
func timestamp() int64 {
return time.Now().UTC().UnixNano() / 1e6
}
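// Time wraps time.Time to parse the non-standard timestamp formats returned by the 189 API.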
type Time time.Time
func (t *Time) UnmarshalJSON(b []byte) error { return t.Unmarshal(b) }
func (t *Time) UnmarshalXML(e *xml.Decoder, ee xml.StartElement) error {
b, err := e.Token()
if err != nil {
return err
}
if b, ok := b.(xml.CharData); ok {
if err = t.Unmarshal(b); err != nil {
return err
}
}
return e.Skip()
}
func (t *Time) Unmarshal(b []byte) error {
bs := strings.Trim(string(b), "\"")
var v time.Time
var err error
for _, f := range []string{"2006-01-02 15:04:05 -07", "Jan 2, 2006 15:04:05 PM -07"} {
v, err = time.ParseInLocation(f, bs+" +08", time.Local)
if err == nil {
break
}
}
*t = Time(v)
return err
}
type String string
func (t *String) UnmarshalJSON(b []byte) error { return t.Unmarshal(b) }
func (t *String) UnmarshalXML(e *xml.Decoder, ee xml.StartElement) error {
b, err := e.Token()
if err != nil {
return err
}
if b, ok := b.(xml.CharData); ok {
if err = t.Unmarshal(b); err != nil {
return err
}
}
return e.Skip()
}
func (s *String) Unmarshal(b []byte) error {
*s = String(bytes.Trim(b, "\""))
return nil
}
func toFamilyOrderBy(o string) string {
switch o {
case "filename":
return "1"
case "filesize":
return "2"
case "lastOpTime":
return "3"
default:
return "1"
}
}
func toDesc(o string) string {
switch o {
case "desc":
return "true"
case "asc":
fallthrough
default:
return "false"
}
}
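// ParseHttpHeader parses a "k1=v1&k2=v2" string into a header map.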
func ParseHttpHeader(str string) map[string]string {
header := make(map[string]string)
for _, value := range strings.Split(str, "&") {
if k, v, found := strings.Cut(value, "="); found {
header[k] = v
}
}
return header
}
func MustString(str string, err error) string {
return str
}
func BoolToNumber(b bool) int {
if b {
return 1
}
return 0
}
func isBool(bs ...bool) bool {
for _, b := range bs {
if b {
return true
}
}
return false
}
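// IF is a generic ternary helper: it returns t when o is true, otherwise f.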
func IF[V any](o bool, t V, f V) V {
if o {
return t
}
return f
}

View File

@ -1,30 +0,0 @@
package _189_tv
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootID
AccessToken string `json:"access_token"`
TempUuid string
OrderBy string `json:"order_by" type:"select" options:"filename,filesize,lastOpTime" default:"filename"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
Type string `json:"type" type:"select" options:"personal,family" default:"personal"`
FamilyID string `json:"family_id"`
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
RapidUpload bool `json:"rapid_upload"`
}
var config = driver.Config{
Name: "189CloudTV",
DefaultRoot: "-11",
CheckStatus: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Cloud189TV{}
})
}

View File

@ -1,318 +0,0 @@
package _189_tv
import (
"encoding/xml"
"fmt"
"time"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
// Surprisingly, there are four different error response formats
type RespErr struct {
ResCode any `json:"res_code"` // int or string
ResMessage string `json:"res_message"`
Error_ string `json:"error"`
XMLName xml.Name `xml:"error"`
Code string `json:"code" xml:"code"`
Message string `json:"message" xml:"message"`
Msg string `json:"msg"`
ErrorCode string `json:"errorCode"`
ErrorMsg string `json:"errorMsg"`
}
func (e *RespErr) HasError() bool {
switch v := e.ResCode.(type) {
case int, int64, int32:
return v != 0
case string:
return e.ResCode != ""
}
return (e.Code != "" && e.Code != "SUCCESS") || e.ErrorCode != "" || e.Error_ != ""
}
func (e *RespErr) Error() string {
switch v := e.ResCode.(type) {
case int, int64, int32:
if v != 0 {
return fmt.Sprintf("res_code: %d ,res_msg: %s", v, e.ResMessage)
}
case string:
if e.ResCode != "" {
return fmt.Sprintf("res_code: %s ,res_msg: %s", e.ResCode, e.ResMessage)
}
}
if e.Code != "" && e.Code != "SUCCESS" {
if e.Msg != "" {
return fmt.Sprintf("code: %s ,msg: %s", e.Code, e.Msg)
}
if e.Message != "" {
return fmt.Sprintf("code: %s ,msg: %s", e.Code, e.Message)
}
return "code: " + e.Code
}
if e.ErrorCode != "" {
return fmt.Sprintf("err_code: %s ,err_msg: %s", e.ErrorCode, e.ErrorMsg)
}
if e.Error_ != "" {
return fmt.Sprintf("error: %s ,message: %s", e.ErrorCode, e.Message)
}
return ""
}
// Session refresh response
type UserSessionResp struct {
ResCode int `json:"res_code"`
ResMessage string `json:"res_message"`
LoginName string `json:"loginName"`
KeepAlive int `json:"keepAlive"`
GetFileDiffSpan int `json:"getFileDiffSpan"`
GetUserInfoSpan int `json:"getUserInfoSpan"`
// Personal cloud
SessionKey string `json:"sessionKey"`
SessionSecret string `json:"sessionSecret"`
// Family cloud
FamilySessionKey string `json:"familySessionKey"`
FamilySessionSecret string `json:"familySessionSecret"`
}
type UuidInfoResp struct {
Uuid string `json:"uuid"`
}
type E189AccessTokenResp struct {
E189AccessToken string `json:"accessToken"`
ExpiresIn int64 `json:"expiresIn"`
}
// Login response
type AppSessionResp struct {
UserSessionResp
IsSaveName string `json:"isSaveName"`
// Session refresh token
AccessToken string `json:"accessToken"`
// Token refresh
RefreshToken string `json:"refreshToken"`
}
// Family cloud account
type FamilyInfoListResp struct {
FamilyInfoResp []FamilyInfoResp `json:"familyInfoResp"`
}
type FamilyInfoResp struct {
Count int `json:"count"`
CreateTime string `json:"createTime"`
FamilyID int64 `json:"familyId"`
RemarkName string `json:"remarkName"`
Type int `json:"type"`
UseFlag int `json:"useFlag"`
UserRole int `json:"userRole"`
}
/* File section */
// File
type Cloud189File struct {
ID String `json:"id"`
Name string `json:"name"`
Size int64 `json:"size"`
Md5 string `json:"md5"`
LastOpTime Time `json:"lastOpTime"`
CreateDate Time `json:"createDate"`
Icon struct {
//iconOption 5
SmallUrl string `json:"smallUrl"`
LargeUrl string `json:"largeUrl"`
// iconOption 10
Max600 string `json:"max600"`
MediumURL string `json:"mediumUrl"`
} `json:"icon"`
// Orientation int64 `json:"orientation"`
// FileCata int64 `json:"fileCata"`
// MediaType int `json:"mediaType"`
// Rev string `json:"rev"`
// StarLabel int64 `json:"starLabel"`
}
func (c *Cloud189File) CreateTime() time.Time {
return time.Time(c.CreateDate)
}
func (c *Cloud189File) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.MD5, c.Md5)
}
func (c *Cloud189File) GetSize() int64 { return c.Size }
func (c *Cloud189File) GetName() string { return c.Name }
func (c *Cloud189File) ModTime() time.Time { return time.Time(c.LastOpTime) }
func (c *Cloud189File) IsDir() bool { return false }
func (c *Cloud189File) GetID() string { return string(c.ID) }
func (c *Cloud189File) GetPath() string { return "" }
func (c *Cloud189File) Thumb() string { return c.Icon.SmallUrl }
// Folder
type Cloud189Folder struct {
ID String `json:"id"`
ParentID int64 `json:"parentId"`
Name string `json:"name"`
LastOpTime Time `json:"lastOpTime"`
CreateDate Time `json:"createDate"`
// FileListSize int64 `json:"fileListSize"`
// FileCount int64 `json:"fileCount"`
// FileCata int64 `json:"fileCata"`
// Rev string `json:"rev"`
// StarLabel int64 `json:"starLabel"`
}
func (c *Cloud189Folder) CreateTime() time.Time {
return time.Time(c.CreateDate)
}
func (c *Cloud189Folder) GetHash() utils.HashInfo {
return utils.HashInfo{}
}
func (c *Cloud189Folder) GetSize() int64 { return 0 }
func (c *Cloud189Folder) GetName() string { return c.Name }
func (c *Cloud189Folder) ModTime() time.Time { return time.Time(c.LastOpTime) }
func (c *Cloud189Folder) IsDir() bool { return true }
func (c *Cloud189Folder) GetID() string { return string(c.ID) }
func (c *Cloud189Folder) GetPath() string { return "" }
type Cloud189FilesResp struct {
//ResCode int `json:"res_code"`
//ResMessage string `json:"res_message"`
FileListAO struct {
Count int `json:"count"`
FileList []Cloud189File `json:"fileList"`
FolderList []Cloud189Folder `json:"folderList"`
} `json:"fileListAO"`
}
// BatchTaskInfo describes a single batch task entry
type BatchTaskInfo struct {
// FileId file ID
FileId string `json:"fileId"`
// FileName file name
FileName string `json:"fileName"`
// IsFolder whether the entry is a folder: 0 - no, 1 - yes
IsFolder int `json:"isFolder"`
// SrcParentId ID of the file's parent directory
SrcParentId string `json:"srcParentId,omitempty"`
/* Conflict handling */
// 1 -> skip, 2 -> keep, 3 -> overwrite
DealWay int `json:"dealWay,omitempty"`
IsConflict int `json:"isConflict,omitempty"`
}
/* Upload section */
type InitMultiUploadResp struct {
//Code string `json:"code"`
Data struct {
UploadType int `json:"uploadType"`
UploadHost string `json:"uploadHost"`
UploadFileID string `json:"uploadFileId"`
FileDataExists int `json:"fileDataExists"`
} `json:"data"`
}
type UploadUrlsResp struct {
Code string `json:"code"`
Data map[string]UploadUrlsData `json:"uploadUrls"`
}
type UploadUrlsData struct {
RequestURL string `json:"requestURL"`
RequestHeader string `json:"requestHeader"`
}
/* Second upload method */
type CreateUploadFileResp struct {
// Upload request ID
UploadFileId int64 `json:"uploadFileId"`
// URL for uploading the file data
FileUploadUrl string `json:"fileUploadUrl"`
// URL for committing the upload once the data transfer finishes
FileCommitUrl string `json:"fileCommitUrl"`
// Whether the file already exists in the cloud drive: 0 - no, 1 - yes
FileDataExists int `json:"fileDataExists"`
}
type GetUploadFileStatusResp struct {
CreateUploadFileResp
// Size uploaded so far
DataSize int64 `json:"dataSize"`
Size int64 `json:"size"`
}
func (r *GetUploadFileStatusResp) GetSize() int64 {
return r.DataSize + r.Size
}
type CommitMultiUploadFileResp struct {
File struct {
UserFileID String `json:"userFileId"`
FileName string `json:"fileName"`
FileSize int64 `json:"fileSize"`
FileMd5 string `json:"fileMd5"`
CreateDate Time `json:"createDate"`
} `json:"file"`
}
type OldCommitUploadFileResp struct {
XMLName xml.Name `xml:"file"`
ID String `xml:"id"`
Name string `xml:"name"`
Size int64 `xml:"size"`
Md5 string `xml:"md5"`
CreateDate Time `xml:"createDate"`
}
func (f *OldCommitUploadFileResp) toFile() *Cloud189File {
return &Cloud189File{
ID: f.ID,
Name: f.Name,
Size: f.Size,
Md5: f.Md5,
CreateDate: f.CreateDate,
LastOpTime: f.CreateDate,
}
}
type CreateBatchTaskResp struct {
TaskID string `json:"taskId"`
}
type BatchTaskStateResp struct {
FailedCount int `json:"failedCount"`
Process int `json:"process"`
SkipCount int `json:"skipCount"`
SubTaskCount int `json:"subTaskCount"`
SuccessedCount int `json:"successedCount"`
SuccessedFileIDList []int64 `json:"successedFileIdList"`
TaskID string `json:"taskId"`
TaskStatus int `json:"taskStatus"` // 1 initializing, 2 conflict, 3 running, 4 finished
}
type BatchTaskConflictTaskInfoResp struct {
SessionKey string `json:"sessionKey"`
TargetFolderID int `json:"targetFolderId"`
TaskID string `json:"taskId"`
TaskInfos []BatchTaskInfo
TaskType int `json:"taskType"`
}

View File

@ -1,574 +0,0 @@
package _189_tv
import (
"context"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/skip2/go-qrcode"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
jsoniter "github.com/json-iterator/go"
"github.com/pkg/errors"
)
const (
TVAppKey = "600100885"
TVAppSignatureSecre = "fe5734c74c2f96a38157f420b32dc995"
TvVersion = "6.5.5"
AndroidTV = "FAMILY_TV"
TvChannelId = "home02"
ApiUrl = "https://api.cloud.189.cn"
)
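// SignatureHeader builds the session-signed request headers (Date, SessionKey, X-Request-ID, Signature); family requests use the family session credentials.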
func (y *Cloud189TV) SignatureHeader(url, method string, isFamily bool) map[string]string {
dateOfGmt := getHttpDateStr()
sessionKey := y.tokenInfo.SessionKey
sessionSecret := y.tokenInfo.SessionSecret
if isFamily {
sessionKey = y.tokenInfo.FamilySessionKey
sessionSecret = y.tokenInfo.FamilySessionSecret
}
header := map[string]string{
"Date": dateOfGmt,
"SessionKey": sessionKey,
"X-Request-ID": uuid.NewString(),
"Signature": SessionKeySignatureOfHmac(sessionSecret, sessionKey, method, url, dateOfGmt),
}
return header
}
func (y *Cloud189TV) AppKeySignatureHeader(url, method string) map[string]string {
tempTime := timestamp()
header := map[string]string{
"Timestamp": strconv.FormatInt(tempTime, 10),
"X-Request-ID": uuid.NewString(),
"AppKey": TVAppKey,
"AppSignature": AppKeySignatureOfHmac(TVAppSignatureSecre, TVAppKey, method, url, tempTime),
}
return header
}
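// request sends a signed API request, decodes the error envelope, and returns the raw response body.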
func (y *Cloud189TV) request(url, method string, callback base.ReqCallback, params map[string]string, resp interface{}, isFamily ...bool) ([]byte, error) {
req := y.client.R().SetQueryParams(clientSuffix())
if params != nil {
req.SetQueryParams(params)
}
// Signature
req.SetHeaders(y.SignatureHeader(url, method, isBool(isFamily...)))
var erron RespErr
req.SetError(&erron)
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
}
if strings.Contains(res.String(), "userSessionBO is null") ||
strings.Contains(res.String(), "InvalidSessionKey") {
return nil, errors.New("session expired")
}
// Handle API errors
if erron.HasError() {
return nil, &erron
}
return res.Body(), nil
}
func (y *Cloud189TV) get(url string, callback base.ReqCallback, resp interface{}, isFamily ...bool) ([]byte, error) {
return y.request(url, http.MethodGet, callback, nil, resp, isFamily...)
}
func (y *Cloud189TV) post(url string, callback base.ReqCallback, resp interface{}, isFamily ...bool) ([]byte, error) {
return y.request(url, http.MethodPost, callback, nil, resp, isFamily...)
}
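// put performs a raw HTTP PUT of the reader, appending the client suffix as query parameters and optionally adding the session signature headers.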
func (y *Cloud189TV) put(ctx context.Context, url string, headers map[string]string, sign bool, file io.Reader, isFamily bool) ([]byte, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, file)
if err != nil {
return nil, err
}
query := req.URL.Query()
for key, value := range clientSuffix() {
query.Add(key, value)
}
req.URL.RawQuery = query.Encode()
for key, value := range headers {
req.Header.Add(key, value)
}
if sign {
for key, value := range y.SignatureHeader(url, http.MethodPut, isFamily) {
req.Header.Add(key, value)
}
}
// http.Client closes Request.Body once the request completes
resp, err := base.HttpClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
var erron RespErr
jsoniter.Unmarshal(body, &erron)
xml.Unmarshal(body, &erron)
if erron.HasError() {
return nil, &erron
}
if resp.StatusCode != http.StatusOK {
return nil, errors.Errorf("put fail,err:%s", string(body))
}
return body, nil
}
func (y *Cloud189TV) getFiles(ctx context.Context, fileId string, isFamily bool) ([]model.Obj, error) {
fullUrl := ApiUrl
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
res := make([]model.Obj, 0, 130)
for pageNum := 1; ; pageNum++ {
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": "130",
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(y.OrderBy),
"descending": toDesc(y.OrderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": y.OrderBy,
"descending": toDesc(y.OrderDirection),
})
}
}, &resp, isFamily)
if err != nil {
return nil, err
}
// All pages fetched; stop
if resp.FileListAO.Count == 0 {
break
}
for i := 0; i < len(resp.FileListAO.FolderList); i++ {
res = append(res, &resp.FileListAO.FolderList[i])
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
res = append(res, &resp.FileListAO.FileList[i])
}
}
return res, nil
}
func (y *Cloud189TV) login() (err error) {
req := y.client.R().SetQueryParams(clientSuffix())
var erron RespErr
var tokenInfo AppSessionResp
if y.Addition.AccessToken == "" {
if y.Addition.TempUuid == "" {
// Fetch login parameters
var uuidInfo UuidInfoResp
req.SetResult(&uuidInfo).SetError(&erron)
// Signature
req.SetHeaders(y.AppKeySignatureHeader(ApiUrl+"/family/manage/getQrCodeUUID.action",
http.MethodGet))
_, err = req.Execute(http.MethodGet, ApiUrl+"/family/manage/getQrCodeUUID.action")
if err != nil {
return
}
if erron.HasError() {
return &erron
}
if uuidInfo.Uuid == "" {
return errors.New("uuidInfo is empty")
}
y.Addition.TempUuid = uuidInfo.Uuid
op.MustSaveDriverStorage(y)
// Render the QR code
qrTemplate := `<body>
<img src="data:image/jpeg;base64,%s"/>
<br>Or Click here: <a href="%s">%s</a>
</body>`
// Generate QR code
qrCode, err := qrcode.Encode(uuidInfo.Uuid, qrcode.Medium, 256)
if err != nil {
return fmt.Errorf("failed to generate QR code: %v", err)
}
// Encode QR code to base64
qrCodeBase64 := base64.StdEncoding.EncodeToString(qrCode)
// Create the HTML page
qrPage := fmt.Sprintf(qrTemplate, qrCodeBase64, uuidInfo.Uuid, uuidInfo.Uuid)
return fmt.Errorf("need verify: \n%s", qrPage)
} else {
var accessTokenResp E189AccessTokenResp
req.SetResult(&accessTokenResp).SetError(&erron)
// Signature
req.SetHeaders(y.AppKeySignatureHeader(ApiUrl+"/family/manage/qrcodeLoginResult.action",
http.MethodGet))
req.SetQueryParam("uuid", y.Addition.TempUuid)
_, err = req.Execute(http.MethodGet, ApiUrl+"/family/manage/qrcodeLoginResult.action")
if err != nil {
return
}
if erron.HasError() {
return &erron
}
if accessTokenResp.E189AccessToken == "" {
return errors.New("E189AccessToken is empty")
}
y.Addition.AccessToken = accessTokenResp.E189AccessToken
y.Addition.TempUuid = ""
}
}
// Fetch the SessionKey and SessionSecret
reqb := y.client.R().SetQueryParams(clientSuffix())
reqb.SetResult(&tokenInfo).SetError(&erron)
// Signature
reqb.SetHeaders(y.AppKeySignatureHeader(ApiUrl+"/family/manage/loginFamilyMerge.action",
http.MethodGet))
reqb.SetQueryParam("e189AccessToken", y.Addition.AccessToken)
_, err = reqb.Execute(http.MethodGet, ApiUrl+"/family/manage/loginFamilyMerge.action")
if err != nil {
return
}
if erron.HasError() {
return &erron
}
y.tokenInfo = &tokenInfo
op.MustSaveDriverStorage(y)
return
}
func (y *Cloud189TV) RapidUpload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, isFamily bool, overwrite bool) (model.Obj, error) {
fileMd5 := stream.GetHash().GetHash(utils.MD5)
if len(fileMd5) < utils.MD5.Width {
return nil, errors.New("invalid hash")
}
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, stream.GetName(), fmt.Sprint(stream.GetSize()), isFamily)
if err != nil {
return nil, err
}
if uploadInfo.FileDataExists != 1 {
return nil, errors.New("rapid upload fail")
}
return y.OldUploadCommit(ctx, uploadInfo.FileCommitUrl, uploadInfo.UploadFileId, isFamily, overwrite)
}
// Legacy upload; the family cloud does not support overwriting
func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
fileMd5 := file.GetHash().GetHash(utils.MD5)
var tempFile = file.GetFile()
var err error
if len(fileMd5) != utils.MD5.Width {
tempFile, fileMd5, err = stream.CacheFullAndHash(file, &up, utils.MD5)
} else if tempFile == nil {
tempFile, err = file.CacheFullAndWriter(&up, nil)
}
if err != nil {
return nil, err
}
// Create the upload session
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, file.GetName(), fmt.Sprint(file.GetSize()), isFamily)
if err != nil {
return nil, err
}
// The file does not exist in the drive yet; start uploading
status := GetUploadFileStatusResp{CreateUploadFileResp: *uploadInfo}
// driver.RateLimitReader attempts to Close the underlying reader,
// but tempFile here is an *os.File and cannot be read again once closed,
// so wrap it with io.NopCloser.
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
for status.GetSize() < file.GetSize() && status.FileDataExists != 1 {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
}
header := map[string]string{
"ResumePolicy": "1",
"Expect": "100-continue",
}
if isFamily {
header["FamilyId"] = fmt.Sprint(y.FamilyID)
header["UploadFileId"] = fmt.Sprint(status.UploadFileId)
} else {
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
}
_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimitedRd, isFamily)
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
return nil, err
}
// Query the resume (checkpoint) status
fullUrl := ApiUrl + "/getUploadFileStatus.action"
if y.isFamily() {
fullUrl = ApiUrl + "/family/file/getFamilyFileStatus.action"
}
_, err = y.get(fullUrl, func(req *resty.Request) {
req.SetContext(ctx).SetQueryParams(map[string]string{
"uploadFileId": fmt.Sprint(status.UploadFileId),
"resumePolicy": "1",
})
if isFamily {
req.SetQueryParam("familyId", fmt.Sprint(y.FamilyID))
}
}, &status, isFamily)
if err != nil {
return nil, err
}
if _, err := tempFile.Seek(status.GetSize(), io.SeekStart); err != nil {
return nil, err
}
up(float64(status.GetSize()) / float64(file.GetSize()) * 100)
}
return y.OldUploadCommit(ctx, status.FileCommitUrl, status.UploadFileId, isFamily, overwrite)
}
// OldUploadCreate creates an upload session
func (y *Cloud189TV) OldUploadCreate(ctx context.Context, parentID string, fileMd5, fileName, fileSize string, isFamily bool) (*CreateUploadFileResp, error) {
var uploadInfo CreateUploadFileResp
fullUrl := ApiUrl + "/createUploadFile.action"
if isFamily {
fullUrl = ApiUrl + "/family/file/createFamilyFile.action"
}
_, err := y.post(fullUrl, func(req *resty.Request) {
req.SetContext(ctx)
if isFamily {
req.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"parentId": parentID,
"fileMd5": fileMd5,
"fileName": fileName,
"fileSize": fileSize,
"resumePolicy": "1",
})
} else {
req.SetFormData(map[string]string{
"parentFolderId": parentID,
"fileName": fileName,
"size": fileSize,
"md5": fileMd5,
"opertype": "3",
"flag": "1",
"resumePolicy": "1",
"isLog": "0",
})
}
}, &uploadInfo, isFamily)
if err != nil {
return nil, err
}
return &uploadInfo, nil
}
// OldUploadCommit commits the uploaded file
func (y *Cloud189TV) OldUploadCommit(ctx context.Context, fileCommitUrl string, uploadFileID int64, isFamily bool, overwrite bool) (model.Obj, error) {
var resp OldCommitUploadFileResp
_, err := y.post(fileCommitUrl, func(req *resty.Request) {
req.SetContext(ctx)
if isFamily {
req.SetHeaders(map[string]string{
"ResumePolicy": "1",
"UploadFileId": fmt.Sprint(uploadFileID),
"FamilyId": fmt.Sprint(y.FamilyID),
})
} else {
req.SetFormData(map[string]string{
"opertype": IF(overwrite, "3", "1"),
"resumePolicy": "1",
"uploadFileId": fmt.Sprint(uploadFileID),
"isLog": "0",
})
}
}, &resp, isFamily)
if err != nil {
return nil, err
}
return resp.toFile(), nil
}
func (y *Cloud189TV) isFamily() bool {
return y.Type == "family"
}
func (y *Cloud189TV) isLogin() bool {
if y.tokenInfo == nil {
return false
}
_, err := y.get(ApiUrl+"/getUserInfo.action", nil, nil)
return err == nil
}
// Fetch info for all family cloud members
func (y *Cloud189TV) getFamilyInfoList() ([]FamilyInfoResp, error) {
var resp FamilyInfoListResp
_, err := y.get(ApiUrl+"/family/manage/getFamilyList.action", nil, &resp, true)
if err != nil {
return nil, err
}
return resp.FamilyInfoResp, nil
}
// Extract the family cloud ID
func (y *Cloud189TV) getFamilyID() (string, error) {
infos, err := y.getFamilyInfoList()
if err != nil {
return "", err
}
if len(infos) == 0 {
return "", fmt.Errorf("cannot get automatically,please input family_id")
}
for _, info := range infos {
if strings.Contains(y.tokenInfo.LoginName, info.RemarkName) {
return fmt.Sprint(info.FamilyID), nil
}
}
return fmt.Sprint(infos[0].FamilyID), nil
}
func (y *Cloud189TV) CreateBatchTask(aType string, familyID string, targetFolderId string, other map[string]string, taskInfos ...BatchTaskInfo) (*CreateBatchTaskResp, error) {
var resp CreateBatchTaskResp
_, err := y.post(ApiUrl+"/batch/createBatchTask.action", func(req *resty.Request) {
req.SetFormData(map[string]string{
"type": aType,
"taskInfos": MustString(utils.Json.MarshalToString(taskInfos)),
})
if targetFolderId != "" {
req.SetFormData(map[string]string{"targetFolderId": targetFolderId})
}
if familyID != "" {
req.SetFormData(map[string]string{"familyId": familyID})
}
req.SetFormData(other)
}, &resp, familyID != "")
if err != nil {
return nil, err
}
return &resp, nil
}
// Check the batch task status
func (y *Cloud189TV) CheckBatchTask(aType string, taskID string) (*BatchTaskStateResp, error) {
var resp BatchTaskStateResp
_, err := y.post(ApiUrl+"/batch/checkBatchTask.action", func(req *resty.Request) {
req.SetFormData(map[string]string{
"type": aType,
"taskId": taskID,
})
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
// Fetch info about conflicting tasks
func (y *Cloud189TV) GetConflictTaskInfo(aType string, taskID string) (*BatchTaskConflictTaskInfoResp, error) {
var resp BatchTaskConflictTaskInfoResp
_, err := y.post(ApiUrl+"/batch/getConflictTaskInfo.action", func(req *resty.Request) {
req.SetFormData(map[string]string{
"type": aType,
"taskId": taskID,
})
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
// Resolve conflicts
func (y *Cloud189TV) ManageBatchTask(aType string, taskID string, targetFolderId string, taskInfos ...BatchTaskInfo) error {
_, err := y.post(ApiUrl+"/batch/manageBatchTask.action", func(req *resty.Request) {
req.SetFormData(map[string]string{
"targetFolderId": targetFolderId,
"type": aType,
"taskId": taskID,
"taskInfos": MustString(utils.Json.MarshalToString(taskInfos)),
})
}, nil)
return err
}
var ErrIsConflict = errors.New("there is a conflict with the target object")
// Wait for the task to finish
func (y *Cloud189TV) WaitBatchTask(aType string, taskID string, t time.Duration) error {
for {
state, err := y.CheckBatchTask(aType, taskID)
if err != nil {
return err
}
switch state.TaskStatus {
case 2:
return ErrIsConflict
case 4:
return nil
}
time.Sleep(t)
}
}

View File

@ -8,11 +8,11 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
"github.com/google/uuid" "github.com/google/uuid"
) )

View File

@ -18,8 +18,8 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random" "github.com/OpenListTeam/OpenList/pkg/utils/random"
) )
func clientSuffix() map[string]string { func clientSuffix() map[string]string {

View File

@ -1,8 +1,8 @@
package _189pc package _189pc
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {

View File

@ -7,7 +7,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
) )
// Surprisingly, there are four different response formats // Surprisingly, there are four different response formats

View File

@ -7,7 +7,6 @@ import (
"encoding/hex" "encoding/hex"
"encoding/xml" "encoding/xml"
"fmt" "fmt"
"hash"
"io" "io"
"net/http" "net/http"
"net/http/cookiejar" "net/http/cookiejar"
@ -19,16 +18,16 @@ import (
"strings" "strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting" "github.com/OpenListTeam/OpenList/internal/setting"
"github.com/OpenListTeam/OpenList/v4/internal/stream" "github.com/OpenListTeam/OpenList/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup" "github.com/OpenListTeam/OpenList/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/avast/retry-go" "github.com/avast/retry-go"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
@ -323,7 +322,7 @@ func (y *Cloud189PC) login() (err error) {
_, err = y.client.R(). _, err = y.client.R().
SetResult(&tokenInfo).SetError(&erron). SetResult(&tokenInfo).SetError(&erron).
SetQueryParams(clientSuffix()). SetQueryParams(clientSuffix()).
SetQueryParam("redirectURL", loginresp.ToUrl). SetQueryParam("redirectURL", url.QueryEscape(loginresp.ToUrl)).
Post(API_URL + "/getSessionForPC.action") Post(API_URL + "/getSessionForPC.action")
if err != nil { if err != nil {
return return
@ -472,16 +471,14 @@ func (y *Cloud189PC) refreshSession() (err error) {
// Normal upload // Normal upload
// Cannot upload files of size 0 // Cannot upload files of size 0
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) { func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
// File size size := file.GetSize()
fileSize := file.GetSize() sliceSize := partSize(size)
// Slice size; must not equal the file size
sliceSize := partSize(fileSize)
params := Params{ params := Params{
"parentFolderId": dstDir.GetID(), "parentFolderId": dstDir.GetID(),
"fileName": url.QueryEscape(file.GetName()), "fileName": url.QueryEscape(file.GetName()),
"fileSize": fmt.Sprint(fileSize), "fileSize": fmt.Sprint(file.GetSize()),
"sliceSize": fmt.Sprint(sliceSize), // 必须为特定分片大小 "sliceSize": fmt.Sprint(sliceSize),
"lazyCheck": "1", "lazyCheck": "1",
} }
@ -503,101 +500,67 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
return nil, err return nil, err
} }
ss, err := stream.NewStreamSectionReader(file, int(sliceSize), &up) threadG, upCtx := errgroup.NewGroupWithContext(ctx, y.uploadThread,
if err != nil {
return nil, err
}
threadG, upCtx := errgroup.NewOrderedGroupWithContext(ctx, y.uploadThread,
retry.Attempts(3), retry.Attempts(3),
retry.Delay(time.Second), retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay)) retry.DelayType(retry.BackOffDelay))
threadG.SetLimit(3)
count := 1 count := int(size / sliceSize)
if fileSize > sliceSize { lastPartSize := size % sliceSize
count = int((fileSize + sliceSize - 1) / sliceSize) if lastPartSize > 0 {
} count++
lastPartSize := fileSize % sliceSize } else {
if lastPartSize == 0 {
lastPartSize = sliceSize lastPartSize = sliceSize
} }
fileMd5 := utils.MD5.NewFunc()
silceMd5Hexs := make([]string, 0, count)
silceMd5 := utils.MD5.NewFunc() silceMd5 := utils.MD5.NewFunc()
var writers io.Writer = silceMd5 silceMd5Hexs := make([]string, 0, count)
teeReader := io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5))
fileMd5Hex := file.GetHash().GetHash(utils.MD5) byteSize := sliceSize
var fileMd5 hash.Hash
if len(fileMd5Hex) != utils.MD5.Width {
fileMd5 = utils.MD5.NewFunc()
writers = io.MultiWriter(silceMd5, fileMd5)
}
for i := 1; i <= count; i++ { for i := 1; i <= count; i++ {
if utils.IsCanceled(upCtx) { if utils.IsCanceled(upCtx) {
break break
} }
offset := int64((i)-1) * sliceSize
partSize := sliceSize
if i == count { if i == count {
partSize = lastPartSize byteSize = lastPartSize
}
byteData := make([]byte, byteSize)
// 读取块
silceMd5.Reset()
if _, err := io.ReadFull(teeReader, byteData); err != io.EOF && err != nil {
return nil, err
} }
partInfo := ""
var reader *stream.SectionReader
var rateLimitedRd io.Reader
threadG.GoWithLifecycle(errgroup.Lifecycle{
Before: func(ctx context.Context) error {
if reader == nil {
var err error
reader, err = ss.GetSectionReader(offset, partSize)
if err != nil {
return err
}
silceMd5.Reset()
w, err := utils.CopyWithBuffer(writers, reader)
if w != partSize {
return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", partSize, w, err)
}
// Compute the chunk MD5 and encode it as hex and base64
md5Bytes := silceMd5.Sum(nil)
silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Bytes)))
partInfo = fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader) // Compute the chunk MD5 and encode it as hex and base64
} md5Bytes := silceMd5.Sum(nil)
return nil silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Bytes)))
}, partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
Do: func(ctx context.Context) error {
reader.Seek(0, io.SeekStart)
uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
if err != nil {
return err
}
// step.4 upload the slice threadG.Go(func(ctx context.Context) error {
uploadUrl := uploadUrls[0] uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, if err != nil {
driver.NewLimitedUploadStream(ctx, rateLimitedRd), isFamily) return err
if err != nil { }
return err
} // step.4 upload the slice
up(float64(threadG.Success()) * 100 / float64(count)) uploadUrl := uploadUrls[0]
return nil _, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false,
}, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)), isFamily)
After: func(err error) { if err != nil {
ss.FreeSectionReader(reader) return err
}, }
}, up(float64(threadG.Success()) * 100 / float64(count))
) return nil
})
} }
if err = threadG.Wait(); err != nil { if err = threadG.Wait(); err != nil {
return nil, err return nil, err
} }
if fileMd5 != nil { fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
fileMd5Hex = strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
}
sliceMd5Hex := fileMd5Hex sliceMd5Hex := fileMd5Hex
if fileSize > sliceSize { if file.GetSize() > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n"))) sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
} }
@ -658,12 +621,11 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
cache = tmpF cache = tmpF
} }
sliceSize := partSize(size) sliceSize := partSize(size)
count := 1 count := int(size / sliceSize)
if size > sliceSize {
count = int((size + sliceSize - 1) / sliceSize)
}
lastSliceSize := size % sliceSize lastSliceSize := size % sliceSize
if lastSliceSize == 0 { if lastSliceSize > 0 {
count++
} else {
lastSliceSize = sliceSize lastSliceSize = sliceSize
} }
@ -777,8 +739,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
} }
// step.4 upload the slice // step.4 upload the slice
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)) _, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(cache, offset, byteSize), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, rateLimitedRd, isFamily)
if err != nil { if err != nil {
return err return err
} }
@ -860,7 +821,7 @@ func (y *Cloud189PC) GetMultiUploadUrls(ctx context.Context, isFamily bool, uplo
// Legacy upload; the family cloud does not support overwriting // Legacy upload; the family cloud does not support overwriting
func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) { func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, fileMd5, err := stream.CacheFullAndHash(file, &up, utils.MD5) tempFile, fileMd5, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@ -3,21 +3,16 @@ package alias
import ( import (
"context" "context"
"errors" "errors"
"fmt"
"io" "io"
"net/url"
stdpath "path" stdpath "path"
"strings" "strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs" "github.com/OpenListTeam/OpenList/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/fs" "github.com/OpenListTeam/OpenList/internal/fs"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/stream"
"github.com/OpenListTeam/OpenList/v4/internal/sign" "github.com/OpenListTeam/OpenList/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
) )
type Alias struct { type Alias struct {
@ -80,18 +75,10 @@ func (d *Alias) Get(ctx context.Context, path string) (model.Obj, error) {
return nil, errs.ObjectNotFound return nil, errs.ObjectNotFound
} }
for _, dst := range dsts { for _, dst := range dsts {
obj, err := fs.Get(ctx, stdpath.Join(dst, sub), &fs.GetArgs{NoLog: true}) obj, err := d.get(ctx, path, dst, sub)
if err != nil { if err == nil {
continue return obj, nil
} }
return &model.Object{
Path: path,
Name: obj.GetName(),
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}, nil
} }
return nil, errs.ObjectNotFound return nil, errs.ObjectNotFound
} }
@ -109,27 +96,7 @@ func (d *Alias) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
var objs []model.Obj var objs []model.Obj
fsArgs := &fs.ListArgs{NoLog: true, Refresh: args.Refresh} fsArgs := &fs.ListArgs{NoLog: true, Refresh: args.Refresh}
for _, dst := range dsts { for _, dst := range dsts {
tmp, err := fs.List(ctx, stdpath.Join(dst, sub), fsArgs) tmp, err := d.list(ctx, dst, sub, fsArgs)
if err == nil {
tmp, err = utils.SliceConvert(tmp, func(obj model.Obj) (model.Obj, error) {
thumb, ok := model.GetThumb(obj)
objRes := model.Object{
Name: obj.GetName(),
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
}
if !ok {
return &objRes, nil
}
return &model.ObjThumb{
Object: objRes,
Thumbnail: model.Thumbnail{
Thumbnail: thumb,
},
}, nil
})
}
if err == nil { if err == nil {
objs = append(objs, tmp...) objs = append(objs, tmp...)
} }
@ -143,45 +110,21 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
if !ok { if !ok {
return nil, errs.ObjectNotFound return nil, errs.ObjectNotFound
} }
// proxy || ftp,s3
if common.GetApiUrl(ctx) == "" {
args.Redirect = false
}
for _, dst := range dsts { for _, dst := range dsts {
reqPath := stdpath.Join(dst, sub) link, err := d.link(ctx, dst, sub, args)
link, fi, err := d.link(ctx, reqPath, args) if err == nil {
if err != nil { if !args.Redirect && len(link.URL) > 0 {
continue // Normally, multi-threaded download only works for drivers that return a URL
// Nesting alias inside alias lets drivers that do not return a URL (crypt, mega, ...) support concurrency
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
} }
if link == nil {
// Redirect, but it must go through the proxy
return &model.Link{
URL: fmt.Sprintf("%s/p%s?sign=%s",
common.GetApiUrl(ctx),
utils.EncodePath(reqPath, true),
sign.Sign(reqPath)),
}, nil
}
resultLink := *link
resultLink.SyncClosers = utils.NewSyncClosers(link)
if args.Redirect {
return &resultLink, nil
}
if resultLink.ContentLength == 0 {
resultLink.ContentLength = fi.GetSize()
}
if resultLink.MFile != nil {
return &resultLink, nil
}
if d.DownloadConcurrency > 0 {
resultLink.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
resultLink.PartSize = d.DownloadPartSize * utils.KB
}
return &resultLink, nil
} }
return nil, errs.ObjectNotFound return nil, errs.ObjectNotFound
} }
@ -223,8 +166,7 @@ func (d *Alias) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
} }
if len(srcPath) == len(dstPath) { if len(srcPath) == len(dstPath) {
for i := range srcPath { for i := range srcPath {
_, e := fs.Move(ctx, *srcPath[i], *dstPath[i]) err = errors.Join(err, fs.Move(ctx, *srcPath[i], *dstPath[i]))
err = errors.Join(err, e)
} }
return err return err
} else { } else {
@ -308,29 +250,20 @@ func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer,
reqPath, err := d.getReqPath(ctx, dstDir, true) reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil { if err == nil {
if len(reqPath) == 1 { if len(reqPath) == 1 {
storage, reqActualPath, err := op.GetStorageAndActualPath(*reqPath[0]) return fs.PutDirectly(ctx, *reqPath[0], s)
if err != nil {
return err
}
return op.Put(ctx, storage, reqActualPath, &stream.FileStream{
Obj: s,
Mimetype: s.GetMimetype(),
Reader: s,
}, up)
} else { } else {
file, err := s.CacheFullAndWriter(nil, nil) defer s.Close()
file, err := s.CacheFullInTempFile()
if err != nil { if err != nil {
return err return err
} }
count := float64(len(reqPath) + 1) for _, path := range reqPath {
up(100 / count)
for i, path := range reqPath {
err = errors.Join(err, fs.PutDirectly(ctx, *path, &stream.FileStream{ err = errors.Join(err, fs.PutDirectly(ctx, *path, &stream.FileStream{
Obj: s, Obj: s,
Mimetype: s.GetMimetype(), Mimetype: s.GetMimetype(),
Reader: file, WebPutAsTask: s.NeedStore(),
Reader: file,
})) }))
up(float64(i+2) / float64(count) * 100)
_, e := file.Seek(0, io.SeekStart) _, e := file.Seek(0, io.SeekStart)
if e != nil { if e != nil {
return errors.Join(err, e) return errors.Join(err, e)
@ -402,24 +335,18 @@ func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveIn
return nil, errs.ObjectNotFound return nil, errs.ObjectNotFound
} }
for _, dst := range dsts { for _, dst := range dsts {
reqPath := stdpath.Join(dst, sub) link, err := d.extract(ctx, dst, sub, args)
link, err := d.extract(ctx, reqPath, args) if err == nil {
if err != nil { if !args.Redirect && len(link.URL) > 0 {
continue if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
} }
if link == nil {
return &model.Link{
URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
common.GetApiUrl(ctx),
utils.EncodePath(reqPath, true),
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
sign.SignArchive(reqPath)),
}, nil
}
resultLink := *link
resultLink.SyncClosers = utils.NewSyncClosers(link)
return &resultLink, nil
} }
return nil, errs.NotImplement return nil, errs.NotImplement
} }

View File

@ -1,8 +1,8 @@
package alias package alias
import ( import (
"github.com/OpenListTeam/OpenList/v4/internal/driver" "github.com/OpenListTeam/OpenList/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/internal/op"
) )
type Addition struct { type Addition struct {

Some files were not shown because too many files have changed in this diff