Mirror of https://github.com/OpenListTeam/OpenList.git (synced 2025-09-20 04:36:09 +08:00)

Compare commits (13 commits)
SHA1:

- c0a8321461
- 680501a8a8
- 83913a8031
- 929d4e65b9
- 6717d02f94
- 9d2a71e3eb
- 62de731d37
- 7277163b0a
- 98f65d5478
- 312e04ea69
- 6ddb4359d3
- cf444c2f63
- 0176cfb0c9
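If the range needs to be inspected locally, the hashes above can be fed straight to git; a minimal sketch (clone location and the choice of commit are illustrative):

```sh
# Minimal sketch: inspect the listed commits in a local clone of the mirror.
# The short hashes come from the commit list above.
git clone https://github.com/OpenListTeam/OpenList.git
cd OpenList
git show --stat c0a8321461   # files touched by one of the listed commits
git log --oneline -13        # roughly the same window, newest first
```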
.github/ISSUE_TEMPLATE/00-bug_report_zh.yml (vendored, 81 changed lines)
@ -1,81 +0,0 @@
|
||||
name: "错误报告"
|
||||
description: 错误报告 / 问题
|
||||
title: "[BUG] 请修改标题为您遇到的问题"
|
||||
labels: [bug]
|
||||
body:
|
||||
- type: markdown
|
||||
attributes:
|
||||
value: |
|
||||
感谢您花时间填写此错误报告。
|
||||
请**务必**确认您的问题无重复,且不是因为您的操作、网络或第三方软件问题。
|
||||
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: 请确认以下事项
|
||||
description: |
|
||||
您必须勾选以下内容,否则您的问题可能会被直接关闭。
|
||||
或者您可以去[讨论区](https://github.com/OpenListTeam/OpenList/discussions)。
|
||||
options:
|
||||
- label: |
|
||||
我已确认阅读并同意 [AGPL-3.0 第15条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.) 。
|
||||
本程序不提供任何明示或暗示的担保,使用风险由您自行承担。
|
||||
- label: |
|
||||
我已确认阅读并同意 [AGPL-3.0 第16条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.) 。
|
||||
无论何种情况,版权持有人或其他分发者均不对使用本程序所造成的任何损失承担责任。
|
||||
- label: |
|
||||
我确认我的描述清晰,语法礼貌,能帮助开发者快速定位问题,并符合社区规则。
|
||||
- label: |
|
||||
我已确认阅读了[OpenList文档](https://docs.oplist.org)。
|
||||
- label: |
|
||||
我已确认没有重复的问题或讨论。
|
||||
- label: |
|
||||
我已确认是`OpenList`的问题,而不是其他原因(例如 [网络](https://docs.oplist.org/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`依赖`或`操作`)。
|
||||
- label: |
|
||||
我认为此问题必须由`OpenList`处理,而非第三方。
|
||||
- label: |
|
||||
我已确认这个问题在最新版本中没有被修复。
|
||||
|
||||
- type: input
|
||||
id: version
|
||||
attributes:
|
||||
label: OpenList 版本(必填)
|
||||
description: |
|
||||
您使用的是哪个版本的软件?请不要使用`latest`或`master`作为答案。
|
||||
placeholder: v4.xx.xx
|
||||
validations:
|
||||
required: true
|
||||
- type: input
|
||||
id: driver
|
||||
attributes:
|
||||
label: 使用的存储驱动(必填)
|
||||
description: |
|
||||
您使用的是哪个存储驱动?
|
||||
placeholder: "例如: OneDrive"
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: bug-description
|
||||
attributes:
|
||||
label: 问题描述(必填)
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: config
|
||||
attributes:
|
||||
label: 配置文件内容(必填)
|
||||
description: |
|
||||
请提供您的`OpenList`应用的配置文件,并截图相关存储配置。(可隐藏隐私字段)
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: logs
|
||||
attributes:
|
||||
label: 日志(可选)
|
||||
description: |
|
||||
请复制粘贴错误日志,或者截图。(可隐藏隐私字段)
|
||||
- type: textarea
|
||||
id: reproduction
|
||||
attributes:
|
||||
label: 复现链接(可选)
|
||||
description: |
|
||||
请提供能复现此问题的链接。
|
.github/ISSUE_TEMPLATE/01-bug_report_en.yml (vendored, 81 changed lines)
@ -1,81 +0,0 @@
|
||||
name: "Bug Report"
|
||||
description: Bug Report / Issue
|
||||
title: "[BUG] Please modify the title to describe the issue you are facing"
|
||||
labels: [bug]
|
||||
body:
|
||||
- type: markdown
|
||||
attributes:
|
||||
value: |
|
||||
Thank you for taking the time to fill out this bug report.
|
||||
Please **make sure** your issue is not a duplicate and is not caused by your own operation, network, or third-party software.
|
||||
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Please confirm the following
|
||||
description: |
|
||||
You must check all the following, otherwise your issue may be closed directly.
|
||||
Or you can go to the [discussions](https://github.com/OpenListTeam/OpenList/discussions).
|
||||
options:
|
||||
- label: |
|
||||
I have read and agree to [AGPL-3.0 Section 15](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.) .
|
||||
The program is provided "as is" without any warranties; you bear all risks of using it.
|
||||
- label: |
|
||||
I have read and agree to [AGPL-3.0 Section 16](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.) .
|
||||
The copyright holders and distributors are not liable for any damages resulting from the use or inability to use the program.
|
||||
- label: |
|
||||
I confirm my description is clear, polite, helps developers quickly locate the issue, and complies with community rules.
|
||||
- label: |
|
||||
I have read the [OpenList documentation](https://docs.oplist.org).
|
||||
- label: |
|
||||
I confirm there are no duplicate issues or discussions.
|
||||
- label: |
|
||||
I confirm this is an `OpenList` issue, not caused by other reasons (such as [network](https://docs.oplist.org/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host), dependencies, or operation).
|
||||
- label: |
|
||||
I believe this issue must be handled by `OpenList` and not by a third party.
|
||||
- label: |
|
||||
I confirm this issue is not fixed in the latest version.
|
||||
|
||||
- type: input
|
||||
id: version
|
||||
attributes:
|
||||
label: OpenList Version (required)
|
||||
description: |
|
||||
What version of the software are you using? Please do not use `latest` or `master` as the answer.
|
||||
placeholder: v4.xx.xx
|
||||
validations:
|
||||
required: true
|
||||
- type: input
|
||||
id: driver
|
||||
attributes:
|
||||
label: Storage Driver Used (required)
|
||||
description: |
|
||||
Which storage driver are you using?
|
||||
placeholder: "e.g. OneDrive"
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: bug-description
|
||||
attributes:
|
||||
label: Bug Description (required)
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: config
|
||||
attributes:
|
||||
label: Configuration File Content (required)
|
||||
description: |
|
||||
Please provide your `OpenList` application's configuration file and a screenshot of the relevant storage configuration. (You may mask sensitive fields)
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: logs
|
||||
attributes:
|
||||
label: Logs (optional)
|
||||
description: |
|
||||
Please copy and paste any relevant log output or screenshots. (You may mask sensitive fields)
|
||||
- type: textarea
|
||||
id: reproduction
|
||||
attributes:
|
||||
label: Reproduction Link (optional)
|
||||
description: |
|
||||
Please provide a link to a repo or page that can reproduce this issue.
|
.github/ISSUE_TEMPLATE/02-feature_request_zh.yml (vendored, 48 changed lines)
@ -1,48 +0,0 @@
|
||||
name: "功能请求"
|
||||
description: 功能请求 / 增强
|
||||
title: "[Feature] 请修改标题为您的功能名称"
|
||||
labels: [enhancement]
|
||||
body:
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: 请确认以下事项
|
||||
description: |
|
||||
您必须勾选以下内容,否则您的问题可能会被直接关闭。
|
||||
或者您可以去[讨论区](https://github.com/OpenListTeam/OpenList/discussions)。
|
||||
options:
|
||||
- label: |
|
||||
我已确认阅读并同意 [AGPL-3.0 第15条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.) 。
|
||||
本程序不提供任何明示或暗示的担保,使用风险由您自行承担。
|
||||
- label: |
|
||||
我已确认阅读并同意 [AGPL-3.0 第16条](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.) 。
|
||||
无论何种情况,版权持有人或其他分发者均不对使用本程序所造成的任何损失承担责任。
|
||||
- label: |
|
||||
我确认我的描述清晰,语法礼貌,能帮助开发者快速定位问题,并符合社区规则。
|
||||
- label: |
|
||||
我已确认阅读了[OpenList文档](https://docs.oplist.org)。
|
||||
- label: |
|
||||
我已确认没有重复的问题或讨论。
|
||||
- label: |
|
||||
我认为此问题必须由`OpenList`处理,而非第三方。
|
||||
- label: |
|
||||
我已确认此功能尚未被实现。
|
||||
- label: |
|
||||
我已确认此功能是合理的,且有普遍需求,并非我个人需要。
|
||||
- type: textarea
|
||||
id: feature-description
|
||||
attributes:
|
||||
label: 需求描述
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: suggested-solution
|
||||
attributes:
|
||||
label: 实现思路
|
||||
description: |
|
||||
实现此需求的解决思路。
|
||||
- type: textarea
|
||||
id: additional-context
|
||||
attributes:
|
||||
label: 附加信息
|
||||
description: |
|
||||
相关的任何其他上下文或截图,或者你觉得有帮助的信息
|
.github/ISSUE_TEMPLATE/03-feature_request_en.yml (vendored, 48 changed lines)
@ -1,48 +0,0 @@
|
||||
name: "Feature Request"
|
||||
description: Feature Request / Enhancement
|
||||
title: "[Feature] Please change the title to your feature name"
|
||||
labels: [enhancement]
|
||||
body:
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Please confirm the following
|
||||
description: |
|
||||
You must check all the following, otherwise your request may be closed directly.
|
||||
Or you can go to the [discussions](https://github.com/OpenListTeam/OpenList/discussions).
|
||||
options:
|
||||
- label: |
|
||||
I have read and agree to [AGPL-3.0 Section 15](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=15.%20Disclaimer%20of%20Warranty.).
|
||||
The program is provided "as is" without any warranties; you bear all risks of using it.
|
||||
- label: |
|
||||
I have read and agree to [AGPL-3.0 Section 16](https://www.gnu.org/licenses/agpl-3.0.txt#:~:text=16.%20Limitation%20of%20Liability.).
|
||||
The copyright holders and distributors are not liable for any damages resulting from the use or inability to use the program.
|
||||
- label: |
|
||||
I confirm my description is clear, polite, helps developers quickly locate the issue, and complies with community rules.
|
||||
- label: |
|
||||
I have read the [OpenList documentation](https://docs.oplist.org).
|
||||
- label: |
|
||||
I confirm there are no duplicate issues or discussions.
|
||||
- label: |
|
||||
I believe this issue must be handled by `OpenList` and not by a third party.
|
||||
- label: |
|
||||
I confirm this feature has not been implemented yet.
|
||||
- label: |
|
||||
I confirm this feature is reasonable and has general demand, not just my personal need.
|
||||
- type: textarea
|
||||
id: feature-description
|
||||
attributes:
|
||||
label: Feature Description
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: suggested-solution
|
||||
attributes:
|
||||
label: Suggested Solution
|
||||
description: |
|
||||
Solution or approach to achieve this feature.
|
||||
- type: textarea
|
||||
id: additional-context
|
||||
attributes:
|
||||
label: Additional Information
|
||||
description: |
|
||||
Any other context or screenshots related to this feature request, or information you find helpful.
|
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file, 81 changed lines)
@ -0,0 +1,81 @@
|
||||
name: "Bug report"
|
||||
description: Bug report
|
||||
labels: [bug]
|
||||
body:
|
||||
- type: markdown
|
||||
attributes:
|
||||
value: |
|
||||
Thanks for taking the time to fill out this bug report, please **confirm that your issue is not a duplicate issue and not because of your operation or version issues**
|
||||
感谢您花时间填写此错误报告,请**务必确认您的issue不是重复的且不是因为您的操作或版本问题**
|
||||
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Please make sure of the following things
|
||||
description: |
|
||||
You must check all the following, otherwise your issue may be closed directly. Or you can go to the [discussions](https://github.com/OpenListTeam/OpenList/discussions)
|
||||
您必须勾选以下所有内容,否则您的issue可能会被直接关闭。或者您可以去[讨论区](https://github.com/OpenListTeam/OpenList/discussions)
|
||||
options:
|
||||
- label: |
|
||||
I have read the [documentation](https://openlistteam.github.io/docs).
|
||||
我已经阅读了[文档](https://openlistteam.github.io/docs)。
|
||||
- label: |
|
||||
I'm sure there are no duplicate issues or discussions.
|
||||
我确定没有重复的issue或讨论。
|
||||
- label: |
|
||||
I'm sure it's due to `OpenList` and not something else(such as [Network](https://openlistteam.github.io/docs/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`Dependencies` or `Operational`).
|
||||
我确定是`OpenList`的问题,而不是其他原因(例如[网络](https://openlistteam.github.io/docs/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host),`依赖`或`操作`)。
|
||||
- label: |
|
||||
I'm sure this issue is not fixed in the latest version.
|
||||
我确定这个问题在最新版本中没有被修复。
|
||||
|
||||
- type: input
|
||||
id: version
|
||||
attributes:
|
||||
label: OpenList Version / OpenList 版本
|
||||
description: |
|
||||
What version of our software are you running? Do not use `latest` or `master` as an answer.
|
||||
您使用的是哪个版本的软件?请不要使用`latest`或`master`作为答案。
|
||||
placeholder: v3.xx.xx
|
||||
validations:
|
||||
required: true
|
||||
- type: input
|
||||
id: driver
|
||||
attributes:
|
||||
label: Driver used / 使用的存储驱动
|
||||
description: |
|
||||
What storage driver are you using?
|
||||
您使用的是哪个存储驱动?
|
||||
placeholder: "for example: Onedrive"
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: bug-description
|
||||
attributes:
|
||||
label: Describe the bug / 问题描述
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: reproduction
|
||||
attributes:
|
||||
label: Reproduction / 复现链接
|
||||
description: |
|
||||
Please provide a link to a repo that can reproduce the problem you ran into. Please be aware that your issue may be closed directly if you don't provide it.
|
||||
请提供能复现此问题的链接,请知悉如果不提供它你的issue可能会被直接关闭。
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: config
|
||||
attributes:
|
||||
label: Config / 配置
|
||||
description: |
|
||||
Please provide the configuration file of your `OpenList` application and take a screenshot of the relevant storage configuration. (hide privacy field)
|
||||
请提供您的`OpenList`应用的配置文件,并截图相关存储配置。(隐藏隐私字段)
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: logs
|
||||
attributes:
|
||||
label: Logs / 日志
|
||||
description: |
|
||||
Please copy and paste any relevant log output.
|
||||
请复制粘贴错误日志,或者截图
|
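The new `bug_report.yml` above uses GitHub's issue-forms schema (a `body` list of `markdown`, `checkboxes`, `input`, and `textarea` items). A minimal sketch of the same structure reduced to one input and one textarea; the ids and labels are illustrative, not the full template:

```yaml
# Minimal sketch of a GitHub issue form: one required input plus a required textarea.
name: "Bug report"
description: Bug report
labels: [bug]
body:
  - type: input
    id: version
    attributes:
      label: OpenList Version
      placeholder: v3.xx.xx
    validations:
      required: true
  - type: textarea
    id: bug-description
    attributes:
      label: Describe the bug
    validations:
      required: true
```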
.github/ISSUE_TEMPLATE/config.yml (vendored, 11 changed lines)
@@ -1,14 +1,5 @@
blank_issues_enabled: true
contact_links:
  - name: 问题和讨论
    url: https://github.com/OpenListTeam/OpenList/discussions
    about: 讨论、问题、想法等
  - name: Questions & Discussions
    url: https://github.com/OpenListTeam/OpenList/discussions
    about: Discuss issues, ideas, etc.
  - name: 即时聊天
    url: https://t.me/OpenListTeam
    about: 与我们聊天
  - name: Chat
    url: https://t.me/OpenListTeam
    about: Chat with us
    about: Use GitHub discussions for message-board style questions and discussions.
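For reference, GitHub's issue-template chooser is driven by `.github/ISSUE_TEMPLATE/config.yml` in this shape; the single contact link below is a minimal sketch, not the exact file that results from the hunk above:

```yaml
# Minimal sketch of an issue-template chooser config; the entry wording is illustrative.
blank_issues_enabled: true
contact_links:
  - name: Questions & Discussions
    url: https://github.com/OpenListTeam/OpenList/discussions
    about: Use GitHub discussions for message-board style questions and discussions.
```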
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file, 33 changed lines)
@ -0,0 +1,33 @@
|
||||
name: "Feature request"
|
||||
description: Feature request
|
||||
labels: [enhancement]
|
||||
body:
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Please make sure of the following things
|
||||
description: You may select more than one, even select all.
|
||||
options:
|
||||
- label: I have read the [documentation](https://openlistteam.github.io/docs).
|
||||
- label: I'm sure there are no duplicate issues or discussions.
|
||||
- label: I'm sure this feature is not implemented.
|
||||
- label: I'm sure it's a reasonable and popular requirement.
|
||||
- type: textarea
|
||||
id: feature-description
|
||||
attributes:
|
||||
label: Description of the feature / 需求描述
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: suggested-solution
|
||||
attributes:
|
||||
label: Suggested solution / 实现思路
|
||||
description: |
|
||||
Solutions to achieve this requirement.
|
||||
实现此需求的解决思路。
|
||||
- type: textarea
|
||||
id: additional-context
|
||||
attributes:
|
||||
label: Additional context / 附件
|
||||
description: |
|
||||
Any other context or screenshots about the feature request here, or information you find helpful.
|
||||
相关的任何其他上下文或截图,或者你觉得有帮助的信息
|
.github/workflows/changelog.yml (vendored, 2 changed lines)
@@ -21,4 +21,4 @@ jobs:

      - run: npx changelogithub # or changelogithub@0.12 if ensure the stable result
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
          GITHUB_TOKEN: ${{secrets.MY_TOKEN}}
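Switching the step from the workflow-scoped `GITHUB_TOKEN` to a stored repository secret (here `MY_TOKEN`) is typically done when the default token's permissions are not sufficient for the API calls involved. A minimal sketch of the resulting step, with only the env value differing from before:

```yaml
# Minimal sketch of the changelog step after the change; the secret name comes from the diff above.
- run: npx changelogithub
  env:
    GITHUB_TOKEN: ${{ secrets.MY_TOKEN }}   # personal access token stored as a repo secret
```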
.github/workflows/issue_close_question.yml (vendored, new file, 22 changed lines)
@ -0,0 +1,22 @@
|
||||
name: Close need info
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 0 */1 * *"
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
close-need-info:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: close-issues
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'close-issues'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
labels: 'question'
|
||||
inactive-day: 3
|
||||
close-reason: 'not_planned'
|
||||
body: |
|
||||
Hello @${{ github.event.issue.user.login }}, this issue was closed due to no activities in 3 days.
|
||||
你好 @${{ github.event.issue.user.login }},此issue因超过3天未回复被关闭。
|
.github/workflows/issue_close_stale.yml (vendored, new file, 21 changed lines)
@ -0,0 +1,21 @@
|
||||
name: Close inactive
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: "0 0 */7 * *"
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
close-inactive:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: close-issues
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'close-issues'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
labels: 'stale'
|
||||
inactive-day: 8
|
||||
close-reason: 'not_planned'
|
||||
body: |
|
||||
Hello @${{ github.event.issue.user.login }}, this issue was closed due to inactive more than 52 days. You can reopen or recreate it if you think it should continue. Thank you for your contributions again.
|
.github/workflows/issue_duplicate.yml (vendored, new file, 25 changed lines)
@ -0,0 +1,25 @@
|
||||
name: Issue Duplicate
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [labeled]
|
||||
|
||||
jobs:
|
||||
create-comment:
|
||||
runs-on: ubuntu-latest
|
||||
if: github.event.label.name == 'duplicate'
|
||||
steps:
|
||||
- name: Create comment
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'create-comment'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
issue-number: ${{ github.event.issue.number }}
|
||||
body: |
|
||||
Hello @${{ github.event.issue.user.login }}, your issue is a duplicate and will be closed.
|
||||
你好 @${{ github.event.issue.user.login }},你的issue是重复的,将被关闭。
|
||||
- name: Close issue
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'close-issue'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
.github/workflows/issue_invalid.yml (vendored, new file, 25 changed lines)
@ -0,0 +1,25 @@
|
||||
name: Issue Invalid
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [labeled]
|
||||
|
||||
jobs:
|
||||
create-comment:
|
||||
runs-on: ubuntu-latest
|
||||
if: github.event.label.name == 'invalid'
|
||||
steps:
|
||||
- name: Create comment
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'create-comment'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
issue-number: ${{ github.event.issue.number }}
|
||||
body: |
|
||||
Hello @${{ github.event.issue.user.login }}, your issue is invalid and will be closed.
|
||||
你好 @${{ github.event.issue.user.login }},你的issue无效,将被关闭。
|
||||
- name: Close issue
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'close-issue'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
.github/workflows/issue_on_close.yml (vendored, new file, 17 changed lines)
@ -0,0 +1,17 @@
|
||||
name: Remove working label when issue closed
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [closed]
|
||||
|
||||
jobs:
|
||||
rm-working:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Remove working label
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'remove-labels'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
issue-number: ${{ github.event.issue.number }}
|
||||
labels: 'working,pr-welcome'
|
.github/workflows/issue_question.yml (vendored, new file, 20 changed lines)
@ -0,0 +1,20 @@
|
||||
name: Issue Question
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [labeled]
|
||||
|
||||
jobs:
|
||||
create-comment:
|
||||
runs-on: ubuntu-latest
|
||||
if: github.event.label.name == 'question'
|
||||
steps:
|
||||
- name: Create comment
|
||||
uses: actions-cool/issues-helper@v3.6.0
|
||||
with:
|
||||
actions: 'create-comment'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
issue-number: ${{ github.event.issue.number }}
|
||||
body: |
|
||||
Hello @${{ github.event.issue.user.login }}, please input issue by template and add detail. Issues labeled by `question` will be closed if no activities in 3 days.
|
||||
你好 @${{ github.event.issue.user.login }},请按照issue模板填写, 并详细说明问题/日志记录/复现步骤/复现链接/实现思路或提供更多信息等, 3天内未回复issue自动关闭。
|
.github/workflows/issue_similarity.yml (vendored, new file, 19 changed lines)
@ -0,0 +1,19 @@
|
||||
name: Issues Similarity Analysis
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [opened, edited]
|
||||
|
||||
jobs:
|
||||
similarity-analysis:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: analysis
|
||||
uses: actions-cool/issues-similarity-analysis@v1
|
||||
with:
|
||||
filter-threshold: 0.5
|
||||
comment-title: '### See'
|
||||
comment-body: '${index}. ${similarity} #${number}'
|
||||
show-footer: false
|
||||
show-mentioned: true
|
||||
since-days: 730
|
.github/workflows/issue_translate.yml (vendored, new file, 13 changed lines)
@ -0,0 +1,13 @@
|
||||
name: Translation Helper
|
||||
|
||||
on:
|
||||
pull_request_target:
|
||||
types: [opened]
|
||||
issues:
|
||||
types: [opened]
|
||||
|
||||
jobs:
|
||||
translate:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions-cool/translation-helper@v1.2.0
|
.github/workflows/issue_wontfix.yml (vendored, new file, 25 changed lines)
@ -0,0 +1,25 @@
|
||||
name: Issue Wontfix
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [labeled]
|
||||
|
||||
jobs:
|
||||
lock-issue:
|
||||
runs-on: ubuntu-latest
|
||||
if: github.event.label.name == 'wontfix'
|
||||
steps:
|
||||
- name: Create comment
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'create-comment'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
||||
issue-number: ${{ github.event.issue.number }}
|
||||
body: |
|
||||
Hello @${{ github.event.issue.user.login }}, this issue will not be worked on and will be closed.
|
||||
你好 @${{ github.event.issue.user.login }},这不会被处理,将被关闭。
|
||||
- name: Close issue
|
||||
uses: actions-cool/issues-helper@v3
|
||||
with:
|
||||
actions: 'close-issue'
|
||||
token: ${{ secrets.GITHUB_TOKEN }}
|
.github/workflows/release_docker.yml (vendored, 26 changed lines)
@@ -2,19 +2,6 @@ name: release_docker

on:
  workflow_dispatch:
    inputs:
      manual_tag:
        description: 'Tag name (like v0.1.0). Required if as_latest is true.'
        required: false
        type: string
      as_latest:
        description: 'Tag as latest?'
        required: true
        default: 'false'
        type: choice
        options:
          - 'true'
          - 'false'
  push:
    tags:
      - 'v*'

@@ -30,8 +17,8 @@ env:
  REGISTRY: ghcr.io
  ARTIFACT_NAME: 'binaries_docker_release'
  RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
  IMAGE_PUSH: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
  IMAGE_IS_PROD: ${{ github.ref_type == 'tag' || github.event.inputs.as_latest == 'true' }}
  IMAGE_PUSH: ${{ github.event_name == 'push' }}
  IMAGE_IS_PROD: ${{ github.ref_type == 'tag' }}
  IMAGE_TAGS_BETA: |
    type=raw,value=beta,enable={{is_default_branch}}

@@ -142,14 +129,9 @@ jobs:
          images: |
            ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}
            ${{ env.ORG_NAME }}/${{ env.IMAGE_NAME_DOCKERHUB }}
          tags: >
            ${{ env.IMAGE_IS_PROD == 'true' && (
              github.event_name == 'workflow_dispatch'
              && format('type=raw,value={0}', github.event.inputs.manual_tag)
              || format('type=raw,value={0}', github.ref_name)
            ) || env.IMAGE_TAGS_BETA }}
          tags: ${{ env.IMAGE_IS_PROD == 'true' && '' || env.IMAGE_TAGS_BETA }}
          flavor: |
            latest=${{ env.IMAGE_IS_PROD }}
            ${{ env.IMAGE_IS_PROD == 'true' && 'latest=true' || '' }}
            ${{ matrix.tag_favor }}

      - name: Build and push
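The `tags` and `flavor` inputs above belong to `docker/metadata-action`. A minimal sketch of a tag-driven configuration in isolation; the step id and the trimmed-down inputs are assumptions, not the workflow's actual step:

```yaml
# Minimal sketch of a docker/metadata-action step for a tag-driven release.
# Image name reuses the env vars from the workflow above; the step name is illustrative.
- name: Docker meta
  uses: docker/metadata-action@v5
  with:
    images: |
      ${{ env.REGISTRY }}/${{ env.ORG_NAME }}/${{ env.IMAGE_NAME }}
    tags: |
      type=raw,value=${{ github.ref_name }}
    flavor: |
      latest=true
```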
.github/workflows/trigger-makefile-update.yml (vendored, 40 changed lines)
@@ -1,40 +0,0 @@
name: Trigger OpenWRT Update

on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:
    inputs:
      tag:
        description: 'Release tag to trigger update for'
        required: true
        type: string

jobs:
  trigger-makefile-update:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Makefile hash update
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.EXTERNAL_REPO_TOKEN_LUCI_APP_OPENLIST }}
          repository: ${{ vars.HOOK_REPO || 'OpenListTeam/luci-app-openlist' }}
          event-type: update-hashes
          client-payload: |
            {
              "source_repository": "${{ github.repository }}",
              "release_tag": "${{ inputs.tag || github.ref_name }}",
              "release_name": "${{ inputs.tag || github.ref_name }}",
              "release_url": "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ inputs.tag || github.ref_name }}",
              "triggered_by": "${{ github.actor }}",
              "trigger_reason": "${{ github.event_name }}"
            }

      - name: Log trigger information
        run: |
          echo "🚀 Successfully triggered Makefile hash update"
          echo "📦 Target repository: OpenListTeam/luci-app-openlist"
          echo "🏷️ Tag: ${{ inputs.tag || github.ref_name }}"
          echo "👤 Triggered by: ${{ github.actor }}"
          echo "📅 Trigger time: $(date -u '+%Y-%m-%d %H:%M:%S UTC')"
README.md

@@ -95,8 +95,7 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing

## Document

- https://docs.oplist.org
- https://docs.openlist.team
<https://docs.openlist.team>

## Demo

@@ -126,4 +125,4 @@ The `OpenList` is open-source software licensed under the AGPL-3.0 license.

---

> [@GitHub](https://github.com/OpenListTeam) · [Telegram Group](https://t.me/OpenListTeam) · [Telegram Channel](https://t.me/OpenListOfficial)
> [@GitHub](https://github.com/OpenListTeam) · [Telegram Group](https://t.me/OpenListTeam)

README_cn.md

@@ -93,8 +93,7 @@

## 文档

- https://docs.oplist.org
- https://docs.openlist.team
<https://docs.openlist.team>

## Demo

@@ -124,4 +123,4 @@ N/A(重建中)

---

> [@GitHub](https://github.com/OpenListTeam) · [Telegram 交流群](https://t.me/OpenListTeam) · [Telegram 频道](https://t.me/OpenListOfficial)
> [@GitHub](https://github.com/OpenListTeam) · [Telegram 交流群](https://t.me/OpenListTeam)

README_ja.md

@@ -94,8 +94,7 @@

## ドキュメント

- https://docs.oplist.org
- https://docs.openlist.team
<https://docs.openlist.team>

## デモ

@@ -125,4 +124,4 @@ N/A (再構築中)

---

> [@GitHub](https://github.com/OpenListTeam) · [Telegram Group](https://t.me/OpenListTeam) · [Telegram Channel](https://t.me/OpenListOfficial)
> [@GitHub](https://github.com/OpenListTeam) · [Telegram Group](https://t.me/OpenListTeam)
build.sh (18 changed lines)
@@ -4,9 +4,11 @@ builtAt="$(date +'%F %T %z')"
gitAuthor="The OpenList Projects Contributors <noreply@openlist.team>"
gitCommit=$(git log --pretty=format:"%h" -1)

githubAuthArgs=""
githubAuthHeader=""
githubAuthValue=""
if [ -n "$GITHUB_TOKEN" ]; then
  githubAuthArgs="--header \"Authorization: Bearer $GITHUB_TOKEN\""
  githubAuthHeader="--header"
  githubAuthValue="Authorization: Bearer $GITHUB_TOKEN"
fi

if [ "$1" = "dev" ]; then

@@ -19,7 +21,7 @@ else
  git tag -d beta || true
  # Always true if there's no tag
  version=$(git describe --abbrev=0 --tags 2>/dev/null || echo "v0.0.0")
  webVersion=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
  webVersion=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
fi

echo "backend version: $version"

@@ -35,12 +37,12 @@ ldflags="\
"

FetchWebDev() {
  pre_release_tag=$(eval "curl -fsSL --max-time 2 $githubAuthArgs https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases" | jq -r 'map(select(.prerelease)) | first | .tag_name')
  pre_release_tag=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases | jq -r 'map(select(.prerelease)) | first | .tag_name')
  if [ -z "$pre_release_tag" ] || [ "$pre_release_tag" == "null" ]; then
    # fall back to latest release
    pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"")
    pre_release_json=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest")
  else
    pre_release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/tags/$pre_release_tag\"")
    pre_release_json=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/tags/$pre_release_tag")
  fi
  pre_release_assets=$(echo "$pre_release_json" | jq -r '.assets[].browser_download_url')
  pre_release_tar_url=$(echo "$pre_release_assets" | grep "openlist-frontend-dist" | grep "\.tar\.gz$")

@@ -51,7 +53,7 @@ FetchWebDev() {
}

FetchWebRelease() {
  release_json=$(eval "curl -fsSL --max-time 2 $githubAuthArgs -H \"Accept: application/vnd.github.v3+json\" \"https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest\"")
  release_json=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest")
  release_assets=$(echo "$release_json" | jq -r '.assets[].browser_download_url')
  release_tar_url=$(echo "$release_assets" | grep "openlist-frontend-dist" | grep "\.tar\.gz$")
  curl -fsSL "$release_tar_url" -o dist.tar.gz

@@ -252,7 +254,7 @@ BuildReleaseFreeBSD() {
  mkdir -p "build/freebsd"

  # Get latest FreeBSD 14.x release version from GitHub
  freebsd_version=$(eval "curl -fsSL --max-time 2 $githubAuthArgs \"https://api.github.com/repos/freebsd/freebsd-src/tags\"" | \
  freebsd_version=$(curl -fsSL --max-time 2 $githubAuthHeader $githubAuthValue "https://api.github.com/repos/freebsd/freebsd-src/tags" | \
    jq -r '.[].name' | \
    grep '^release/14\.' | \
    sort -V | \
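The recurring edit in `build.sh` replaces the `eval`-wrapped curl calls with an auth header passed through two plain variables. A minimal standalone sketch of the same idea follows; it uses a bash array instead of the two scalars from the diff (an assumption, chosen because the array form keeps the space inside the header value intact even when quoted):

```bash
#!/usr/bin/env bash
# Minimal sketch: optional Authorization header for curl without eval.
# With an empty array, "${authArgs[@]}" expands to nothing, so unauthenticated calls still work.
authArgs=()
if [ -n "$GITHUB_TOKEN" ]; then
  authArgs=(--header "Authorization: Bearer $GITHUB_TOKEN")
fi

curl -fsSL --max-time 2 "${authArgs[@]}" \
  "https://api.github.com/repos/OpenListTeam/OpenList-Frontend/releases/latest"
```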
cmd/crypt.go (241 changed lines)
@ -1,241 +0,0 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"io"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
rcCrypt "github.com/rclone/rclone/backend/crypt"
|
||||
"github.com/rclone/rclone/fs/config/configmap"
|
||||
"github.com/rclone/rclone/fs/config/obscure"
|
||||
)
|
||||
|
||||
// encryption and decryption command format for Crypt driver
|
||||
|
||||
type options struct {
|
||||
Op string //decrypt or encrypt
|
||||
src string //source dir or file
|
||||
dst string //out destination
|
||||
|
||||
pwd string //de/encrypt password
|
||||
salt string
|
||||
filenameEncryption string //reference drivers\crypt\meta.go Addtion
|
||||
dirnameEncryption string
|
||||
filenameEncode string
|
||||
suffix string
|
||||
}
|
||||
|
||||
var opt options
|
||||
|
||||
// CryptCmd represents the crypt command
|
||||
var CryptCmd = &cobra.Command{
|
||||
Use: "crypt",
|
||||
Short: "Encrypt or decrypt local file or dir",
|
||||
Example: `openlist crypt -s ./src/encrypt/ --op=de --pwd=123456 --salt=345678`,
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
opt.validate()
|
||||
opt.cryptFileDir()
|
||||
|
||||
},
|
||||
}
|
||||
|
||||
func init() {
|
||||
RootCmd.AddCommand(CryptCmd)
|
||||
// Here you will define your flags and configuration settings.
|
||||
|
||||
// Cobra supports Persistent Flags which will work for this command
|
||||
// and all subcommands, e.g.:
|
||||
// versionCmd.PersistentFlags().String("foo", "", "A help for foo")
|
||||
|
||||
// Cobra supports local flags which will only run when this command
|
||||
// is called directly, e.g.:
|
||||
CryptCmd.Flags().StringVarP(&opt.src, "src", "s", "", "src file or dir to encrypt/decrypt")
|
||||
CryptCmd.Flags().StringVarP(&opt.dst, "dst", "d", "", "dst dir to output,if not set,output to src dir")
|
||||
CryptCmd.Flags().StringVar(&opt.Op, "op", "", "de or en which stands for decrypt or encrypt")
|
||||
|
||||
CryptCmd.Flags().StringVar(&opt.pwd, "pwd", "", "password used to encrypt/decrypt,if not contain ___Obfuscated___ prefix,will be obfuscated before used")
|
||||
CryptCmd.Flags().StringVar(&opt.salt, "salt", "", "salt used to encrypt/decrypt,if not contain ___Obfuscated___ prefix,will be obfuscated before used")
|
||||
CryptCmd.Flags().StringVar(&opt.filenameEncryption, "filename-encrypt", "off", "filename encryption mode: off,standard,obfuscate")
|
||||
CryptCmd.Flags().StringVar(&opt.dirnameEncryption, "dirname-encrypt", "false", "is dirname encryption enabled:true,false")
|
||||
CryptCmd.Flags().StringVar(&opt.filenameEncode, "filename-encode", "base64", "filename encoding mode: base64,base32,base32768")
|
||||
CryptCmd.Flags().StringVar(&opt.suffix, "suffix", ".bin", "suffix for encrypted file,default is .bin")
|
||||
}
|
||||
|
||||
func (o *options) validate() {
|
||||
if o.src == "" {
|
||||
log.Fatal("src can not be empty")
|
||||
}
|
||||
if o.Op != "encrypt" && o.Op != "decrypt" && o.Op != "en" && o.Op != "de" {
|
||||
log.Fatal("op must be encrypt or decrypt")
|
||||
}
|
||||
if o.filenameEncryption != "off" && o.filenameEncryption != "standard" && o.filenameEncryption != "obfuscate" {
|
||||
log.Fatal("filename_encryption must be off,standard,obfuscate")
|
||||
}
|
||||
if o.filenameEncode != "base64" && o.filenameEncode != "base32" && o.filenameEncode != "base32768" {
|
||||
log.Fatal("filename_encode must be base64,base32,base32768")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func (o *options) cryptFileDir() {
|
||||
src, _ := filepath.Abs(o.src)
|
||||
log.Infof("src abs is %v", src)
|
||||
|
||||
fileInfo, err := os.Stat(src)
|
||||
if err != nil {
|
||||
log.Fatalf("reading file/dir %v failed,err:%v", src, err)
|
||||
|
||||
}
|
||||
pwd := updateObfusParm(o.pwd)
|
||||
salt := updateObfusParm(o.salt)
|
||||
|
||||
//create cipher
|
||||
config := configmap.Simple{
|
||||
"password": pwd,
|
||||
"password2": salt,
|
||||
"filename_encryption": o.filenameEncryption,
|
||||
"directory_name_encryption": o.dirnameEncryption,
|
||||
"filename_encoding": o.filenameEncode,
|
||||
"suffix": o.suffix,
|
||||
"pass_bad_blocks": "",
|
||||
}
|
||||
log.Infof("config:%v", config)
|
||||
cipher, err := rcCrypt.NewCipher(config)
|
||||
if err != nil {
|
||||
log.Fatalf("create cipher failed,err:%v", err)
|
||||
|
||||
}
|
||||
dst := ""
|
||||
//check and create dst dir
|
||||
if o.dst != "" {
|
||||
dst, _ = filepath.Abs(o.dst)
|
||||
checkCreateDir(dst)
|
||||
}
|
||||
|
||||
// src is file
|
||||
if !fileInfo.IsDir() { //file
|
||||
if dst == "" {
|
||||
dst = filepath.Dir(src)
|
||||
}
|
||||
o.cryptFile(cipher, src, dst)
|
||||
return
|
||||
}
|
||||
|
||||
// src is dir
|
||||
if dst == "" {
|
||||
//if src is dir and not set dst dir ,create ${src}_crypt dir as dst dir
|
||||
dst = path.Join("./", fileInfo.Name()+"_crypt")
|
||||
}
|
||||
log.Infof("dst : %v", dst)
|
||||
filepath.Walk(src, func(p string, info os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
log.Errorf("get file %v info failed, err:%v", p, err)
|
||||
return err
|
||||
}
|
||||
if info.IsDir() {
|
||||
//create output dir
|
||||
d := strings.Replace(p, src, dst, 1)
|
||||
log.Infof("create output dir %v", d)
|
||||
checkCreateDir(d)
|
||||
|
||||
return nil
|
||||
}
|
||||
d := strings.Replace(filepath.Dir(p), src, dst, 1)
|
||||
o.cryptFile(cipher, p, d)
|
||||
return nil
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
func (o *options) cryptFile(cipher *rcCrypt.Cipher, src string, dst string) {
|
||||
fileInfo, err := os.Stat(src)
|
||||
if err != nil {
|
||||
log.Fatalf("get file %v info failed,err:%v", src, err)
|
||||
|
||||
}
|
||||
fd, err := os.OpenFile(src, os.O_RDWR, 0666)
|
||||
if err != nil {
|
||||
log.Fatalf("open file %v failed,err:%v", src, err)
|
||||
|
||||
}
|
||||
defer fd.Close()
|
||||
|
||||
var cryptSrcReader io.Reader
|
||||
var outFile string
|
||||
if o.Op == "encrypt" || o.Op == "en" {
|
||||
filename := fileInfo.Name()
|
||||
if o.filenameEncryption != "off" {
|
||||
filename = cipher.EncryptFileName(fileInfo.Name())
|
||||
log.Infof("encrypt file name %v to %v", fileInfo.Name(), filename)
|
||||
}
|
||||
cryptSrcReader, err = cipher.EncryptData(fd)
|
||||
if err != nil {
|
||||
log.Fatalf("encrypt file %v failed,err:%v", src, err)
|
||||
|
||||
}
|
||||
outFile = path.Join(dst, filename)
|
||||
} else {
|
||||
filename := fileInfo.Name()
|
||||
if o.filenameEncryption != "off" {
|
||||
filename, err = cipher.DecryptFileName(filename)
|
||||
if err != nil {
|
||||
log.Fatalf("decrypt file name %v failed,err:%v", src, err)
|
||||
}
|
||||
log.Infof("decrypt file name %v to %v, ", fileInfo.Name(), filename)
|
||||
}
|
||||
|
||||
cryptSrcReader, err = cipher.DecryptData(fd)
|
||||
if err != nil {
|
||||
log.Fatalf("decrypt file %v failed,err:%v", src, err)
|
||||
|
||||
}
|
||||
outFile = path.Join(dst, filename)
|
||||
}
|
||||
//write new file
|
||||
wr, err := os.OpenFile(outFile, os.O_CREATE|os.O_WRONLY, 0755)
|
||||
if err != nil {
|
||||
log.Fatalf("create file %v failed,err:%v", outFile, err)
|
||||
|
||||
}
|
||||
defer wr.Close()
|
||||
|
||||
_, err = io.Copy(wr, cryptSrcReader)
|
||||
if err != nil {
|
||||
log.Fatalf("write file %v failed,err:%v", outFile, err)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// check dir exist ,if not ,create
|
||||
func checkCreateDir(dir string) {
|
||||
_, err := os.Stat(dir)
|
||||
|
||||
if os.IsNotExist(err) {
|
||||
err := os.MkdirAll(dir, 0755)
|
||||
if err != nil {
|
||||
log.Fatalf("create dir %v failed,err:%v", dir, err)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
log.Fatalf("read dir %v err: %v", dir, err)
|
||||
}
|
||||
|
||||
func updateObfusParm(str string) string {
|
||||
obfuscatedPrefix := "___Obfuscated___"
|
||||
if !strings.HasPrefix(str, obfuscatedPrefix) {
|
||||
str, err := obscure.Obscure(str)
|
||||
if err != nil {
|
||||
log.Fatalf("update obfuscated parameter failed,err:%v", str)
|
||||
}
|
||||
} else {
|
||||
str, _ = strings.CutPrefix(str, obfuscatedPrefix)
|
||||
}
|
||||
return str
|
||||
}
|
docker-compose.yml

@@ -1,3 +1,4 @@
version: '3.3'
services:
  openlist:
    restart: always

@@ -12,4 +13,4 @@ services:
      - UMASK=022
      - TZ=UTC
    container_name: openlist
    image: 'openlistteam/openlist:latest'
    image: 'ghcr.io/openlistteam/openlist:latest'
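Since the compose hunk only repoints the image at ghcr.io, a running deployment picks the change up with a pull and recreate; a minimal sketch, assuming the service name `openlist` from the file above:

```sh
# Minimal sketch: refresh a running deployment after the image reference change.
docker compose pull openlist
docker compose up -d openlist
```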
@@ -6,7 +6,6 @@ import (
    "fmt"
    "net/http"
    "net/url"
    "strings"
    "sync"
    "time"

@@ -196,7 +195,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStrea
    data := base.Json{
        "driveId":      0,
        "duplicate":    2, // 2->覆盖 1->重命名 0->默认
        "etag":         strings.ToLower(etag),
        "etag":         etag,
        "fileName":     file.GetName(),
        "parentFileId": dstDir.GetID(),
        "size":         file.GetSize(),
@@ -3,7 +3,6 @@ package _123_open

import (
    "context"
    "net/http"
    "strings"
    "time"

    "github.com/OpenListTeam/OpenList/drivers/base"

@@ -22,7 +21,7 @@ func (d *Open123) create(parentFileID int64, filename string, etag string, size
    req.SetBody(base.Json{
        "parentFileId": parentFileID,
        "filename":     filename,
        "etag":         strings.ToLower(etag),
        "etag":         etag,
        "size":         size,
        "duplicate":    duplicate,
        "containDir":   containDir,

@@ -83,6 +82,7 @@ func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createRes
        retry.Attempts(3),
        retry.Delay(time.Second),
        retry.DelayType(retry.BackOffDelay))
    threadG.SetLimit(3)

    for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
        if utils.IsCanceled(uploadCtx) {
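The added `threadG.SetLimit(3)` caps how many upload parts run concurrently alongside the existing retry/backoff options. A minimal sketch of that bounded-concurrency pattern, using `golang.org/x/sync/errgroup` in place of the driver's `threadG` (the loop body and part count are placeholders):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	g.SetLimit(3) // mirrors threadG.SetLimit(3): at most three parts in flight

	for part := 0; part < 10; part++ {
		part := part
		g.Go(func() error {
			if err := ctx.Err(); err != nil { // stop early if a sibling part failed
				return err
			}
			fmt.Println("uploading part", part) // placeholder for the real per-part upload
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("upload failed:", err)
	}
}
```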
@ -1,274 +0,0 @@
|
||||
package _189_tv
|
||||
|
||||
import (
|
||||
"container/ring"
|
||||
"context"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/drivers/base"
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/go-resty/resty/v2"
|
||||
)
|
||||
|
||||
type Cloud189TV struct {
|
||||
model.Storage
|
||||
Addition
|
||||
client *resty.Client
|
||||
tokenInfo *AppSessionResp
|
||||
uploadThread int
|
||||
familyTransferFolder *ring.Ring
|
||||
cleanFamilyTransferFile func()
|
||||
storageConfig driver.Config
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Config() driver.Config {
|
||||
if y.storageConfig.Name == "" {
|
||||
y.storageConfig = config
|
||||
}
|
||||
return y.storageConfig
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) GetAddition() driver.Additional {
|
||||
return &y.Addition
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Init(ctx context.Context) (err error) {
|
||||
// 兼容旧上传接口
|
||||
y.storageConfig.NoOverwriteUpload = y.isFamily() && y.Addition.RapidUpload
|
||||
|
||||
// 处理个人云和家庭云参数
|
||||
if y.isFamily() && y.RootFolderID == "-11" {
|
||||
y.RootFolderID = ""
|
||||
}
|
||||
if !y.isFamily() && y.RootFolderID == "" {
|
||||
y.RootFolderID = "-11"
|
||||
}
|
||||
|
||||
// 限制上传线程数
|
||||
y.uploadThread, _ = strconv.Atoi(y.UploadThread)
|
||||
if y.uploadThread < 1 || y.uploadThread > 32 {
|
||||
y.uploadThread, y.UploadThread = 3, "3"
|
||||
}
|
||||
|
||||
// 初始化请求客户端
|
||||
if y.client == nil {
|
||||
y.client = base.NewRestyClient().SetHeaders(
|
||||
map[string]string{
|
||||
"Accept": "application/json;charset=UTF-8",
|
||||
"User-Agent": "EcloudTV/6.5.5 (PJX110; unknown; home02) Android/35",
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
// 避免重复登陆
|
||||
if !y.isLogin() || y.Addition.AccessToken == "" {
|
||||
if err = y.login(); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// 处理家庭云ID
|
||||
if y.FamilyID == "" {
|
||||
if y.FamilyID, err = y.getFamilyID(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Drop(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
|
||||
return y.getFiles(ctx, dir.GetID(), y.isFamily())
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||
var downloadUrl struct {
|
||||
URL string `json:"fileDownloadUrl"`
|
||||
}
|
||||
|
||||
isFamily := y.isFamily()
|
||||
fullUrl := ApiUrl
|
||||
if isFamily {
|
||||
fullUrl += "/family/file"
|
||||
}
|
||||
fullUrl += "/getFileDownloadUrl.action"
|
||||
|
||||
_, err := y.get(fullUrl, func(r *resty.Request) {
|
||||
r.SetContext(ctx)
|
||||
r.SetQueryParam("fileId", file.GetID())
|
||||
if isFamily {
|
||||
r.SetQueryParams(map[string]string{
|
||||
"familyId": y.FamilyID,
|
||||
})
|
||||
} else {
|
||||
r.SetQueryParams(map[string]string{
|
||||
"dt": "3",
|
||||
"flag": "1",
|
||||
})
|
||||
}
|
||||
}, &downloadUrl, isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// 重定向获取真实链接
|
||||
downloadUrl.URL = strings.Replace(strings.ReplaceAll(downloadUrl.URL, "&", "&"), "http://", "https://", 1)
|
||||
res, err := base.NoRedirectClient.R().SetContext(ctx).SetDoNotParseResponse(true).Get(downloadUrl.URL)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer res.RawBody().Close()
|
||||
if res.StatusCode() == 302 {
|
||||
downloadUrl.URL = res.Header().Get("location")
|
||||
}
|
||||
|
||||
like := &model.Link{
|
||||
URL: downloadUrl.URL,
|
||||
Header: http.Header{
|
||||
"User-Agent": []string{base.UserAgent},
|
||||
},
|
||||
}
|
||||
|
||||
return like, nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
|
||||
isFamily := y.isFamily()
|
||||
fullUrl := ApiUrl
|
||||
if isFamily {
|
||||
fullUrl += "/family/file"
|
||||
}
|
||||
fullUrl += "/createFolder.action"
|
||||
|
||||
var newFolder Cloud189Folder
|
||||
_, err := y.post(fullUrl, func(req *resty.Request) {
|
||||
req.SetContext(ctx)
|
||||
req.SetQueryParams(map[string]string{
|
||||
"folderName": dirName,
|
||||
"relativePath": "",
|
||||
})
|
||||
if isFamily {
|
||||
req.SetQueryParams(map[string]string{
|
||||
"familyId": y.FamilyID,
|
||||
"parentId": parentDir.GetID(),
|
||||
})
|
||||
} else {
|
||||
req.SetQueryParams(map[string]string{
|
||||
"parentFolderId": parentDir.GetID(),
|
||||
})
|
||||
}
|
||||
}, &newFolder, isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &newFolder, nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
|
||||
isFamily := y.isFamily()
|
||||
other := map[string]string{"targetFileName": dstDir.GetName()}
|
||||
|
||||
resp, err := y.CreateBatchTask("MOVE", IF(isFamily, y.FamilyID, ""), dstDir.GetID(), other, BatchTaskInfo{
|
||||
FileId: srcObj.GetID(),
|
||||
FileName: srcObj.GetName(),
|
||||
IsFolder: BoolToNumber(srcObj.IsDir()),
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err = y.WaitBatchTask("MOVE", resp.TaskID, time.Millisecond*400); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return srcObj, nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
|
||||
isFamily := y.isFamily()
|
||||
queryParam := make(map[string]string)
|
||||
fullUrl := ApiUrl
|
||||
method := http.MethodPost
|
||||
if isFamily {
|
||||
fullUrl += "/family/file"
|
||||
method = http.MethodGet
|
||||
queryParam["familyId"] = y.FamilyID
|
||||
}
|
||||
|
||||
var newObj model.Obj
|
||||
switch f := srcObj.(type) {
|
||||
case *Cloud189File:
|
||||
fullUrl += "/renameFile.action"
|
||||
queryParam["fileId"] = srcObj.GetID()
|
||||
queryParam["destFileName"] = newName
|
||||
newObj = &Cloud189File{Icon: f.Icon} // 复用预览
|
||||
case *Cloud189Folder:
|
||||
fullUrl += "/renameFolder.action"
|
||||
queryParam["folderId"] = srcObj.GetID()
|
||||
queryParam["destFolderName"] = newName
|
||||
newObj = &Cloud189Folder{}
|
||||
default:
|
||||
return nil, errs.NotSupport
|
||||
}
|
||||
|
||||
_, err := y.request(fullUrl, method, func(req *resty.Request) {
|
||||
req.SetContext(ctx).SetQueryParams(queryParam)
|
||||
}, nil, newObj, isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return newObj, nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
|
||||
isFamily := y.isFamily()
|
||||
other := map[string]string{"targetFileName": dstDir.GetName()}
|
||||
|
||||
resp, err := y.CreateBatchTask("COPY", IF(isFamily, y.FamilyID, ""), dstDir.GetID(), other, BatchTaskInfo{
|
||||
FileId: srcObj.GetID(),
|
||||
FileName: srcObj.GetName(),
|
||||
IsFolder: BoolToNumber(srcObj.IsDir()),
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return y.WaitBatchTask("COPY", resp.TaskID, time.Second)
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Remove(ctx context.Context, obj model.Obj) error {
|
||||
isFamily := y.isFamily()
|
||||
|
||||
resp, err := y.CreateBatchTask("DELETE", IF(isFamily, y.FamilyID, ""), "", nil, BatchTaskInfo{
|
||||
FileId: obj.GetID(),
|
||||
FileName: obj.GetName(),
|
||||
IsFolder: BoolToNumber(obj.IsDir()),
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// 批量任务数量限制,过快会导致无法删除
|
||||
return y.WaitBatchTask("DELETE", resp.TaskID, time.Millisecond*200)
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (newObj model.Obj, err error) {
|
||||
overwrite := true
|
||||
isFamily := y.isFamily()
|
||||
|
||||
// 响应时间长,按需启用
|
||||
if y.Addition.RapidUpload && !stream.IsForceStreamUpload() {
|
||||
if newObj, err := y.RapidUpload(ctx, dstDir, stream, isFamily, overwrite); err == nil {
|
||||
return newObj, nil
|
||||
}
|
||||
}
|
||||
|
||||
return y.OldUpload(ctx, dstDir, stream, up, isFamily, overwrite)
|
||||
|
||||
}
|
@ -1,166 +0,0 @@
|
||||
package _189_tv
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/hmac"
|
||||
"crypto/sha1"
|
||||
"encoding/hex"
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"regexp"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func clientSuffix() map[string]string {
|
||||
return map[string]string{
|
||||
"clientType": AndroidTV,
|
||||
"version": TvVersion,
|
||||
"channelId": TvChannelId,
|
||||
"clientSn": "unknown",
|
||||
"model": "PJX110",
|
||||
"osFamily": "Android",
|
||||
"osVersion": "35",
|
||||
"networkAccessMode": "WIFI",
|
||||
"telecomsOperator": "46011",
|
||||
}
|
||||
}
|
||||
|
||||
// SessionKeySignatureOfHmac HMAC签名
|
||||
func SessionKeySignatureOfHmac(sessionSecret, sessionKey, operate, fullUrl, dateOfGmt string) string {
|
||||
urlpath := regexp.MustCompile(`://[^/]+((/[^/\s?#]+)*)`).FindStringSubmatch(fullUrl)[1]
|
||||
mac := hmac.New(sha1.New, []byte(sessionSecret))
|
||||
data := fmt.Sprintf("SessionKey=%s&Operate=%s&RequestURI=%s&Date=%s", sessionKey, operate, urlpath, dateOfGmt)
|
||||
mac.Write([]byte(data))
|
||||
return strings.ToUpper(hex.EncodeToString(mac.Sum(nil)))
|
||||
}
|
||||
|
||||
// AppKeySignatureOfHmac HMAC签名
|
||||
func AppKeySignatureOfHmac(sessionSecret, appKey, operate, fullUrl string, timestamp int64) string {
|
||||
urlpath := regexp.MustCompile(`://[^/]+((/[^/\s?#]+)*)`).FindStringSubmatch(fullUrl)[1]
|
||||
mac := hmac.New(sha1.New, []byte(sessionSecret))
|
||||
data := fmt.Sprintf("AppKey=%s&Operate=%s&RequestURI=%s&Timestamp=%d", appKey, operate, urlpath, timestamp)
|
||||
mac.Write([]byte(data))
|
||||
return strings.ToUpper(hex.EncodeToString(mac.Sum(nil)))
|
||||
}
|
||||
|
||||
// 获取http规范的时间
|
||||
func getHttpDateStr() string {
|
||||
return time.Now().UTC().Format(http.TimeFormat)
|
||||
}
|
||||
|
||||
// 时间戳
|
||||
func timestamp() int64 {
|
||||
return time.Now().UTC().UnixNano() / 1e6
|
||||
}
|
||||
|
||||
type Time time.Time
|
||||
|
||||
func (t *Time) UnmarshalJSON(b []byte) error { return t.Unmarshal(b) }
|
||||
func (t *Time) UnmarshalXML(e *xml.Decoder, ee xml.StartElement) error {
|
||||
b, err := e.Token()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if b, ok := b.(xml.CharData); ok {
|
||||
if err = t.Unmarshal(b); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return e.Skip()
|
||||
}
|
||||
func (t *Time) Unmarshal(b []byte) error {
|
||||
bs := strings.Trim(string(b), "\"")
|
||||
var v time.Time
|
||||
var err error
|
||||
for _, f := range []string{"2006-01-02 15:04:05 -07", "Jan 2, 2006 15:04:05 PM -07"} {
|
||||
v, err = time.ParseInLocation(f, bs+" +08", time.Local)
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
*t = Time(v)
|
||||
return err
|
||||
}
|
||||
|
||||
type String string
|
||||
|
||||
func (t *String) UnmarshalJSON(b []byte) error { return t.Unmarshal(b) }
|
||||
func (t *String) UnmarshalXML(e *xml.Decoder, ee xml.StartElement) error {
|
||||
b, err := e.Token()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if b, ok := b.(xml.CharData); ok {
|
||||
if err = t.Unmarshal(b); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return e.Skip()
|
||||
}
|
||||
func (s *String) Unmarshal(b []byte) error {
|
||||
*s = String(bytes.Trim(b, "\""))
|
||||
return nil
|
||||
}
|
||||
|
||||
func toFamilyOrderBy(o string) string {
|
||||
switch o {
|
||||
case "filename":
|
||||
return "1"
|
||||
case "filesize":
|
||||
return "2"
|
||||
case "lastOpTime":
|
||||
return "3"
|
||||
default:
|
||||
return "1"
|
||||
}
|
||||
}
|
||||
|
||||
func toDesc(o string) string {
|
||||
switch o {
|
||||
case "desc":
|
||||
return "true"
|
||||
case "asc":
|
||||
fallthrough
|
||||
default:
|
||||
return "false"
|
||||
}
|
||||
}
|
||||
|
||||
func ParseHttpHeader(str string) map[string]string {
|
||||
header := make(map[string]string)
|
||||
for _, value := range strings.Split(str, "&") {
|
||||
if k, v, found := strings.Cut(value, "="); found {
|
||||
header[k] = v
|
||||
}
|
||||
}
|
||||
return header
|
||||
}
|
||||
|
||||
func MustString(str string, err error) string {
|
||||
return str
|
||||
}
|
||||
|
||||
func BoolToNumber(b bool) int {
|
||||
if b {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func isBool(bs ...bool) bool {
|
||||
for _, b := range bs {
|
||||
if b {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func IF[V any](o bool, t V, f V) V {
|
||||
if o {
|
||||
return t
|
||||
}
|
||||
return f
|
||||
}
|
@ -1,30 +0,0 @@
|
||||
package _189_tv
|
||||
|
||||
import (
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
)
|
||||
|
||||
type Addition struct {
|
||||
driver.RootID
|
||||
AccessToken string `json:"access_token"`
|
||||
TempUuid string
|
||||
OrderBy string `json:"order_by" type:"select" options:"filename,filesize,lastOpTime" default:"filename"`
|
||||
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
|
||||
Type string `json:"type" type:"select" options:"personal,family" default:"personal"`
|
||||
FamilyID string `json:"family_id"`
|
||||
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
|
||||
RapidUpload bool `json:"rapid_upload"`
|
||||
}
|
||||
|
||||
var config = driver.Config{
|
||||
Name: "189CloudTV",
|
||||
DefaultRoot: "-11",
|
||||
CheckStatus: true,
|
||||
}
|
||||
|
||||
func init() {
|
||||
op.RegisterDriver(func() driver.Driver {
|
||||
return &Cloud189TV{}
|
||||
})
|
||||
}
|
@ -1,318 +0,0 @@
|
||||
package _189_tv
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
)
|
||||
|
||||
// 居然有四种返回方式
|
||||
type RespErr struct {
|
||||
ResCode any `json:"res_code"` // int or string
|
||||
ResMessage string `json:"res_message"`
|
||||
|
||||
Error_ string `json:"error"`
|
||||
|
||||
XMLName xml.Name `xml:"error"`
|
||||
Code string `json:"code" xml:"code"`
|
||||
Message string `json:"message" xml:"message"`
|
||||
Msg string `json:"msg"`
|
||||
|
||||
ErrorCode string `json:"errorCode"`
|
||||
ErrorMsg string `json:"errorMsg"`
|
||||
}
|
||||
|
||||
func (e *RespErr) HasError() bool {
|
||||
switch v := e.ResCode.(type) {
|
||||
case int, int64, int32:
|
||||
return v != 0
|
||||
case string:
|
||||
return e.ResCode != ""
|
||||
}
|
||||
return (e.Code != "" && e.Code != "SUCCESS") || e.ErrorCode != "" || e.Error_ != ""
|
||||
}
|
||||
|
||||
func (e *RespErr) Error() string {
|
||||
switch v := e.ResCode.(type) {
|
||||
case int, int64, int32:
|
||||
if v != 0 {
|
||||
return fmt.Sprintf("res_code: %d ,res_msg: %s", v, e.ResMessage)
|
||||
}
|
||||
case string:
|
||||
if e.ResCode != "" {
|
||||
return fmt.Sprintf("res_code: %s ,res_msg: %s", e.ResCode, e.ResMessage)
|
||||
}
|
||||
}
|
||||
|
||||
if e.Code != "" && e.Code != "SUCCESS" {
|
||||
if e.Msg != "" {
|
||||
return fmt.Sprintf("code: %s ,msg: %s", e.Code, e.Msg)
|
||||
}
|
||||
if e.Message != "" {
|
||||
return fmt.Sprintf("code: %s ,msg: %s", e.Code, e.Message)
|
||||
}
|
||||
return "code: " + e.Code
|
||||
}
|
||||
|
||||
if e.ErrorCode != "" {
|
||||
return fmt.Sprintf("err_code: %s ,err_msg: %s", e.ErrorCode, e.ErrorMsg)
|
||||
}
|
||||
|
||||
if e.Error_ != "" {
|
||||
return fmt.Sprintf("error: %s ,message: %s", e.ErrorCode, e.Message)
|
||||
}
|
||||
return ""
|
||||
}
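// Rough mapping of the response shapes handled above (payloads are illustrative):
//
//	{"res_code": 1, "res_message": "m"}          -> "res_code: 1 ,res_msg: m"
//	{"code": "FileNotFound", "msg": "m"}         -> "code: FileNotFound ,msg: m"
//	{"errorCode": "e", "errorMsg": "m"}          -> "err_code: e ,err_msg: m"
//	{"error": "invalid_grant", "message": "m"}   -> "error: invalid_grant ,message: m"
//
// plus the XML <error><code/><message/></error> variant decoded into Code/Message.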
|
||||
|
||||
// Response for refreshing the session
|
||||
type UserSessionResp struct {
|
||||
ResCode int `json:"res_code"`
|
||||
ResMessage string `json:"res_message"`
|
||||
|
||||
LoginName string `json:"loginName"`
|
||||
|
||||
KeepAlive int `json:"keepAlive"`
|
||||
GetFileDiffSpan int `json:"getFileDiffSpan"`
|
||||
GetUserInfoSpan int `json:"getUserInfoSpan"`
|
||||
|
||||
// Personal cloud
|
||||
SessionKey string `json:"sessionKey"`
|
||||
SessionSecret string `json:"sessionSecret"`
|
||||
// Family cloud
|
||||
FamilySessionKey string `json:"familySessionKey"`
|
||||
FamilySessionSecret string `json:"familySessionSecret"`
|
||||
}
|
||||
|
||||
type UuidInfoResp struct {
|
||||
Uuid string `json:"uuid"`
|
||||
}
|
||||
|
||||
type E189AccessTokenResp struct {
|
||||
E189AccessToken string `json:"accessToken"`
|
||||
ExpiresIn int64 `json:"expiresIn"`
|
||||
}
|
||||
|
||||
// Login response
|
||||
type AppSessionResp struct {
|
||||
UserSessionResp
|
||||
|
||||
IsSaveName string `json:"isSaveName"`
|
||||
|
||||
// Token used to refresh the session
|
||||
AccessToken string `json:"accessToken"`
|
||||
// Token refresh
|
||||
RefreshToken string `json:"refreshToken"`
|
||||
}
|
||||
|
||||
// Family cloud account
|
||||
type FamilyInfoListResp struct {
|
||||
FamilyInfoResp []FamilyInfoResp `json:"familyInfoResp"`
|
||||
}
|
||||
type FamilyInfoResp struct {
|
||||
Count int `json:"count"`
|
||||
CreateTime string `json:"createTime"`
|
||||
FamilyID int64 `json:"familyId"`
|
||||
RemarkName string `json:"remarkName"`
|
||||
Type int `json:"type"`
|
||||
UseFlag int `json:"useFlag"`
|
||||
UserRole int `json:"userRole"`
|
||||
}
|
||||
|
||||
/* File section */
|
||||
// File
|
||||
type Cloud189File struct {
|
||||
ID String `json:"id"`
|
||||
Name string `json:"name"`
|
||||
Size int64 `json:"size"`
|
||||
Md5 string `json:"md5"`
|
||||
|
||||
LastOpTime Time `json:"lastOpTime"`
|
||||
CreateDate Time `json:"createDate"`
|
||||
Icon struct {
|
||||
//iconOption 5
|
||||
SmallUrl string `json:"smallUrl"`
|
||||
LargeUrl string `json:"largeUrl"`
|
||||
|
||||
// iconOption 10
|
||||
Max600 string `json:"max600"`
|
||||
MediumURL string `json:"mediumUrl"`
|
||||
} `json:"icon"`
|
||||
|
||||
// Orientation int64 `json:"orientation"`
|
||||
// FileCata int64 `json:"fileCata"`
|
||||
// MediaType int `json:"mediaType"`
|
||||
// Rev string `json:"rev"`
|
||||
// StarLabel int64 `json:"starLabel"`
|
||||
}
|
||||
|
||||
func (c *Cloud189File) CreateTime() time.Time {
|
||||
return time.Time(c.CreateDate)
|
||||
}
|
||||
|
||||
func (c *Cloud189File) GetHash() utils.HashInfo {
|
||||
return utils.NewHashInfo(utils.MD5, c.Md5)
|
||||
}
|
||||
|
||||
func (c *Cloud189File) GetSize() int64 { return c.Size }
|
||||
func (c *Cloud189File) GetName() string { return c.Name }
|
||||
func (c *Cloud189File) ModTime() time.Time { return time.Time(c.LastOpTime) }
|
||||
func (c *Cloud189File) IsDir() bool { return false }
|
||||
func (c *Cloud189File) GetID() string { return string(c.ID) }
|
||||
func (c *Cloud189File) GetPath() string { return "" }
|
||||
func (c *Cloud189File) Thumb() string { return c.Icon.SmallUrl }
|
||||
|
||||
// Folder
|
||||
type Cloud189Folder struct {
|
||||
ID String `json:"id"`
|
||||
ParentID int64 `json:"parentId"`
|
||||
Name string `json:"name"`
|
||||
|
||||
LastOpTime Time `json:"lastOpTime"`
|
||||
CreateDate Time `json:"createDate"`
|
||||
|
||||
// FileListSize int64 `json:"fileListSize"`
|
||||
// FileCount int64 `json:"fileCount"`
|
||||
// FileCata int64 `json:"fileCata"`
|
||||
// Rev string `json:"rev"`
|
||||
// StarLabel int64 `json:"starLabel"`
|
||||
}
|
||||
|
||||
func (c *Cloud189Folder) CreateTime() time.Time {
|
||||
return time.Time(c.CreateDate)
|
||||
}
|
||||
|
||||
func (c *Cloud189Folder) GetHash() utils.HashInfo {
|
||||
return utils.HashInfo{}
|
||||
}
|
||||
|
||||
func (c *Cloud189Folder) GetSize() int64 { return 0 }
|
||||
func (c *Cloud189Folder) GetName() string { return c.Name }
|
||||
func (c *Cloud189Folder) ModTime() time.Time { return time.Time(c.LastOpTime) }
|
||||
func (c *Cloud189Folder) IsDir() bool { return true }
|
||||
func (c *Cloud189Folder) GetID() string { return string(c.ID) }
|
||||
func (c *Cloud189Folder) GetPath() string { return "" }
|
||||
|
||||
type Cloud189FilesResp struct {
|
||||
//ResCode int `json:"res_code"`
|
||||
//ResMessage string `json:"res_message"`
|
||||
FileListAO struct {
|
||||
Count int `json:"count"`
|
||||
FileList []Cloud189File `json:"fileList"`
|
||||
FolderList []Cloud189Folder `json:"folderList"`
|
||||
} `json:"fileListAO"`
|
||||
}
|
||||
|
||||
// BatchTaskInfo describes a single item of a batch task
|
||||
type BatchTaskInfo struct {
|
||||
// FileId file ID
|
||||
FileId string `json:"fileId"`
|
||||
// FileName file name
|
||||
FileName string `json:"fileName"`
|
||||
// IsFolder whether the item is a folder: 0 - no, 1 - yes
|
||||
IsFolder int `json:"isFolder"`
|
||||
// SrcParentId ID of the parent directory containing the file
|
||||
SrcParentId string `json:"srcParentId,omitempty"`
|
||||
|
||||
/* Conflict handling */
|
||||
// 1 -> 跳过 2 -> 保留 3 -> 覆盖
|
||||
DealWay int `json:"dealWay,omitempty"`
|
||||
IsConflict int `json:"isConflict,omitempty"`
|
||||
}
|
||||
|
||||
/* Upload section */
|
||||
type InitMultiUploadResp struct {
|
||||
//Code string `json:"code"`
|
||||
Data struct {
|
||||
UploadType int `json:"uploadType"`
|
||||
UploadHost string `json:"uploadHost"`
|
||||
UploadFileID string `json:"uploadFileId"`
|
||||
FileDataExists int `json:"fileDataExists"`
|
||||
} `json:"data"`
|
||||
}
|
||||
type UploadUrlsResp struct {
|
||||
Code string `json:"code"`
|
||||
Data map[string]UploadUrlsData `json:"uploadUrls"`
|
||||
}
|
||||
type UploadUrlsData struct {
|
||||
RequestURL string `json:"requestURL"`
|
||||
RequestHeader string `json:"requestHeader"`
|
||||
}
|
||||
|
||||
/* Second upload method */
|
||||
type CreateUploadFileResp struct {
|
||||
// Upload request ID
|
||||
UploadFileId int64 `json:"uploadFileId"`
|
||||
// URL to upload the file data to
|
||||
FileUploadUrl string `json:"fileUploadUrl"`
|
||||
// URL used to commit the upload once it completes
|
||||
FileCommitUrl string `json:"fileCommitUrl"`
|
||||
// Whether the file already exists in the cloud drive: 0 - no, 1 - yes
|
||||
FileDataExists int `json:"fileDataExists"`
|
||||
}
|
||||
|
||||
type GetUploadFileStatusResp struct {
|
||||
CreateUploadFileResp
|
||||
|
||||
// Size already uploaded
|
||||
DataSize int64 `json:"dataSize"`
|
||||
Size int64 `json:"size"`
|
||||
}
|
||||
|
||||
func (r *GetUploadFileStatusResp) GetSize() int64 {
|
||||
return r.DataSize + r.Size
|
||||
}
|
||||
|
||||
type CommitMultiUploadFileResp struct {
|
||||
File struct {
|
||||
UserFileID String `json:"userFileId"`
|
||||
FileName string `json:"fileName"`
|
||||
FileSize int64 `json:"fileSize"`
|
||||
FileMd5 string `json:"fileMd5"`
|
||||
CreateDate Time `json:"createDate"`
|
||||
} `json:"file"`
|
||||
}
|
||||
|
||||
type OldCommitUploadFileResp struct {
|
||||
XMLName xml.Name `xml:"file"`
|
||||
ID String `xml:"id"`
|
||||
Name string `xml:"name"`
|
||||
Size int64 `xml:"size"`
|
||||
Md5 string `xml:"md5"`
|
||||
CreateDate Time `xml:"createDate"`
|
||||
}
|
||||
|
||||
func (f *OldCommitUploadFileResp) toFile() *Cloud189File {
|
||||
return &Cloud189File{
|
||||
ID: f.ID,
|
||||
Name: f.Name,
|
||||
Size: f.Size,
|
||||
Md5: f.Md5,
|
||||
CreateDate: f.CreateDate,
|
||||
LastOpTime: f.CreateDate,
|
||||
}
|
||||
}
|
||||
|
||||
type CreateBatchTaskResp struct {
|
||||
TaskID string `json:"taskId"`
|
||||
}
|
||||
|
||||
type BatchTaskStateResp struct {
|
||||
FailedCount int `json:"failedCount"`
|
||||
Process int `json:"process"`
|
||||
SkipCount int `json:"skipCount"`
|
||||
SubTaskCount int `json:"subTaskCount"`
|
||||
SuccessedCount int `json:"successedCount"`
|
||||
SuccessedFileIDList []int64 `json:"successedFileIdList"`
|
||||
TaskID string `json:"taskId"`
|
||||
TaskStatus int `json:"taskStatus"` //1 初始化 2 存在冲突 3 执行中,4 完成
|
||||
}
|
||||
|
||||
type BatchTaskConflictTaskInfoResp struct {
|
||||
SessionKey string `json:"sessionKey"`
|
||||
TargetFolderID int `json:"targetFolderId"`
|
||||
TaskID string `json:"taskId"`
|
||||
TaskInfos []BatchTaskInfo
|
||||
TaskType int `json:"taskType"`
|
||||
}
|
@ -1,564 +0,0 @@
|
||||
package _189_tv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"github.com/skip2/go-qrcode"
|
||||
"io"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/drivers/base"
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
|
||||
"github.com/go-resty/resty/v2"
|
||||
"github.com/google/uuid"
|
||||
jsoniter "github.com/json-iterator/go"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
const (
|
||||
TVAppKey = "600100885"
|
||||
TVAppSignatureSecre = "fe5734c74c2f96a38157f420b32dc995"
|
||||
TvVersion = "6.5.5"
|
||||
AndroidTV = "FAMILY_TV"
|
||||
TvChannelId = "home02"
|
||||
|
||||
ApiUrl = "https://api.cloud.189.cn"
|
||||
)
|
||||
|
||||
func (y *Cloud189TV) SignatureHeader(url, method string, isFamily bool) map[string]string {
|
||||
dateOfGmt := getHttpDateStr()
|
||||
sessionKey := y.tokenInfo.SessionKey
|
||||
sessionSecret := y.tokenInfo.SessionSecret
|
||||
if isFamily {
|
||||
sessionKey = y.tokenInfo.FamilySessionKey
|
||||
sessionSecret = y.tokenInfo.FamilySessionSecret
|
||||
}
|
||||
|
||||
header := map[string]string{
|
||||
"Date": dateOfGmt,
|
||||
"SessionKey": sessionKey,
|
||||
"X-Request-ID": uuid.NewString(),
|
||||
"Signature": SessionKeySignatureOfHmac(sessionSecret, sessionKey, method, url, dateOfGmt),
|
||||
}
|
||||
return header
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) AppKeySignatureHeader(url, method string) map[string]string {
|
||||
tempTime := timestamp()
|
||||
header := map[string]string{
|
||||
"Timestamp": strconv.FormatInt(tempTime, 10),
|
||||
"X-Request-ID": uuid.NewString(),
|
||||
"AppKey": TVAppKey,
|
||||
"AppSignature": AppKeySignatureOfHmac(TVAppSignatureSecre, TVAppKey, method, url, tempTime),
|
||||
}
|
||||
return header
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) request(url, method string, callback base.ReqCallback, params map[string]string, resp interface{}, isFamily ...bool) ([]byte, error) {
|
||||
req := y.client.R().SetQueryParams(clientSuffix())
|
||||
|
||||
if params != nil {
|
||||
req.SetQueryParams(params)
|
||||
}
|
||||
|
||||
// Signature
|
||||
req.SetHeaders(y.SignatureHeader(url, method, isBool(isFamily...)))
|
||||
|
||||
var erron RespErr
|
||||
req.SetError(&erron)
|
||||
|
||||
if callback != nil {
|
||||
callback(req)
|
||||
}
|
||||
if resp != nil {
|
||||
req.SetResult(resp)
|
||||
}
|
||||
res, err := req.Execute(method, url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if strings.Contains(res.String(), "userSessionBO is null") ||
|
||||
strings.Contains(res.String(), "InvalidSessionKey") {
|
||||
return nil, errors.New("session expired")
|
||||
}
|
||||
|
||||
// Handle API-level errors
|
||||
if erron.HasError() {
|
||||
return nil, &erron
|
||||
}
|
||||
return res.Body(), nil
|
||||
}
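// Usage sketch: get and post below are thin wrappers around request, so callers
// typically attach the context and per-call query parameters via the resty callback,
// e.g. (mirroring getFiles further down):
//
//	var resp Cloud189FilesResp
//	_, err := y.get(fullUrl, func(r *resty.Request) {
//		r.SetContext(ctx)
//		r.SetQueryParam("folderId", fileId)
//	}, &resp, isFamily)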
|
||||
|
||||
func (y *Cloud189TV) get(url string, callback base.ReqCallback, resp interface{}, isFamily ...bool) ([]byte, error) {
|
||||
return y.request(url, http.MethodGet, callback, nil, resp, isFamily...)
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) post(url string, callback base.ReqCallback, resp interface{}, isFamily ...bool) ([]byte, error) {
|
||||
return y.request(url, http.MethodPost, callback, nil, resp, isFamily...)
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) put(ctx context.Context, url string, headers map[string]string, sign bool, file io.Reader, isFamily bool) ([]byte, error) {
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, file)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
query := req.URL.Query()
|
||||
for key, value := range clientSuffix() {
|
||||
query.Add(key, value)
|
||||
}
|
||||
req.URL.RawQuery = query.Encode()
|
||||
|
||||
for key, value := range headers {
|
||||
req.Header.Add(key, value)
|
||||
}
|
||||
|
||||
if sign {
|
||||
for key, value := range y.SignatureHeader(url, http.MethodPut, isFamily) {
|
||||
req.Header.Add(key, value)
|
||||
}
|
||||
}
|
||||
|
||||
resp, err := base.HttpClient.Do(req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var erron RespErr
|
||||
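// The response body may be JSON or XML; try both decoders and ignore their
// individual errors, then let HasError decide whether the call actually failed.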
jsoniter.Unmarshal(body, &erron)
|
||||
xml.Unmarshal(body, &erron)
|
||||
if erron.HasError() {
|
||||
return nil, &erron
|
||||
}
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
return nil, errors.Errorf("put fail,err:%s", string(body))
|
||||
}
|
||||
return body, nil
|
||||
}
|
||||
func (y *Cloud189TV) getFiles(ctx context.Context, fileId string, isFamily bool) ([]model.Obj, error) {
|
||||
fullUrl := ApiUrl
|
||||
if isFamily {
|
||||
fullUrl += "/family/file"
|
||||
}
|
||||
fullUrl += "/listFiles.action"
|
||||
|
||||
res := make([]model.Obj, 0, 130)
|
||||
for pageNum := 1; ; pageNum++ {
|
||||
var resp Cloud189FilesResp
|
||||
_, err := y.get(fullUrl, func(r *resty.Request) {
|
||||
r.SetContext(ctx)
|
||||
r.SetQueryParams(map[string]string{
|
||||
"folderId": fileId,
|
||||
"fileType": "0",
|
||||
"mediaAttr": "0",
|
||||
"iconOption": "5",
|
||||
"pageNum": fmt.Sprint(pageNum),
|
||||
"pageSize": "130",
|
||||
})
|
||||
if isFamily {
|
||||
r.SetQueryParams(map[string]string{
|
||||
"familyId": y.FamilyID,
|
||||
"orderBy": toFamilyOrderBy(y.OrderBy),
|
||||
"descending": toDesc(y.OrderDirection),
|
||||
})
|
||||
} else {
|
||||
r.SetQueryParams(map[string]string{
|
||||
"recursive": "0",
|
||||
"orderBy": y.OrderBy,
|
||||
"descending": toDesc(y.OrderDirection),
|
||||
})
|
||||
}
|
||||
}, &resp, isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// No more entries, stop paging
|
||||
if resp.FileListAO.Count == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
for i := 0; i < len(resp.FileListAO.FolderList); i++ {
|
||||
res = append(res, &resp.FileListAO.FolderList[i])
|
||||
}
|
||||
for i := 0; i < len(resp.FileListAO.FileList); i++ {
|
||||
res = append(res, &resp.FileListAO.FileList[i])
|
||||
}
|
||||
}
|
||||
return res, nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) login() (err error) {
|
||||
req := y.client.R().SetQueryParams(clientSuffix())
|
||||
var erron RespErr
|
||||
var tokenInfo AppSessionResp
|
||||
if y.Addition.AccessToken == "" {
|
||||
if y.Addition.TempUuid == "" {
|
||||
// Fetch login parameters (QR code UUID)
|
||||
var uuidInfo UuidInfoResp
|
||||
req.SetResult(&uuidInfo).SetError(&erron)
|
||||
// Signature
|
||||
req.SetHeaders(y.AppKeySignatureHeader(ApiUrl+"/family/manage/getQrCodeUUID.action",
|
||||
http.MethodGet))
|
||||
_, err = req.Execute(http.MethodGet, ApiUrl+"/family/manage/getQrCodeUUID.action")
|
||||
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
if erron.HasError() {
|
||||
return &erron
|
||||
}
|
||||
|
||||
if uuidInfo.Uuid == "" {
|
||||
return errors.New("uuidInfo is empty")
|
||||
}
|
||||
y.Addition.TempUuid = uuidInfo.Uuid
|
||||
op.MustSaveDriverStorage(y)
|
||||
|
||||
// Render the QR code for the user to scan
|
||||
qrTemplate := `<body>
|
||||
<img src="data:image/jpeg;base64,%s"/>
|
||||
<br>Or Click here: <a href="%s">%s</a>
|
||||
</body>`
|
||||
|
||||
// Generate QR code
|
||||
qrCode, err := qrcode.Encode(uuidInfo.Uuid, qrcode.Medium, 256)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to generate QR code: %v", err)
|
||||
}
|
||||
|
||||
// Encode QR code to base64
|
||||
qrCodeBase64 := base64.StdEncoding.EncodeToString(qrCode)
|
||||
|
||||
// Create the HTML page
|
||||
qrPage := fmt.Sprintf(qrTemplate, qrCodeBase64, uuidInfo.Uuid, uuidInfo.Uuid)
|
||||
return fmt.Errorf("need verify: \n%s", qrPage)
|
||||
|
||||
} else {
|
||||
var accessTokenResp E189AccessTokenResp
|
||||
req.SetResult(&accessTokenResp).SetError(&erron)
|
||||
// Signature
|
||||
req.SetHeaders(y.AppKeySignatureHeader(ApiUrl+"/family/manage/qrcodeLoginResult.action",
|
||||
http.MethodGet))
|
||||
req.SetQueryParam("uuid", y.Addition.TempUuid)
|
||||
_, err = req.Execute(http.MethodGet, ApiUrl+"/family/manage/qrcodeLoginResult.action")
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
if erron.HasError() {
|
||||
return &erron
|
||||
}
|
||||
if accessTokenResp.E189AccessToken == "" {
|
||||
return errors.New("E189AccessToken is empty")
|
||||
}
|
||||
y.Addition.AccessToken = accessTokenResp.E189AccessToken
|
||||
y.Addition.TempUuid = ""
|
||||
}
|
||||
}
|
||||
// Obtain SessionKey and SessionSecret
|
||||
reqb := y.client.R().SetQueryParams(clientSuffix())
|
||||
reqb.SetResult(&tokenInfo).SetError(&erron)
|
||||
// Signature
|
||||
reqb.SetHeaders(y.AppKeySignatureHeader(ApiUrl+"/family/manage/loginFamilyMerge.action",
|
||||
http.MethodGet))
|
||||
reqb.SetQueryParam("e189AccessToken", y.Addition.AccessToken)
|
||||
_, err = reqb.Execute(http.MethodGet, ApiUrl+"/family/manage/loginFamilyMerge.action")
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if erron.HasError() {
|
||||
return &erron
|
||||
}
|
||||
|
||||
y.tokenInfo = &tokenInfo
|
||||
op.MustSaveDriverStorage(y)
|
||||
return
|
||||
}
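// Login flow in short: with no AccessToken and no TempUuid the driver fetches a QR
// UUID, stores it and returns the QR page as an error for the user to scan; on the
// next attempt the stored TempUuid is exchanged for an E189AccessToken via
// qrcodeLoginResult.action, and loginFamilyMerge.action finally converts that token
// into the personal and family session keys kept in y.tokenInfo.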
|
||||
|
||||
func (y *Cloud189TV) RapidUpload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, isFamily bool, overwrite bool) (model.Obj, error) {
|
||||
fileMd5 := stream.GetHash().GetHash(utils.MD5)
|
||||
if len(fileMd5) < utils.MD5.Width {
|
||||
return nil, errors.New("invalid hash")
|
||||
}
|
||||
|
||||
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, stream.GetName(), fmt.Sprint(stream.GetSize()), isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if uploadInfo.FileDataExists != 1 {
|
||||
return nil, errors.New("rapid upload fail")
|
||||
}
|
||||
|
||||
return y.OldUploadCommit(ctx, uploadInfo.FileCommitUrl, uploadInfo.UploadFileId, isFamily, overwrite)
|
||||
}
|
||||
|
||||
// Legacy upload; the family cloud does not support overwriting
|
||||
func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
|
||||
tempFile, err := file.CacheFullInTempFile()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fileMd5, err := utils.HashFile(utils.MD5, tempFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Create the upload session
|
||||
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, file.GetName(), fmt.Sprint(file.GetSize()), isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// The file is not in the cloud drive yet, start uploading
|
||||
status := GetUploadFileStatusResp{CreateUploadFileResp: *uploadInfo}
|
||||
for status.GetSize() < file.GetSize() && status.FileDataExists != 1 {
|
||||
if utils.IsCanceled(ctx) {
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
|
||||
header := map[string]string{
|
||||
"ResumePolicy": "1",
|
||||
"Expect": "100-continue",
|
||||
}
|
||||
|
||||
if isFamily {
|
||||
header["FamilyId"] = fmt.Sprint(y.FamilyID)
|
||||
header["UploadFileId"] = fmt.Sprint(status.UploadFileId)
|
||||
} else {
|
||||
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
|
||||
}
|
||||
|
||||
_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile), isFamily)
|
||||
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Query the resume (checkpoint) status
|
||||
fullUrl := ApiUrl + "/getUploadFileStatus.action"
|
||||
if y.isFamily() {
|
||||
fullUrl = ApiUrl + "/family/file/getFamilyFileStatus.action"
|
||||
}
|
||||
_, err = y.get(fullUrl, func(req *resty.Request) {
|
||||
req.SetContext(ctx).SetQueryParams(map[string]string{
|
||||
"uploadFileId": fmt.Sprint(status.UploadFileId),
|
||||
"resumePolicy": "1",
|
||||
})
|
||||
if isFamily {
|
||||
req.SetQueryParam("familyId", fmt.Sprint(y.FamilyID))
|
||||
}
|
||||
}, &status, isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if _, err := tempFile.Seek(status.GetSize(), io.SeekStart); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
up(float64(status.GetSize()) / float64(file.GetSize()) * 100)
|
||||
}
|
||||
|
||||
return y.OldUploadCommit(ctx, status.FileCommitUrl, status.UploadFileId, isFamily, overwrite)
|
||||
}
|
||||
|
||||
// Create an upload session
|
||||
func (y *Cloud189TV) OldUploadCreate(ctx context.Context, parentID string, fileMd5, fileName, fileSize string, isFamily bool) (*CreateUploadFileResp, error) {
|
||||
var uploadInfo CreateUploadFileResp
|
||||
|
||||
fullUrl := ApiUrl + "/createUploadFile.action"
|
||||
if isFamily {
|
||||
fullUrl = ApiUrl + "/family/file/createFamilyFile.action"
|
||||
}
|
||||
_, err := y.post(fullUrl, func(req *resty.Request) {
|
||||
req.SetContext(ctx)
|
||||
if isFamily {
|
||||
req.SetQueryParams(map[string]string{
|
||||
"familyId": y.FamilyID,
|
||||
"parentId": parentID,
|
||||
"fileMd5": fileMd5,
|
||||
"fileName": fileName,
|
||||
"fileSize": fileSize,
|
||||
"resumePolicy": "1",
|
||||
})
|
||||
} else {
|
||||
req.SetFormData(map[string]string{
|
||||
"parentFolderId": parentID,
|
||||
"fileName": fileName,
|
||||
"size": fileSize,
|
||||
"md5": fileMd5,
|
||||
"opertype": "3",
|
||||
"flag": "1",
|
||||
"resumePolicy": "1",
|
||||
"isLog": "0",
|
||||
})
|
||||
}
|
||||
}, &uploadInfo, isFamily)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &uploadInfo, nil
|
||||
}
|
||||
|
||||
// Commit the uploaded file
|
||||
func (y *Cloud189TV) OldUploadCommit(ctx context.Context, fileCommitUrl string, uploadFileID int64, isFamily bool, overwrite bool) (model.Obj, error) {
|
||||
var resp OldCommitUploadFileResp
|
||||
_, err := y.post(fileCommitUrl, func(req *resty.Request) {
|
||||
req.SetContext(ctx)
|
||||
if isFamily {
|
||||
req.SetHeaders(map[string]string{
|
||||
"ResumePolicy": "1",
|
||||
"UploadFileId": fmt.Sprint(uploadFileID),
|
||||
"FamilyId": fmt.Sprint(y.FamilyID),
|
||||
})
|
||||
} else {
|
||||
req.SetFormData(map[string]string{
|
||||
"opertype": IF(overwrite, "3", "1"),
|
||||
"resumePolicy": "1",
|
||||
"uploadFileId": fmt.Sprint(uploadFileID),
|
||||
"isLog": "0",
|
||||
})
|
||||
}
|
||||
}, &resp, isFamily)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return resp.toFile(), nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) isFamily() bool {
|
||||
return y.Type == "family"
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) isLogin() bool {
|
||||
if y.tokenInfo == nil {
|
||||
return false
|
||||
}
|
||||
_, err := y.get(ApiUrl+"/getUserInfo.action", nil, nil)
|
||||
return err == nil
|
||||
}
|
||||
|
||||
// Get information about all family-cloud members
|
||||
func (y *Cloud189TV) getFamilyInfoList() ([]FamilyInfoResp, error) {
|
||||
var resp FamilyInfoListResp
|
||||
_, err := y.get(ApiUrl+"/family/manage/getFamilyList.action", nil, &resp, true)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return resp.FamilyInfoResp, nil
|
||||
}
|
||||
|
||||
// Resolve the family cloud ID
|
||||
func (y *Cloud189TV) getFamilyID() (string, error) {
|
||||
infos, err := y.getFamilyInfoList()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if len(infos) == 0 {
|
||||
return "", fmt.Errorf("cannot get automatically,please input family_id")
|
||||
}
|
||||
for _, info := range infos {
|
||||
if strings.Contains(y.tokenInfo.LoginName, info.RemarkName) {
|
||||
return fmt.Sprint(info.FamilyID), nil
|
||||
}
|
||||
}
|
||||
return fmt.Sprint(infos[0].FamilyID), nil
|
||||
}
|
||||
|
||||
func (y *Cloud189TV) CreateBatchTask(aType string, familyID string, targetFolderId string, other map[string]string, taskInfos ...BatchTaskInfo) (*CreateBatchTaskResp, error) {
|
||||
var resp CreateBatchTaskResp
|
||||
_, err := y.post(ApiUrl+"/batch/createBatchTask.action", func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"type": aType,
|
||||
"taskInfos": MustString(utils.Json.MarshalToString(taskInfos)),
|
||||
})
|
||||
if targetFolderId != "" {
|
||||
req.SetFormData(map[string]string{"targetFolderId": targetFolderId})
|
||||
}
|
||||
if familyID != "" {
|
||||
req.SetFormData(map[string]string{"familyId": familyID})
|
||||
}
|
||||
req.SetFormData(other)
|
||||
}, &resp, familyID != "")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &resp, nil
|
||||
}
|
||||
|
||||
// Check the batch task status
|
||||
func (y *Cloud189TV) CheckBatchTask(aType string, taskID string) (*BatchTaskStateResp, error) {
|
||||
var resp BatchTaskStateResp
|
||||
_, err := y.post(ApiUrl+"/batch/checkBatchTask.action", func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"type": aType,
|
||||
"taskId": taskID,
|
||||
})
|
||||
}, &resp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &resp, nil
|
||||
}
|
||||
|
||||
// Get information about conflicting task items
|
||||
func (y *Cloud189TV) GetConflictTaskInfo(aType string, taskID string) (*BatchTaskConflictTaskInfoResp, error) {
|
||||
var resp BatchTaskConflictTaskInfoResp
|
||||
_, err := y.post(ApiUrl+"/batch/getConflictTaskInfo.action", func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"type": aType,
|
||||
"taskId": taskID,
|
||||
})
|
||||
}, &resp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &resp, nil
|
||||
}
|
||||
|
||||
// Resolve conflicts
|
||||
func (y *Cloud189TV) ManageBatchTask(aType string, taskID string, targetFolderId string, taskInfos ...BatchTaskInfo) error {
|
||||
_, err := y.post(ApiUrl+"/batch/manageBatchTask.action", func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"targetFolderId": targetFolderId,
|
||||
"type": aType,
|
||||
"taskId": taskID,
|
||||
"taskInfos": MustString(utils.Json.MarshalToString(taskInfos)),
|
||||
})
|
||||
}, nil)
|
||||
return err
|
||||
}
|
||||
|
||||
var ErrIsConflict = errors.New("there is a conflict with the target object")
|
||||
|
||||
// Wait for the batch task to finish
|
||||
func (y *Cloud189TV) WaitBatchTask(aType string, taskID string, t time.Duration) error {
|
||||
for {
|
||||
state, err := y.CheckBatchTask(aType, taskID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
switch state.TaskStatus {
|
||||
case 2:
|
||||
return ErrIsConflict
|
||||
case 4:
|
||||
return nil
|
||||
}
|
||||
time.Sleep(t)
|
||||
}
|
||||
}
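// Sketch of a typical batch-task round trip built from the helpers above (the task
// type string and polling interval are illustrative, not confirmed constants):
//
//	task, err := y.CreateBatchTask("MOVE", "", dstDir.GetID(), nil, BatchTaskInfo{
//		FileId:   srcObj.GetID(),
//		FileName: srcObj.GetName(),
//		IsFolder: BoolToNumber(srcObj.IsDir()),
//	})
//	if err == nil {
//		err = y.WaitBatchTask("MOVE", task.TaskID, time.Second)
//	}
//	if err == ErrIsConflict {
//		// inspect GetConflictTaskInfo and resolve via ManageBatchTask
//	}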
|
@ -322,7 +322,7 @@ func (y *Cloud189PC) login() (err error) {
|
||||
_, err = y.client.R().
|
||||
SetResult(&tokenInfo).SetError(&erron).
|
||||
SetQueryParams(clientSuffix()).
|
||||
SetQueryParam("redirectURL", loginresp.ToUrl).
|
||||
SetQueryParam("redirectURL", url.QueryEscape(loginresp.ToUrl)).
|
||||
Post(API_URL + "/getSessionForPC.action")
|
||||
if err != nil {
|
||||
return
|
||||
@ -504,6 +504,7 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
|
||||
retry.Attempts(3),
|
||||
retry.Delay(time.Second),
|
||||
retry.DelayType(retry.BackOffDelay))
|
||||
threadG.SetLimit(3)
|
||||
|
||||
count := int(size / sliceSize)
|
||||
lastPartSize := size % sliceSize
|
||||
|
@ -10,7 +10,6 @@ import (
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/123_share"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/139"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/189"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/189_tv"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/189pc"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/alias"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/aliyundrive"
|
||||
@ -19,6 +18,7 @@ import (
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/azure_blob"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/baidu_netdisk"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/baidu_photo"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/baidu_share"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/chaoxing"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/cloudreve"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/cloudreve_v4"
|
||||
@ -50,9 +50,9 @@ import (
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/openlist"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/pikpak"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/pikpak_share"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/quark_open"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/quark_uc"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/quark_uc_tv"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/quqi"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/s3"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/seafile"
|
||||
_ "github.com/OpenListTeam/OpenList/drivers/sftp"
|
||||
|
@ -295,6 +295,7 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
|
||||
retry.Attempts(1),
|
||||
retry.Delay(time.Second),
|
||||
retry.DelayType(retry.BackOffDelay))
|
||||
threadG.SetLimit(3)
|
||||
|
||||
for i, partseq := range precreateResp.BlockList {
|
||||
if utils.IsCanceled(upCtx) {
|
||||
|
@ -342,6 +342,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
|
||||
retry.Attempts(3),
|
||||
retry.Delay(time.Second),
|
||||
retry.DelayType(retry.BackOffDelay))
|
||||
threadG.SetLimit(3)
|
||||
|
||||
for i, partseq := range precreateResp.BlockList {
|
||||
if utils.IsCanceled(upCtx) {
|
||||
|
251
drivers/baidu_share/driver.go
Normal file
@ -0,0 +1,251 @@
|
||||
package baidu_share
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/go-resty/resty/v2"
|
||||
)
|
||||
|
||||
type BaiduShare struct {
|
||||
model.Storage
|
||||
Addition
|
||||
client *resty.Client
|
||||
info struct {
|
||||
Root string
|
||||
Seckey string
|
||||
Shareid string
|
||||
Uk string
|
||||
}
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Config() driver.Config {
|
||||
return config
|
||||
}
|
||||
|
||||
func (d *BaiduShare) GetAddition() driver.Additional {
|
||||
return &d.Addition
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Init(ctx context.Context) error {
|
||||
// TODO login / refresh token
|
||||
//op.MustSaveDriverStorage(d)
|
||||
d.client = resty.New().
|
||||
SetBaseURL("https://pan.baidu.com").
|
||||
SetHeader("User-Agent", "netdisk").
|
||||
SetCookie(&http.Cookie{Name: "BDUSS", Value: d.BDUSS}).
|
||||
SetCookie(&http.Cookie{Name: "ndut_fmt"})
|
||||
respJson := struct {
|
||||
Errno int64 `json:"errno"`
|
||||
Data struct {
|
||||
List [1]struct {
|
||||
Path string `json:"path"`
|
||||
} `json:"list"`
|
||||
Uk json.Number `json:"uk"`
|
||||
Shareid json.Number `json:"shareid"`
|
||||
Seckey string `json:"seckey"`
|
||||
} `json:"data"`
|
||||
}{}
|
||||
resp, err := d.client.R().
|
||||
SetBody(url.Values{
|
||||
"pwd": {d.Pwd},
|
||||
"root": {"1"},
|
||||
"shorturl": {d.Surl},
|
||||
}.Encode()).
|
||||
SetResult(&respJson).
|
||||
Post("share/wxlist?channel=weixin&version=2.2.2&clienttype=25&web=1")
|
||||
if err == nil {
|
||||
if resp.IsSuccess() && respJson.Errno == 0 {
|
||||
d.info.Root = path.Dir(respJson.Data.List[0].Path)
|
||||
d.info.Seckey = respJson.Data.Seckey
|
||||
d.info.Shareid = respJson.Data.Shareid.String()
|
||||
d.info.Uk = respJson.Data.Uk.String()
|
||||
} else {
|
||||
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Drop(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *BaiduShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
|
||||
// TODO return the files list, required
|
||||
reqDir := dir.GetPath()
|
||||
isRoot := "0"
|
||||
if reqDir == d.RootFolderPath {
|
||||
reqDir = path.Join(d.info.Root, reqDir)
|
||||
}
|
||||
if reqDir == d.info.Root {
|
||||
isRoot = "1"
|
||||
}
|
||||
objs := []model.Obj{}
|
||||
var err error
|
||||
var page uint64 = 1
|
||||
more := true
|
||||
for more && err == nil {
|
||||
respJson := struct {
|
||||
Errno int64 `json:"errno"`
|
||||
Data struct {
|
||||
More bool `json:"has_more"`
|
||||
List []struct {
|
||||
Fsid json.Number `json:"fs_id"`
|
||||
Isdir json.Number `json:"isdir"`
|
||||
Path string `json:"path"`
|
||||
Name string `json:"server_filename"`
|
||||
Mtime json.Number `json:"server_mtime"`
|
||||
Size json.Number `json:"size"`
|
||||
} `json:"list"`
|
||||
} `json:"data"`
|
||||
}{}
|
||||
resp, e := d.client.R().
|
||||
SetBody(url.Values{
|
||||
"dir": {reqDir},
|
||||
"num": {"1000"},
|
||||
"order": {"time"},
|
||||
"page": {fmt.Sprint(page)},
|
||||
"pwd": {d.Pwd},
|
||||
"root": {isRoot},
|
||||
"shorturl": {d.Surl},
|
||||
}.Encode()).
|
||||
SetResult(&respJson).
|
||||
Post("share/wxlist?channel=weixin&version=2.2.2&clienttype=25&web=1")
|
||||
err = e
|
||||
if err == nil {
|
||||
if resp.IsSuccess() && respJson.Errno == 0 {
|
||||
page++
|
||||
more = respJson.Data.More
|
||||
for _, v := range respJson.Data.List {
|
||||
size, _ := v.Size.Int64()
|
||||
mtime, _ := v.Mtime.Int64()
|
||||
objs = append(objs, &model.Object{
|
||||
ID: v.Fsid.String(),
|
||||
Path: v.Path,
|
||||
Name: v.Name,
|
||||
Size: size,
|
||||
Modified: time.Unix(mtime, 0),
|
||||
IsFolder: v.Isdir.String() == "1",
|
||||
})
|
||||
}
|
||||
} else {
|
||||
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
|
||||
}
|
||||
}
|
||||
}
|
||||
return objs, err
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||
// TODO return link of file, required
|
||||
link := model.Link{Header: d.client.Header}
|
||||
sign := ""
|
||||
stamp := ""
|
||||
signJson := struct {
|
||||
Errno int64 `json:"errno"`
|
||||
Data struct {
|
||||
Stamp json.Number `json:"timestamp"`
|
||||
Sign string `json:"sign"`
|
||||
} `json:"data"`
|
||||
}{}
|
||||
resp, err := d.client.R().
|
||||
SetQueryParam("surl", d.Surl).
|
||||
SetResult(&signJson).
|
||||
Get("share/tplconfig?fields=sign,timestamp&channel=chunlei&web=1&app_id=250528&clienttype=0")
|
||||
if err == nil {
|
||||
if resp.IsSuccess() && signJson.Errno == 0 {
|
||||
stamp = signJson.Data.Stamp.String()
|
||||
sign = signJson.Data.Sign
|
||||
} else {
|
||||
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
|
||||
}
|
||||
}
|
||||
if err == nil {
|
||||
respJson := struct {
|
||||
Errno int64 `json:"errno"`
|
||||
List [1]struct {
|
||||
Dlink string `json:"dlink"`
|
||||
} `json:"list"`
|
||||
}{}
|
||||
resp, err = d.client.R().
|
||||
SetQueryParam("sign", sign).
|
||||
SetQueryParam("timestamp", stamp).
|
||||
SetBody(url.Values{
|
||||
"encrypt": {"0"},
|
||||
"extra": {fmt.Sprintf(`{"sekey":"%s"}`, d.info.Seckey)},
|
||||
"fid_list": {fmt.Sprintf("[%s]", file.GetID())},
|
||||
"primaryid": {d.info.Shareid},
|
||||
"product": {"share"},
|
||||
"type": {"nolimit"},
|
||||
"uk": {d.info.Uk},
|
||||
}.Encode()).
|
||||
SetResult(&respJson).
|
||||
Post("api/sharedownload?app_id=250528&channel=chunlei&clienttype=12&web=1")
|
||||
if err == nil {
|
||||
if resp.IsSuccess() && respJson.Errno == 0 && respJson.List[0].Dlink != "" {
|
||||
link.URL = respJson.List[0].Dlink
|
||||
} else {
|
||||
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
|
||||
}
|
||||
}
|
||||
if err == nil {
|
||||
resp, err = d.client.R().
|
||||
SetDoNotParseResponse(true).
|
||||
Get(link.URL)
|
||||
if err == nil {
|
||||
defer resp.RawBody().Close()
|
||||
if resp.IsError() {
|
||||
byt, _ := io.ReadAll(resp.RawBody())
|
||||
err = fmt.Errorf(" %s; %s; ", resp.Status(), byt)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return &link, err
|
||||
}
|
||||
|
||||
func (d *BaiduShare) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
|
||||
// TODO create folder, optional
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
|
||||
// TODO move obj, optional
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
|
||||
// TODO rename obj, optional
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
|
||||
// TODO copy obj, optional
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Remove(ctx context.Context, obj model.Obj) error {
|
||||
// TODO remove obj, optional
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
func (d *BaiduShare) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
|
||||
// TODO upload file, optional
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
|
||||
// return nil, errs.NotSupport
|
||||
//}
|
||||
|
||||
var _ driver.Driver = (*BaiduShare)(nil)
|
37
drivers/baidu_share/meta.go
Normal file
@ -0,0 +1,37 @@
|
||||
package baidu_share
|
||||
|
||||
import (
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
)
|
||||
|
||||
type Addition struct {
|
||||
// Usually one of two
|
||||
driver.RootPath
|
||||
// driver.RootID
|
||||
// define other
|
||||
// Field string `json:"field" type:"select" required:"true" options:"a,b,c" default:"a"`
|
||||
Surl string `json:"surl"`
|
||||
Pwd string `json:"pwd"`
|
||||
BDUSS string `json:"BDUSS"`
|
||||
}
|
||||
|
||||
var config = driver.Config{
|
||||
Name: "BaiduShare",
|
||||
LocalSort: true,
|
||||
OnlyLocal: false,
|
||||
OnlyProxy: false,
|
||||
NoCache: false,
|
||||
NoUpload: true,
|
||||
NeedMs: false,
|
||||
DefaultRoot: "/",
|
||||
CheckStatus: false,
|
||||
Alert: "",
|
||||
NoOverwriteUpload: false,
|
||||
}
|
||||
|
||||
func init() {
|
||||
op.RegisterDriver(func() driver.Driver {
|
||||
return &BaiduShare{}
|
||||
})
|
||||
}
|
1
drivers/baidu_share/types.go
Normal file
@ -0,0 +1 @@
|
||||
package baidu_share
|
3
drivers/baidu_share/util.go
Normal file
@ -0,0 +1,3 @@
|
||||
package baidu_share
|
||||
|
||||
// do others that not defined in Driver interface
|
@ -173,12 +173,13 @@ func (d *CloudreveV4) Move(ctx context.Context, srcObj, dstDir model.Obj) error
|
||||
}
|
||||
|
||||
func (d *CloudreveV4) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
|
||||
return d.request(http.MethodPost, "/file/rename", func(req *resty.Request) {
|
||||
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
|
||||
req.SetBody(base.Json{
|
||||
"new_name": newName,
|
||||
"uri": srcObj.GetPath(),
|
||||
})
|
||||
}, nil)
|
||||
|
||||
}
|
||||
|
||||
func (d *CloudreveV4) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
|
||||
|
@ -175,7 +175,8 @@ func (d *CloudreveV4) doLogin(needCaptcha bool) error {
|
||||
}
|
||||
|
||||
func (d *CloudreveV4) refreshToken() error {
|
||||
if d.RefreshToken == "" {
|
||||
var token Token
|
||||
if token.RefreshToken == "" {
|
||||
if d.Username != "" {
|
||||
err := d.login()
|
||||
if err != nil {
|
||||
@ -184,7 +185,6 @@ func (d *CloudreveV4) refreshToken() error {
|
||||
}
|
||||
return nil
|
||||
}
|
||||
var token Token
|
||||
err := d.request(http.MethodPost, "/session/token/refresh", func(req *resty.Request) {
|
||||
req.SetBody(base.Json{
|
||||
"refresh_token": d.RefreshToken,
|
||||
@ -469,7 +469,7 @@ func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileU
|
||||
}
|
||||
|
||||
// Send the callback request after a successful upload
|
||||
return d.request(http.MethodGet, "/callback/s3/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
|
||||
return d.request(http.MethodPost, "/callback/s3/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
|
||||
req.SetBody("{}")
|
||||
}, nil)
|
||||
}
|
||||
|
@ -145,7 +145,7 @@ func (d *Doubao) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
|
||||
}
|
||||
|
||||
// Build a standards-compliant Content-Disposition header
|
||||
contentDisposition := utils.GenerateContentDisposition(u.Name)
|
||||
contentDisposition := generateContentDisposition(u.Name)
|
||||
|
||||
return &model.Link{
|
||||
URL: downloadUrl,
|
||||
|
@ -926,6 +926,36 @@ func getSigningKey(secretKey, dateStamp, region, service string) []byte {
|
||||
return kSigning
|
||||
}
|
||||
|
||||
// generateContentDisposition builds a Content-Disposition header that conforms to RFC 5987
|
||||
func generateContentDisposition(filename string) string {
|
||||
// Encode per RFC 2047 for the filename parameter
|
||||
encodedName := urlEncode(filename)
|
||||
|
||||
// Encode per RFC 5987 for the filename* parameter
|
||||
encodedNameRFC5987 := encodeRFC5987(filename)
|
||||
|
||||
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
|
||||
encodedName, encodedNameRFC5987)
|
||||
}
|
||||
|
||||
// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
|
||||
func encodeRFC5987(s string) string {
|
||||
var buf strings.Builder
|
||||
for _, r := range []byte(s) {
|
||||
// Per RFC 5987, only letters, digits and a few special characters may stay unencoded
|
||||
if (r >= 'a' && r <= 'z') ||
|
||||
(r >= 'A' && r <= 'Z') ||
|
||||
(r >= '0' && r <= '9') ||
|
||||
r == '-' || r == '.' || r == '_' || r == '~' {
|
||||
buf.WriteByte(r)
|
||||
} else {
|
||||
// Every other byte must be percent-encoded
|
||||
fmt.Fprintf(&buf, "%%%02X", r)
|
||||
}
|
||||
}
|
||||
return buf.String()
|
||||
}
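// Example of the header produced for a non-ASCII filename (encodings computed from
// the two helpers above):
//
//	generateContentDisposition("报告.pdf")
//	// attachment; filename="%E6%8A%A5%E5%91%8A.pdf"; filename*=utf-8''%E6%8A%A5%E5%91%8A.pdf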
|
||||
|
||||
func randomString() string {
|
||||
const charset = "0123456789abcdefghijklmnopqrstuvwxyz"
|
||||
const length = 11 // 11-character random string
|
||||
|
@ -9,7 +9,6 @@ import (
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
"github.com/go-resty/resty/v2"
|
||||
)
|
||||
|
||||
@ -106,7 +105,7 @@ func (d *DoubaoShare) Link(ctx context.Context, file model.Obj, args model.LinkA
|
||||
}
|
||||
|
||||
// Build a standards-compliant Content-Disposition header
|
||||
contentDisposition := utils.GenerateContentDisposition(u.Name)
|
||||
contentDisposition := generateContentDisposition(u.Name)
|
||||
|
||||
return &model.Link{
|
||||
URL: downloadUrl,
|
||||
|
@ -5,6 +5,7 @@ import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path"
|
||||
"regexp"
|
||||
"strings"
|
||||
@ -706,3 +707,39 @@ func (d *DoubaoShare) listVirtualDirectoryContent(dir model.Obj) ([]model.Obj, e
|
||||
|
||||
return objects, nil
|
||||
}
|
||||
|
||||
// generateContentDisposition builds a Content-Disposition header that conforms to RFC 5987
|
||||
func generateContentDisposition(filename string) string {
|
||||
// Encode per RFC 2047 for the filename parameter
|
||||
encodedName := urlEncode(filename)
|
||||
|
||||
// Encode per RFC 5987 for the filename* parameter
|
||||
encodedNameRFC5987 := encodeRFC5987(filename)
|
||||
|
||||
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
|
||||
encodedName, encodedNameRFC5987)
|
||||
}
|
||||
|
||||
// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
|
||||
func encodeRFC5987(s string) string {
|
||||
var buf strings.Builder
|
||||
for _, r := range []byte(s) {
|
||||
// Per RFC 5987, only letters, digits and a few special characters may stay unencoded
|
||||
if (r >= 'a' && r <= 'z') ||
|
||||
(r >= 'A' && r <= 'Z') ||
|
||||
(r >= '0' && r <= '9') ||
|
||||
r == '-' || r == '.' || r == '_' || r == '~' {
|
||||
buf.WriteByte(r)
|
||||
} else {
|
||||
// Every other byte must be percent-encoded
|
||||
fmt.Fprintf(&buf, "%%%02X", r)
|
||||
}
|
||||
}
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
func urlEncode(s string) string {
|
||||
s = url.QueryEscape(s)
|
||||
s = strings.ReplaceAll(s, "+", "%20")
|
||||
return s
|
||||
}
|
||||
|
@ -5,14 +5,19 @@ import (
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
)
|
||||
|
||||
const (
|
||||
DefaultClientID = "76lrwrklhdn1icb"
|
||||
)
|
||||
|
||||
type Addition struct {
|
||||
RefreshToken string `json:"refresh_token" required:"true"`
|
||||
driver.RootPath
|
||||
UseOnlineAPI bool `json:"use_online_api" default:"false"`
|
||||
APIAddress string `json:"api_url_address" default:"https://api.oplist.org/dropboxs/renewapi"`
|
||||
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
|
||||
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
|
||||
|
||||
OauthTokenURL string `json:"oauth_token_url" default:"https://api.oplist.org/dropboxs/renewapi"` // TODO: replace
|
||||
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
|
||||
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
|
||||
|
||||
AccessToken string
|
||||
RefreshToken string `json:"refresh_token" required:"true"`
|
||||
RootNamespaceId string
|
||||
}
|
||||
|
||||
|
@ -15,37 +15,10 @@ import (
|
||||
)
|
||||
|
||||
func (d *Dropbox) refreshToken() error {
|
||||
// Refresh the token via the online API; no ClientID or ClientSecret required
|
||||
if d.UseOnlineAPI && len(d.APIAddress) > 0 {
|
||||
u := d.APIAddress
|
||||
var resp struct {
|
||||
RefreshToken string `json:"refresh_token"`
|
||||
AccessToken string `json:"access_token"`
|
||||
ErrorMessage string `json:"text"`
|
||||
}
|
||||
_, err := base.RestyClient.R().
|
||||
SetResult(&resp).
|
||||
SetQueryParams(map[string]string{
|
||||
"refresh_ui": d.RefreshToken,
|
||||
"server_use": "true",
|
||||
"driver_txt": "dropboxs_go",
|
||||
}).
|
||||
Get(u)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if resp.RefreshToken == "" || resp.AccessToken == "" {
|
||||
if resp.ErrorMessage != "" {
|
||||
return fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage)
|
||||
}
|
||||
return fmt.Errorf("empty token returned from official API")
|
||||
}
|
||||
d.AccessToken = resp.AccessToken
|
||||
d.RefreshToken = resp.RefreshToken
|
||||
op.MustSaveDriverStorage(d)
|
||||
return nil
|
||||
}
|
||||
url := d.base + "/oauth2/token"
|
||||
if utils.SliceContains([]string{"", DefaultClientID}, d.ClientID) {
|
||||
url = d.OauthTokenURL
|
||||
}
|
||||
var tokenResp TokenResp
|
||||
resp, err := base.RestyClient.R().
|
||||
//ForceContentType("application/x-www-form-urlencoded").
|
||||
|
@ -90,15 +90,15 @@ func (d *GooglePhoto) getFakeRoot() ([]MediaItem, error) {
|
||||
return []MediaItem{
|
||||
{
|
||||
Id: FETCH_ALL,
|
||||
Title: FETCH_ALL,
|
||||
Title: "全部媒体",
|
||||
},
|
||||
{
|
||||
Id: FETCH_ALBUMS,
|
||||
Title: FETCH_ALBUMS,
|
||||
Title: "全部影集",
|
||||
},
|
||||
{
|
||||
Id: FETCH_SHARE_ALBUMS,
|
||||
Title: FETCH_SHARE_ALBUMS,
|
||||
Title: "共享影集",
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
@ -298,6 +298,7 @@ func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
|
||||
retry.Attempts(3),
|
||||
retry.Delay(time.Second),
|
||||
retry.DelayType(retry.BackOffDelay))
|
||||
threadG.SetLimit(3)
|
||||
|
||||
// step.3
|
||||
parts, err := d.client.GetAllMultiUploadUrls(initUpdload.UploadFileID, initUpdload.PartInfos)
|
||||
|
@ -1,234 +0,0 @@
|
||||
package quark_open
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/OpenListTeam/OpenList/drivers/base"
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
streamPkg "github.com/OpenListTeam/OpenList/internal/stream"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
"github.com/go-resty/resty/v2"
|
||||
"hash"
|
||||
"io"
|
||||
"net/http"
|
||||
)
|
||||
|
||||
type QuarkOpen struct {
|
||||
model.Storage
|
||||
Addition
|
||||
config driver.Config
|
||||
conf Conf
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Config() driver.Config {
|
||||
return d.config
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) GetAddition() driver.Additional {
|
||||
return &d.Addition
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Init(ctx context.Context) error {
|
||||
var resp UserInfoResp
|
||||
|
||||
_, err := d.request(ctx, "/open/v1/user/info", http.MethodGet, nil, &resp)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if resp.Data.UserID != "" {
|
||||
d.conf.userId = resp.Data.UserID
|
||||
} else {
|
||||
return errors.New("failed to get user ID")
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Drop(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
|
||||
files, err := d.GetFiles(ctx, dir.GetID())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
|
||||
return fileToObj(src), nil
|
||||
})
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||
data := base.Json{
|
||||
"fid": file.GetID(),
|
||||
}
|
||||
var resp FileLikeResp
|
||||
_, err := d.request(ctx, "/open/v1/file/get_download_url", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, &resp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &model.Link{
|
||||
URL: resp.Data.DownloadURL,
|
||||
Header: http.Header{
|
||||
"Cookie": []string{d.generateAuthCookie()},
|
||||
},
|
||||
Concurrency: 3,
|
||||
PartSize: 10 * utils.MB,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
|
||||
data := base.Json{
|
||||
"dir_path": dirName,
|
||||
"pdir_fid": parentDir.GetID(),
|
||||
}
|
||||
_, err := d.request(ctx, "/open/v1/dir", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, nil)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
|
||||
data := base.Json{
|
||||
"action_type": 1,
|
||||
"fid_list": []string{srcObj.GetID()},
|
||||
"to_pdir_fid": dstDir.GetID(),
|
||||
}
|
||||
_, err := d.request(ctx, "/open/v1/file/move", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, nil)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
|
||||
data := base.Json{
|
||||
"fid": srcObj.GetID(),
|
||||
"file_name": newName,
|
||||
"conflict_mode": "REUSE",
|
||||
}
|
||||
_, err := d.request(ctx, "/open/v1/file/rename", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, nil)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
|
||||
return errs.NotSupport
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Remove(ctx context.Context, obj model.Obj) error {
|
||||
data := base.Json{
|
||||
"action_type": 1,
|
||||
"fid_list": []string{obj.GetID()},
|
||||
}
|
||||
_, err := d.request(ctx, "/open/v1/file/delete", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, nil)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
|
||||
md5Str, sha1Str := stream.GetHash().GetHash(utils.MD5), stream.GetHash().GetHash(utils.SHA1)
|
||||
var (
|
||||
md5 hash.Hash
|
||||
sha1 hash.Hash
|
||||
)
|
||||
writers := []io.Writer{}
|
||||
if len(md5Str) != utils.MD5.Width {
|
||||
md5 = utils.MD5.NewFunc()
|
||||
writers = append(writers, md5)
|
||||
}
|
||||
if len(sha1Str) != utils.SHA1.Width {
|
||||
sha1 = utils.SHA1.NewFunc()
|
||||
writers = append(writers, sha1)
|
||||
}
|
||||
|
||||
if len(writers) > 0 {
|
||||
_, err := streamPkg.CacheFullInTempFileAndWriter(stream, io.MultiWriter(writers...))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if md5 != nil {
|
||||
md5Str = hex.EncodeToString(md5.Sum(nil))
|
||||
}
|
||||
if sha1 != nil {
|
||||
sha1Str = hex.EncodeToString(sha1.Sum(nil))
|
||||
}
|
||||
}
|
||||
// pre
|
||||
pre, err := d.upPre(ctx, stream, dstDir.GetID(), md5Str, sha1Str)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// If the pre-upload already finished, return immediately (instant upload)
|
||||
if pre.Data.Finish {
|
||||
up(100)
|
||||
return nil
|
||||
}
|
||||
|
||||
// get part info
|
||||
partInfo := d._getPartInfo(stream, pre.Data.PartSize)
|
||||
// get upload url info
|
||||
upUrlInfo, err := d.upUrl(ctx, pre, partInfo)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// part up
|
||||
total := stream.GetSize()
|
||||
left := total
|
||||
part := make([]byte, pre.Data.PartSize)
|
||||
// Store each part's ETag; required by the later commit
|
||||
etags := make([]string, len(partInfo))
|
||||
|
||||
// Upload each part in turn
|
||||
for i, urlInfo := range upUrlInfo.UploadUrls {
|
||||
if utils.IsCanceled(ctx) {
|
||||
return ctx.Err()
|
||||
}
|
||||
|
||||
currentSize := int64(urlInfo.PartSize)
|
||||
if left < currentSize {
|
||||
part = part[:left]
|
||||
} else {
|
||||
part = part[:currentSize]
|
||||
}
|
||||
|
||||
// Read the part data
|
||||
n, err := io.ReadFull(stream, part)
|
||||
if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
|
||||
return err
|
||||
}
|
||||
|
||||
// Prepare to upload the part
|
||||
reader := driver.NewLimitedUploadStream(ctx, bytes.NewReader(part))
|
||||
etag, err := d.upPart(ctx, upUrlInfo, i, reader)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to upload part %d: %w", i, err)
|
||||
}
|
||||
|
||||
// Save the ETag for the later commit
|
||||
etags[i] = etag
|
||||
|
||||
// Update remaining size and progress
|
||||
left -= int64(n)
|
||||
up(float64(total-left) / float64(total) * 100)
|
||||
}
|
||||
|
||||
return d.upFinish(ctx, pre, partInfo, etags)
|
||||
}
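// Upload flow recap: upPre negotiates the part size and may report Finish when the
// MD5/SHA1 pair already exists server-side (instant upload); otherwise upUrl signs
// one URL per part, each part is read into memory, wrapped in
// driver.NewLimitedUploadStream and PUT to its URL while its ETag is collected, and
// upFinish commits the ETag list to assemble the file.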
|
||||
|
||||
var _ driver.Driver = (*QuarkOpen)(nil)
|
@ -1,41 +0,0 @@
|
||||
package quark_open
|
||||
|
||||
import (
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
)
|
||||
|
||||
type Addition struct {
|
||||
driver.RootID
|
||||
OrderBy string `json:"order_by" type:"select" options:"none,file_type,file_name,updated_at,created_at" default:"none"`
|
||||
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
|
||||
UseOnlineAPI bool `json:"use_online_api" default:"true"`
|
||||
APIAddress string `json:"api_url_address" default:"https://api.oplist.org/quarkyun/renewapi"`
|
||||
AccessToken string `json:"access_token" required:"false" default:""`
|
||||
RefreshToken string `json:"refresh_token" required:"true"`
|
||||
AppID string `json:"app_id" required:"true" help:"Keep it empty if you don't have one"`
|
||||
SignKey string `json:"sign_key" required:"true" help:"Keep it empty if you don't have one"`
|
||||
}
|
||||
|
||||
type Conf struct {
|
||||
ua string
|
||||
api string
|
||||
userId string
|
||||
}
|
||||
|
||||
func init() {
|
||||
op.RegisterDriver(func() driver.Driver {
|
||||
return &QuarkOpen{
|
||||
config: driver.Config{
|
||||
Name: "QuarkOpen",
|
||||
OnlyLocal: true,
|
||||
DefaultRoot: "0",
|
||||
NoOverwriteUpload: true,
|
||||
},
|
||||
conf: Conf{
|
||||
ua: "go-resty/3.0.0-beta.1 (https://resty.dev)",
|
||||
api: "https://open-api-drive.quark.cn",
|
||||
},
|
||||
}
|
||||
})
|
||||
}
|
@ -1,145 +0,0 @@
|
||||
package quark_open
|
||||
|
||||
import (
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"time"
|
||||
)
|
||||
|
||||
type Resp struct {
|
||||
CommonRsp
|
||||
Errno int `json:"errno"`
|
||||
ErrorInfo string `json:"error_info"`
|
||||
}
|
||||
|
||||
type CommonRsp struct {
|
||||
Status int `json:"status"`
|
||||
ReqID string `json:"req_id"`
|
||||
}
|
||||
|
||||
type UserInfo struct {
|
||||
UserID string `json:"user_id"`
|
||||
Nickname string `json:"nickname"`
|
||||
AvatarURL string `json:"avatar_url"`
|
||||
}
|
||||
|
||||
type UserInfoResp struct {
|
||||
CommonRsp
|
||||
Data UserInfo `json:"data"`
|
||||
}
|
||||
|
||||
type RefreshTokenOnlineAPIResp struct {
|
||||
RefreshToken string `json:"refresh_token"`
|
||||
AccessToken string `json:"access_token"`
|
||||
AppID string `json:"app_id"`
|
||||
SignKey string `json:"sign_key"`
|
||||
ErrorMessage string `json:"text"`
|
||||
}
|
||||
|
||||
type File struct {
|
||||
Fid string `json:"fid"`
|
||||
ParentFid string `json:"parent_fid"`
|
||||
Category int64 `json:"category"`
|
||||
FileName string `json:"filename"`
|
||||
Size int64 `json:"size"`
|
||||
FileType string `json:"file_type"`
|
||||
ThumbnailURL string `json:"thumbnail_url"`
|
||||
ContentHash string `json:"content_hash"`
|
||||
CreatedAt int64 `json:"created_at"`
|
||||
UpdatedAt int64 `json:"updated_at"`
|
||||
}
|
||||
|
||||
func fileToObj(f File) *model.ObjThumb {
|
||||
return &model.ObjThumb{
|
||||
Object: model.Object{
|
||||
ID: f.Fid,
|
||||
Name: f.FileName,
|
||||
Size: f.Size,
|
||||
Modified: time.UnixMilli(f.UpdatedAt),
|
||||
IsFolder: f.FileType == "0",
|
||||
Ctime: time.UnixMilli(f.CreatedAt),
|
||||
},
|
||||
Thumbnail: model.Thumbnail{Thumbnail: f.ThumbnailURL},
|
||||
}
|
||||
}
|
||||
|
||||
type QueryCursor struct {
|
||||
Version string `json:"version"`
|
||||
Token string `json:"token"`
|
||||
}
|
||||
|
||||
type FileListResp struct {
|
||||
CommonRsp
|
||||
Data struct {
|
||||
FileList []File `json:"file_list"`
|
||||
LastPage bool `json:"last_page"`
|
||||
NextQueryCursor QueryCursor `json:"next_query_cursor"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type FileLikeResp struct {
|
||||
CommonRsp
|
||||
Data struct {
|
||||
Fid string `json:"fid"`
|
||||
Size int `json:"size"`
|
||||
FileName string `json:"file_name"`
|
||||
DownloadURL string `json:"download_url"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type UpPreResp struct {
|
||||
CommonRsp
|
||||
Data struct {
|
||||
Finish bool `json:"finish"`
|
||||
TaskID string `json:"task_id"`
|
||||
Fid string `json:"fid"`
|
||||
CommonHeaders struct {
|
||||
XOssContentSha256 string `json:"X-Oss-Content-Sha256"`
|
||||
XOssDate string `json:"X-Oss-Date"`
|
||||
} `json:"common_headers"`
|
||||
UploadUrls []struct {
|
||||
PartNumber int `json:"part_number"`
|
||||
SignatureInfo struct {
|
||||
AuthType string `json:"auth_type"`
|
||||
Signature string `json:"signature"`
|
||||
} `json:"signature_info"`
|
||||
UploadURL string `json:"upload_url"`
|
||||
Expired int64 `json:"expired"`
|
||||
} `json:"upload_urls"`
|
||||
PartSize int64 `json:"part_size"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type UpUrlInfo struct {
|
||||
UploadUrls []struct {
|
||||
PartNumber int `json:"part_number"`
|
||||
PartSize int `json:"part_size"`
|
||||
SignatureInfo struct {
|
||||
AuthType string `json:"auth_type"`
|
||||
Signature string `json:"signature"`
|
||||
} `json:"signature_info"`
|
||||
UploadURL string `json:"upload_url"`
|
||||
} `json:"upload_urls"`
|
||||
CommonHeaders struct {
|
||||
XOssContentSha256 string `json:"X-Oss-Content-Sha256"`
|
||||
XOssDate string `json:"X-Oss-Date"`
|
||||
} `json:"common_headers"`
|
||||
UploadID string `json:"upload_id"`
|
||||
}
|
||||
|
||||
type UpUrlResp struct {
|
||||
CommonRsp
|
||||
Data UpUrlInfo `json:"data"`
|
||||
}
|
||||
|
||||
type UpFinishResp struct {
|
||||
CommonRsp
|
||||
Data struct {
|
||||
TaskID string `json:"task_id"`
|
||||
Fid string `json:"fid"`
|
||||
Finish bool `json:"finish"`
|
||||
PdirFid string `json:"pdir_fid"`
|
||||
Thumbnail string `json:"thumbnail"`
|
||||
FormatType string `json:"format_type"`
|
||||
Size int `json:"size"`
|
||||
} `json:"data"`
|
||||
}
|
@ -1,473 +0,0 @@
|
||||
package quark_open
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/md5"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/OpenListTeam/OpenList/pkg/http_range"
|
||||
"github.com/google/uuid"
|
||||
"io"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/drivers/base"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
"github.com/go-resty/resty/v2"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func (d *QuarkOpen) request(ctx context.Context, pathname string, method string, callback base.ReqCallback, resp interface{}, manualSign ...*ManualSign) ([]byte, error) {
|
||||
u := d.conf.api + pathname
|
||||
|
||||
var tm, token, reqID string
|
||||
|
||||
// Check whether signing parameters were passed in manually
|
||||
if len(manualSign) > 0 && manualSign[0] != nil {
|
||||
tm = manualSign[0].Tm
|
||||
token = manualSign[0].Token
|
||||
reqID = manualSign[0].ReqID
|
||||
} else {
|
||||
// Otherwise generate the signing parameters automatically
|
||||
tm, token, reqID = d.generateReqSign(method, pathname, d.Addition.SignKey)
|
||||
}
|
||||
|
||||
req := base.RestyClient.R()
|
||||
req.SetContext(ctx)
|
||||
req.SetHeaders(map[string]string{
|
||||
"Accept": "application/json, text/plain, */*",
|
||||
"User-Agent": d.conf.ua,
|
||||
"x-pan-tm": tm,
|
||||
"x-pan-token": token,
|
||||
"x-pan-client-id": d.Addition.AppID,
|
||||
})
|
||||
req.SetQueryParams(map[string]string{
|
||||
"req_id": reqID,
|
||||
"access_token": d.Addition.AccessToken,
|
||||
})
|
||||
if callback != nil {
|
||||
callback(req)
|
||||
}
|
||||
if resp != nil {
|
||||
req.SetResult(resp)
|
||||
}
|
||||
var e Resp
|
||||
req.SetError(&e)
|
||||
res, err := req.Execute(method, u)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Check whether the access_token needs to be refreshed
|
||||
if e.Status == -1 && (e.Errno == 11001 || (e.Errno == 14001 && strings.Contains(e.ErrorInfo, "access_token"))) {
|
||||
// token has expired
|
||||
err = d.refreshToken()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
ctx1, cancelFunc := context.WithTimeout(ctx, 10*time.Second)
|
||||
defer cancelFunc()
|
||||
return d.request(ctx1, pathname, method, callback, resp)
|
||||
}
|
||||
|
||||
if e.Status >= 400 || e.Errno != 0 {
|
||||
return nil, errors.New(e.ErrorInfo)
|
||||
}
|
||||
|
||||
return res.Body(), nil
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) GetFiles(ctx context.Context, parent string) ([]File, error) {
|
||||
files := make([]File, 0)
|
||||
var queryCursor QueryCursor
|
||||
|
||||
for {
|
||||
reqBody := map[string]interface{}{
|
||||
"parent_fid": parent,
|
||||
"size": 100, // 默认每页100个文件
|
||||
"sort": "file_name:asc", // 基本排序方式
|
||||
}
|
||||
// Apply the configured sort order, if any
|
||||
if d.OrderBy != "none" {
|
||||
reqBody["sort"] = d.OrderBy + ":" + d.OrderDirection
|
||||
}
|
||||
// Set the query cursor (used for pagination)
|
||||
if queryCursor.Token != "" {
|
||||
reqBody["query_cursor"] = queryCursor
|
||||
}
|
||||
|
||||
var resp FileListResp
|
||||
_, err := d.request(ctx, "/open/v1/file/list", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(reqBody)
|
||||
}, &resp)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
files = append(files, resp.Data.FileList...)
|
||||
if resp.Data.LastPage {
|
||||
break
|
||||
}
|
||||
|
||||
queryCursor = resp.Data.NextQueryCursor
|
||||
}
|
||||
|
||||
return files, nil
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) upPre(ctx context.Context, file model.FileStreamer, parentId, md5, sha1 string) (UpPreResp, error) {
|
||||
// Current time
|
||||
now := time.Now()
|
||||
// File size
|
||||
fileSize := file.GetSize()
|
||||
|
||||
// Manually generate the x-pan-token
|
||||
httpMethod := "POST"
|
||||
apiPath := "/open/v1/file/upload_pre"
|
||||
tm, xPanToken, reqID := d.generateReqSign(httpMethod, apiPath, d.Addition.SignKey)
|
||||
|
||||
// Generate the proof fields, passing in the x-pan-token
|
||||
proofVersion, proofSeed1, proofSeed2, proofCode1, proofCode2, err := d.generateProof(file, xPanToken)
|
||||
if err != nil {
|
||||
return UpPreResp{}, fmt.Errorf("failed to generate proof: %w", err)
|
||||
}
|
||||
|
||||
data := base.Json{
|
||||
"file_name": file.GetName(),
|
||||
"size": fileSize,
|
||||
"format_type": file.GetMimetype(),
|
||||
"md5": md5,
|
||||
"sha1": sha1,
|
||||
"l_created_at": now.UnixMilli(),
|
||||
"l_updated_at": now.UnixMilli(),
|
||||
"pdir_fid": parentId,
|
||||
"same_path_reuse": true,
|
||||
"proof_version": proofVersion,
|
||||
"proof_seed1": proofSeed1,
|
||||
"proof_seed2": proofSeed2,
|
||||
"proof_code1": proofCode1,
|
||||
"proof_code2": proofCode2,
|
||||
}
|
||||
|
||||
var resp UpPreResp
|
||||
|
||||
// Use the manually generated signing parameters
|
||||
manualSign := &ManualSign{
|
||||
Tm: tm,
|
||||
Token: xPanToken,
|
||||
ReqID: reqID,
|
||||
}
|
||||
|
||||
_, err = d.request(ctx, "/open/v1/file/upload_pre", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, &resp, manualSign)
|
||||
|
||||
return resp, err
|
||||
}
|
||||
|
||||
// generateProof generates the proof verification info for Quark drive uploads
|
||||
func (d *QuarkOpen) generateProof(file model.FileStreamer, xPanToken string) (proofVersion, proofSeed1, proofSeed2, proofCode1, proofCode2 string, err error) {
|
||||
// File size
|
||||
fileSize := file.GetSize()
|
||||
// proof_version is fixed to "v1"
|
||||
proofVersion = "v1"
|
||||
// Generate proof_seed1 - algorithm: md5(userId + x-pan-token)
|
||||
proofSeed1 = d.generateProofSeed1(xPanToken)
|
||||
// Generate proof_seed2 - algorithm: md5(fileSize)
|
||||
proofSeed2 = d.generateProofSeed2(fileSize)
|
||||
// Generate proof_code1 and proof_code2
|
||||
proofCode1, err = d.generateProofCode(file, proofSeed1, fileSize)
|
||||
if err != nil {
|
||||
return "", "", "", "", "", fmt.Errorf("failed to generate proof_code1: %w", err)
|
||||
}
|
||||
|
||||
proofCode2, err = d.generateProofCode(file, proofSeed2, fileSize)
|
||||
if err != nil {
|
||||
return "", "", "", "", "", fmt.Errorf("failed to generate proof_code2: %w", err)
|
||||
}
|
||||
|
||||
return proofVersion, proofSeed1, proofSeed2, proofCode1, proofCode2, nil
|
||||
}
|
||||
|
||||
// generateProofSeed1 generates proof_seed1 from userId and the x-pan-token
|
||||
func (d *QuarkOpen) generateProofSeed1(xPanToken string) string {
|
||||
concatString := d.conf.userId + xPanToken
|
||||
md5Hash := md5.Sum([]byte(concatString))
|
||||
return hex.EncodeToString(md5Hash[:])
|
||||
}
|
||||
|
||||
// generateProofSeed2 generates proof_seed2 from fileSize
|
||||
func (d *QuarkOpen) generateProofSeed2(fileSize int64) string {
|
||||
md5Hash := md5.Sum([]byte(strconv.FormatInt(fileSize, 10)))
|
||||
return hex.EncodeToString(md5Hash[:])
|
||||
}
|
||||
|
||||
type ProofRange struct {
|
||||
Start int64
|
||||
End int64
|
||||
}
|
||||
|
||||
// generateProofCode generates a proof_code from the proof_seed and the file size
|
||||
func (d *QuarkOpen) generateProofCode(file model.FileStreamer, proofSeed string, fileSize int64) (string, error) {
|
||||
// Determine the byte range to read
|
||||
proofRange, err := d.getProofRange(proofSeed, fileSize)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to get proof range: %w", err)
|
||||
}
|
||||
|
||||
// Compute the length to read
|
||||
length := proofRange.End - proofRange.Start
|
||||
if length == 0 {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
// Read the selected range via FileStreamer's RangeRead
|
||||
reader, err := file.RangeRead(http_range.Range{
|
||||
Start: proofRange.Start,
|
||||
Length: length,
|
||||
})
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to range read: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
if closer, ok := reader.(io.Closer); ok {
|
||||
closer.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
// Read the data
|
||||
buf := make([]byte, length)
|
||||
n, err := io.ReadFull(reader, buf)
|
||||
if errors.Is(err, io.ErrUnexpectedEOF) {
|
||||
return "", fmt.Errorf("can't read data, expected=%d, got=%d", length, n)
|
||||
}
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to read data: %w", err)
|
||||
}
|
||||
|
||||
// Base64-encode the data
|
||||
return base64.StdEncoding.EncodeToString(buf), nil
|
||||
}
|
||||
|
||||
// getProofRange computes the file range to read from the proof_seed and the file size
|
||||
func (d *QuarkOpen) getProofRange(proofSeed string, fileSize int64) (*ProofRange, error) {
|
||||
if fileSize == 0 {
|
||||
return &ProofRange{}, nil
|
||||
}
|
||||
// MD5 the proofSeed and take the first 16 hex characters
|
||||
md5Hash := md5.Sum([]byte(proofSeed))
|
||||
tmpStr := hex.EncodeToString(md5Hash[:])[:16]
|
||||
// Parse as uint64
|
||||
tmpInt, err := strconv.ParseUint(tmpStr, 16, 64)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to parse hex string: %w", err)
|
||||
}
|
||||
// Compute the index position
|
||||
index := tmpInt % uint64(fileSize)
|
||||
|
||||
pr := &ProofRange{
|
||||
Start: int64(index),
|
||||
End: int64(index) + 8,
|
||||
}
|
||||
// Make sure End does not exceed the file size
|
||||
if pr.End > fileSize {
|
||||
pr.End = fileSize
|
||||
}
|
||||
|
||||
return pr, nil
|
||||
}
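// Usage sketch (the seed value below is a made-up placeholder; real seeds come
// from generateProofSeed1/2): the window is at most 8 bytes long, starting at
// md5(proofSeed)[:16] parsed as a uint64 modulo the file size, clamped to the
// end of the file.
//
//	pr, _ := d.getProofRange("0123456789abcdef0123456789abcdef", 1<<20)
//	_ = pr.End - pr.Start // <= 8; this slice is read and base64-encoded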
|
||||
|
||||
func (d *QuarkOpen) _getPartInfo(stream model.FileStreamer, partSize int64) []base.Json {
|
||||
// Compute the part info
|
||||
partInfo := make([]base.Json, 0)
|
||||
total := stream.GetSize()
|
||||
left := total
|
||||
partNumber := 1
|
||||
|
||||
// Compute each part's size and number
|
||||
for left > 0 {
|
||||
size := partSize
|
||||
if left < partSize {
|
||||
size = left
|
||||
}
|
||||
|
||||
partInfo = append(partInfo, base.Json{
|
||||
"part_number": partNumber,
|
||||
"part_size": size,
|
||||
})
|
||||
|
||||
left -= size
|
||||
partNumber++
|
||||
}
|
||||
|
||||
return partInfo
|
||||
}
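// Example (sizes assumed for illustration): for a 5 MiB stream and
// pre.Data.PartSize = 2 MiB, _getPartInfo returns three entries numbered 1..3
// with sizes 2 MiB, 2 MiB and 1 MiB; the same list (plus per-part ETags) is
// sent back in upFinish.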
|
||||
|
||||
func (d *QuarkOpen) upUrl(ctx context.Context, pre UpPreResp, partInfo []base.Json) (upUrlInfo UpUrlInfo, err error) {
|
||||
// Build the request body
|
||||
data := base.Json{
|
||||
"task_id": pre.Data.TaskID,
|
||||
"part_info_list": partInfo,
|
||||
}
|
||||
var resp UpUrlResp
|
||||
|
||||
_, err = d.request(ctx, "/open/v1/file/get_upload_urls", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, &resp)
|
||||
|
||||
if err != nil {
|
||||
return upUrlInfo, err
|
||||
}
|
||||
|
||||
return resp.Data, nil
|
||||
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) upPart(ctx context.Context, upUrlInfo UpUrlInfo, partNumber int, bytes io.Reader) (string, error) {
|
||||
// Build the request
|
||||
req, err := http.NewRequestWithContext(ctx, http.MethodPut, upUrlInfo.UploadUrls[partNumber].UploadURL, bytes)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
req.Header.Set("Authorization", upUrlInfo.UploadUrls[partNumber].SignatureInfo.Signature)
|
||||
req.Header.Set("X-Oss-Date", upUrlInfo.CommonHeaders.XOssDate)
|
||||
req.Header.Set("X-Oss-Content-Sha256", upUrlInfo.CommonHeaders.XOssContentSha256)
|
||||
req.Header.Set("Accept-Encoding", "gzip")
|
||||
req.Header.Set("User-Agent", "Go-http-client/1.1")
|
||||
|
||||
// Send the request
|
||||
client := &http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != 200 {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return "", fmt.Errorf("up status: %d, error: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
// Return the Etag as the identifier of the uploaded part
|
||||
return resp.Header.Get("Etag"), nil
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) upFinish(ctx context.Context, pre UpPreResp, partInfo []base.Json, etags []string) error {
|
||||
// Build part_info_list
|
||||
partInfoList := make([]base.Json, len(partInfo))
|
||||
// Make sure partInfo and etags have the same length
|
||||
if len(partInfo) != len(etags) {
|
||||
return fmt.Errorf("part info count (%d) does not match etags count (%d)", len(partInfo), len(etags))
|
||||
}
|
||||
// Assemble part_info_list
|
||||
for i, part := range partInfo {
|
||||
partInfoList[i] = base.Json{
|
||||
"part_number": part["part_number"],
|
||||
"part_size": part["part_size"],
|
||||
"etag": etags[i],
|
||||
}
|
||||
}
|
||||
// Build the request body
|
||||
data := base.Json{
|
||||
"task_id": pre.Data.TaskID,
|
||||
"part_info_list": partInfoList,
|
||||
}
|
||||
|
||||
// Send the request
|
||||
var resp UpFinishResp
|
||||
_, err := d.request(ctx, "/open/v1/file/upload_finish", http.MethodPost, func(req *resty.Request) {
|
||||
req.SetBody(data)
|
||||
}, &resp)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !resp.Data.Finish {
|
||||
return fmt.Errorf("upload finish failed, task_id: %s", resp.Data.TaskID)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ManualSign carries manually generated signing parameters for a request
|
||||
type ManualSign struct {
|
||||
Tm string
|
||||
Token string
|
||||
ReqID string
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) generateReqSign(method string, pathname string, signKey string) (string, string, string) {
|
||||
// Generate a 13-digit millisecond timestamp
|
||||
timestamp := strconv.FormatInt(time.Now().UnixNano()/int64(time.Millisecond), 10)
|
||||
|
||||
// Generate the x-pan-token; it is composed of method + "&" + pathname + "&" + timestamp + "&" + signKey
|
||||
tokenData := method + "&" + pathname + "&" + timestamp + "&" + signKey
|
||||
tokenHash := sha256.Sum256([]byte(tokenData))
|
||||
xPanToken := hex.EncodeToString(tokenHash[:])
|
||||
|
||||
// Generate the req_id
|
||||
reqUuid, _ := uuid.NewRandom()
|
||||
reqID := reqUuid.String()
|
||||
|
||||
return timestamp, xPanToken, reqID
|
||||
}
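// Example (all inputs assumed for illustration): for method "POST", pathname
// "/open/v1/file/list", timestamp "1700000000000" and signKey "sk", the header
// value is hex(sha256("POST&/open/v1/file/list&1700000000000&sk")); the
// timestamp goes into x-pan-tm, the digest into x-pan-token, and a random UUID
// into the req_id query parameter.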
|
||||
|
||||
func (d *QuarkOpen) refreshToken() error {
|
||||
refresh, access, err := d._refreshToken()
|
||||
for i := 0; i < 3; i++ {
|
||||
if err == nil {
|
||||
break
|
||||
} else {
|
||||
log.Errorf("[quark_open] failed to refresh token: %s", err)
|
||||
}
|
||||
refresh, access, err = d._refreshToken()
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Infof("[quark_open] token exchange: %s -> %s", d.RefreshToken, refresh)
|
||||
d.RefreshToken, d.AccessToken = refresh, access
|
||||
op.MustSaveDriverStorage(d)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *QuarkOpen) _refreshToken() (string, string, error) {
|
||||
if d.UseOnlineAPI && d.APIAddress != "" {
|
||||
u := d.APIAddress
|
||||
var resp RefreshTokenOnlineAPIResp
|
||||
_, err := base.RestyClient.R().
|
||||
SetResult(&resp).
|
||||
SetQueryParams(map[string]string{
|
||||
"refresh_ui": d.RefreshToken,
|
||||
"server_use": "true",
|
||||
"driver_txt": "quarkyun_oa",
|
||||
}).
|
||||
Get(u)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
if resp.RefreshToken == "" || resp.AccessToken == "" {
|
||||
if resp.ErrorMessage != "" {
|
||||
return "", "", fmt.Errorf("failed to refresh token: %s", resp.ErrorMessage)
|
||||
}
|
||||
return "", "", fmt.Errorf("empty token returned from official API")
|
||||
}
|
||||
return resp.RefreshToken, resp.AccessToken, nil
|
||||
}
|
||||
|
||||
// TODO: local token refresh logic
|
||||
return "", "", fmt.Errorf("local refresh token logic is not implemented yet, please use online API or contact the developer")
|
||||
}
|
||||
|
||||
// Build the authentication Cookie
|
||||
func (d *QuarkOpen) generateAuthCookie() string {
|
||||
return fmt.Sprintf("x_pan_client_id=%s; x_pan_access_token=%s",
|
||||
d.Addition.AppID, d.Addition.AccessToken)
|
||||
}
|
452
drivers/quqi/driver.go
Normal file
@ -0,0 +1,452 @@
|
||||
package quqi
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils/random"
|
||||
"github.com/aws/aws-sdk-go/aws"
|
||||
"github.com/aws/aws-sdk-go/aws/credentials"
|
||||
"github.com/aws/aws-sdk-go/aws/session"
|
||||
"github.com/aws/aws-sdk-go/service/s3"
|
||||
"github.com/aws/aws-sdk-go/service/s3/s3manager"
|
||||
"github.com/go-resty/resty/v2"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
type Quqi struct {
|
||||
model.Storage
|
||||
Addition
|
||||
Cookie string // Cookie
|
||||
GroupID string // private cloud group ID
|
||||
ClientID string // randomly generated client ID; testing shows some API calls fail without a client id
|
||||
}
|
||||
|
||||
func (d *Quqi) Config() driver.Config {
|
||||
return config
|
||||
}
|
||||
|
||||
func (d *Quqi) GetAddition() driver.Additional {
|
||||
return &d.Addition
|
||||
}
|
||||
|
||||
func (d *Quqi) Init(ctx context.Context) error {
|
||||
// Log in
|
||||
if err := d.login(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Generate a random client id (same logic as the web client)
|
||||
d.ClientID = "quqipc_" + random.String(10)
|
||||
|
||||
// Get the private cloud ID (only the private cloud is fetched for now)
|
||||
groupResp := &GroupRes{}
|
||||
if _, err := d.request("group.quqi.com", "/v1/group/list", resty.MethodGet, nil, groupResp); err != nil {
|
||||
return err
|
||||
}
|
||||
for _, groupInfo := range groupResp.Data {
|
||||
if groupInfo == nil {
|
||||
continue
|
||||
}
|
||||
if groupInfo.Type == 2 {
|
||||
d.GroupID = strconv.Itoa(groupInfo.ID)
|
||||
break
|
||||
}
|
||||
}
|
||||
if d.GroupID == "" {
|
||||
return errs.StorageNotFound
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Drop(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *Quqi) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
|
||||
var (
|
||||
listResp = &ListRes{}
|
||||
files []model.Obj
|
||||
)
|
||||
|
||||
if _, err := d.request("", "/api/dir/ls", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": dir.GetID(),
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, listResp); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if listResp.Data == nil {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// dirs
|
||||
for _, dirInfo := range listResp.Data.Dir {
|
||||
if dirInfo == nil {
|
||||
continue
|
||||
}
|
||||
files = append(files, &model.Object{
|
||||
ID: strconv.FormatInt(dirInfo.NodeID, 10),
|
||||
Name: dirInfo.Name,
|
||||
Modified: time.Unix(dirInfo.UpdateTime, 0),
|
||||
Ctime: time.Unix(dirInfo.AddTime, 0),
|
||||
IsFolder: true,
|
||||
})
|
||||
}
|
||||
|
||||
// files
|
||||
for _, fileInfo := range listResp.Data.File {
|
||||
if fileInfo == nil {
|
||||
continue
|
||||
}
|
||||
if fileInfo.EXT != "" {
|
||||
fileInfo.Name = strings.Join([]string{fileInfo.Name, fileInfo.EXT}, ".")
|
||||
}
|
||||
|
||||
files = append(files, &model.Object{
|
||||
ID: strconv.FormatInt(fileInfo.NodeID, 10),
|
||||
Name: fileInfo.Name,
|
||||
Size: fileInfo.Size,
|
||||
Modified: time.Unix(fileInfo.UpdateTime, 0),
|
||||
Ctime: time.Unix(fileInfo.AddTime, 0),
|
||||
})
|
||||
}
|
||||
|
||||
return files, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||
if d.CDN {
|
||||
link, err := d.linkFromCDN(file.GetID())
|
||||
if err != nil {
|
||||
log.Warn(err)
|
||||
} else {
|
||||
return link, nil
|
||||
}
|
||||
}
|
||||
|
||||
link, err := d.linkFromPreview(file.GetID())
|
||||
if err != nil {
|
||||
log.Warn(err)
|
||||
} else {
|
||||
return link, nil
|
||||
}
|
||||
|
||||
link, err = d.linkFromDownload(file.GetID())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return link, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
|
||||
var (
|
||||
makeDirRes = &MakeDirRes{}
|
||||
timeNow = time.Now()
|
||||
)
|
||||
|
||||
if _, err := d.request("", "/api/dir/mkDir", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"parent_id": parentDir.GetID(),
|
||||
"name": dirName,
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, makeDirRes); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &model.Object{
|
||||
ID: strconv.FormatInt(makeDirRes.Data.NodeID, 10),
|
||||
Name: dirName,
|
||||
Modified: timeNow,
|
||||
Ctime: timeNow,
|
||||
IsFolder: true,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
|
||||
var moveRes = &MoveRes{}
|
||||
|
||||
if _, err := d.request("", "/api/dir/mvDir", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": dstDir.GetID(),
|
||||
"source_quqi_id": d.GroupID,
|
||||
"source_tree_id": "1",
|
||||
"source_node_id": srcObj.GetID(),
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, moveRes); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &model.Object{
|
||||
ID: strconv.FormatInt(moveRes.Data.NodeID, 10),
|
||||
Name: moveRes.Data.NodeName,
|
||||
Size: srcObj.GetSize(),
|
||||
Modified: time.Now(),
|
||||
Ctime: srcObj.CreateTime(),
|
||||
IsFolder: srcObj.IsDir(),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
|
||||
var realName = newName
|
||||
|
||||
if !srcObj.IsDir() {
|
||||
srcExt, newExt := utils.Ext(srcObj.GetName()), utils.Ext(newName)
|
||||
|
||||
// Quqi file names consist of a base name plus an extension; if an extension exists, renaming only changes the base name and the extension is kept on the Quqi server side
|
||||
if srcExt != "" && srcExt == newExt {
|
||||
parts := strings.Split(newName, ".")
|
||||
if len(parts) > 1 {
|
||||
realName = strings.Join(parts[:len(parts)-1], ".")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if _, err := d.request("", "/api/dir/renameDir", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": srcObj.GetID(),
|
||||
"rename": realName,
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, nil); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &model.Object{
|
||||
ID: srcObj.GetID(),
|
||||
Name: newName,
|
||||
Size: srcObj.GetSize(),
|
||||
Modified: time.Now(),
|
||||
Ctime: srcObj.CreateTime(),
|
||||
IsFolder: srcObj.IsDir(),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
|
||||
// The copied file's info cannot be obtained directly from the Quqi API response
|
||||
if _, err := d.request("", "/api/node/copy", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": dstDir.GetID(),
|
||||
"source_quqi_id": d.GroupID,
|
||||
"source_tree_id": "1",
|
||||
"source_node_id": srcObj.GetID(),
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, nil); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Remove(ctx context.Context, obj model.Obj) error {
|
||||
// Direct deletion is skipped for now; everything goes to the recycle bin by default. To delete permanently: call the delete API to move the file into the recycle bin, then remove it via the recycle bin API
|
||||
if _, err := d.request("", "/api/node/del", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": obj.GetID(),
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, nil); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *Quqi) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
|
||||
// base info
|
||||
sizeStr := strconv.FormatInt(stream.GetSize(), 10)
|
||||
f, err := stream.CacheFullInTempFile()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
md5, err := utils.HashFile(utils.MD5, f)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
sha, err := utils.HashFile(utils.SHA256, f)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// init upload
|
||||
var uploadInitResp UploadInitResp
|
||||
_, err = d.request("", "/api/upload/v1/file/init", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"parent_id": dstDir.GetID(),
|
||||
"size": sizeStr,
|
||||
"file_name": stream.GetName(),
|
||||
"md5": md5,
|
||||
"sha": sha,
|
||||
"is_slice": "true",
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, &uploadInitResp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// check exist
|
||||
// if the file already exists in Quqi server, there is no need to actually upload it
|
||||
if uploadInitResp.Data.Exist {
|
||||
// the file name returned by Quqi does not include the extension name
|
||||
nodeName, nodeExt := uploadInitResp.Data.NodeName, utils.Ext(stream.GetName())
|
||||
if nodeExt != "" {
|
||||
nodeName = nodeName + "." + nodeExt
|
||||
}
|
||||
return &model.Object{
|
||||
ID: strconv.FormatInt(uploadInitResp.Data.NodeID, 10),
|
||||
Name: nodeName,
|
||||
Size: stream.GetSize(),
|
||||
Modified: stream.ModTime(),
|
||||
Ctime: stream.CreateTime(),
|
||||
}, nil
|
||||
}
|
||||
// listParts
|
||||
_, err = d.request("upload.quqi.com:20807", "/upload/v1/listParts", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"token": uploadInitResp.Data.Token,
|
||||
"task_id": uploadInitResp.Data.TaskID,
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// get temp key
|
||||
var tempKeyResp TempKeyResp
|
||||
_, err = d.request("upload.quqi.com:20807", "/upload/v1/tempKey", resty.MethodGet, func(req *resty.Request) {
|
||||
req.SetQueryParams(map[string]string{
|
||||
"token": uploadInitResp.Data.Token,
|
||||
"task_id": uploadInitResp.Data.TaskID,
|
||||
})
|
||||
}, &tempKeyResp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// upload
|
||||
// u, err := url.Parse(fmt.Sprintf("https://%s.cos.ap-shanghai.myqcloud.com", uploadInitResp.Data.Bucket))
|
||||
// b := &cos.BaseURL{BucketURL: u}
|
||||
// client := cos.NewClient(b, &http.Client{
|
||||
// Transport: &cos.CredentialTransport{
|
||||
// Credential: cos.NewTokenCredential(tempKeyResp.Data.Credentials.TmpSecretID, tempKeyResp.Data.Credentials.TmpSecretKey, tempKeyResp.Data.Credentials.SessionToken),
|
||||
// },
|
||||
// })
|
||||
// partSize := int64(1024 * 1024 * 2)
|
||||
// partCount := (stream.GetSize() + partSize - 1) / partSize
|
||||
// for i := 1; i <= int(partCount); i++ {
|
||||
// length := partSize
|
||||
// if i == int(partCount) {
|
||||
// length = stream.GetSize() - (int64(i)-1)*partSize
|
||||
// }
|
||||
// _, err := client.Object.UploadPart(
|
||||
// ctx, uploadInitResp.Data.Key, uploadInitResp.Data.UploadID, i, io.LimitReader(f, partSize), &cos.ObjectUploadPartOptions{
|
||||
// ContentLength: length,
|
||||
// },
|
||||
// )
|
||||
// if err != nil {
|
||||
// return nil, err
|
||||
// }
|
||||
// }
|
||||
|
||||
cfg := &aws.Config{
|
||||
Credentials: credentials.NewStaticCredentials(tempKeyResp.Data.Credentials.TmpSecretID, tempKeyResp.Data.Credentials.TmpSecretKey, tempKeyResp.Data.Credentials.SessionToken),
|
||||
Region: aws.String("ap-shanghai"),
|
||||
Endpoint: aws.String("cos.ap-shanghai.myqcloud.com"),
|
||||
}
|
||||
s, err := session.NewSession(cfg)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
uploader := s3manager.NewUploader(s)
|
||||
buf := make([]byte, 1024*1024*2)
|
||||
fup := &driver.ReaderUpdatingProgress{
|
||||
Reader: &driver.SimpleReaderWithSize{
|
||||
Reader: f,
|
||||
Size: int64(len(buf)),
|
||||
},
|
||||
UpdateProgress: up,
|
||||
}
|
||||
for partNumber := int64(1); ; partNumber++ {
|
||||
n, err := io.ReadFull(fup, buf)
|
||||
if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
reader := bytes.NewReader(buf[:n])
|
||||
_, err = uploader.S3.UploadPartWithContext(ctx, &s3.UploadPartInput{
|
||||
UploadId: &uploadInitResp.Data.UploadID,
|
||||
Key: &uploadInitResp.Data.Key,
|
||||
Bucket: &uploadInitResp.Data.Bucket,
|
||||
PartNumber: aws.Int64(partNumber),
|
||||
Body: struct {
|
||||
*driver.RateLimitReader
|
||||
io.Seeker
|
||||
}{
|
||||
RateLimitReader: driver.NewLimitedUploadStream(ctx, reader),
|
||||
Seeker: reader,
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
// finish upload
|
||||
var uploadFinishResp UploadFinishResp
|
||||
_, err = d.request("", "/api/upload/v1/file/finish", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"token": uploadInitResp.Data.Token,
|
||||
"task_id": uploadInitResp.Data.TaskID,
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, &uploadFinishResp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// the file name returned by Quqi does not include the extension name
|
||||
nodeName, nodeExt := uploadFinishResp.Data.NodeName, utils.Ext(stream.GetName())
|
||||
if nodeExt != "" {
|
||||
nodeName = nodeName + "." + nodeExt
|
||||
}
|
||||
return &model.Object{
|
||||
ID: strconv.FormatInt(uploadFinishResp.Data.NodeID, 10),
|
||||
Name: nodeName,
|
||||
Size: stream.GetSize(),
|
||||
Modified: stream.ModTime(),
|
||||
Ctime: stream.CreateTime(),
|
||||
}, nil
|
||||
}
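// Note on the multipart loop above (a sketch of what the code does, no extra
// behavior implied): the cached temp file is streamed in fixed 2 MiB chunks,
// the final chunk may be shorter (io.ErrUnexpectedEOF is tolerated), and each
// chunk becomes one S3 UploadPart call with an incrementing PartNumber against
// the bucket, key and upload_id returned by /api/upload/v1/file/init; the
// upload is then sealed via /api/upload/v1/file/finish.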
|
||||
|
||||
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
|
||||
// return nil, errs.NotSupport
|
||||
//}
|
||||
|
||||
var _ driver.Driver = (*Quqi)(nil)
|
28
drivers/quqi/meta.go
Normal file
@ -0,0 +1,28 @@
|
||||
package quqi
|
||||
|
||||
import (
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
)
|
||||
|
||||
type Addition struct {
|
||||
driver.RootID
|
||||
Phone string `json:"phone"`
|
||||
Password string `json:"password"`
|
||||
Cookie string `json:"cookie" help:"Cookie can be used on multiple clients at the same time"`
|
||||
CDN bool `json:"cdn" help:"If you enable this option, the download speed can be increased, but there will be some performance loss"`
|
||||
}
|
||||
|
||||
var config = driver.Config{
|
||||
Name: "Quqi",
|
||||
OnlyLocal: true,
|
||||
LocalSort: true,
|
||||
//NoUpload: true,
|
||||
DefaultRoot: "0",
|
||||
}
|
||||
|
||||
func init() {
|
||||
op.RegisterDriver(func() driver.Driver {
|
||||
return &Quqi{}
|
||||
})
|
||||
}
|
197
drivers/quqi/types.go
Normal file
@ -0,0 +1,197 @@
|
||||
package quqi
|
||||
|
||||
type BaseReqQuery struct {
|
||||
ID string `json:"quqiid"`
|
||||
}
|
||||
|
||||
type BaseReq struct {
|
||||
GroupID string `json:"quqi_id"`
|
||||
}
|
||||
|
||||
type BaseRes struct {
|
||||
//Data interface{} `json:"data"`
|
||||
Code int `json:"err"`
|
||||
Message string `json:"msg"`
|
||||
}
|
||||
|
||||
type GroupRes struct {
|
||||
BaseRes
|
||||
Data []*Group `json:"data"`
|
||||
}
|
||||
|
||||
type ListRes struct {
|
||||
BaseRes
|
||||
Data *List `json:"data"`
|
||||
}
|
||||
|
||||
type GetDocRes struct {
|
||||
BaseRes
|
||||
Data struct {
|
||||
OriginPath string `json:"origin_path"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type GetDownloadResp struct {
|
||||
BaseRes
|
||||
Data struct {
|
||||
Url string `json:"url"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type MakeDirRes struct {
|
||||
BaseRes
|
||||
Data struct {
|
||||
IsRoot bool `json:"is_root"`
|
||||
NodeID int64 `json:"node_id"`
|
||||
ParentID int64 `json:"parent_id"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type MoveRes struct {
|
||||
BaseRes
|
||||
Data struct {
|
||||
NodeChildNum int64 `json:"node_child_num"`
|
||||
NodeID int64 `json:"node_id"`
|
||||
NodeName string `json:"node_name"`
|
||||
ParentID int64 `json:"parent_id"`
|
||||
GroupID int64 `json:"quqi_id"`
|
||||
TreeID int64 `json:"tree_id"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type RenameRes struct {
|
||||
BaseRes
|
||||
Data struct {
|
||||
NodeID int64 `json:"node_id"`
|
||||
GroupID int64 `json:"quqi_id"`
|
||||
Rename string `json:"rename"`
|
||||
TreeID int64 `json:"tree_id"`
|
||||
UpdateTime int64 `json:"updatetime"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type CopyRes struct {
|
||||
BaseRes
|
||||
}
|
||||
|
||||
type RemoveRes struct {
|
||||
BaseRes
|
||||
}
|
||||
|
||||
type Group struct {
|
||||
ID int `json:"quqi_id"`
|
||||
Type int `json:"type"`
|
||||
Name string `json:"name"`
|
||||
IsAdministrator int `json:"is_administrator"`
|
||||
Role int `json:"role"`
|
||||
Avatar string `json:"avatar_url"`
|
||||
IsStick int `json:"is_stick"`
|
||||
Nickname string `json:"nickname"`
|
||||
Status int `json:"status"`
|
||||
}
|
||||
|
||||
type List struct {
|
||||
ListDir
|
||||
Dir []*ListDir `json:"dir"`
|
||||
File []*ListFile `json:"file"`
|
||||
}
|
||||
|
||||
type ListItem struct {
|
||||
AddTime int64 `json:"add_time"`
|
||||
IsDir int `json:"is_dir"`
|
||||
IsExpand int `json:"is_expand"`
|
||||
IsFinalize int `json:"is_finalize"`
|
||||
LastEditorName string `json:"last_editor_name"`
|
||||
Name string `json:"name"`
|
||||
NodeID int64 `json:"nid"`
|
||||
ParentID int64 `json:"parent_id"`
|
||||
Permission int `json:"permission"`
|
||||
TreeID int64 `json:"tid"`
|
||||
UpdateCNT int64 `json:"update_cnt"`
|
||||
UpdateTime int64 `json:"update_time"`
|
||||
}
|
||||
|
||||
type ListDir struct {
|
||||
ListItem
|
||||
ChildDocNum int64 `json:"child_doc_num"`
|
||||
DirDetail string `json:"dir_detail"`
|
||||
DirType int `json:"dir_type"`
|
||||
}
|
||||
|
||||
type ListFile struct {
|
||||
ListItem
|
||||
BroadDocType string `json:"broad_doc_type"`
|
||||
CanDisplay bool `json:"can_display"`
|
||||
Detail string `json:"detail"`
|
||||
EXT string `json:"ext"`
|
||||
Filetype string `json:"filetype"`
|
||||
HasMobileThumbnail bool `json:"has_mobile_thumbnail"`
|
||||
HasThumbnail bool `json:"has_thumbnail"`
|
||||
Size int64 `json:"size"`
|
||||
Version int `json:"version"`
|
||||
}
|
||||
|
||||
type UploadInitResp struct {
|
||||
Data struct {
|
||||
Bucket string `json:"bucket"`
|
||||
Exist bool `json:"exist"`
|
||||
Key string `json:"key"`
|
||||
TaskID string `json:"task_id"`
|
||||
Token string `json:"token"`
|
||||
UploadID string `json:"upload_id"`
|
||||
URL string `json:"url"`
|
||||
NodeID int64 `json:"node_id"`
|
||||
NodeName string `json:"node_name"`
|
||||
ParentID int64 `json:"parent_id"`
|
||||
} `json:"data"`
|
||||
Err int `json:"err"`
|
||||
Msg string `json:"msg"`
|
||||
}
|
||||
|
||||
type TempKeyResp struct {
|
||||
Err int `json:"err"`
|
||||
Msg string `json:"msg"`
|
||||
Data struct {
|
||||
ExpiredTime int `json:"expiredTime"`
|
||||
Expiration string `json:"expiration"`
|
||||
Credentials struct {
|
||||
SessionToken string `json:"sessionToken"`
|
||||
TmpSecretID string `json:"tmpSecretId"`
|
||||
TmpSecretKey string `json:"tmpSecretKey"`
|
||||
} `json:"credentials"`
|
||||
RequestID string `json:"requestId"`
|
||||
StartTime int `json:"startTime"`
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
type UploadFinishResp struct {
|
||||
Data struct {
|
||||
NodeID int64 `json:"node_id"`
|
||||
NodeName string `json:"node_name"`
|
||||
ParentID int64 `json:"parent_id"`
|
||||
QuqiID int64 `json:"quqi_id"`
|
||||
TreeID int64 `json:"tree_id"`
|
||||
} `json:"data"`
|
||||
Err int `json:"err"`
|
||||
Msg string `json:"msg"`
|
||||
}
|
||||
|
||||
type UrlExchangeResp struct {
|
||||
BaseRes
|
||||
Data struct {
|
||||
Name string `json:"name"`
|
||||
Mime string `json:"mime"`
|
||||
Size int64 `json:"size"`
|
||||
DownloadType int `json:"download_type"`
|
||||
ChannelType int `json:"channel_type"`
|
||||
ChannelID int `json:"channel_id"`
|
||||
Url string `json:"url"`
|
||||
ExpiredTime int64 `json:"expired_time"`
|
||||
IsEncrypted bool `json:"is_encrypted"`
|
||||
EncryptedSize int64 `json:"encrypted_size"`
|
||||
EncryptedAlg string `json:"encrypted_alg"`
|
||||
EncryptedKey string `json:"encrypted_key"`
|
||||
PassportID int64 `json:"passport_id"`
|
||||
RequestExpiredTime int64 `json:"request_expired_time"`
|
||||
} `json:"data"`
|
||||
}
|
299
drivers/quqi/util.go
Normal file
@ -0,0 +1,299 @@
|
||||
package quqi
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/drivers/base"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/internal/stream"
|
||||
"github.com/OpenListTeam/OpenList/pkg/http_range"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
"github.com/go-resty/resty/v2"
|
||||
"github.com/minio/sio"
|
||||
)
|
||||
|
||||
// do others that not defined in Driver interface
|
||||
func (d *Quqi) request(host string, path string, method string, callback base.ReqCallback, resp interface{}) (*resty.Response, error) {
|
||||
var (
|
||||
reqUrl = url.URL{
|
||||
Scheme: "https",
|
||||
Host: "quqi.com",
|
||||
Path: path,
|
||||
}
|
||||
req = base.RestyClient.R()
|
||||
result BaseRes
|
||||
)
|
||||
|
||||
if host != "" {
|
||||
reqUrl.Host = host
|
||||
}
|
||||
req.SetHeaders(map[string]string{
|
||||
"Origin": "https://quqi.com",
|
||||
"Cookie": d.Cookie,
|
||||
})
|
||||
|
||||
if d.GroupID != "" {
|
||||
req.SetQueryParam("quqiid", d.GroupID)
|
||||
}
|
||||
|
||||
if callback != nil {
|
||||
callback(req)
|
||||
}
|
||||
|
||||
res, err := req.Execute(method, reqUrl.String())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// resty.Request.SetResult cannot parse result correctly sometimes
|
||||
err = utils.Json.Unmarshal(res.Body(), &result)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if result.Code != 0 {
|
||||
return nil, errors.New(result.Message)
|
||||
}
|
||||
if resp != nil {
|
||||
err = utils.Json.Unmarshal(res.Body(), resp)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return res, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) login() error {
|
||||
if d.Addition.Cookie != "" {
|
||||
d.Cookie = d.Addition.Cookie
|
||||
}
|
||||
if d.checkLogin() {
|
||||
return nil
|
||||
}
|
||||
if d.Cookie != "" {
|
||||
return errors.New("cookie is invalid")
|
||||
}
|
||||
if d.Phone == "" {
|
||||
return errors.New("phone number is empty")
|
||||
}
|
||||
if d.Password == "" {
|
||||
return errs.EmptyPassword
|
||||
}
|
||||
|
||||
resp, err := d.request("", "/auth/person/v2/login/password", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"phone": d.Phone,
|
||||
"password": base64.StdEncoding.EncodeToString([]byte(d.Password)),
|
||||
})
|
||||
}, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var cookies []string
|
||||
for _, cookie := range resp.RawResponse.Cookies() {
|
||||
cookies = append(cookies, fmt.Sprintf("%s=%s", cookie.Name, cookie.Value))
|
||||
}
|
||||
d.Cookie = strings.Join(cookies, ";")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *Quqi) checkLogin() bool {
|
||||
if _, err := d.request("", "/auth/account/baseInfo", resty.MethodGet, nil, nil); err != nil {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// decryptKey decodes the encryption key
|
||||
func decryptKey(encodeKey string) []byte {
|
||||
// Remove illegal characters
|
||||
u := strings.ReplaceAll(encodeKey, "[^A-Za-z0-9+\\/]", "")
|
||||
|
||||
// Input length and fixed output length (32 bytes)
|
||||
o := len(u)
|
||||
a := 32
|
||||
|
||||
// Create the output byte array
|
||||
c := make([]byte, a)
|
||||
|
||||
// Decoding loop
|
||||
s := uint32(0) // accumulator
|
||||
f := 0 // output array index
|
||||
for l := 0; l < o; l++ {
|
||||
r := l & 3 // 取模4,得到当前字符在四字节块中的位置
|
||||
i := u[l] // ASCII code of the current character
|
||||
|
||||
// Decode the current character
|
||||
switch {
|
||||
case i >= 65 && i < 91: // uppercase letters
|
||||
s |= uint32(i-65) << uint32(6*(3-r))
|
||||
case i >= 97 && i < 123: // lowercase letters
|
||||
s |= uint32(i-71) << uint32(6*(3-r))
|
||||
case i >= 48 && i < 58: // digits
|
||||
s |= uint32(i+4) << uint32(6*(3-r))
|
||||
case i == 43: // plus sign
|
||||
s |= uint32(62) << uint32(6*(3-r))
|
||||
case i == 47: // slash
|
||||
s |= uint32(63) << uint32(6*(3-r))
|
||||
}
|
||||
|
||||
// Once the accumulator holds four characters, or at the last character, write into the output array
|
||||
if r == 3 || l == o-1 {
|
||||
for e := 0; e < 3 && f < a; e, f = e+1, f+1 {
|
||||
c[f] = byte(s >> (16 >> e & 24) & 255)
|
||||
}
|
||||
s = 0
|
||||
}
|
||||
}
|
||||
|
||||
return c
|
||||
}
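// Usage sketch (the key string below is a made-up placeholder): decryptKey
// maps a base64-style alphabet (A-Z, a-z, 0-9, '+', '/') to 6-bit values and
// packs them into a fixed 32-byte key, which is then handed to
// sio.DecryptReader as the DARE key.
//
//	key := decryptKey("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
//	_ = len(key) // always 32, regardless of the input length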
|
||||
|
||||
func (d *Quqi) linkFromPreview(id string) (*model.Link, error) {
|
||||
var getDocResp GetDocRes
|
||||
if _, err := d.request("", "/api/doc/getDoc", resty.MethodPost, func(req *resty.Request) {
|
||||
req.SetFormData(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": id,
|
||||
"client_id": d.ClientID,
|
||||
})
|
||||
}, &getDocResp); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if getDocResp.Data.OriginPath == "" {
|
||||
return nil, errors.New("cannot get link from preview")
|
||||
}
|
||||
return &model.Link{
|
||||
URL: getDocResp.Data.OriginPath,
|
||||
Header: http.Header{
|
||||
"Origin": []string{"https://quqi.com"},
|
||||
"Cookie": []string{d.Cookie},
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) linkFromDownload(id string) (*model.Link, error) {
|
||||
var getDownloadResp GetDownloadResp
|
||||
if _, err := d.request("", "/api/doc/getDownload", resty.MethodGet, func(req *resty.Request) {
|
||||
req.SetQueryParams(map[string]string{
|
||||
"quqi_id": d.GroupID,
|
||||
"tree_id": "1",
|
||||
"node_id": id,
|
||||
"url_type": "undefined",
|
||||
"entry_type": "undefined",
|
||||
"client_id": d.ClientID,
|
||||
"no_redirect": "1",
|
||||
})
|
||||
}, &getDownloadResp); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if getDownloadResp.Data.Url == "" {
|
||||
return nil, errors.New("cannot get link from download")
|
||||
}
|
||||
|
||||
return &model.Link{
|
||||
URL: getDownloadResp.Data.Url,
|
||||
Header: http.Header{
|
||||
"Origin": []string{"https://quqi.com"},
|
||||
"Cookie": []string{d.Cookie},
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *Quqi) linkFromCDN(id string) (*model.Link, error) {
|
||||
downloadLink, err := d.linkFromDownload(id)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var urlExchangeResp UrlExchangeResp
|
||||
if _, err = d.request("api.quqi.com", "/preview/downloadInfo/url/exchange", resty.MethodGet, func(req *resty.Request) {
|
||||
req.SetQueryParam("url", downloadLink.URL)
|
||||
}, &urlExchangeResp); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if urlExchangeResp.Data.Url == "" {
|
||||
return nil, errors.New("cannot get link from cdn")
|
||||
}
|
||||
|
||||
// Assume an unencrypted case is possible
|
||||
if !urlExchangeResp.Data.IsEncrypted {
|
||||
return &model.Link{
|
||||
URL: urlExchangeResp.Data.Url,
|
||||
Header: http.Header{
|
||||
"Origin": []string{"https://quqi.com"},
|
||||
"Cookie": []string{d.Cookie},
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Based on the sio DARE spec (https://github.com/minio/sio/blob/master/DARE.md) and actual testing:
|
||||
// 1. encrypted_size - size = packet header + auth tag overhead = (16+16) * N, where N is the number of encrypted packets
|
||||
// 2. N = (size + 64*1024 - 1) / (64*1024), i.e. each packet carries a 64 KiB payload
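// Worked example (numbers assumed for illustration): for size = 200000 bytes,
// N = ceil(200000 / 65536) = 4 packets, so encrypted_size = 200000 + 4*(16+16)
// = 200128; an HTTP range starting at byte 70000 maps to the encrypted offset
// (70000/65536)*(65536+32) = 65568 and an in-packet (decrypted) offset of
// 70000 % 65536 = 4464, which the range reader below discards after decryption.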
|
||||
remoteClosers := utils.EmptyClosers()
|
||||
payloadSize := int64(1 << 16)
|
||||
expiration := time.Until(time.Unix(urlExchangeResp.Data.ExpiredTime, 0))
|
||||
resultRangeReader := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
|
||||
encryptedOffset := httpRange.Start / payloadSize * (payloadSize + 32)
|
||||
decryptedOffset := httpRange.Start % payloadSize
|
||||
encryptedLength := (httpRange.Length+httpRange.Start+payloadSize-1)/payloadSize*(payloadSize+32) - encryptedOffset
|
||||
if httpRange.Length < 0 {
|
||||
encryptedLength = httpRange.Length
|
||||
} else {
|
||||
if httpRange.Length+httpRange.Start >= urlExchangeResp.Data.Size || encryptedLength+encryptedOffset >= urlExchangeResp.Data.EncryptedSize {
|
||||
encryptedLength = -1
|
||||
}
|
||||
}
|
||||
//log.Debugf("size: %d\tencrypted_size: %d", urlExchangeResp.Data.Size, urlExchangeResp.Data.EncryptedSize)
|
||||
//log.Debugf("http range offset: %d, length: %d", httpRange.Start, httpRange.Length)
|
||||
//log.Debugf("encrypted offset: %d, length: %d, decrypted offset: %d", encryptedOffset, encryptedLength, decryptedOffset)
|
||||
|
||||
rrc, err := stream.GetRangeReadCloserFromLink(urlExchangeResp.Data.EncryptedSize, &model.Link{
|
||||
URL: urlExchangeResp.Data.Url,
|
||||
Header: http.Header{
|
||||
"Origin": []string{"https://quqi.com"},
|
||||
"Cookie": []string{d.Cookie},
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
rc, err := rrc.RangeRead(ctx, http_range.Range{Start: encryptedOffset, Length: encryptedLength})
|
||||
remoteClosers.AddClosers(rrc.GetClosers())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
decryptReader, err := sio.DecryptReader(rc, sio.Config{
|
||||
MinVersion: sio.Version10,
|
||||
MaxVersion: sio.Version20,
|
||||
CipherSuites: []byte{sio.CHACHA20_POLY1305, sio.AES_256_GCM},
|
||||
Key: decryptKey(urlExchangeResp.Data.EncryptedKey),
|
||||
SequenceNumber: uint32(httpRange.Start / payloadSize),
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
bufferReader := bufio.NewReader(decryptReader)
|
||||
bufferReader.Discard(int(decryptedOffset))
|
||||
|
||||
return io.NopCloser(bufferReader), nil
|
||||
}
|
||||
|
||||
return &model.Link{
|
||||
RangeReadCloser: &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: remoteClosers},
|
||||
Expiration: &expiration,
|
||||
}, nil
|
||||
}
|
@ -14,7 +14,6 @@ import (
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/internal/stream"
|
||||
"github.com/OpenListTeam/OpenList/pkg/cron"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
"github.com/OpenListTeam/OpenList/server/common"
|
||||
"github.com/aws/aws-sdk-go/aws/session"
|
||||
"github.com/aws/aws-sdk-go/service/s3"
|
||||
@ -82,21 +81,19 @@ func (d *S3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]mo
|
||||
|
||||
func (d *S3) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
|
||||
path := getKey(file.GetPath(), false)
|
||||
fileName := stdpath.Base(path)
|
||||
filename := stdpath.Base(path)
|
||||
disposition := fmt.Sprintf(`attachment; filename*=UTF-8''%s`, url.PathEscape(filename))
|
||||
if d.AddFilenameToDisposition {
|
||||
disposition = fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`, filename, url.PathEscape(filename))
|
||||
}
|
||||
input := &s3.GetObjectInput{
|
||||
Bucket: &d.Bucket,
|
||||
Key: &path,
|
||||
//ResponseContentDisposition: &disposition,
|
||||
}
|
||||
|
||||
if d.CustomHost == "" {
|
||||
disposition := fmt.Sprintf(`attachment; filename*=UTF-8''%s`, url.PathEscape(fileName))
|
||||
if d.AddFilenameToDisposition {
|
||||
disposition = utils.GenerateContentDisposition(fileName)
|
||||
}
|
||||
input.ResponseContentDisposition = &disposition
|
||||
}
|
||||
|
||||
req, _ := d.linkClient.GetObjectRequest(input)
|
||||
var link model.Link
|
||||
var err error
|
||||
@ -111,7 +108,7 @@ func (d *S3) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*mo
|
||||
link.URL = strings.Replace(link.URL, "/"+d.Bucket, "", 1)
|
||||
}
|
||||
} else {
|
||||
if common.ShouldProxy(d, fileName) {
|
||||
if common.ShouldProxy(d, filename) {
|
||||
err = req.Sign()
|
||||
link.URL = req.HTTPRequest.URL.String()
|
||||
link.Header = req.HTTPRequest.Header
|
||||
|
@ -58,7 +58,7 @@ func (x *Thunder) Init(ctx context.Context) (err error) {
|
||||
},
|
||||
DeviceID: func() string {
|
||||
if len(x.DeviceID) != 32 {
|
||||
return utils.GetMD5EncodeStr(x.Username + x.Password)
|
||||
return utils.GetMD5EncodeStr(x.DeviceID)
|
||||
}
|
||||
return x.DeviceID
|
||||
}(),
|
||||
|
@ -7,7 +7,6 @@ import (
|
||||
"io"
|
||||
"net/http"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/drivers/base"
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
@ -66,7 +65,6 @@ func (x *ThunderBrowser) Init(ctx context.Context) (err error) {
|
||||
UserAgent: BuildCustomUserAgent(utils.GetMD5EncodeStr(x.Username+x.Password), PackageName, SdkVersion, ClientVersion, PackageName),
|
||||
DownloadUserAgent: DownloadUserAgent,
|
||||
UseVideoUrl: x.UseVideoUrl,
|
||||
UseFluentPlay: x.UseFluentPlay,
|
||||
RemoveWay: x.Addition.RemoveWay,
|
||||
refreshCTokenCk: func(token string) {
|
||||
x.CaptchaToken = token
|
||||
@ -83,8 +81,6 @@ func (x *ThunderBrowser) Init(ctx context.Context) (err error) {
|
||||
x.GetStorage().SetStatus(fmt.Sprintf("%+v", err.Error()))
|
||||
op.MustSaveDriverStorage(x)
|
||||
}
|
||||
// Clear the trusted credit key
|
||||
x.Addition.CreditKey = ""
|
||||
}
|
||||
x.SetTokenResp(token)
|
||||
return err
|
||||
@ -97,20 +93,10 @@ func (x *ThunderBrowser) Init(ctx context.Context) (err error) {
|
||||
if ctoekn != "" {
|
||||
x.SetCaptchaToken(ctoekn)
|
||||
}
|
||||
|
||||
if x.Addition.CreditKey != "" {
|
||||
x.SetCreditKey(x.Addition.CreditKey)
|
||||
if x.DeviceID == "" {
|
||||
x.SetDeviceID(utils.GetMD5EncodeStr(x.Username + x.Password))
|
||||
}
|
||||
|
||||
if x.Addition.DeviceID != "" {
|
||||
x.Common.DeviceID = x.Addition.DeviceID
|
||||
} else {
|
||||
x.Addition.DeviceID = x.Common.DeviceID
|
||||
op.MustSaveDriverStorage(x)
|
||||
}
|
||||
|
||||
x.XunLeiBrowserCommon.UseVideoUrl = x.UseVideoUrl
|
||||
x.XunLeiBrowserCommon.UseFluentPlay = x.UseFluentPlay
|
||||
x.Addition.RootFolderID = x.RootFolderID
|
||||
// Avoid logging in repeatedly
|
||||
identity := x.GetIdentity()
|
||||
@ -121,8 +107,6 @@ func (x *ThunderBrowser) Init(ctx context.Context) (err error) {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Clear the trusted credit key
|
||||
x.Addition.CreditKey = ""
|
||||
x.SetTokenResp(token)
|
||||
}
|
||||
|
||||
@ -203,9 +187,8 @@ func (x *ThunderBrowserExpert) Init(ctx context.Context) (err error) {
|
||||
}
|
||||
return DownloadUserAgent
|
||||
}(),
|
||||
UseVideoUrl: x.UseVideoUrl,
|
||||
UseFluentPlay: x.UseFluentPlay,
|
||||
RemoveWay: x.ExpertAddition.RemoveWay,
|
||||
UseVideoUrl: x.UseVideoUrl,
|
||||
RemoveWay: x.ExpertAddition.RemoveWay,
|
||||
refreshCTokenCk: func(token string) {
|
||||
x.CaptchaToken = token
|
||||
op.MustSaveDriverStorage(x)
|
||||
@ -217,13 +200,7 @@ func (x *ThunderBrowserExpert) Init(ctx context.Context) (err error) {
|
||||
x.SetCaptchaToken(x.ExpertAddition.CaptchaToken)
|
||||
op.MustSaveDriverStorage(x)
|
||||
}
|
||||
if x.ExpertAddition.CreditKey != "" {
|
||||
x.SetCreditKey(x.ExpertAddition.CreditKey)
|
||||
}
|
||||
|
||||
if x.ExpertAddition.DeviceID != "" {
|
||||
x.Common.DeviceID = x.ExpertAddition.DeviceID
|
||||
} else {
|
||||
if x.Common.DeviceID != "" {
|
||||
x.ExpertAddition.DeviceID = x.Common.DeviceID
|
||||
op.MustSaveDriverStorage(x)
|
||||
}
|
||||
@ -236,7 +213,6 @@ func (x *ThunderBrowserExpert) Init(ctx context.Context) (err error) {
|
||||
op.MustSaveDriverStorage(x)
|
||||
}
|
||||
x.XunLeiBrowserCommon.UseVideoUrl = x.UseVideoUrl
|
||||
x.XunLeiBrowserCommon.UseFluentPlay = x.UseFluentPlay
|
||||
x.ExpertAddition.RootFolderID = x.RootFolderID
|
||||
// Signing method
|
||||
if x.SignType == "captcha_sign" {
|
||||
@ -277,8 +253,6 @@ func (x *ThunderBrowserExpert) Init(ctx context.Context) (err error) {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Clear the trusted credit key
|
||||
x.ExpertAddition.CreditKey = ""
|
||||
x.SetTokenResp(token)
|
||||
x.SetRefreshTokenFunc(func() error {
|
||||
token, err := x.XunLeiBrowserCommon.RefreshToken(x.TokenResp.RefreshToken)
|
||||
@ -287,8 +261,6 @@ func (x *ThunderBrowserExpert) Init(ctx context.Context) (err error) {
|
||||
if err != nil {
|
||||
x.GetStorage().SetStatus(fmt.Sprintf("%+v", err.Error()))
|
||||
}
|
||||
// Clear the trusted credit key
|
||||
x.ExpertAddition.CreditKey = ""
|
||||
}
|
||||
x.SetTokenResp(token)
|
||||
op.MustSaveDriverStorage(x)
|
||||
@ -314,7 +286,6 @@ func (x *ThunderBrowserExpert) Init(ctx context.Context) (err error) {
|
||||
x.XunLeiBrowserCommon.UserAgent = x.UserAgent
|
||||
x.XunLeiBrowserCommon.DownloadUserAgent = x.DownloadUserAgent
|
||||
x.XunLeiBrowserCommon.UseVideoUrl = x.UseVideoUrl
|
||||
x.XunLeiBrowserCommon.UseFluentPlay = x.UseFluentPlay
|
||||
x.ExpertAddition.RootFolderID = x.RootFolderID
|
||||
}
|
||||
|
||||
@ -334,8 +305,7 @@ func (x *ThunderBrowserExpert) SetTokenResp(token *TokenResp) {
|
||||
|
||||
type XunLeiBrowserCommon struct {
|
||||
*Common
|
||||
*TokenResp // 登录信息
|
||||
*CoreLoginResp // core登录信息
|
||||
*TokenResp // 登录信息
|
||||
|
||||
refreshTokenFunc func() error
|
||||
}
|
||||
@ -553,8 +523,7 @@ func (xc *XunLeiBrowserCommon) getFiles(ctx context.Context, dir model.Obj, path
|
||||
folderSpace = dirF.GetSpace()
|
||||
default:
|
||||
// Handle the root directory case
|
||||
//folderSpace = ThunderBrowserDriveSpace
|
||||
folderSpace = ThunderDriveSpace // Thunder Browser has been merged into Thunder drive, so the root space changed
|
||||
folderSpace = ThunderBrowserDriveSpace
|
||||
}
|
||||
params := map[string]string{
|
||||
"parent_id": dir.GetID(),
|
||||
@ -600,11 +569,6 @@ func (xc *XunLeiBrowserCommon) SetTokenResp(tr *TokenResp) {
|
||||
xc.TokenResp = tr
|
||||
}
|
||||
|
||||
// SetCoreTokenResp sets the core login token
|
||||
func (xc *XunLeiBrowserCommon) SetCoreTokenResp(tr *CoreLoginResp) {
|
||||
xc.CoreLoginResp = tr
|
||||
}
|
||||
|
||||
// SetSpaceTokenResp sets the space token
|
||||
func (xc *XunLeiBrowserCommon) SetSpaceTokenResp(spaceToken string) {
|
||||
xc.TokenResp.Token = spaceToken
|
||||
@ -650,24 +614,14 @@ func (xc *XunLeiBrowserCommon) Request(url string, method string, callback base.
|
||||
}
|
||||
if errResp.ErrorMsg == "captcha_invalid" {
|
||||
// the captcha token has expired
|
||||
if err = xc.RefreshCaptchaTokenAtLogin(GetAction(method, url), xc.TokenResp.UserID); err != nil {
|
||||
if err = xc.RefreshCaptchaTokenAtLogin(GetAction(method, url), xc.UserID); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return nil, errors.New(errResp.ErrorMsg)
|
||||
return nil, err
|
||||
default:
|
||||
// handle captcha errors not caught above
|
||||
if errResp.ErrorMsg == "captcha_invalid" {
|
||||
// the captcha token has expired
|
||||
if err = xc.RefreshCaptchaTokenAtLogin(GetAction(method, url), xc.TokenResp.UserID); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return xc.Request(url, method, callback, resp)
|
||||
}
|
||||
|
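The hunk above folds the duplicated captcha_invalid branches into one place: when the API reports captcha_invalid, the captcha token is refreshed via RefreshCaptchaTokenAtLogin and the request is re-issued by calling Request again. A generic sketch of that refresh-then-retry pattern with an explicit bound (the function names below are illustrative, not part of the diff; the driver itself retries by recursing into Request):

    // Illustrative only: same idea as the hunk above, with a hard cap on attempts.
    func doWithCaptchaRetry(do func() error, refreshCaptcha func() error, maxAttempts int) error {
        var err error
        for i := 0; i < maxAttempts; i++ {
            if err = do(); err == nil {
                return nil
            }
            if err.Error() != "captcha_invalid" {
                return err
            }
            if rerr := refreshCaptcha(); rerr != nil {
                return rerr
            }
        }
        return err
    }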
||||
@ -713,25 +667,20 @@ func (xc *XunLeiBrowserCommon) GetSafeAccessToken(safePassword string) (string,
|
||||
|
||||
// Login performs the account login
|
||||
func (xc *XunLeiBrowserCommon) Login(username, password string) (*TokenResp, error) {
|
||||
// v3 login obtains the sessionID
|
||||
sessionID, err := xc.CoreLogin(username, password)
|
||||
url := XLUSER_API_URL + "/auth/signin"
|
||||
err := xc.RefreshCaptchaTokenInLogin(GetAction(http.MethodPost, url), username)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// v1 login obtains the token
|
||||
url := XLUSER_API_URL + "/auth/signin/token"
|
||||
if err = xc.RefreshCaptchaTokenInLogin(GetAction(http.MethodPost, url), username); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var resp TokenResp
|
||||
_, err = xc.Common.Request(url, http.MethodPost, func(req *resty.Request) {
|
||||
req.SetPathParam("client_id", xc.ClientID)
|
||||
req.SetBody(&SignInRequest{
|
||||
CaptchaToken: xc.GetCaptchaToken(),
|
||||
ClientID: xc.ClientID,
|
||||
ClientSecret: xc.ClientSecret,
|
||||
Provider: SignProvider,
|
||||
SigninToken: sessionID,
|
||||
Username: username,
|
||||
Password: password,
|
||||
})
|
||||
}, &resp)
|
||||
if err != nil {
|
||||
@ -747,157 +696,3 @@ func (xc *XunLeiBrowserCommon) IsLogin() bool {
|
||||
_, err := xc.Request(XLUSER_API_URL+"/user/me", http.MethodGet, nil, nil)
|
||||
return err == nil
|
||||
}
|
||||
|
||||
// OfflineDownload creates an offline download task for a file
|
||||
func (xc *XunLeiBrowserCommon) OfflineDownload(ctx context.Context, fileUrl string, parentDir model.Obj, fileName string) (*OfflineTask, error) {
|
||||
var resp OfflineDownloadResp
|
||||
|
||||
body := base.Json{}
|
||||
|
||||
from := "cloudadd/"
|
||||
|
||||
if xc.UseFluentPlay {
|
||||
body = base.Json{
|
||||
"kind": FILE,
|
||||
"name": fileName,
|
||||
// the Fluent Play API forces the file into the "SPACE_FAVORITE" folder
|
||||
//"parent_id": parentDir.GetID(),
|
||||
"upload_type": UPLOAD_TYPE_URL,
|
||||
"url": base.Json{
|
||||
"url": fileUrl,
|
||||
//"files": []string{"0"}, // 0 表示只下载第一个文件
|
||||
},
|
||||
"params": base.Json{
|
||||
"cookie": "null",
|
||||
"web_title": "",
|
||||
"lastSession": "",
|
||||
"flags": "9",
|
||||
"scene": "smart_spot_panel",
|
||||
"referer": "https://x.xunlei.com",
|
||||
"dedup_index": "0",
|
||||
},
|
||||
"need_dedup": true,
|
||||
"folder_type": "FAVORITE",
|
||||
"space": ThunderBrowserDriveFluentPlayFolderType,
|
||||
}
|
||||
|
||||
from = "FLUENT_PLAY/sniff_ball/fluent_play/SPACE_FAVORITE"
|
||||
} else {
|
||||
body = base.Json{
|
||||
"kind": FILE,
|
||||
"name": fileName,
|
||||
"parent_id": parentDir.GetID(),
|
||||
"upload_type": UPLOAD_TYPE_URL,
|
||||
"url": base.Json{
|
||||
"url": fileUrl,
|
||||
},
|
||||
}
|
||||
|
||||
if files, ok := parentDir.(*Files); ok {
|
||||
body["space"] = files.GetSpace()
|
||||
} else {
|
||||
// if parentDir is not a Files value, default to ThunderDriveSpace
|
||||
body["space"] = ThunderDriveSpace
|
||||
}
|
||||
}
|
||||
|
||||
_, err := xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
|
||||
r.SetContext(ctx)
|
||||
r.SetQueryParam("_from", from)
|
||||
r.SetBody(&body)
|
||||
}, &resp)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &resp.Task, err
|
||||
}
|
||||
|
||||
// OfflineList fetches the offline download task list
|
||||
func (xc *XunLeiBrowserCommon) OfflineList(ctx context.Context, nextPageToken string) ([]OfflineTask, error) {
|
||||
res := make([]OfflineTask, 0)
|
||||
|
||||
var resp OfflineListResp
|
||||
_, err := xc.Request(TASK_API_URL, http.MethodGet, func(req *resty.Request) {
|
||||
req.SetContext(ctx).
|
||||
SetQueryParams(map[string]string{
|
||||
"type": "offline",
|
||||
"limit": "10000",
|
||||
"page_token": nextPageToken,
|
||||
"space": "default/*",
|
||||
})
|
||||
}, &resp)
|
||||
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get offline list: %w", err)
|
||||
}
|
||||
res = append(res, resp.Tasks...)
|
||||
|
||||
return res, nil
|
||||
}
|
||||
|
||||
func (xc *XunLeiBrowserCommon) DeleteOfflineTasks(ctx context.Context, taskIDs []string) error {
|
||||
queryParams := map[string]string{
|
||||
"task_ids": strings.Join(taskIDs, ","),
|
||||
"_t": fmt.Sprintf("%d", time.Now().UnixMilli()),
|
||||
}
|
||||
if xc.UseFluentPlay {
|
||||
queryParams["space"] = ThunderBrowserDriveFluentPlayFolderType
|
||||
}
|
||||
|
||||
_, err := xc.Request(TASK_API_URL, http.MethodDelete, func(req *resty.Request) {
|
||||
req.SetContext(ctx).
|
||||
SetQueryParams(queryParams)
|
||||
}, nil)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to delete tasks %v: %w", taskIDs, err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
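Taken together, OfflineDownload, OfflineList and DeleteOfflineTasks above form a small task lifecycle: submit a URL, poll the task list, and remove finished entries. A minimal caller-side sketch, using only the methods and fields shown in this diff (the helper name and the empty fileName and page-token arguments are illustrative):

    func runOfflineDownload(ctx context.Context, xc *XunLeiBrowserCommon, fileURL string, parentDir model.Obj) error {
        // submit the URL as an offline download task
        added, err := xc.OfflineDownload(ctx, fileURL, parentDir, "")
        if err != nil {
            return err
        }
        // poll the first page of tasks and clean up the submitted one once it is done
        tasks, err := xc.OfflineList(ctx, "")
        if err != nil {
            return err
        }
        for _, t := range tasks {
            if t.ID == added.ID && t.Phase == "PHASE_TYPE_COMPLETE" {
                return xc.DeleteOfflineTasks(ctx, []string{t.ID})
            }
        }
        return nil
    }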
||||
func (xc *XunLeiBrowserCommon) CoreLogin(username string, password string) (sessionID string, err error) {
|
||||
url := XLUSER_API_BASE_URL + "/xluser.core.login/v3/login"
|
||||
var resp CoreLoginResp
|
||||
res, err := xc.Common.Request(url, http.MethodPost, func(req *resty.Request) {
|
||||
req.SetHeader("User-Agent", "android-ok-http-client/xl-acc-sdk/version-5.0.9.509300")
|
||||
req.SetBody(&CoreLoginRequest{
|
||||
ProtocolVersion: "301",
|
||||
SequenceNo: "1000010",
|
||||
PlatformVersion: "10",
|
||||
IsCompressed: "0",
|
||||
Appid: APPID,
|
||||
ClientVersion: xc.Common.ClientVersion,
|
||||
PeerID: "00000000000000000000000000000000",
|
||||
AppName: "ANDROID-com.xunlei.browser",
|
||||
SdkVersion: "509300",
|
||||
Devicesign: generateDeviceSign(xc.DeviceID, xc.PackageName),
|
||||
NetWorkType: "WIFI",
|
||||
ProviderName: "NONE",
|
||||
DeviceModel: "M2004J7AC",
|
||||
DeviceName: "Xiaomi_M2004j7ac",
|
||||
OSVersion: "12",
|
||||
Creditkey: xc.GetCreditKey(),
|
||||
Hl: "zh-CN",
|
||||
UserName: username,
|
||||
PassWord: password,
|
||||
VerifyKey: "",
|
||||
VerifyCode: "",
|
||||
IsMd5Pwd: "0",
|
||||
})
|
||||
}, nil)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if err = utils.Json.Unmarshal(res, &resp); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
xc.SetCoreTokenResp(&resp)
|
||||
|
||||
sessionID = resp.SessionID
|
||||
|
||||
return sessionID, nil
|
||||
}
|
||||
|
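The login path above is two staged calls: CoreLogin posts the credentials to the v3 xluser.core.login endpoint and returns a sessionID (storing the CoreLoginResp and credit key), and Login then sends that sessionID as SigninToken to /auth/signin/token to obtain the TokenResp used by the drive API. A caller-side sketch of how these helpers compose (the wrapper name is illustrative, not part of the diff):

    func ensureLoggedIn(xc *XunLeiBrowserCommon, username, password string) error {
        if xc.IsLogin() {
            return nil
        }
        // Login internally runs CoreLogin (v3) and then the v1 token exchange
        token, err := xc.Login(username, password)
        if err != nil {
            return err
        }
        xc.SetTokenResp(token)
        return nil
    }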
@ -25,21 +25,19 @@ type ExpertAddition struct {
|
||||
SafePassword string `json:"safe_password" required:"true" help:"super safe password"` // super safe (vault) password
|
||||
|
||||
// Signing method 1
|
||||
Algorithms string `json:"algorithms" required:"true" help:"sign type is algorithms,this is required" default:"Cw4kArmKJ/aOiFTxnQ0ES+D4mbbrIUsFn,HIGg0Qfbpm5ThZ/RJfjoao4YwgT9/M,u/PUD,OlAm8tPkOF1qO5bXxRN2iFttuDldrg,FFIiM6sFhWhU7tIMVUKOF7CUv/KzgwwV8FE,yN,4m5mglrIHksI6wYdq,LXEfS7,T+p+C+F2yjgsUtiXWU/cMNYEtJI4pq7GofW,14BrGIEMXkbvFvZ49nDUfVCRcHYFOJ1BP1Y,kWIH3Row,RAmRTKNCjucPWC"`
|
||||
Algorithms string `json:"algorithms" required:"true" help:"sign type is algorithms,this is required" default:"uWRwO7gPfdPB/0NfPtfQO+71,F93x+qPluYy6jdgNpq+lwdH1ap6WOM+nfz8/V,0HbpxvpXFsBK5CoTKam,dQhzbhzFRcawnsZqRETT9AuPAJ+wTQso82mRv,SAH98AmLZLRa6DB2u68sGhyiDh15guJpXhBzI,unqfo7Z64Rie9RNHMOB,7yxUdFADp3DOBvXdz0DPuKNVT35wqa5z0DEyEvf,RBG,ThTWPG5eC0UBqlbQ+04nZAptqGCdpv9o55A"`
|
||||
// Signing method 2
|
||||
CaptchaSign string `json:"captcha_sign" required:"true" help:"sign type is captcha_sign,this is required"`
|
||||
Timestamp string `json:"timestamp" required:"true" help:"sign type is captcha_sign,this is required"`
|
||||
|
||||
// Captcha token
|
||||
CaptchaToken string `json:"captcha_token"`
|
||||
// Trusted credit key
|
||||
CreditKey string `json:"credit_key" help:"credit key,used for login"`
|
||||
|
||||
// Required and affects login; depends on the signing method
|
||||
DeviceID string `json:"device_id" required:"false" default:""`
|
||||
ClientID string `json:"client_id" required:"true" default:"ZUBzD9J_XPXfn7f7"`
|
||||
ClientSecret string `json:"client_secret" required:"true" default:"yESVmHecEe6F0aou69vl-g"`
|
||||
ClientVersion string `json:"client_version" required:"true" default:"1.40.0.7208"`
|
||||
ClientVersion string `json:"client_version" required:"true" default:"1.10.0.2633"`
|
||||
PackageName string `json:"package_name" required:"true" default:"com.xunlei.browser"`
|
||||
|
||||
// Does not affect login, but affects download speed
|
||||
@ -48,8 +46,6 @@ type ExpertAddition struct {
|
||||
|
||||
// Prefer the video URL over the download URL
|
||||
UseVideoUrl bool `json:"use_video_url"`
|
||||
// Whether offline downloads use the Fluent Play API
|
||||
UseFluentPlay bool `json:"use_fluent_play" default:"false" help:"use fluent play for offline download,only magnet links supported"`
|
||||
// Removal method
|
||||
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
|
||||
}
|
||||
@ -83,12 +79,8 @@ type Addition struct {
|
||||
Password string `json:"password" required:"true"`
|
||||
SafePassword string `json:"safe_password" required:"true"` // super safe (vault) password
|
||||
CaptchaToken string `json:"captcha_token"`
|
||||
CreditKey string `json:"credit_key" help:"credit key,used for login"` // trusted credit key
|
||||
DeviceID string `json:"device_id" default:""` // login device ID
|
||||
UseVideoUrl bool `json:"use_video_url" default:"false"`
|
||||
// Whether offline downloads use the Fluent Play API
|
||||
UseFluentPlay bool `json:"use_fluent_play" default:"false" help:"use fluent play for offline download,only magnet links supported"`
|
||||
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
|
||||
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
|
||||
}
|
||||
|
||||
// GetIdentity returns the login fingerprint, used to decide whether a re-login is needed
|
||||
|
@ -18,10 +18,6 @@ type ErrResp struct {
|
||||
}
|
||||
|
||||
func (e *ErrResp) IsError() bool {
|
||||
if e.ErrorMsg == "success" {
|
||||
return false
|
||||
}
|
||||
|
||||
return e.ErrorCode != 0 || e.ErrorMsg != "" || e.ErrorDescription != ""
|
||||
}
|
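For reference, with the "success" guard in place a response body whose error field is literally "success" is treated as a non-error even though ErrorMsg is non-empty; without the guard, any non-zero error_code or non-empty error / error_description counts as an error.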
||||
|
||||
@ -72,78 +68,13 @@ func (t *TokenResp) GetSpaceToken() string {
|
||||
}
|
||||
|
||||
type SignInRequest struct {
|
||||
CaptchaToken string `json:"captcha_token"`
|
||||
|
||||
ClientID string `json:"client_id"`
|
||||
ClientSecret string `json:"client_secret"`
|
||||
|
||||
Provider string `json:"provider"`
|
||||
SigninToken string `json:"signin_token"`
|
||||
}
|
||||
type CoreLoginRequest struct {
|
||||
ProtocolVersion string `json:"protocolVersion"`
|
||||
SequenceNo string `json:"sequenceNo"`
|
||||
PlatformVersion string `json:"platformVersion"`
|
||||
IsCompressed string `json:"isCompressed"`
|
||||
Appid string `json:"appid"`
|
||||
ClientVersion string `json:"clientVersion"`
|
||||
PeerID string `json:"peerID"`
|
||||
AppName string `json:"appName"`
|
||||
SdkVersion string `json:"sdkVersion"`
|
||||
Devicesign string `json:"devicesign"`
|
||||
NetWorkType string `json:"netWorkType"`
|
||||
ProviderName string `json:"providerName"`
|
||||
DeviceModel string `json:"deviceModel"`
|
||||
DeviceName string `json:"deviceName"`
|
||||
OSVersion string `json:"OSVersion"`
|
||||
Creditkey string `json:"creditkey"`
|
||||
Hl string `json:"hl"`
|
||||
UserName string `json:"userName"`
|
||||
PassWord string `json:"passWord"`
|
||||
VerifyKey string `json:"verifyKey"`
|
||||
VerifyCode string `json:"verifyCode"`
|
||||
IsMd5Pwd string `json:"isMd5Pwd"`
|
||||
}
|
||||
|
||||
type CoreLoginResp struct {
|
||||
Account string `json:"account"`
|
||||
Creditkey string `json:"creditkey"`
|
||||
/* Error string `json:"error"`
|
||||
ErrorCode string `json:"errorCode"`
|
||||
ErrorDescription string `json:"error_description"`*/
|
||||
ExpiresIn int `json:"expires_in"`
|
||||
IsCompressed string `json:"isCompressed"`
|
||||
IsSetPassWord string `json:"isSetPassWord"`
|
||||
KeepAliveMinPeriod string `json:"keepAliveMinPeriod"`
|
||||
KeepAlivePeriod string `json:"keepAlivePeriod"`
|
||||
LoginKey string `json:"loginKey"`
|
||||
NickName string `json:"nickName"`
|
||||
PlatformVersion string `json:"platformVersion"`
|
||||
ProtocolVersion string `json:"protocolVersion"`
|
||||
SecureKey string `json:"secureKey"`
|
||||
SequenceNo string `json:"sequenceNo"`
|
||||
SessionID string `json:"sessionID"`
|
||||
Timestamp string `json:"timestamp"`
|
||||
UserID string `json:"userID"`
|
||||
UserName string `json:"userName"`
|
||||
UserNewNo string `json:"userNewNo"`
|
||||
Version string `json:"version"`
|
||||
/* VipList []struct {
|
||||
ExpireDate string `json:"expireDate"`
|
||||
IsAutoDeduct string `json:"isAutoDeduct"`
|
||||
IsVip string `json:"isVip"`
|
||||
IsYear string `json:"isYear"`
|
||||
PayID string `json:"payId"`
|
||||
PayName string `json:"payName"`
|
||||
Register string `json:"register"`
|
||||
Vasid string `json:"vasid"`
|
||||
VasType string `json:"vasType"`
|
||||
VipDayGrow string `json:"vipDayGrow"`
|
||||
VipGrow string `json:"vipGrow"`
|
||||
VipLevel string `json:"vipLevel"`
|
||||
Icon struct {
|
||||
General string `json:"general"`
|
||||
Small string `json:"small"`
|
||||
} `json:"icon"`
|
||||
} `json:"vipList"`*/
|
||||
Username string `json:"username"`
|
||||
Password string `json:"password"`
|
||||
}
|
||||
|
||||
/*
|
||||
@ -303,76 +234,3 @@ type UploadTaskResponse struct {
|
||||
|
||||
File Files `json:"file"`
|
||||
}
|
||||
|
||||
// OfflineDownloadResp is the response of an offline download request
|
||||
type OfflineDownloadResp struct {
|
||||
File *string `json:"file"`
|
||||
Task OfflineTask `json:"task"`
|
||||
UploadType string `json:"upload_type"`
|
||||
URL struct {
|
||||
Kind string `json:"kind"`
|
||||
} `json:"url"`
|
||||
}
|
||||
|
||||
// OfflineListResp is the response of the offline download list request
|
||||
type OfflineListResp struct {
|
||||
ExpiresIn int64 `json:"expires_in"`
|
||||
NextPageToken string `json:"next_page_token"`
|
||||
Tasks []OfflineTask `json:"tasks"`
|
||||
}
|
||||
|
||||
// OfflineTask describes an offline download task
|
||||
type OfflineTask struct {
|
||||
Callback string `json:"callback"`
|
||||
CreatedTime string `json:"created_time"`
|
||||
FileID string `json:"file_id"`
|
||||
FileName string `json:"file_name"`
|
||||
FileSize string `json:"file_size"`
|
||||
IconLink string `json:"icon_link"`
|
||||
ID string `json:"id"`
|
||||
Kind string `json:"kind"`
|
||||
Message string `json:"message"`
|
||||
Name string `json:"name"`
|
||||
Params Params `json:"params"`
|
||||
Phase string `json:"phase"` // PHASE_TYPE_RUNNING, PHASE_TYPE_ERROR, PHASE_TYPE_COMPLETE, PHASE_TYPE_PENDING
|
||||
Progress int64 `json:"progress"`
|
||||
Space string `json:"space"`
|
||||
StatusSize int64 `json:"status_size"`
|
||||
Statuses []string `json:"statuses"`
|
||||
ThirdTaskID string `json:"third_task_id"`
|
||||
Type string `json:"type"`
|
||||
UpdatedTime string `json:"updated_time"`
|
||||
UserID string `json:"user_id"`
|
||||
}
|
||||
|
||||
type Params struct {
|
||||
FolderType string `json:"folder_type"`
|
||||
PredictSpeed string `json:"predict_speed"`
|
||||
PredictType string `json:"predict_type"`
|
||||
}
|
||||
|
||||
// LoginReviewResp is the login verification response
|
||||
type LoginReviewResp struct {
|
||||
Creditkey string `json:"creditkey"`
|
||||
Error string `json:"error"`
|
||||
ErrorCode string `json:"errorCode"`
|
||||
ErrorDesc string `json:"errorDesc"`
|
||||
ErrorDescURL string `json:"errorDescUrl"`
|
||||
ErrorIsRetry int `json:"errorIsRetry"`
|
||||
ErrorDescription string `json:"error_description"`
|
||||
IsCompressed string `json:"isCompressed"`
|
||||
PlatformVersion string `json:"platformVersion"`
|
||||
ProtocolVersion string `json:"protocolVersion"`
|
||||
Reviewurl string `json:"reviewurl"`
|
||||
SequenceNo string `json:"sequenceNo"`
|
||||
UserID string `json:"userID"`
|
||||
VerifyType string `json:"verifyType"`
|
||||
}
|
||||
|
||||
// ReviewData holds the verification data
|
||||
type ReviewData struct {
|
||||
Creditkey string `json:"creditkey"`
|
||||
Reviewurl string `json:"reviewurl"`
|
||||
Deviceid string `json:"deviceid"`
|
||||
Devicesign string `json:"devicesign"`
|
||||
}
|
||||
|
@ -4,7 +4,6 @@ import (
|
||||
"crypto/md5"
|
||||
"crypto/sha1"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
@ -18,35 +17,30 @@ import (
|
||||
)
|
||||
|
||||
const (
|
||||
API_URL = "https://x-api-pan.xunlei.com/drive/v1"
|
||||
FILE_API_URL = API_URL + "/files"
|
||||
TASK_API_URL = API_URL + "/tasks"
|
||||
XLUSER_API_BASE_URL = "https://xluser-ssl.xunlei.com"
|
||||
XLUSER_API_URL = XLUSER_API_BASE_URL + "/v1"
|
||||
API_URL = "https://x-api-pan.xunlei.com/drive/v1"
|
||||
FILE_API_URL = API_URL + "/files"
|
||||
XLUSER_API_URL = "https://xluser-ssl.xunlei.com/v1"
|
||||
)
|
||||
|
||||
var Algorithms = []string{
|
||||
"Cw4kArmKJ/aOiFTxnQ0ES+D4mbbrIUsFn",
|
||||
"HIGg0Qfbpm5ThZ/RJfjoao4YwgT9/M",
|
||||
"u/PUD",
|
||||
"OlAm8tPkOF1qO5bXxRN2iFttuDldrg",
|
||||
"FFIiM6sFhWhU7tIMVUKOF7CUv/KzgwwV8FE",
|
||||
"yN",
|
||||
"4m5mglrIHksI6wYdq",
|
||||
"LXEfS7",
|
||||
"T+p+C+F2yjgsUtiXWU/cMNYEtJI4pq7GofW",
|
||||
"14BrGIEMXkbvFvZ49nDUfVCRcHYFOJ1BP1Y",
|
||||
"kWIH3Row",
|
||||
"RAmRTKNCjucPWC",
|
||||
"uWRwO7gPfdPB/0NfPtfQO+71",
|
||||
"F93x+qPluYy6jdgNpq+lwdH1ap6WOM+nfz8/V",
|
||||
"0HbpxvpXFsBK5CoTKam",
|
||||
"dQhzbhzFRcawnsZqRETT9AuPAJ+wTQso82mRv",
|
||||
"SAH98AmLZLRa6DB2u68sGhyiDh15guJpXhBzI",
|
||||
"unqfo7Z64Rie9RNHMOB",
|
||||
"7yxUdFADp3DOBvXdz0DPuKNVT35wqa5z0DEyEvf",
|
||||
"RBG",
|
||||
"ThTWPG5eC0UBqlbQ+04nZAptqGCdpv9o55A",
|
||||
}
|
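The Algorithms list above feeds the algorithms-based signing mode (sign type "algorithms" in ExpertAddition). The function that consumes it is not part of this hunk; drivers in this family conventionally derive captcha_sign by chaining MD5 over clientID, clientVersion, packageName, deviceID and a timestamp, folding in each algorithm string in order. A sketch under that assumption (the function name and exact field order are assumptions, not taken from the diff):

    func captchaSignSketch(clientID, clientVersion, packageName, deviceID, timestamp string, algorithms []string) string {
        str := fmt.Sprint(clientID, clientVersion, packageName, deviceID, timestamp)
        for _, alg := range algorithms {
            // each round hashes the previous result concatenated with the next algorithm string
            sum := md5.Sum([]byte(str + alg))
            str = hex.EncodeToString(sum[:])
        }
        return "1." + str
    }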
||||
|
||||
const (
|
||||
ClientID = "ZUBzD9J_XPXfn7f7"
|
||||
ClientSecret = "yESVmHecEe6F0aou69vl-g"
|
||||
ClientVersion = "1.40.0.7208"
|
||||
ClientVersion = "1.10.0.2633"
|
||||
PackageName = "com.xunlei.browser"
|
||||
DownloadUserAgent = "AndroidDownloadManager/13 (Linux; U; Android 13; M2004J7AC Build/SP1A.210812.016)"
|
||||
SdkVersion = "509300"
|
||||
SdkVersion = "233100"
|
||||
)
|
||||
|
||||
const (
|
||||
@ -63,19 +57,12 @@ const (
|
||||
)
|
||||
|
||||
const (
|
||||
ThunderDriveSpace = ""
|
||||
ThunderDriveSafeSpace = "SPACE_SAFE"
|
||||
ThunderBrowserDriveSpace = "SPACE_BROWSER"
|
||||
ThunderBrowserDriveSafeSpace = "SPACE_BROWSER_SAFE"
|
||||
ThunderDriveFolderType = "DEFAULT_ROOT"
|
||||
ThunderBrowserDriveSafeFolderType = "BROWSER_SAFE"
|
||||
ThunderBrowserDriveFluentPlayFolderType = "SPACE_FAVORITE" // Fluent Play folder identifier
|
||||
)
|
||||
|
||||
const (
|
||||
SignProvider = "access_end_point_token"
|
||||
APPID = "22062"
|
||||
APPKey = "a5d7416858147a4ab99573872ffccef8"
|
||||
ThunderDriveSpace = ""
|
||||
ThunderDriveSafeSpace = "SPACE_SAFE"
|
||||
ThunderBrowserDriveSpace = "SPACE_BROWSER"
|
||||
ThunderBrowserDriveSafeSpace = "SPACE_BROWSER_SAFE"
|
||||
ThunderDriveFolderType = "DEFAULT_ROOT"
|
||||
ThunderBrowserDriveSafeFolderType = "BROWSER_SAFE"
|
||||
)
|
||||
|
||||
func GetAction(method string, url string) string {
|
||||
@ -88,8 +75,6 @@ type Common struct {
|
||||
|
||||
captchaToken string
|
||||
|
||||
creditKey string
|
||||
|
||||
// Signing related; choose one of the two
|
||||
Algorithms []string
|
||||
Timestamp, CaptchaSign string
|
||||
@ -103,7 +88,6 @@ type Common struct {
|
||||
UserAgent string
|
||||
DownloadUserAgent string
|
||||
UseVideoUrl bool
|
||||
UseFluentPlay bool
|
||||
RemoveWay string
|
||||
|
||||
// Callback invoked when the captcha token is refreshed successfully
|
||||
@ -121,13 +105,6 @@ func (c *Common) GetCaptchaToken() string {
|
||||
return c.captchaToken
|
||||
}
|
||||
|
||||
func (c *Common) SetCreditKey(creditKey string) {
|
||||
c.creditKey = creditKey
|
||||
}
|
||||
func (c *Common) GetCreditKey() string {
|
||||
return c.creditKey
|
||||
}
|
||||
|
||||
// RefreshCaptchaTokenAtLogin refreshes the captcha token (after login)
|
||||
func (c *Common) RefreshCaptchaTokenAtLogin(action, userID string) error {
|
||||
metas := map[string]string{
|
||||
@ -229,53 +206,12 @@ func (c *Common) Request(url, method string, callback base.ReqCallback, resp int
|
||||
var erron ErrResp
|
||||
utils.Json.Unmarshal(res.Body(), &erron)
|
||||
if erron.IsError() {
|
||||
// review_panel means SMS verification is required
|
||||
if erron.ErrorMsg == "review_panel" {
|
||||
return nil, c.getReviewData(res)
|
||||
}
|
||||
|
||||
return nil, &erron
|
||||
}
|
||||
|
||||
return res.Body(), nil
|
||||
}
|
||||
|
||||
// fetch the data required for verification
|
||||
func (c *Common) getReviewData(res *resty.Response) error {
|
||||
var reviewResp LoginReviewResp
|
||||
var reviewData ReviewData
|
||||
|
||||
if err := utils.Json.Unmarshal(res.Body(), &reviewResp); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
deviceSign := generateDeviceSign(c.DeviceID, c.PackageName)
|
||||
|
||||
reviewData = ReviewData{
|
||||
Creditkey: reviewResp.Creditkey,
|
||||
Reviewurl: reviewResp.Reviewurl + "&deviceid=" + deviceSign,
|
||||
Deviceid: deviceSign,
|
||||
Devicesign: deviceSign,
|
||||
}
|
||||
|
||||
// convert reviewData to a JSON string
|
||||
reviewDataJSON, _ := json.MarshalIndent(reviewData, "", " ")
|
||||
//reviewDataJSON, _ := json.Marshal(reviewData)
|
||||
|
||||
return fmt.Errorf(`
|
||||
<div style="font-family: Arial, sans-serif; padding: 15px; border-radius: 5px; border: 1px solid #e0e0e0;">
|
||||
<h3 style="color: #d9534f; margin-top: 0;">
|
||||
<span style="font-size: 16px;">🔒 本次登录需要验证</span><br>
|
||||
<span style="font-size: 14px; font-weight: normal; color: #666;">This login requires verification</span>
|
||||
</h3>
|
||||
<p style="font-size: 14px; margin-bottom: 15px;">下面是验证所需要的数据,具体使用方法请参照对应的驱动文档<br>
|
||||
<span style="color: #666; font-size: 13px;">Below are the relevant verification data. For specific usage methods, please refer to the corresponding driver documentation.</span></p>
|
||||
<div style="border: 1px solid #ddd; border-radius: 4px; padding: 10px; overflow-x: auto; font-family: 'Courier New', monospace; font-size: 13px;">
|
||||
<pre style="margin: 0; white-space: pre-wrap;"><code>%s</code></pre>
|
||||
</div>
|
||||
</div>`, string(reviewDataJSON))
|
||||
}
|
||||
|
||||
// compute the file's Gcid
|
||||
func getGcid(r io.Reader, size int64) (string, error) {
|
||||
calcBlockSize := func(j int64) int64 {
|
||||
@ -338,7 +274,7 @@ func EncryptPassword(password string) string {
|
||||
|
||||
func generateDeviceSign(deviceID, packageName string) string {
|
||||
|
||||
signatureBase := fmt.Sprintf("%s%s%s%s", deviceID, packageName, APPID, APPKey)
|
||||
signatureBase := fmt.Sprintf("%s%s%s%s", deviceID, packageName, "22062", "a5d7416858147a4ab99573872ffccef8")
|
||||
|
||||
sha1Hash := sha1.New()
|
||||
sha1Hash.Write([]byte(signatureBase))
|
||||
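This hunk only swaps the named constants (APPID, APPKey) for their literal values "22062" and "a5d7416858147a4ab99573872ffccef8"; the SHA-1 input itself is unchanged. The tail of generateDeviceSign lies outside the hunk, so the final formatting below is an assumption; the sketch covers only the visible step, with the digest hex-encoded:

    // Sketch of the visible part only; the real sign adds driver-specific prefix/suffix handling.
    func deviceSignSketch(deviceID, packageName, appID, appKey string) string {
        signatureBase := fmt.Sprintf("%s%s%s%s", deviceID, packageName, appID, appKey)
        digest := sha1.Sum([]byte(signatureBase))
        return hex.EncodeToString(digest[:])
    }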
@ -363,7 +299,7 @@ func BuildCustomUserAgent(deviceID, appName, sdkVersion, clientVersion, packageN
|
||||
|
||||
sb.WriteString(fmt.Sprintf("ANDROID-%s/%s ", appName, clientVersion))
|
||||
sb.WriteString("networkType/WIFI ")
|
||||
sb.WriteString(fmt.Sprintf("appid/%s ", APPID))
|
||||
sb.WriteString(fmt.Sprintf("appid/%s ", "22062"))
|
||||
sb.WriteString(fmt.Sprintf("deviceName/Xiaomi_M2004j7ac "))
|
||||
sb.WriteString(fmt.Sprintf("deviceModel/M2004J7AC "))
|
||||
sb.WriteString(fmt.Sprintf("OSVersion/13 "))
|
||||
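Going only by the WriteString calls visible in this hunk, a User-Agent built with, say, appName "com.xunlei.browser" and clientVersion "1.10.0.2633" (illustrative values) would start: ANDROID-com.xunlei.browser/1.10.0.2633 networkType/WIFI appid/22062 deviceName/Xiaomi_M2004j7ac deviceModel/M2004J7AC OSVersion/13 ... with the remaining fields appended by code outside this hunk.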
|
go.mod (10 lines changed)
@ -46,6 +46,7 @@ require (
|
||||
github.com/maruel/natural v1.1.1
|
||||
github.com/meilisearch/meilisearch-go v0.27.2
|
||||
github.com/mholt/archives v0.1.0
|
||||
github.com/minio/sio v0.4.0
|
||||
github.com/natefinch/lumberjack v2.0.0+incompatible
|
||||
github.com/ncw/swift/v2 v2.0.3
|
||||
github.com/pkg/errors v0.9.1
|
||||
@ -79,11 +80,7 @@ require (
|
||||
gorm.io/gorm v1.25.11
|
||||
)
|
||||
|
||||
require (
|
||||
cloud.google.com/go/compute/metadata v0.7.0 // indirect
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
|
||||
github.com/google/go-cmp v0.7.0 // indirect
|
||||
)
|
||||
require github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
|
||||
|
||||
require (
|
||||
github.com/STARRY-S/zip v0.2.1 // indirect
|
||||
@ -109,6 +106,7 @@ require (
|
||||
github.com/ipfs/boxo v0.12.0 // indirect
|
||||
github.com/jackc/puddle/v2 v2.2.1 // indirect
|
||||
github.com/klauspost/pgzip v1.2.6 // indirect
|
||||
github.com/kr/text v0.2.0 // indirect
|
||||
github.com/matoous/go-nanoid/v2 v2.1.0 // indirect
|
||||
github.com/microcosm-cc/bluemonday v1.0.27
|
||||
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78
|
||||
@ -251,7 +249,7 @@ require (
|
||||
github.com/yusufpapurcu/wmi v1.2.4 // indirect
|
||||
go.etcd.io/bbolt v1.3.8 // indirect
|
||||
golang.org/x/arch v0.8.0 // indirect
|
||||
golang.org/x/sync v0.12.0 // indirect
|
||||
golang.org/x/sync v0.12.0
|
||||
golang.org/x/sys v0.33.0 // indirect
|
||||
golang.org/x/term v0.32.0 // indirect
|
||||
golang.org/x/text v0.23.0
|
||||
|
go.sum (56 lines changed)
@ -7,10 +7,12 @@ cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTj
|
||||
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
|
||||
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
|
||||
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
|
||||
cloud.google.com/go v0.110.10 h1:LXy9GEO+timppncPIAZoOj3l58LIU9k+kn48AN7IO3Y=
|
||||
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
|
||||
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
|
||||
cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
|
||||
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
|
||||
cloud.google.com/go/compute v1.23.4 h1:EBT9Nw4q3zyE7G45Wvv3MzolIrCJEuHys5muLY0wvAw=
|
||||
cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
|
||||
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
|
||||
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
||||
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
|
||||
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
|
||||
@ -19,25 +21,23 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
|
||||
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 h1:g0EZJwz7xkXQiZAI5xi9f3WWFYBlX1CPTrR+NDToRkQ=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0/go.mod h1:XCW7KnZet0Opnr7HccfUw1PLc4CjHqpcaxW8DHklNkQ=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 h1:B/dfvscEQtew9dVuoxqxrUKKv8Ih2f55PydknDamU+g=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0/go.mod h1:fiPSssYvltE08HJchL04dOy+RD4hgrjph0cwGGMntdI=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 h1:UXT0o77lXQrikd1kgwIPQOUect7EoR/+sbP4wQKdzxM=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0/go.mod h1:cTvi54pg19DoT07ekoeMgE/taAwNtCShVeZqA+Iv2xI=
|
||||
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2 h1:kYRSnvJju5gYVyhkij+RTJ/VR6QIUaCfWeaFm2ycsjQ=
|
||||
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
|
||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
||||
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd h1:nzE1YQBdx1bq9IlZinHa+HVffy+NmVRoKr+wHN8fpLE=
|
||||
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd/go.mod h1:C8yoIfvESpM3GD07OCHU7fqI7lhwyZ2Td1rbNbTAhnc=
|
||||
github.com/OpenListTeam/gofakes3 v0.0.7 h1:0cDGI7fLBrqumhCBto9T3ZYCL71AyGZ1l+xxJgjqe8s=
|
||||
github.com/OpenListTeam/gofakes3 v0.0.7/go.mod h1:6IyGtYGIX29fLvtXo+XZhtwX2P33KVYYj8uTgAHSu58=
|
||||
github.com/OpenListTeam/gofakes3 v0.1.0 h1:QVWIaso208bNc9L2gNZrkPiluAIg9jemZRxWPh4AVdY=
|
||||
github.com/OpenListTeam/gofakes3 v0.1.0/go.mod h1:mWMoLOLBX5qZFe1IQHsGXD4iTmIC7nFxxeTxpYvUu6Q=
|
||||
github.com/OpenListTeam/sftpd-openlist v1.0.1 h1:j4S3iPFOpnXCUKRPS7uCT4mF2VCl34GyqvH6lqwnkUU=
|
||||
github.com/OpenListTeam/sftpd-openlist v1.0.1/go.mod h1:uO/wKnbvbdq3rBLmClMTZXuCnw7XW4wlAq4dZe91a40=
|
||||
github.com/OpenListTeam/times v0.0.0-20240721124654-efa0c7d3ad92 h1:pIEI87zhv8ZzQcu65rTL7kqirrs8dR6HDiXrqWat2Fk=
|
||||
github.com/OpenListTeam/times v0.0.0-20240721124654-efa0c7d3ad92/go.mod h1:oPJwGY3sLmGgcJamGumz//0A35f4BwQRacyqLNcJTOU=
|
||||
github.com/OpenListTeam/times v0.1.0 h1:qknxw+qj5CYKgXAwydA102UEpPcpU8TYNGRmwRyPYpg=
|
||||
github.com/OpenListTeam/times v0.1.0/go.mod h1:Jx7qen5NCYzKk2w14YuvU48YYMcPa1P9a+EJePC15Pc=
|
||||
github.com/ProtonMail/go-crypto v1.0.0 h1:LRuvITjQWX+WIfr930YHG2HNfjR1uOfyf5vE0kC2U78=
|
||||
@ -174,6 +174,7 @@ github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03V
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
|
||||
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 h1:HVTnpeuvF6Owjd5mniCL8DEXo7uYXdQEmOP4FJbV5tg=
|
||||
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3/go.mod h1:p1d6YEZWvFzEh4KLyvBcVSnrfNDDvK2zfK/4x2v/4pE=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||
@ -199,8 +200,11 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
|
||||
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
|
||||
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
|
||||
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
|
||||
github.com/fclairamb/ftpserverlib v0.26.0/go.mod h1:XMm3NdvCvmBtoAVK86oERDVmoYo0GTNS5gdds4f9lpM=
|
||||
github.com/fclairamb/ftpserverlib v0.26.1-0.20250611192536-99cb646d0bbe h1:7hWzlndXJKF95RsWQ80bZmdPiBhoTIzedrp/VDGons8=
|
||||
github.com/fclairamb/ftpserverlib v0.26.1-0.20250611192536-99cb646d0bbe/go.mod h1:xaDvN9bHSdKbmM1oXkqpyyYM39S89uR2blbq571Zb00=
|
||||
github.com/fclairamb/go-log v0.5.0 h1:Gz9wSamEaA6lta4IU2cjJc2xSq5sV5VYSB5w/SUHhVc=
|
||||
github.com/fclairamb/go-log v0.5.0/go.mod h1:XoRO1dYezpsGmLLkZE9I+sHqpqY65p8JA+Vqblb7k40=
|
||||
github.com/fclairamb/go-log v0.6.0 h1:1V7BJ75P2PvanLHRyGBBFjncB6d4AgEmu+BPWKbMkaU=
|
||||
github.com/fclairamb/go-log v0.6.0/go.mod h1:cyXxOw4aJwO6lrZb8GRELSw+sxO6wwkLJdsjY5xYCWA=
|
||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||
@ -228,8 +232,9 @@ github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9
|
||||
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
|
||||
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
|
||||
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
|
||||
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
|
||||
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
|
||||
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
|
||||
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
|
||||
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
|
||||
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
|
||||
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
@ -257,6 +262,7 @@ github.com/go-webauthn/x v0.1.12/go.mod h1:XlRcGkNH8PT45TfeJYc6gqpOtiOendHhVmnOx
|
||||
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
|
||||
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
|
||||
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
|
||||
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
|
||||
@ -292,9 +298,8 @@ github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
|
||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
|
||||
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||
github.com/google/go-tpm v0.9.1 h1:0pGc4X//bAlmZzMKf8iz6IsDo1nYTbYJ6FZN/rg4zdM=
|
||||
github.com/google/go-tpm v0.9.1/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
@ -316,6 +321,7 @@ github.com/googleapis/gax-go/v2 v2.12.2 h1:mhN09QQW1jEWeMF74zGR81R30z4VJzjZsfkUh
|
||||
github.com/googleapis/gax-go/v2 v2.12.2/go.mod h1:61M8vcyyXR2kqKFxKrfA22jaA8JGF7Dc8App1U3H6jc=
|
||||
github.com/gorilla/css v1.0.1 h1:ntNaBIghp6JmvWnxbZKANoLyuXTPZ4cAMlo6RyhlbO8=
|
||||
github.com/gorilla/css v1.0.1/go.mod h1:BvnYkspnSzMmwRK+b8/xgNPLiIuNZr6vbZBTPQ2A3b0=
|
||||
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
|
||||
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
@ -377,6 +383,7 @@ github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004/go.mod h1:KmH
|
||||
github.com/kdomanski/iso9660 v0.4.0 h1:BPKKdcINz3m0MdjIMwS0wx1nofsOjxOq8TOr45WGHFg=
|
||||
github.com/kdomanski/iso9660 v0.4.0/go.mod h1:OxUSupHsO9ceI8lBLPJKWBTphLemjrCQY8LPXM7qSzU=
|
||||
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
||||
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
|
||||
github.com/klauspost/compress v1.15.0/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
@ -399,8 +406,6 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
|
||||
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
|
||||
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
|
||||
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
|
||||
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
|
||||
@ -420,8 +425,11 @@ github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo=
|
||||
github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg=
|
||||
github.com/matoous/go-nanoid/v2 v2.1.0 h1:P64+dmq21hhWdtvZfEAofnvJULaRR1Yib0+PnU669bE=
|
||||
github.com/matoous/go-nanoid/v2 v2.1.0/go.mod h1:KlbGNQ+FhrUNIHUxZdL63t7tl4LaPkZNpUULS8H4uVM=
|
||||
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
|
||||
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
|
||||
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
|
||||
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
|
||||
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
|
||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
|
||||
@ -438,6 +446,8 @@ github.com/microcosm-cc/bluemonday v1.0.27 h1:MpEUotklkwCSLeH+Qdx1VJgNqLlpY2KXwX
|
||||
github.com/microcosm-cc/bluemonday v1.0.27/go.mod h1:jFi9vgW+H7c3V0lb6nR74Ib/DIB5OBs92Dimizgw2cA=
|
||||
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
|
||||
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
|
||||
github.com/minio/sio v0.4.0 h1:u4SWVEm5lXSqU42ZWawV0D9I5AZ5YMmo2RXpEQ/kRhc=
|
||||
github.com/minio/sio v0.4.0/go.mod h1:oBSjJeGbBdRMZZwna07sX9EFzZy+ywu5aofRiV1g79I=
|
||||
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
|
||||
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
||||
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
|
||||
@ -490,8 +500,6 @@ github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6
|
||||
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
|
||||
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
|
||||
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
|
||||
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
@ -555,6 +563,8 @@ github.com/sorairolake/lzip-go v0.3.5/go.mod h1:N0KYq5iWrMXI0ZEXKXaS9hCyOjZUQdBD
|
||||
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
|
||||
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
|
||||
github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
|
||||
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
|
||||
github.com/spf13/afero v1.14.0 h1:9tH6MapGnn/j0eb0yIXiLjERO8RB6xIVZRDCX7PtqWA=
|
||||
github.com/spf13/afero v1.14.0/go.mod h1:acJQ8t0ohCGuMN3O+Pv0V0hgMxNYDlvdk+VTfyZmbYo=
|
||||
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
|
||||
@ -624,6 +634,8 @@ github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZ
|
||||
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
|
||||
github.com/yeka/zip v0.0.0-20231116150916-03d6312748a9 h1:K8gF0eekWPEX+57l30ixxzGhHH/qscI3JCnuhbN6V4M=
|
||||
github.com/yeka/zip v0.0.0-20231116150916-03d6312748a9/go.mod h1:9BnoKCcgJ/+SLhfAXj15352hTOuVmG5Gzo8xNRINfqI=
|
||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
||||
github.com/yuin/goldmark v1.7.8 h1:iERMLn0/QJeHFhxSt3p6PeN9mGnvIKSpG9YYorDMnic=
|
||||
github.com/yuin/goldmark v1.7.8/go.mod h1:uzxRWxtg69N339t3louHJ7+O03ezfj6PlliRlaOzY1E=
|
||||
@ -701,6 +713,7 @@ golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKG
|
||||
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
|
||||
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
|
||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
|
||||
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
|
||||
@ -719,6 +732,8 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL
|
||||
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
@ -734,6 +749,8 @@ golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
|
||||
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
|
||||
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
|
||||
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
|
||||
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
|
||||
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
|
||||
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
|
||||
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
@ -749,6 +766,7 @@ golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJ
|
||||
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
|
||||
@ -782,6 +800,7 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
|
||||
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
@ -794,6 +813,8 @@ golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
|
||||
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
|
||||
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
|
||||
@ -808,6 +829,8 @@ golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
|
||||
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
|
||||
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
|
||||
golang.org/x/term v0.22.0/go.mod h1:F3qCibpT5AMpCRfhfT53vVJwhLtIVHhB9XDjfFvnMI4=
|
||||
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
|
||||
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
|
||||
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
|
||||
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
|
||||
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
@ -859,6 +882,8 @@ golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapK
|
||||
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
|
||||
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
|
||||
@ -868,6 +893,7 @@ golang.org/x/tools v0.24.0/go.mod h1:YhNqVBIfWHdzvTLs0d8LCuMhkKUgSUKldakyV7W/WDQ
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
||||
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
|
||||
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
|
||||
|
@ -125,10 +125,10 @@ func InitialSettings() []model.SettingItem {
|
||||
"Google":"https://docs.google.com/gview?url=$e_url&embedded=true"
|
||||
},
|
||||
"pdf": {
|
||||
"PDF.js":"https://res.oplist.org/pdf.js/web/viewer.html?file=$e_url"
|
||||
"PDF.js":"//res.oplist.org/pdf.js/web/viewer.html?url=$e_url"
|
||||
},
|
||||
"epub": {
|
||||
"EPUB.js":"https://res.oplist.org/epub.js/viewer.html?url=$e_url"
|
||||
"EPUB.js":"//res.oplist.org/epub.js/viewer.html?url=$e_url"
|
||||
}
|
||||
}`, Type: conf.TypeText, Group: model.PREVIEW},
|
||||
// {Key: conf.OfficeViewers, Value: `{
|
||||
|
@ -69,9 +69,6 @@ const (
|
||||
// thunder
|
||||
ThunderTempDir = "thunder_temp_dir"
|
||||
|
||||
// thunder_browser
|
||||
ThunderBrowserTempDir = "thunder_browser_temp_dir"
|
||||
|
||||
// single
|
||||
Token = "token"
|
||||
IndexProgress = "index_progress"
|
||||
|
@ -82,14 +82,6 @@ func MoveWithTask(ctx context.Context, srcPath, dstDirPath string, lazyCache ...
|
||||
return res, err
|
||||
}
|
||||
|
||||
func MoveWithTaskAndValidation(ctx context.Context, srcPath, dstDirPath string, validateExistence bool, lazyCache ...bool) (task.TaskExtensionInfo, error) {
|
||||
res, err := _moveWithValidation(ctx, srcPath, dstDirPath, validateExistence, lazyCache...)
|
||||
if err != nil {
|
||||
log.Errorf("failed move %s to %s: %+v", srcPath, dstDirPath, err)
|
||||
}
|
||||
return res, err
|
||||
}
|
||||
|
||||
func Copy(ctx context.Context, srcObjPath, dstDirPath string, lazyCache ...bool) (task.TaskExtensionInfo, error) {
|
||||
res, err := _copy(ctx, srcObjPath, dstDirPath, lazyCache...)
|
||||
if err != nil {
|
||||
|
@ -3,16 +3,13 @@ package fs
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
stdpath "path"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/OpenListTeam/OpenList/internal/driver"
|
||||
"github.com/OpenListTeam/OpenList/internal/errs"
|
||||
"github.com/OpenListTeam/OpenList/internal/model"
|
||||
"github.com/OpenListTeam/OpenList/internal/op"
|
||||
"github.com/OpenListTeam/OpenList/internal/stream"
|
||||
"github.com/OpenListTeam/OpenList/internal/task"
|
||||
"github.com/OpenListTeam/OpenList/pkg/utils"
|
||||
"github.com/pkg/errors"
|
||||
@ -21,101 +18,28 @@ import (
|
||||
|
||||
type MoveTask struct {
|
||||
task.TaskExtension
|
||||
Status string `json:"-"`
|
||||
SrcObjPath string `json:"src_path"`
|
||||
DstDirPath string `json:"dst_path"`
|
||||
srcStorage driver.Driver `json:"-"`
|
||||
dstStorage driver.Driver `json:"-"`
|
||||
SrcStorageMp string `json:"src_storage_mp"`
|
||||
DstStorageMp string `json:"dst_storage_mp"`
|
||||
IsRootTask bool `json:"is_root_task"`
|
||||
RootTaskID string `json:"root_task_id"`
|
||||
TotalFiles int `json:"total_files"`
|
||||
CompletedFiles int `json:"completed_files"`
|
||||
Phase string `json:"phase"` // "copying", "verifying", "deleting", "completed"
|
||||
ValidateExistence bool `json:"validate_existence"`
|
||||
mu sync.RWMutex `json:"-"`
|
||||
Status string `json:"-"`
|
||||
SrcObjPath string `json:"src_path"`
|
||||
DstDirPath string `json:"dst_path"`
|
||||
srcStorage driver.Driver `json:"-"`
|
||||
dstStorage driver.Driver `json:"-"`
|
||||
SrcStorageMp string `json:"src_storage_mp"`
|
||||
DstStorageMp string `json:"dst_storage_mp"`
|
||||
}
|
||||
|
||||
type MoveProgress struct {
|
||||
TaskID string `json:"task_id"`
|
||||
Phase string `json:"phase"`
|
||||
TotalFiles int `json:"total_files"`
|
||||
CompletedFiles int `json:"completed_files"`
|
||||
CurrentFile string `json:"current_file"`
|
||||
Status string `json:"status"`
|
||||
Progress int `json:"progress"`
|
||||
}
|
||||
|
||||
var moveProgressMap = sync.Map{}
|
||||
|
||||
func (t *MoveTask) GetName() string {
|
||||
return fmt.Sprintf("move [%s](%s) to [%s](%s)", t.SrcStorageMp, t.SrcObjPath, t.DstStorageMp, t.DstDirPath)
|
||||
}
|
||||
|
||||
func (t *MoveTask) GetStatus() string {
|
||||
t.mu.RLock()
|
||||
defer t.mu.RUnlock()
|
||||
return t.Status
|
||||
}
|
||||
|
||||
func (t *MoveTask) GetProgress() float64 {
|
||||
t.mu.RLock()
|
||||
defer t.mu.RUnlock()
|
||||
|
||||
if t.TotalFiles == 0 {
|
||||
return 0
|
||||
}
|
||||
|
||||
switch t.Phase {
|
||||
case "copying":
|
||||
return float64(t.CompletedFiles*60) / float64(t.TotalFiles)
|
||||
case "verifying":
|
||||
return 60 + float64(t.CompletedFiles*20)/float64(t.TotalFiles)
|
||||
case "deleting":
|
||||
return 80 + float64(t.CompletedFiles*20)/float64(t.TotalFiles)
|
||||
case "completed":
|
||||
return 100
|
||||
default:
|
||||
return 0
|
||||
}
|
||||
}
|
||||
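Worked example of the phase weighting above: with TotalFiles = 10, completing 5 files while copying reports 5*60/10 = 30%; the same count while verifying reports 60 + 5*20/10 = 70%, and while deleting 80 + 5*20/10 = 90%, so the copying, verifying and deleting phases occupy 60/20/20 of the progress bar.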
|
||||
func (t *MoveTask) GetMoveProgress() *MoveProgress {
|
||||
t.mu.RLock()
|
||||
defer t.mu.RUnlock()
|
||||
|
||||
progress := int(t.GetProgress())
|
||||
|
||||
return &MoveProgress{
|
||||
TaskID: t.GetID(),
|
||||
Phase: t.Phase,
|
||||
TotalFiles: t.TotalFiles,
|
||||
CompletedFiles: t.CompletedFiles,
|
||||
CurrentFile: t.SrcObjPath,
|
||||
Status: t.Status,
|
||||
Progress: progress,
|
||||
}
|
||||
}
|
||||
|
||||
func (t *MoveTask) updateProgress() {
|
||||
if t.IsRootTask {
|
||||
progress := t.GetMoveProgress()
|
||||
moveProgressMap.Store(t.GetID(), progress)
|
||||
}
|
||||
}
|
||||
|
||||
func (t *MoveTask) Run() error {
|
||||
t.ReinitCtx()
|
||||
t.ClearEndTime()
|
||||
t.SetStartTime(time.Now())
|
||||
defer func() {
|
||||
t.SetEndTime(time.Now())
|
||||
if t.IsRootTask {
|
||||
moveProgressMap.Delete(t.GetID())
|
||||
}
|
||||
}()
|
||||
|
||||
defer func() { t.SetEndTime(time.Now()) }()
|
||||
var err error
|
||||
if t.srcStorage == nil {
|
||||
t.srcStorage, err = op.GetStorageByMountPath(t.SrcStorageMp)
|
||||
@ -127,337 +51,11 @@ func (t *MoveTask) Run() error {
|
||||
return errors.WithMessage(err, "failed get storage")
|
||||
}
|
||||
|
||||
// Phase 1: Async validation (all validation happens in background)
|
||||
t.mu.Lock()
|
||||
t.Status = "validating source and destination"
|
||||
t.mu.Unlock()
|
||||
|
||||
// Check if source exists
|
||||
srcObj, err := op.Get(t.Ctx(), t.srcStorage, t.SrcObjPath)
|
||||
if err != nil {
|
||||
return errors.WithMessagef(err, "source file [%s] not found", stdpath.Base(t.SrcObjPath))
|
||||
}
|
||||
|
||||
// Check if destination already exists (if validation is required)
|
||||
if t.ValidateExistence {
|
||||
dstFilePath := stdpath.Join(t.DstDirPath, srcObj.GetName())
|
||||
if res, _ := op.Get(t.Ctx(), t.dstStorage, dstFilePath); res != nil {
|
||||
return errors.Errorf("destination file [%s] already exists", srcObj.GetName())
|
||||
}
|
||||
}
|
||||
|
||||
// Phase 2: Execute move operation with proper sequencing
|
||||
// Determine if we should use batch optimization for directories
|
||||
if srcObj.IsDir() {
|
||||
t.mu.Lock()
|
||||
t.IsRootTask = true
|
||||
t.RootTaskID = t.GetID()
|
||||
t.mu.Unlock()
|
||||
return t.runRootMoveTask()
|
||||
}
|
||||
|
||||
// Use safe move logic for files
|
||||
return t.safeMoveOperation(srcObj)
|
||||
}
|
||||
|
||||
func (t *MoveTask) runRootMoveTask() error {
|
||||
// First check if source is actually a directory
|
||||
// If not, fall back to regular move logic
|
||||
srcObj, err := op.Get(t.Ctx(), t.srcStorage, t.SrcObjPath)
|
||||
if err != nil {
|
||||
return errors.WithMessagef(err, "failed get src [%s] object", t.SrcObjPath)
|
||||
}
|
||||
|
||||
if !srcObj.IsDir() {
|
||||
// Source is not a directory, use regular move logic
|
||||
t.mu.Lock()
|
||||
t.IsRootTask = false
|
||||
t.mu.Unlock()
|
||||
return t.safeMoveOperation(srcObj)
|
||||
}
|
||||
|
||||
// Phase 1: Count total files and create directory structure
|
||||
t.mu.Lock()
|
||||
t.Phase = "preparing"
|
||||
t.Status = "counting files and preparing directory structure"
|
||||
t.mu.Unlock()
|
||||
t.updateProgress()
|
||||
|
||||
totalFiles, err := t.countFilesAndCreateDirs(t.srcStorage, t.dstStorage, t.SrcObjPath, t.DstDirPath)
|
||||
if err != nil {
|
||||
return errors.WithMessage(err, "failed to prepare directory structure")
|
||||
}
|
||||
|
||||
t.mu.Lock()
|
||||
t.TotalFiles = totalFiles
|
||||
t.Phase = "copying"
|
||||
t.Status = "copying files"
|
||||
t.mu.Unlock()
|
||||
t.updateProgress()
|
||||
|
||||
// Phase 2: Copy all files
|
||||
err = t.copyAllFiles(t.srcStorage, t.dstStorage, t.SrcObjPath, t.DstDirPath)
|
||||
if err != nil {
|
||||
return errors.WithMessage(err, "failed to copy files")
|
||||
}
|
||||
|
||||
// Phase 3: Verify directory structure
|
||||
t.mu.Lock()
|
||||
t.Phase = "verifying"
|
||||
t.Status = "verifying copied files"
|
||||
t.CompletedFiles = 0
|
||||
t.mu.Unlock()
|
||||
t.updateProgress()
|
||||
|
||||
err = t.verifyDirectoryStructure(t.srcStorage, t.dstStorage, t.SrcObjPath, t.DstDirPath)
|
||||
if err != nil {
|
||||
return errors.WithMessage(err, "verification failed")
|
||||
}
|
||||
|
||||
// Phase 4: Delete source files and directories
|
||||
t.mu.Lock()
|
||||
t.Phase = "deleting"
|
||||
t.Status = "deleting source files"
|
||||
t.CompletedFiles = 0
|
||||
t.mu.Unlock()
|
||||
t.updateProgress()
|
||||
|
||||
err = t.deleteSourceRecursively(t.srcStorage, t.SrcObjPath)
|
||||
if err != nil {
|
||||
return errors.WithMessage(err, "failed to delete source files")
|
||||
}
|
||||
|
||||
t.mu.Lock()
|
||||
t.Phase = "completed"
|
||||
t.Status = "completed"
|
||||
t.mu.Unlock()
|
||||
t.updateProgress()
|
||||
|
||||
return nil
|
||||
return moveBetween2Storages(t, t.srcStorage, t.dstStorage, t.SrcObjPath, t.DstDirPath)
|
||||
}
|
||||
|
||||
var MoveTaskManager *tache.Manager[*MoveTask]
|
||||
|
||||
// GetMoveProgress returns the progress of a move task by task ID
|
||||
func GetMoveProgress(taskID string) (*MoveProgress, bool) {
|
||||
if progress, ok := moveProgressMap.Load(taskID); ok {
|
||||
return progress.(*MoveProgress), true
|
||||
}
|
||||
return nil, false
|
||||
}
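
// Editor's note: an illustrative sketch (not part of the commit) of how a
// caller might poll the exported GetMoveProgress helper above. pollMoveProgress
// is a hypothetical function; the task ID comes from the task returned when the
// move is created, and the map entry disappears once the root task finishes.
func pollMoveProgress(taskID string) {
    for {
        p, ok := GetMoveProgress(taskID)
        if !ok {
            return // entry removed from moveProgressMap after Run completes
        }
        fmt.Printf("phase=%s progress=%d%% files=%d/%d status=%s\n",
            p.Phase, p.Progress, p.CompletedFiles, p.TotalFiles, p.Status)
        time.Sleep(time.Second)
    }
}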

// GetMoveTaskProgress returns the progress of a specific move task
func GetMoveTaskProgress(task *MoveTask) *MoveProgress {
    return task.GetMoveProgress()
}

// countFilesAndCreateDirs recursively counts files and creates directory structure
func (t *MoveTask) countFilesAndCreateDirs(srcStorage, dstStorage driver.Driver, srcPath, dstPath string) (int, error) {
    srcObj, err := op.Get(t.Ctx(), srcStorage, srcPath)
    if err != nil {
        return 0, errors.WithMessagef(err, "failed get src [%s] object", srcPath)
    }

    if !srcObj.IsDir() {
        return 1, nil
    }

    // Create destination directory
    dstObjPath := stdpath.Join(dstPath, srcObj.GetName())
    err = op.MakeDir(t.Ctx(), dstStorage, dstObjPath)
    if err != nil {
        if errors.Is(err, errs.UploadNotSupported) {
            return 0, errors.WithMessagef(err, "destination storage [%s] does not support creating directories", dstStorage.GetStorage().MountPath)
        }
        return 0, errors.WithMessagef(err, "failed to create destination directory [%s] in storage [%s]", dstObjPath, dstStorage.GetStorage().MountPath)
    }

    // List and count files recursively
    objs, err := op.List(t.Ctx(), srcStorage, srcPath, model.ListArgs{})
    if err != nil {
        return 0, errors.WithMessagef(err, "failed list src [%s] objs", srcPath)
    }

    totalFiles := 0
    for _, obj := range objs {
        if utils.IsCanceled(t.Ctx()) {
            return 0, nil
        }
        srcSubPath := stdpath.Join(srcPath, obj.GetName())
        subCount, err := t.countFilesAndCreateDirs(srcStorage, dstStorage, srcSubPath, dstObjPath)
        if err != nil {
            return 0, err
        }
        totalFiles += subCount
    }

    return totalFiles, nil
}

// copyAllFiles recursively copies all files
func (t *MoveTask) copyAllFiles(srcStorage, dstStorage driver.Driver, srcPath, dstPath string) error {
    srcObj, err := op.Get(t.Ctx(), srcStorage, srcPath)
    if err != nil {
        return errors.WithMessagef(err, "failed get src [%s] object", srcPath)
    }

    if !srcObj.IsDir() {
        // Copy single file
        err := t.copyFile(srcStorage, dstStorage, srcPath, dstPath)
        if err != nil {
            return err
        }

        t.mu.Lock()
        t.CompletedFiles++
        t.mu.Unlock()
        t.updateProgress()
        return nil
    }

    // Copy directory contents
    objs, err := op.List(t.Ctx(), srcStorage, srcPath, model.ListArgs{})
    if err != nil {
        return errors.WithMessagef(err, "failed list src [%s] objs", srcPath)
    }

    dstObjPath := stdpath.Join(dstPath, srcObj.GetName())
    for _, obj := range objs {
        if utils.IsCanceled(t.Ctx()) {
            return nil
        }
        srcSubPath := stdpath.Join(srcPath, obj.GetName())
        err := t.copyAllFiles(srcStorage, dstStorage, srcSubPath, dstObjPath)
        if err != nil {
            return err
        }
    }

    return nil
}

// copyFile copies a single file between storages
func (t *MoveTask) copyFile(srcStorage, dstStorage driver.Driver, srcFilePath, dstDirPath string) error {
    srcFile, err := op.Get(t.Ctx(), srcStorage, srcFilePath)
    if err != nil {
        return errors.WithMessagef(err, "failed get src [%s] file", srcFilePath)
    }

    link, _, err := op.Link(t.Ctx(), srcStorage, srcFilePath, model.LinkArgs{
        Header: http.Header{},
    })
    if err != nil {
        return errors.WithMessagef(err, "failed get [%s] link", srcFilePath)
    }

    fs := stream.FileStream{
        Obj: srcFile,
        Ctx: t.Ctx(),
    }

    ss, err := stream.NewSeekableStream(fs, link)
    if err != nil {
        return errors.WithMessagef(err, "failed get [%s] stream", srcFilePath)
    }

    return op.Put(t.Ctx(), dstStorage, dstDirPath, ss, nil, true)
}

// verifyDirectoryStructure compares source and destination directory structures
func (t *MoveTask) verifyDirectoryStructure(srcStorage, dstStorage driver.Driver, srcPath, dstPath string) error {
    srcObj, err := op.Get(t.Ctx(), srcStorage, srcPath)
    if err != nil {
        return errors.WithMessagef(err, "failed get src [%s] object", srcPath)
    }

    if !srcObj.IsDir() {
        // Verify single file
        dstFilePath := stdpath.Join(dstPath, srcObj.GetName())
        _, err := op.Get(t.Ctx(), dstStorage, dstFilePath)
        if err != nil {
            return errors.WithMessagef(err, "verification failed: destination file [%s] not found", dstFilePath)
        }

        t.mu.Lock()
        t.CompletedFiles++
        t.mu.Unlock()
        t.updateProgress()
        return nil
    }

    // Verify directory
    dstObjPath := stdpath.Join(dstPath, srcObj.GetName())
    _, err = op.Get(t.Ctx(), dstStorage, dstObjPath)
    if err != nil {
        return errors.WithMessagef(err, "verification failed: destination directory [%s] not found", dstObjPath)
    }

    // Verify directory contents
    srcObjs, err := op.List(t.Ctx(), srcStorage, srcPath, model.ListArgs{})
    if err != nil {
        return errors.WithMessagef(err, "failed list src [%s] objs for verification", srcPath)
    }

    for _, obj := range srcObjs {
        if utils.IsCanceled(t.Ctx()) {
            return nil
        }
        srcSubPath := stdpath.Join(srcPath, obj.GetName())
        err := t.verifyDirectoryStructure(srcStorage, dstStorage, srcSubPath, dstObjPath)
        if err != nil {
            return err
        }
    }

    return nil
}

// deleteSourceRecursively deletes source files and directories recursively
func (t *MoveTask) deleteSourceRecursively(srcStorage driver.Driver, srcPath string) error {
    srcObj, err := op.Get(t.Ctx(), srcStorage, srcPath)
    if err != nil {
        return errors.WithMessagef(err, "failed get src [%s] object for deletion", srcPath)
    }

    if !srcObj.IsDir() {
        // Delete single file
        err := op.Remove(t.Ctx(), srcStorage, srcPath)
        if err != nil {
            return errors.WithMessagef(err, "failed to delete src [%s] file", srcPath)
        }

        t.mu.Lock()
        t.CompletedFiles++
        t.mu.Unlock()
        t.updateProgress()
        return nil
    }

    // Delete directory contents first
    objs, err := op.List(t.Ctx(), srcStorage, srcPath, model.ListArgs{})
    if err != nil {
        return errors.WithMessagef(err, "failed list src [%s] objs for deletion", srcPath)
    }

    for _, obj := range objs {
        if utils.IsCanceled(t.Ctx()) {
            return nil
        }
        srcSubPath := stdpath.Join(srcPath, obj.GetName())
        err := t.deleteSourceRecursively(srcStorage, srcSubPath)
        if err != nil {
            return err
        }
    }

    // Delete the directory itself
    err = op.Remove(t.Ctx(), srcStorage, srcPath)
    if err != nil {
        return errors.WithMessagef(err, "failed to delete src [%s] directory", srcPath)
    }

    return nil
}

func moveBetween2Storages(t *MoveTask, srcStorage, dstStorage driver.Driver, srcObjPath, dstDirPath string) error {
    t.Status = "getting src object"
@@ -558,22 +156,7 @@ func moveFileBetween2Storages(tsk *MoveTask, srcStorage, dstStorage driver.Drive
}

// safeMoveOperation ensures copy-then-delete sequence for safe move operations
func (t *MoveTask) safeMoveOperation(srcObj model.Obj) error {
    if srcObj.IsDir() {
        // For directories, use the original logic but ensure proper sequencing
        return moveBetween2Storages(t, t.srcStorage, t.dstStorage, t.SrcObjPath, t.DstDirPath)
    } else {
        // For files, use the safe file move logic
        return moveFileBetween2Storages(t, t.srcStorage, t.dstStorage, t.SrcObjPath, t.DstDirPath)
    }
}

func _move(ctx context.Context, srcObjPath, dstDirPath string, lazyCache ...bool) (task.TaskExtensionInfo, error) {
    return _moveWithValidation(ctx, srcObjPath, dstDirPath, false, lazyCache...)
}

func _moveWithValidation(ctx context.Context, srcObjPath, dstDirPath string, validateExistence bool, lazyCache ...bool) (task.TaskExtensionInfo, error) {
    srcStorage, srcObjActualPath, err := op.GetStorageAndActualPath(srcObjPath)
    if err != nil {
        return nil, errors.WithMessage(err, "failed get src storage")
@@ -583,7 +166,6 @@ func _moveWithValidation(ctx context.Context, srcObjPath, dstDirPath string, val
        return nil, errors.WithMessage(err, "failed get dst storage")
    }

    // Try native move first if in the same storage
    if srcStorage.GetStorage() == dstStorage.GetStorage() {
        err = op.Move(ctx, srcStorage, srcObjActualPath, dstDirActualPath, lazyCache...)
        if !errors.Is(err, errs.NotImplement) && !errors.Is(err, errs.NotSupport) {
@@ -592,23 +174,17 @@ func _moveWithValidation(ctx context.Context, srcObjPath, dstDirPath string, val
    }

    taskCreator, _ := ctx.Value("user").(*model.User)

    // Create task immediately without any synchronous checks to avoid blocking frontend
    // All validation and type checking will be done asynchronously in the Run method
    t := &MoveTask{
        TaskExtension: task.TaskExtension{
            Creator: taskCreator,
        },
        srcStorage:        srcStorage,
        dstStorage:        dstStorage,
        SrcObjPath:        srcObjActualPath,
        DstDirPath:        dstDirActualPath,
        SrcStorageMp:      srcStorage.GetStorage().MountPath,
        DstStorageMp:      dstStorage.GetStorage().MountPath,
        ValidateExistence: validateExistence,
        Phase:             "initializing",
        srcStorage:   srcStorage,
        dstStorage:   dstStorage,
        SrcObjPath:   srcObjActualPath,
        DstDirPath:   dstDirActualPath,
        SrcStorageMp: srcStorage.GetStorage().MountPath,
        DstStorageMp: dstStorage.GetStorage().MountPath,
    }

    MoveTaskManager.Add(t)
    return t, nil
}

@@ -165,10 +165,6 @@ func (d *downloader) download() (io.ReadCloser, error) {
    if maxPart < d.cfg.Concurrency {
        d.cfg.Concurrency = maxPart
    }
    if d.params.Range.Length == 0 {
        d.cfg.Concurrency = 1
    }

    log.Debugf("cfgConcurrency:%d", d.cfg.Concurrency)

    if d.cfg.Concurrency == 1 {
@@ -619,9 +615,6 @@ type Buf struct {
    ctx context.Context
    off int
    rw  sync.Mutex

    readSignal  chan struct{}
    readPending bool
}

// NewBuf is a buffer that can have 1 read & 1 write at the same time.
@@ -631,16 +624,9 @@ func NewBuf(ctx context.Context, maxSize int) *Buf {
        ctx:    ctx,
        buffer: bytes.NewBuffer(make([]byte, 0, maxSize)),
        size:   maxSize,

        readSignal: make(chan struct{}, 1),
    }
}
func (br *Buf) Reset(size int) {
    br.rw.Lock()
    defer br.rw.Unlock()
    if br.buffer == nil {
        return
    }
    br.buffer.Reset()
    br.size = size
    br.off = 0
@@ -656,34 +642,27 @@ func (br *Buf) Read(p []byte) (n int, err error) {
    if br.off >= br.size {
        return 0, io.EOF
    }
    for {
        br.rw.Lock()
        if br.buffer != nil {
            n, err = br.buffer.Read(p)
        } else {
            err = io.ErrClosedPipe
        }
        br.rw.Unlock()
        if err != nil && err != io.EOF {
            return
        }
        if n > 0 {
            br.off += n
            return n, nil
        }
        br.rw.Lock()
        br.readPending = true
        br.rw.Unlock()
        // n==0, err==io.EOF
        select {
        case <-br.ctx.Done():
            return 0, br.ctx.Err()
        case _, ok := <-br.readSignal:
            if !ok {
                return 0, io.ErrClosedPipe
            }
            continue
        }
    br.rw.Lock()
    n, err = br.buffer.Read(p)
    br.rw.Unlock()
    if err == nil {
        br.off += n
        return n, err
    }
    if err != io.EOF {
        return n, err
    }
    if n != 0 {
        br.off += n
        return n, nil
    }
    // n==0, err==io.EOF
    // wait for new write for 200ms
    select {
    case <-br.ctx.Done():
        return 0, br.ctx.Err()
    case <-time.After(time.Millisecond * 200):
        return 0, nil
    }
    }
}

@@ -693,23 +672,10 @@ func (br *Buf) Write(p []byte) (n int, err error) {
    }
    br.rw.Lock()
    defer br.rw.Unlock()
    if br.buffer == nil {
        return 0, io.ErrClosedPipe
    }
    n, err = br.buffer.Write(p)
    if br.readPending {
        br.readPending = false
        select {
        case br.readSignal <- struct{}{}:
        default:
        }
    }
    return
}

func (br *Buf) Close() {
    br.rw.Lock()
    defer br.rw.Unlock()
    br.buffer = nil
    close(br.readSignal)
}
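
// Editor's note: an illustrative sketch (not part of the commit) of Buf's
// single-writer/single-reader contract: one goroutine Writes, one goroutine
// Reads (blocking on readSignal instead of polling), and Close releases a
// blocked reader with io.ErrClosedPipe. bufExample is a hypothetical name and
// assumes it lives in the same package as Buf.
func bufExample(ctx context.Context, data []byte) {
    br := NewBuf(ctx, len(data))
    go func() {
        _, _ = br.Write(data) // wakes a pending reader via readSignal
    }()
    out := make([]byte, len(data))
    n, err := io.ReadFull(br, out) // Buf satisfies io.Reader
    log.Debugf("read %d bytes, err=%v", n, err)
}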

@@ -7,6 +7,5 @@ import (
    _ "github.com/OpenListTeam/OpenList/internal/offline_download/pikpak"
    _ "github.com/OpenListTeam/OpenList/internal/offline_download/qbit"
    _ "github.com/OpenListTeam/OpenList/internal/offline_download/thunder"
    _ "github.com/OpenListTeam/OpenList/internal/offline_download/thunder_browser"
    _ "github.com/OpenListTeam/OpenList/internal/offline_download/transmission"
)

@@ -1,171 +0,0 @@
package thunder_browser

import (
    "context"
    "errors"
    "fmt"
    "github.com/OpenListTeam/OpenList/drivers/thunder_browser"
    "github.com/OpenListTeam/OpenList/internal/conf"
    "github.com/OpenListTeam/OpenList/internal/setting"
    "strconv"

    "github.com/OpenListTeam/OpenList/internal/errs"
    "github.com/OpenListTeam/OpenList/internal/model"
    "github.com/OpenListTeam/OpenList/internal/offline_download/tool"
    "github.com/OpenListTeam/OpenList/internal/op"
)

type ThunderBrowser struct {
    refreshTaskCache bool
}

func (t *ThunderBrowser) Name() string {
    return "ThunderBrowser"
}

func (t *ThunderBrowser) Items() []model.SettingItem {
    return nil
}

func (t *ThunderBrowser) Run(task *tool.DownloadTask) error {
    return errs.NotSupport
}

func (t *ThunderBrowser) Init() (string, error) {
    t.refreshTaskCache = false
    return "ok", nil
}

func (t *ThunderBrowser) IsReady() bool {
    tempDir := setting.GetStr(conf.ThunderBrowserTempDir)
    if tempDir == "" {
        return false
    }
    storage, _, err := op.GetStorageAndActualPath(tempDir)
    if err != nil {
        return false
    }

    switch storage.(type) {
    case *thunder_browser.ThunderBrowser, *thunder_browser.ThunderBrowserExpert:
        return true
    default:
        return false
    }
}

func (t *ThunderBrowser) AddURL(args *tool.AddUrlArgs) (string, error) {
    // refresh the task cache when a new task is added
    t.refreshTaskCache = true
    storage, actualPath, err := op.GetStorageAndActualPath(args.TempDir)
    if err != nil {
        return "", err
    }

    ctx := context.Background()

    if err := op.MakeDir(ctx, storage, actualPath); err != nil {
        return "", err
    }

    parentDir, err := op.GetUnwrap(ctx, storage, actualPath)
    if err != nil {
        return "", err
    }

    var task *thunder_browser.OfflineTask
    switch v := storage.(type) {
    case *thunder_browser.ThunderBrowser:
        task, err = v.OfflineDownload(ctx, args.Url, parentDir, "")
    case *thunder_browser.ThunderBrowserExpert:
        task, err = v.OfflineDownload(ctx, args.Url, parentDir, "")
    default:
        return "", fmt.Errorf("unsupported storage driver for offline download, only ThunderBrowser is supported")
    }

    if err != nil {
        return "", fmt.Errorf("failed to add offline download task: %w", err)
    }

    if task == nil {
        return "", fmt.Errorf("failed to add offline download task: task is nil")
    }

    return task.ID, nil
}

func (t *ThunderBrowser) Remove(task *tool.DownloadTask) error {
    storage, _, err := op.GetStorageAndActualPath(task.TempDir)
    if err != nil {
        return err
    }

    ctx := context.Background()

    switch v := storage.(type) {
    case *thunder_browser.ThunderBrowser:
        err = v.DeleteOfflineTasks(ctx, []string{task.GID})
    case *thunder_browser.ThunderBrowserExpert:
        err = v.DeleteOfflineTasks(ctx, []string{task.GID})
    default:
        return fmt.Errorf("unsupported storage driver for offline download, only ThunderBrowser is supported")
    }

    if err != nil {
        return err
    }
    return nil
}

func (t *ThunderBrowser) Status(task *tool.DownloadTask) (*tool.Status, error) {
    storage, _, err := op.GetStorageAndActualPath(task.TempDir)
    if err != nil {
        return nil, err
    }

    var tasks []thunder_browser.OfflineTask

    switch v := storage.(type) {
    case *thunder_browser.ThunderBrowser:
        tasks, err = t.GetTasks(v)
    case *thunder_browser.ThunderBrowserExpert:
        tasks, err = t.GetTasksExpert(v)
    default:
        return nil, fmt.Errorf("unsupported storage driver for offline download, only ThunderBrowser is supported")
    }

    if err != nil {
        return nil, err
    }

    s := &tool.Status{
        Progress:  0,
        NewGID:    "",
        Completed: false,
        Status:    "the task has been deleted",
        Err:       nil,
    }

    for _, t := range tasks {
        if t.ID == task.GID {
            s.Progress = float64(t.Progress)
            s.Status = t.Message
            s.Completed = t.Phase == "PHASE_TYPE_COMPLETE"
            s.TotalBytes, err = strconv.ParseInt(t.FileSize, 10, 64)
            if err != nil {
                s.TotalBytes = 0
            }
            if t.Phase == "PHASE_TYPE_ERROR" {
                s.Err = errors.New(t.Message)
            }
            return s, nil
        }
    }

    s.Err = fmt.Errorf("the task has been deleted")
    return s, nil
}

func init() {
    tool.Tools.Add(&ThunderBrowser{})
}
@@ -1,70 +0,0 @@
package thunder_browser

import (
    "context"
    "time"

    "github.com/OpenListTeam/OpenList/drivers/thunder_browser"
    "github.com/OpenListTeam/OpenList/internal/op"
    "github.com/OpenListTeam/OpenList/pkg/singleflight"
    "github.com/Xhofe/go-cache"
)

var taskCache = cache.NewMemCache(cache.WithShards[[]thunder_browser.OfflineTask](16))
var taskG singleflight.Group[[]thunder_browser.OfflineTask]

func (t *ThunderBrowser) GetTasks(thunderDriver *thunder_browser.ThunderBrowser) ([]thunder_browser.OfflineTask, error) {
    key := op.Key(thunderDriver, "/drive/v1/task")
    if !t.refreshTaskCache {
        if tasks, ok := taskCache.Get(key); ok {
            return tasks, nil
        }
    }
    t.refreshTaskCache = false
    tasks, err, _ := taskG.Do(key, func() ([]thunder_browser.OfflineTask, error) {
        ctx := context.Background()
        tasks, err := thunderDriver.OfflineList(ctx, "")
        if err != nil {
            return nil, err
        }
        // cache the result for 10s
        if len(tasks) > 0 {
            taskCache.Set(key, tasks, cache.WithEx[[]thunder_browser.OfflineTask](time.Second*10))
        } else {
            taskCache.Del(key)
        }
        return tasks, nil
    })
    if err != nil {
        return nil, err
    }
    return tasks, nil
}

func (t *ThunderBrowser) GetTasksExpert(thunderDriver *thunder_browser.ThunderBrowserExpert) ([]thunder_browser.OfflineTask, error) {
    key := op.Key(thunderDriver, "/drive/v1/task")
    if !t.refreshTaskCache {
        if tasks, ok := taskCache.Get(key); ok {
            return tasks, nil
        }
    }
    t.refreshTaskCache = false
    tasks, err, _ := taskG.Do(key, func() ([]thunder_browser.OfflineTask, error) {
        ctx := context.Background()
        tasks, err := thunderDriver.OfflineList(ctx, "")
        if err != nil {
            return nil, err
        }
        // cache the result for 10s
        if len(tasks) > 0 {
            taskCache.Set(key, tasks, cache.WithEx[[]thunder_browser.OfflineTask](time.Second*10))
        } else {
            taskCache.Del(key)
        }
        return tasks, nil
    })
    if err != nil {
        return nil, err
    }
    return tasks, nil
}
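
// Editor's note: GetTasks and GetTasksExpert above share one pattern: answer
// from a 10-second cache when possible and collapse concurrent refreshes into
// a single upstream OfflineList call. The commented sketch below shows the same
// idea with the standard golang.org/x/sync/singleflight package and a plain
// mutex-guarded cache; it is illustrative only, while the project uses its own
// generic singleflight and go-cache wrappers.
//
//	var (
//		group   singleflight.Group
//		mu      sync.Mutex
//		cached  []string
//		expires time.Time
//	)
//
//	func listTasks(fetch func() ([]string, error)) ([]string, error) {
//		mu.Lock()
//		if time.Now().Before(expires) {
//			defer mu.Unlock()
//			return cached, nil
//		}
//		mu.Unlock()
//		v, err, _ := group.Do("tasks", func() (interface{}, error) {
//			tasks, err := fetch()
//			if err != nil {
//				return nil, err
//			}
//			mu.Lock()
//			cached, expires = tasks, time.Now().Add(10*time.Second)
//			mu.Unlock()
//			return tasks, nil
//		})
//		if err != nil {
//			return nil, err
//		}
//		return v.([]string), nil
//	}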

@@ -2,7 +2,6 @@ package tool

import (
    "context"

    "net/url"
    stdpath "path"
    "path/filepath"
@@ -10,7 +9,6 @@ import (
    _115 "github.com/OpenListTeam/OpenList/drivers/115"
    "github.com/OpenListTeam/OpenList/drivers/pikpak"
    "github.com/OpenListTeam/OpenList/drivers/thunder"
    "github.com/OpenListTeam/OpenList/drivers/thunder_browser"
    "github.com/OpenListTeam/OpenList/internal/conf"
    "github.com/OpenListTeam/OpenList/internal/errs"
    "github.com/OpenListTeam/OpenList/internal/fs"
@@ -105,13 +103,6 @@ func AddURL(ctx context.Context, args *AddURLArgs) (task.TaskExtensionInfo, erro
    } else {
        tempDir = filepath.Join(setting.GetStr(conf.ThunderTempDir), uid)
    }
    case "ThunderBrowser":
        switch storage.(type) {
        case *thunder_browser.ThunderBrowser, *thunder_browser.ThunderBrowserExpert:
            tempDir = args.DstDirPath
        default:
            tempDir = filepath.Join(setting.GetStr(conf.ThunderBrowserTempDir), uid)
        }
    }

    taskCreator, _ := ctx.Value("user").(*model.User) // taskCreator is nil when convert failed

@@ -87,9 +87,6 @@ outer:
    if t.tool.Name() == "Thunder" {
        return nil
    }
    if t.tool.Name() == "ThunderBrowser" {
        return nil
    }
    if t.tool.Name() == "115 Cloud" {
        // hack for 115
        <-time.After(time.Second * 1)
@@ -162,7 +159,7 @@ func (t *DownloadTask) Update() (bool, error) {

func (t *DownloadTask) Transfer() error {
    toolName := t.tool.Name()
    if toolName == "115 Cloud" || toolName == "PikPak" || toolName == "Thunder" || toolName == "ThunderBrowser" {
    if toolName == "115 Cloud" || toolName == "PikPak" || toolName == "Thunder" {
        // if the download did not go directly to the destination path, transfer it there
        if t.TempDir != t.DstDirPath {
            return transferObj(t.Ctx(), t.TempDir, t.DstDirPath, t.DeletePolicy)
@@ -1,43 +0,0 @@
package utils

import (
    "fmt"
    "net/url"
    "strings"
)

// GenerateContentDisposition builds a Content-Disposition header that conforms to RFC 5987
func GenerateContentDisposition(fileName string) string {
    // encode per RFC 2047 for the filename part
    encodedName := urlEncode(fileName)

    // encode per RFC 5987 for the filename* part
    encodedNameRFC5987 := encodeRFC5987(fileName)

    return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
        encodedName, encodedNameRFC5987)
}

// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
func encodeRFC5987(s string) string {
    var buf strings.Builder
    for _, r := range []byte(s) {
        // per RFC 5987 only letters, digits and a few special characters may stay unencoded
        if (r >= 'a' && r <= 'z') ||
            (r >= 'A' && r <= 'Z') ||
            (r >= '0' && r <= '9') ||
            r == '-' || r == '.' || r == '_' || r == '~' {
            buf.WriteByte(r)
        } else {
            // everything else must be percent-encoded
            fmt.Fprintf(&buf, "%%%02X", r)
        }
    }
    return buf.String()
}

func urlEncode(s string) string {
    s = url.QueryEscape(s)
    s = strings.ReplaceAll(s, "+", "%20")
    return s
}
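
// Editor's note: an illustrative example (not part of the commit) of the header
// produced by GenerateContentDisposition. For a name containing a space both
// parameters end up percent-encoded, and non-ASCII names are emitted as
// percent-encoded UTF-8 bytes in the same way:
//
//	GenerateContentDisposition("my report.pdf")
//	// attachment; filename="my%20report.pdf"; filename*=utf-8''my%20report.pdf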

@@ -5,6 +5,7 @@ import (
    "fmt"
    "io"
    "net/http"
    "net/url"
    "os"
    "strings"

@@ -92,7 +93,7 @@ func Proxy(w http.ResponseWriter, r *http.Request, link *model.Link, file model.
}
func attachHeader(w http.ResponseWriter, file model.Obj) {
    fileName := file.GetName()
    w.Header().Set("Content-Disposition", utils.GenerateContentDisposition(fileName))
    w.Header().Set("Content-Disposition", fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`, fileName, url.PathEscape(fileName)))
    w.Header().Set("Content-Type", utils.GetMimeType(fileName))
    w.Header().Set("Etag", GetEtag(file))
}

@@ -3,6 +3,7 @@ package handles
import (
    "encoding/json"
    "fmt"
    "net/url"
    stdpath "path"

    "github.com/OpenListTeam/OpenList/internal/task"
@@ -391,11 +392,11 @@ func ArchiveInternalExtract(c *gin.Context) {
    "Referrer-Policy": "no-referrer",
    "Cache-Control":   "max-age=0, no-cache, no-store, must-revalidate",
    }
    fileName := stdpath.Base(innerPath)
    headers["Content-Disposition"] = utils.GenerateContentDisposition(fileName)
    filename := stdpath.Base(innerPath)
    headers["Content-Disposition"] = fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`, filename, url.PathEscape(filename))
    contentType := c.Request.Header.Get("Content-Type")
    if contentType == "" {
        contentType = utils.GetMimeType(fileName)
        contentType = utils.GetMimeType(filename)
    }
    c.DataFromReader(200, size, contentType, rc, headers)
}

@@ -88,12 +88,17 @@ func FsMove(c *gin.Context) {
        common.ErrorResp(c, err, 403)
        return
    }

    // Create all tasks immediately without any synchronous validation
    // All validation will be done asynchronously in the background
    if !req.Overwrite {
        for _, name := range req.Names {
            if res, _ := fs.Get(c, stdpath.Join(dstDir, name), &fs.GetArgs{NoLog: true}); res != nil {
                common.ErrorStrResp(c, fmt.Sprintf("file [%s] exists", name), 403)
                return
            }
        }
    }
    var addedTasks []task.TaskExtensionInfo
    for i, name := range req.Names {
        t, err := fs.MoveWithTaskAndValidation(c, stdpath.Join(srcDir, name), dstDir, !req.Overwrite, len(req.Names) > i+1)
        t, err := fs.MoveWithTask(c, stdpath.Join(srcDir, name), dstDir, len(req.Names) > i+1)
        if t != nil {
            addedTasks = append(addedTasks, t)
        }
@@ -102,17 +107,12 @@ func FsMove(c *gin.Context) {
            return
        }
    }

    // Return immediately with task information
    if len(addedTasks) > 0 {
        common.SuccessResp(c, gin.H{
            "message": fmt.Sprintf("Successfully created %d move task(s)", len(addedTasks)),
            "tasks":   getTaskInfos(addedTasks),
        })
    } else {
        common.SuccessResp(c, gin.H{
            "message": "Move operations completed immediately",
        })
        common.SuccessResp(c)
    }
}
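
// Editor's note: an illustrative sketch (not part of the commit) of the JSON
// body this handler binds. The endpoint path and field names (src_dir, dst_dir,
// names, overwrite) are assumptions inferred from the req fields used above and
// should be checked against the actual request struct and router.
//
//	POST /api/fs/move
//	{
//		"src_dir":   "/local/docs",
//		"dst_dir":   "/cloud/backup",
//		"names":     ["a.txt", "b.txt"],
//		"overwrite": false
//	}
//
// With overwrite=false the handler rejects the request if any target name
// already exists under dst_dir; otherwise it queues one move task per name and
// responds with the created task infos.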

@@ -141,9 +141,14 @@ func FsCopy(c *gin.Context) {
        common.ErrorResp(c, err, 403)
        return
    }

    // Create all tasks immediately without any synchronous validation
    // All validation will be done asynchronously in the background
    if !req.Overwrite {
        for _, name := range req.Names {
            if res, _ := fs.Get(c, stdpath.Join(dstDir, name), &fs.GetArgs{NoLog: true}); res != nil {
                common.ErrorStrResp(c, fmt.Sprintf("file [%s] exists", name), 403)
                return
            }
        }
    }
    var addedTasks []task.TaskExtensionInfo
    for i, name := range req.Names {
        t, err := fs.Copy(c, stdpath.Join(srcDir, name), dstDir, len(req.Names) > i+1)
@@ -155,18 +160,9 @@ func FsCopy(c *gin.Context) {
            return
        }
    }

    // Return immediately with task information
    if len(addedTasks) > 0 {
        common.SuccessResp(c, gin.H{
            "message": fmt.Sprintf("Successfully created %d copy task(s)", len(addedTasks)),
            "tasks":   getTaskInfos(addedTasks),
        })
    } else {
        common.SuccessResp(c, gin.H{
            "message": "Copy operations completed immediately",
        })
    }
    common.SuccessResp(c, gin.H{
        "tasks": getTaskInfos(addedTasks),
    })
}

type RenameReq struct {

@@ -4,7 +4,6 @@ import (
    _115 "github.com/OpenListTeam/OpenList/drivers/115"
    "github.com/OpenListTeam/OpenList/drivers/pikpak"
    "github.com/OpenListTeam/OpenList/drivers/thunder"
    "github.com/OpenListTeam/OpenList/drivers/thunder_browser"
    "github.com/OpenListTeam/OpenList/internal/conf"
    "github.com/OpenListTeam/OpenList/internal/model"
    "github.com/OpenListTeam/OpenList/internal/offline_download/tool"
@@ -240,51 +239,6 @@ func SetThunder(c *gin.Context) {
    common.SuccessResp(c, "ok")
}

type SetThunderBrowserReq struct {
    TempDir string `json:"temp_dir" form:"temp_dir"`
}

func SetThunderBrowser(c *gin.Context) {
    var req SetThunderBrowserReq
    if err := c.ShouldBind(&req); err != nil {
        common.ErrorResp(c, err, 400)
        return
    }
    if req.TempDir != "" {
        storage, _, err := op.GetStorageAndActualPath(req.TempDir)
        if err != nil {
            common.ErrorStrResp(c, "storage does not exists", 400)
            return
        }
        if storage.Config().CheckStatus && storage.GetStorage().Status != op.WORK {
            common.ErrorStrResp(c, "storage not init: "+storage.GetStorage().Status, 400)
            return
        }
        switch storage.(type) {
        case *thunder_browser.ThunderBrowser, *thunder_browser.ThunderBrowserExpert:
        default:
            common.ErrorStrResp(c, "unsupported storage driver for offline download, only ThunderBrowser is supported", 400)
        }
    }
    items := []model.SettingItem{
        {Key: conf.ThunderBrowserTempDir, Value: req.TempDir, Type: conf.TypeString, Group: model.OFFLINE_DOWNLOAD, Flag: model.PRIVATE},
    }
    if err := op.SaveSettingItems(items); err != nil {
        common.ErrorResp(c, err, 500)
        return
    }
    _tool, err := tool.Tools.Get("ThunderBrowser")
    if err != nil {
        common.ErrorResp(c, err, 500)
        return
    }
    if _, err := _tool.Init(); err != nil {
        common.ErrorResp(c, err, 500)
        return
    }
    common.SuccessResp(c, "ok")
}

func OfflineDownloadTools(c *gin.Context) {
    tools := tool.Tools.Names()
    common.SuccessResp(c, tools)

@@ -147,7 +147,6 @@ func admin(g *gin.RouterGroup) {
    setting.POST("/set_115", handles.Set115)
    setting.POST("/set_pikpak", handles.SetPikPak)
    setting.POST("/set_thunder", handles.SetThunder)
    setting.POST("/set_thunder_browser", handles.SetThunderBrowser)

    // retain /admin/task API to ensure compatibility with legacy automation scripts
    _task(g.Group("/task"))

@@ -213,9 +213,8 @@ func (b *s3Backend) GetObject(ctx context.Context, bucketName, objectName string
    }

    meta := map[string]string{
        "Last-Modified":       node.ModTime().Format(timeFormat),
        "Content-Disposition": utils.GenerateContentDisposition(file.GetName()),
        "Content-Type":        utils.GetMimeType(fp),
        "Last-Modified": node.ModTime().Format(timeFormat),
        "Content-Type":  utils.GetMimeType(fp),
    }

    if val, ok := b.meta.Load(fp); ok {
@@ -329,7 +328,7 @@ func (b *s3Backend) PutObject(
func (b *s3Backend) DeleteMulti(ctx context.Context, bucketName string, objects ...string) (result gofakes3.MultiDeleteResult, rerr error) {
    for _, object := range objects {
        if err := b.deleteObject(ctx, bucketName, object); err != nil {
            log.Errorf("delete object failed: %v", err)
            utils.Log.Errorf("serve s3", "delete object failed: %v", err)
            result.Error = append(result.Error, gofakes3.ErrorResult{
                Code:    gofakes3.ErrInternal,
                Message: gofakes3.ErrInternal.Message(),

@@ -15,7 +15,7 @@ type SiteConfig struct {
func getSiteConfig() SiteConfig {
    siteConfig := SiteConfig{
        BasePath: conf.URL.Path,
        Cdn:      strings.ReplaceAll(strings.TrimSuffix(conf.Conf.Cdn, "/"), "$version", strings.TrimPrefix(conf.WebVersion, "v")),
        Cdn:      strings.ReplaceAll(strings.TrimSuffix(conf.Conf.Cdn, "/"), "$version", conf.WebVersion),
    }
    if siteConfig.BasePath != "" {
        siteConfig.BasePath = utils.FixAndCleanPath(siteConfig.BasePath)