Mirror of https://github.com/OpenListTeam/OpenList.git, synced 2025-09-20 20:56:20 +08:00.

Compare commits: `fix-user` ... `renovate/g` (18 commits)
Commits (SHA1):

- 7350b44036
- 87cf95f50b
- 8ab26cb823
- 5880c8e1af
- 14bf4ecb4c
- 04a5e58781
- bbd4389345
- f350ccdf95
- 4f2de9395e
- b0dbbebfb0
- 0c27b4bd47
- 736cd9e5f2
- c7a603c926
- a28d6d5693
- e59d2233e2
- 01914a06ef
- 6499374d1c
- b054919d5c
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 56 lines)

```markdown
<!--
Provide a general summary of your changes in the Title above.
The PR title must start with `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, or `chore(): `. For example: `feat(component): add new feature`.
If it spans multiple components, use the main component as the prefix, enumerate the others in the title, and describe them in the body.
-->

<!--
在上方标题中提供您更改的总体摘要。
PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`。
如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。
-->

## Description / 描述

<!-- Describe your changes in detail -->
<!-- 详细描述您的更改 -->

## Motivation and Context / 背景

<!-- Why is this change required? What problem does it solve? -->
<!-- 为什么需要此更改?它解决了什么问题? -->

<!-- If it fixes an open issue, please link to the issue here. -->
<!-- 如果修复了一个打开的 issue,请在此处链接到该 issue -->

Closes #XXXX

<!-- or -->
<!-- 或者 -->

Relates to #XXXX

## How Has This Been Tested? / 测试

<!-- Please describe in detail how you tested your changes. -->
<!-- 请详细描述您如何测试更改 -->

## Checklist / 检查清单

<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!-- 检查以下所有要点,并在所有适用的框中打 `x` -->

<!-- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
<!-- 如果您对其中任何一项不确定,请不要犹豫提问。我们会帮助您! -->

- [ ] I have read the [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) document.
      我已阅读 [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) 文档。
- [ ] I have formatted my code with `go fmt` or [prettier](https://prettier.io/).
      我已使用 `go fmt` 或 [prettier](https://prettier.io/) 格式化提交的代码。
- [ ] I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
      我已为此 PR 添加了适当的标签(如无权限或需要的标签不存在,请在描述中说明,管理员将后续处理)。
- [ ] I have requested review from relevant code authors using the "Request review" feature when applicable.
      我已在适当情况下使用 "Request review" 功能请求相关代码作者进行审查。
- [ ] I have updated the repository accordingly (if needed).
      我已相应更新了相关仓库(若适用)。
  - [ ] [OpenList-Frontend](https://github.com/OpenListTeam/OpenList-Frontend) #XXXX
  - [ ] [OpenList-Docs](https://github.com/OpenListTeam/OpenList-Docs) #XXXX
```
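The title rule in the template is mechanical enough to check locally before pushing. A small Go sketch of such a check; the regular expression is my own encoding of the rule (with an optional scope inside the parentheses), not something shipped by the repository:

```go
package main

import (
	"fmt"
	"regexp"
)

// titleRe encodes the PR-title rule from the template: one of the allowed
// type keywords, a parenthesized (possibly empty) scope, then ": " and a subject.
var titleRe = regexp.MustCompile(`^(feat|docs|fix|style|refactor|chore)(\([^)]*\))?: .+`)

func main() {
	fmt.Println(titleRe.MatchString("feat(component): add new feature")) // true
	fmt.Println(titleRe.MatchString("update readme"))                    // false
}
```

A check like this could run in a pre-push hook or CI step; the template itself only states the rule.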
CONTRIBUTING.md (110 lines changed)

````diff
@@ -2,106 +2,76 @@
 ## Setup your machine
 
-`OpenList` is written in [Go](https://golang.org/) and [React](https://reactjs.org/).
+`OpenList` is written in [Go](https://golang.org/) and [SolidJS](https://www.solidjs.com/).
 
 Prerequisites:
 
 - [git](https://git-scm.com)
-- [Go 1.20+](https://golang.org/doc/install)
+- [Go 1.24+](https://golang.org/doc/install)
 - [gcc](https://gcc.gnu.org/)
 - [nodejs](https://nodejs.org/)
 
-Clone `OpenList` and `OpenList-Frontend` anywhere:
+## Cloning a fork
+
+Fork and clone `OpenList` and `OpenList-Frontend` anywhere:
 
 ```shell
-$ git clone https://github.com/OpenListTeam/OpenList.git
-$ git clone --recurse-submodules https://github.com/OpenListTeam/OpenList-Frontend.git
+$ git clone https://github.com/<your-username>/OpenList.git
+$ git clone --recurse-submodules https://github.com/<your-username>/OpenList-Frontend.git
 ```
 
-You should switch to the `main` branch for development.
+## Creating a branch
+
+Create a new branch from the `main` branch, with an appropriate name.
+
+```shell
+$ git checkout -b <branch-name>
+```
 
 ## Preview your change
 
 ### backend
 
 ```shell
 $ go run main.go
 ```
 
 ### frontend
 
 ```shell
 $ pnpm dev
 ```
 
 ## Add a new driver
 
 Copy `drivers/template` folder and rename it, and follow the comments in it.
 
 ## Create a commit
 
 Commit messages should be well formatted, and to make that "standardized".
 
-### Commit Message Format
-Each commit message consists of a **header**, a **body** and a **footer**. The header has a special
-format that includes a **type**, a **scope** and a **subject**:
-
-```
-<type>(<scope>): <subject>
-<BLANK LINE>
-<body>
-<BLANK LINE>
-<footer>
-```
-
-The **header** is mandatory and the **scope** of the header is optional.
-
-Any line of the commit message cannot be longer than 100 characters! This allows the message to be easier
-to read on GitHub as well as in various git tools.
-
-### Revert
-If the commit reverts a previous commit, it should begin with `revert: `, followed by the header
-of the reverted commit.
-In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit
-being reverted.
-
-### Type
-Must be one of the following:
-
-* **feat**: A new feature
-* **fix**: A bug fix
-* **docs**: Documentation only changes
-* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing
-  semi-colons, etc)
-* **refactor**: A code change that neither fixes a bug nor adds a feature
-* **perf**: A code change that improves performance
-* **test**: Adding missing or correcting existing tests
-* **build**: Affects project builds or dependency modifications
-* **revert**: Restore the previous commit
-* **ci**: Continuous integration of related file modifications
-* **chore**: Changes to the build process or auxiliary tools and libraries such as documentation
-  generation
-* **release**: Release a new version
-
-### Scope
-The scope could be anything specifying place of the commit change. For example `$location`,
-`$browser`, `$compile`, `$rootScope`, `ngHref`, `ngClick`, `ngView`, etc...
-
-You can use `*` when the change affects more than a single scope.
-
-### Subject
-The subject contains succinct description of the change:
-
-* use the imperative, present tense: "change" not "changed" nor "changes"
-* don't capitalize first letter
-* no dot (.) at the end
-
-### Body
-Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes".
-The body should include the motivation for the change and contrast this with previous behavior.
-
-### Footer
-The footer should contain any information about **Breaking Changes** and is also the place to
-[reference GitHub issues that this commit closes](https://help.github.com/articles/closing-issues-via-commit-messages/).
-
-**Breaking Changes** should start with the word `BREAKING CHANGE:` with a space or two newlines.
-The rest of the commit message is then used for this.
+Submit your pull request. For PR titles, follow [Conventional Commits](https://www.conventionalcommits.org).
+
+https://github.com/OpenListTeam/OpenList/issues/376
+
+It's suggested to sign your commits. See: [How to sign commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
 
 ## Submit a pull request
 
-Push your branch to your `openlist` fork and open a pull request against the
-`main` branch.
+Please make sure your code has been formatted with `go fmt` or [prettier](https://prettier.io/) before submitting.
+
+Push your branch to your `openlist` fork and open a pull request against the `main` branch.
+
+## Merge your pull request
+
+Your pull request will be merged after review. Please wait for the maintainer to merge your pull request after review.
+
+At least 1 approving review is required by reviewers with write access. You can also request a review from maintainers.
+
+## Delete your branch
+
+(Optional) After your pull request is merged, you can delete your branch.
+
+---
+
+Thank you for your contribution! Let's make OpenList better together!
````
Dockerfile (13 lines changed)

```diff
@@ -14,17 +14,20 @@ FROM openlistteam/openlist-base-image:${BASE_IMAGE_TAG}
 LABEL MAINTAINER="OpenList"
 ARG INSTALL_FFMPEG=false
 ARG INSTALL_ARIA2=false
+ARG USER=openlist
+ARG UID=1001
+ARG GID=1001
 
 WORKDIR /opt/openlist/
 
-RUN addgroup -g 1001 openlist && \
-    adduser -D -u 1001 -G openlist openlist && \
+RUN addgroup -g ${GID} ${USER} && \
+    adduser -D -u ${UID} -G ${USER} ${USER} && \
     mkdir -p /opt/openlist/data
 
-COPY --from=builder --chmod=755 --chown=1001:1001 /app/bin/openlist ./
-COPY --chmod=755 --chown=1001:1001 entrypoint.sh /entrypoint.sh
+COPY --from=builder --chmod=755 --chown=${UID}:${GID} /app/bin/openlist ./
+COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
 
-USER openlist
+USER ${USER}
 RUN /entrypoint.sh version
 
 ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
```
```diff
@@ -4,17 +4,20 @@ LABEL MAINTAINER="OpenList"
 ARG TARGETPLATFORM
 ARG INSTALL_FFMPEG=false
 ARG INSTALL_ARIA2=false
+ARG USER=openlist
+ARG UID=1001
+ARG GID=1001
 
 WORKDIR /opt/openlist/
 
-RUN addgroup -g 1001 openlist && \
-    adduser -D -u 1001 -G openlist openlist && \
+RUN addgroup -g ${GID} ${USER} && \
+    adduser -D -u ${UID} -G ${USER} ${USER} && \
     mkdir -p /opt/openlist/data
 
-COPY --chmod=755 --chown=1001:1001 /build/${TARGETPLATFORM}/openlist ./
-COPY --chmod=755 --chown=1001:1001 entrypoint.sh /entrypoint.sh
+COPY --chmod=755 --chown=${UID}:${GID} /build/${TARGETPLATFORM}/openlist ./
+COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
 
-USER openlist
+USER ${USER}
 RUN /entrypoint.sh version
 
 ENV UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
```
```diff
@@ -1,43 +1,60 @@
 package _115
 
 import (
+	"errors"
+
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
+	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
 	driver115 "github.com/SheltonZhu/115driver/pkg/driver"
 	log "github.com/sirupsen/logrus"
 )
 
 var (
 	md5Salt = "Qclm8MGWUv59TnrR0XPg"
-	appVer  = "27.0.5.7"
+	appVer  = "35.6.0.3"
 )
 
-func (d *Pan115) getAppVersion() ([]driver115.AppVersion, error) {
-	result := driver115.VersionResp{}
-	resp, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
-
-	err = driver115.CheckErr(err, &result, resp)
+func (d *Pan115) getAppVersion() (string, error) {
+	result := VersionResp{}
+	res, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
 	if err != nil {
-		return nil, err
+		return "", err
 	}
-
-	return result.Data.GetAppVersions(), nil
+	err = utils.Json.Unmarshal(res.Body(), &result)
+	if err != nil {
+		return "", err
+	}
+	if len(result.Error) > 0 {
+		return "", errors.New(result.Error)
+	}
+	return result.Data.Win.Version, nil
 }
 
 func (d *Pan115) getAppVer() string {
-	// todo add some cache?
-	vers, err := d.getAppVersion()
+	ver, err := d.getAppVersion()
 	if err != nil {
 		log.Warnf("[115] get app version failed: %v", err)
 		return appVer
 	}
-	for _, ver := range vers {
-		if ver.AppName == "win" {
-			return ver.Version
-		}
+	if len(ver) > 0 {
+		return ver
 	}
 	return appVer
 }
 
 func (d *Pan115) initAppVer() {
 	appVer = d.getAppVer()
+	log.Debugf("use app version: %v", appVer)
 }
+
+type VersionResp struct {
+	Error string   `json:"error,omitempty"`
+	Data  Versions `json:"data"`
+}
+
+type Versions struct {
+	Win Version `json:"win"`
+}
+
+type Version struct {
+	Version string `json:"version_code"`
+}
```
```diff
@@ -24,7 +24,7 @@ type Addition struct {
 	UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
 
 	// use direct link
-	DirectLink              bool   `json:"DirectLink" type:"boolean" default:"false" required:"false" help:"use direct link when download file"`
+	DirectLink              bool   `json:"DirectLink" type:"bool" default:"false" required:"false" help:"use direct link when download file"`
 	DirectLinkPrivateKey    string `json:"DirectLinkPrivateKey" required:"false" help:"private key for direct link, if URL authentication is enabled"`
 	DirectLinkValidDuration int64  `json:"DirectLinkValidDuration" type:"number" default:"30" required:"false" help:"minutes, if URL authentication is enabled"`
```
```diff
@@ -86,8 +86,24 @@ func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCall
 }
 
 func (d *Open123) flushAccessToken() error {
-	if d.Addition.ClientID != "" {
-		if d.Addition.ClientSecret != "" {
+	if d.ClientID != "" {
+		if d.RefreshToken != "" {
+			var resp RefreshTokenResp
+			_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
+				req.SetQueryParam("client_id", d.ClientID)
+				if d.ClientSecret != "" {
+					req.SetQueryParam("client_secret", d.ClientSecret)
+				}
+				req.SetQueryParam("grant_type", "refresh_token")
+				req.SetQueryParam("refresh_token", d.RefreshToken)
+			}, &resp)
+			if err != nil {
+				return err
+			}
+			d.AccessToken = resp.AccessToken
+			d.RefreshToken = resp.RefreshToken
+			op.MustSaveDriverStorage(d)
+		} else if d.ClientSecret != "" {
 			var resp AccessTokenResp
 			_, err := d.Request(AccessToken, http.MethodPost, func(req *resty.Request) {
 				req.SetBody(base.Json{
@@ -100,19 +116,6 @@ func (d *Open123) flushAccessToken() error {
 			}
 			d.AccessToken = resp.Data.AccessToken
 			op.MustSaveDriverStorage(d)
-		} else if d.Addition.RefreshToken != "" {
-			var resp RefreshTokenResp
-			_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
-				req.SetQueryParam("client_id", d.ClientID)
-				req.SetQueryParam("grant_type", "refresh_token")
-				req.SetQueryParam("refresh_token", d.Addition.RefreshToken)
-			}, &resp)
-			if err != nil {
-				return err
-			}
-			d.AccessToken = resp.AccessToken
-			d.RefreshToken = resp.RefreshToken
-			op.MustSaveDriverStorage(d)
 		}
 	}
 	return nil
```
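The reordered branches change which credential wins when several are configured: with a client ID set, a stored refresh token is now tried before a client-secret login. A minimal sketch of that precedence; the helper name and return strings are mine, not part of the driver:

```go
package main

import "fmt"

// chooseAuthMode mirrors the branch order introduced in flushAccessToken:
// given a ClientID, a refresh token takes precedence over client-secret login.
func chooseAuthMode(clientID, clientSecret, refreshToken string) string {
	if clientID == "" {
		return "none"
	}
	if refreshToken != "" {
		return "refresh_token"
	}
	if clientSecret != "" {
		return "client_credentials"
	}
	return "none"
}

func main() {
	fmt.Println(chooseAuthMode("id", "secret", "tok")) // refresh_token
	fmt.Println(chooseAuthMode("id", "secret", ""))    // client_credentials
}
```

Preferring the refresh token avoids re-authenticating with the secret on every token expiry, which matches the new branch also forwarding `client_secret` as an optional query parameter.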
```diff
@@ -534,16 +534,15 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	if size > partSize {
 		part = (size + partSize - 1) / partSize
 	}
 
+	// generate all partInfos
 	partInfos := make([]PartInfo, 0, part)
 	for i := int64(0); i < part; i++ {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
 		}
 		start := i * partSize
-		byteSize := size - start
-		if byteSize > partSize {
-			byteSize = partSize
-		}
+		byteSize := min(size-start, partSize)
 		partNumber := i + 1
 		partInfo := PartInfo{
 			PartNumber: partNumber,
```
|
|||||||
// resp.Data.RapidUpload: true 支持快传,但此处直接检测是否返回分片的上传地址
|
// resp.Data.RapidUpload: true 支持快传,但此处直接检测是否返回分片的上传地址
|
||||||
// 快传的情况下同样需要手动处理冲突
|
// 快传的情况下同样需要手动处理冲突
|
||||||
if resp.Data.PartInfos != nil {
|
if resp.Data.PartInfos != nil {
|
||||||
// 读取前100个分片的上传地址
|
// Progress
|
||||||
uploadPartInfos := resp.Data.PartInfos
|
p := driver.NewProgress(size, up)
|
||||||
|
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
|
||||||
|
|
||||||
// 获取后续分片的上传地址
|
// 先上传前100个分片
|
||||||
for i := 101; i < len(partInfos); i += 100 {
|
err = d.uploadPersonalParts(ctx, partInfos, resp.Data.PartInfos, rateLimited, p)
|
||||||
end := i + 100
|
if err != nil {
|
||||||
if end > len(partInfos) {
|
return err
|
||||||
end = len(partInfos)
|
|
||||||
}
|
}
|
||||||
batchPartInfos := partInfos[i:end]
|
|
||||||
|
|
||||||
|
// 如果还有剩余分片,分批获取上传地址并上传
|
||||||
|
for i := 100; i < len(partInfos); i += 100 {
|
||||||
|
end := min(i+100, len(partInfos))
|
||||||
|
batchPartInfos := partInfos[i:end]
|
||||||
moredata := base.Json{
|
moredata := base.Json{
|
||||||
"fileId": resp.Data.FileId,
|
"fileId": resp.Data.FileId,
|
||||||
"uploadId": resp.Data.UploadId,
|
"uploadId": resp.Data.UploadId,
|
||||||
@ -617,44 +619,13 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
|
err = d.uploadPersonalParts(ctx, partInfos, moreresp.Data.PartInfos, rateLimited, p)
|
||||||
}
|
|
||||||
|
|
||||||
// Progress
|
|
||||||
p := driver.NewProgress(size, up)
|
|
||||||
|
|
||||||
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
|
|
||||||
// 上传所有分片
|
|
||||||
for _, uploadPartInfo := range uploadPartInfos {
|
|
||||||
index := uploadPartInfo.PartNumber - 1
|
|
||||||
partSize := partInfos[index].PartSize
|
|
||||||
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
|
|
||||||
limitReader := io.LimitReader(rateLimited, partSize)
|
|
||||||
|
|
||||||
// Update Progress
|
|
||||||
r := io.TeeReader(limitReader, p)
|
|
||||||
|
|
||||||
req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
req.Header.Set("Content-Type", "application/octet-stream")
|
|
||||||
req.Header.Set("Content-Length", fmt.Sprint(partSize))
|
|
||||||
req.Header.Set("Origin", "https://yun.139.com")
|
|
||||||
req.Header.Set("Referer", "https://yun.139.com/")
|
|
||||||
req.ContentLength = partSize
|
|
||||||
|
|
||||||
res, err := base.HttpClient.Do(req)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
_ = res.Body.Close()
|
|
||||||
log.Debugf("[139] uploaded: %+v", res)
|
|
||||||
if res.StatusCode != http.StatusOK {
|
|
||||||
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// 全部分片上传完毕后,complete
|
||||||
data = base.Json{
|
data = base.Json{
|
||||||
"contentHash": fullHash,
|
"contentHash": fullHash,
|
||||||
"contentHashAlgorithm": "SHA256",
|
"contentHashAlgorithm": "SHA256",
|
||||||
|
```diff
@@ -1,9 +1,11 @@
 package _139
 
 import (
+	"context"
 	"encoding/base64"
 	"errors"
 	"fmt"
+	"io"
 	"net/http"
 	"net/url"
 	"path"
@@ -13,6 +15,7 @@ import (
 	"time"
 
 	"github.com/OpenListTeam/OpenList/v4/drivers/base"
+	"github.com/OpenListTeam/OpenList/v4/internal/driver"
 	"github.com/OpenListTeam/OpenList/v4/internal/model"
 	"github.com/OpenListTeam/OpenList/v4/internal/op"
 	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
```
|
|||||||
}
|
}
|
||||||
return d.PersonalCloudHost
|
return d.PersonalCloudHost
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (d *Yun139) uploadPersonalParts(ctx context.Context, partInfos []PartInfo, uploadPartInfos []PersonalPartInfo, rateLimited *driver.RateLimitReader, p *driver.Progress) error {
|
||||||
|
// 确保数组以 PartNumber 从小到大排序
|
||||||
|
sort.Slice(uploadPartInfos, func(i, j int) bool {
|
||||||
|
return uploadPartInfos[i].PartNumber < uploadPartInfos[j].PartNumber
|
||||||
|
})
|
||||||
|
|
||||||
|
for _, uploadPartInfo := range uploadPartInfos {
|
||||||
|
index := uploadPartInfo.PartNumber - 1
|
||||||
|
if index < 0 || index >= len(partInfos) {
|
||||||
|
return fmt.Errorf("invalid PartNumber %d: index out of bounds (partInfos length: %d)", uploadPartInfo.PartNumber, len(partInfos))
|
||||||
|
}
|
||||||
|
partSize := partInfos[index].PartSize
|
||||||
|
log.Debugf("[139] uploading part %+v/%+v", index, len(partInfos))
|
||||||
|
limitReader := io.LimitReader(rateLimited, partSize)
|
||||||
|
r := io.TeeReader(limitReader, p)
|
||||||
|
req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
req.Header.Set("Content-Type", "application/octet-stream")
|
||||||
|
req.Header.Set("Content-Length", fmt.Sprint(partSize))
|
||||||
|
req.Header.Set("Origin", "https://yun.139.com")
|
||||||
|
req.Header.Set("Referer", "https://yun.139.com/")
|
||||||
|
req.ContentLength = partSize
|
||||||
|
err = func() error {
|
||||||
|
res, err := base.HttpClient.Do(req)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer res.Body.Close()
|
||||||
|
log.Debugf("[139] uploaded: %+v", res)
|
||||||
|
if res.StatusCode != http.StatusOK {
|
||||||
|
body, _ := io.ReadAll(res.Body)
|
||||||
|
return fmt.Errorf("unexpected status code: %d, body: %s", res.StatusCode, string(body))
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}()
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
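`uploadPersonalParts` sorts the server-returned descriptors by `PartNumber` and bounds-checks each one before indexing into the locally computed part list. A trimmed-down analogue of just that ordering and validation; the types and helper name are mine:

```go
package main

import (
	"fmt"
	"sort"
)

type part struct{ PartNumber int }

// orderedIndices sorts remote part descriptors by PartNumber ascending and
// maps each to a zero-based index into a local list of `localParts` entries,
// rejecting any PartNumber that falls outside it instead of panicking on it.
func orderedIndices(localParts int, remote []part) ([]int, error) {
	sort.Slice(remote, func(i, j int) bool { return remote[i].PartNumber < remote[j].PartNumber })
	idx := make([]int, 0, len(remote))
	for _, r := range remote {
		i := r.PartNumber - 1
		if i < 0 || i >= localParts {
			return nil, fmt.Errorf("invalid PartNumber %d (local parts: %d)", r.PartNumber, localParts)
		}
		idx = append(idx, i)
	}
	return idx, nil
}

func main() {
	got, err := orderedIndices(3, []part{{3}, {1}, {2}})
	fmt.Println(got, err) // [0 1 2] <nil>
}
```

The bounds check is what turns a malformed server response into a returned error rather than an index-out-of-range panic, which the old inline loop was exposed to.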
```diff
@@ -131,6 +131,7 @@ func (y *Cloud189TV) put(ctx context.Context, url string, headers map[string]str
 		}
 	}
 
+	// http.Client will Close the Request.Body once the request completes
 	resp, err := base.HttpClient.Do(req)
 	if err != nil {
 		return nil, err
@@ -333,6 +334,10 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
 
 	// the file does not exist on the drive yet; start uploading
 	status := GetUploadFileStatusResp{CreateUploadFileResp: *uploadInfo}
+	// driver.RateLimitReader tries to Close the underlying reader,
+	// but tempFile here is an *os.File that cannot be read again after Close,
+	// so wrap it in io.NopCloser
+	rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
 	for status.GetSize() < file.GetSize() && status.FileDataExists != 1 {
 		if utils.IsCanceled(ctx) {
 			return nil, ctx.Err()
@@ -350,7 +355,7 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
 			header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
 		}
 
-		_, err := y.put(ctx, status.FileUploadUrl, header, true, tempFile, isFamily)
+		_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimitedRd, isFamily)
 		if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
 			return nil, err
 		}
```
@ -472,14 +472,16 @@ func (y *Cloud189PC) refreshSession() (err error) {
|
|||||||
// 普通上传
|
// 普通上传
|
||||||
// 无法上传大小为0的文件
|
// 无法上传大小为0的文件
|
||||||
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
|
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
|
||||||
size := file.GetSize()
|
// 文件大小
|
||||||
sliceSize := min(size, partSize(size))
|
fileSize := file.GetSize()
|
||||||
|
// 分片大小,不得为文件大小
|
||||||
|
sliceSize := partSize(fileSize)
|
||||||
|
|
||||||
params := Params{
|
params := Params{
|
||||||
"parentFolderId": dstDir.GetID(),
|
"parentFolderId": dstDir.GetID(),
|
||||||
"fileName": url.QueryEscape(file.GetName()),
|
"fileName": url.QueryEscape(file.GetName()),
|
||||||
"fileSize": fmt.Sprint(file.GetSize()),
|
"fileSize": fmt.Sprint(fileSize),
|
||||||
"sliceSize": fmt.Sprint(sliceSize),
|
"sliceSize": fmt.Sprint(sliceSize), // 必须为特定分片大小
|
||||||
"lazyCheck": "1",
|
"lazyCheck": "1",
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -512,10 +514,10 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
|
|||||||
retry.DelayType(retry.BackOffDelay))
|
retry.DelayType(retry.BackOffDelay))
|
||||||
|
|
||||||
count := 1
|
count := 1
|
||||||
if size > sliceSize {
|
if fileSize > sliceSize {
|
||||||
count = int((size + sliceSize - 1) / sliceSize)
|
count = int((fileSize + sliceSize - 1) / sliceSize)
|
||||||
}
|
}
|
||||||
lastPartSize := size % sliceSize
|
lastPartSize := fileSize % sliceSize
|
||||||
if lastPartSize == 0 {
|
if lastPartSize == 0 {
|
||||||
lastPartSize = sliceSize
|
lastPartSize = sliceSize
|
||||||
}
|
}
|
||||||
```diff
@@ -535,9 +537,9 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 				break
 			}
 			offset := int64((i)-1) * sliceSize
-			size := sliceSize
+			partSize := sliceSize
 			if i == count {
-				size = lastPartSize
+				partSize = lastPartSize
 			}
 			partInfo := ""
 			var reader *stream.SectionReader
@@ -546,14 +548,14 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 			Before: func(ctx context.Context) error {
 				if reader == nil {
 					var err error
-					reader, err = ss.GetSectionReader(offset, size)
+					reader, err = ss.GetSectionReader(offset, partSize)
 					if err != nil {
 						return err
 					}
 					silceMd5.Reset()
 					w, err := utils.CopyWithBuffer(writers, reader)
-					if w != size {
-						return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", size, w, err)
+					if w != partSize {
+						return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", partSize, w, err)
 					}
 					// compute the chunk MD5 and hex/base64-encode it
 					md5Bytes := silceMd5.Sum(nil)
@@ -595,7 +597,7 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
 		fileMd5Hex = strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
 	}
 	sliceMd5Hex := fileMd5Hex
-	if file.GetSize() > sliceSize {
+	if fileSize > sliceSize {
 		sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
 	}
```
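The hunks above rename `size` to `partSize` inside the chunked-upload loop, where parts are 1-indexed, each part starts at `(i-1)*sliceSize`, and only the final part may be shorter. A minimal standalone sketch of that offset/size arithmetic (the helper name and the derivation of `count`/`lastPartSize` are illustrative, not taken from the driver):

```go
package main

import "fmt"

// partitionUpload mirrors the loop above: parts are 1-indexed, each part
// starts at (i-1)*sliceSize, and only the last part may be shorter.
func partitionUpload(fileSize, sliceSize int64) [][2]int64 {
	count := (fileSize + sliceSize - 1) / sliceSize
	lastPartSize := fileSize % sliceSize
	if lastPartSize == 0 {
		lastPartSize = sliceSize
	}
	var parts [][2]int64 // each entry is {offset, partSize}
	for i := int64(1); i <= count; i++ {
		offset := (i - 1) * sliceSize
		partSize := sliceSize
		if i == count {
			partSize = lastPartSize
		}
		parts = append(parts, [2]int64{offset, partSize})
	}
	return parts
}

func main() {
	fmt.Println(partitionUpload(25, 10)) // [[0 10] [10 10] [20 5]]
}
```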
```diff
@@ -23,6 +23,7 @@ import (
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve_v4"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/crypt"
+	_ "github.com/OpenListTeam/OpenList/v4/drivers/degoo"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/doubao"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/doubao_share"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/dropbox"
@@ -60,6 +61,7 @@ import (
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/smb"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/strm"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/teambition"
+	_ "github.com/OpenListTeam/OpenList/v4/drivers/teldrive"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/terabox"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder"
 	_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder_browser"
```
drivers/degoo/driver.go (new file, 203 lines):
```go
package degoo

import (
	"context"
	"fmt"
	"net/http"
	"strconv"
	"time"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/errs"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)

type Degoo struct {
	model.Storage
	Addition
	client *http.Client
}

func (d *Degoo) Config() driver.Config {
	return config
}

func (d *Degoo) GetAddition() driver.Additional {
	return &d.Addition
}

func (d *Degoo) Init(ctx context.Context) error {
	d.client = base.HttpClient

	// Ensure we have a valid token (will login if needed or refresh if expired)
	if err := d.ensureValidToken(ctx); err != nil {
		return fmt.Errorf("failed to initialize token: %w", err)
	}

	return d.getDevices(ctx)
}

func (d *Degoo) Drop(ctx context.Context) error {
	return nil
}

func (d *Degoo) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	items, err := d.getAllFileChildren5(ctx, dir.GetID())
	if err != nil {
		return nil, err
	}
	return utils.MustSliceConvert(items, func(s DegooFileItem) model.Obj {
		isFolder := s.Category == 2 || s.Category == 1 || s.Category == 10

		createTime, modTime, _ := humanReadableTimes(s.CreationTime, s.LastModificationTime, s.LastUploadTime)

		size, err := strconv.ParseInt(s.Size, 10, 64)
		if err != nil {
			size = 0 // Default to 0 if size parsing fails
		}

		return &model.Object{
			ID:       s.ID,
			Path:     s.FilePath,
			Name:     s.Name,
			Size:     size,
			Modified: modTime,
			Ctime:    createTime,
			IsFolder: isFolder,
		}
	}), nil
}

func (d *Degoo) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	item, err := d.getOverlay4(ctx, file.GetID())
	if err != nil {
		return nil, err
	}

	return &model.Link{URL: item.URL}, nil
}

func (d *Degoo) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
	// This is done by calling the setUploadFile3 API with a special checksum and size.
	const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) { setUploadFile3(Token: $Token, FileInfos: $FileInfos) }`

	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"FileInfos": []map[string]interface{}{
			{
				"Checksum":     folderChecksum,
				"Name":         dirName,
				"CreationTime": time.Now().UnixMilli(),
				"ParentID":     parentDir.GetID(),
				"Size":         0,
			},
		},
	}

	_, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
	if err != nil {
		return err
	}

	return nil
}

func (d *Degoo) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
	const query = `mutation SetMoveFile($Token: String!, $Copy: Boolean, $NewParentID: String!, $FileIDs: [String]!) { setMoveFile(Token: $Token, Copy: $Copy, NewParentID: $NewParentID, FileIDs: $FileIDs) }`

	variables := map[string]interface{}{
		"Token":       d.AccessToken,
		"Copy":        false,
		"NewParentID": dstDir.GetID(),
		"FileIDs":     []string{srcObj.GetID()},
	}

	_, err := d.apiCall(ctx, "SetMoveFile", query, variables)
	if err != nil {
		return nil, err
	}

	return srcObj, nil
}

func (d *Degoo) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
	const query = `mutation SetRenameFile($Token: String!, $FileRenames: [FileRenameInfo]!) { setRenameFile(Token: $Token, FileRenames: $FileRenames) }`

	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"FileRenames": []DegooFileRenameInfo{
			{
				ID:      srcObj.GetID(),
				NewName: newName,
			},
		},
	}

	_, err := d.apiCall(ctx, "SetRenameFile", query, variables)
	if err != nil {
		return err
	}
	return nil
}

func (d *Degoo) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
	// Copy is not implemented; the Degoo API does not support direct copy.
	return nil, errs.NotImplement
}

func (d *Degoo) Remove(ctx context.Context, obj model.Obj) error {
	// Remove deletes a file or folder (moves it to the trash).
	const query = `mutation SetDeleteFile5($Token: String!, $IsInRecycleBin: Boolean!, $IDs: [IDType]!) { setDeleteFile5(Token: $Token, IsInRecycleBin: $IsInRecycleBin, IDs: $IDs) }`

	variables := map[string]interface{}{
		"Token":          d.AccessToken,
		"IsInRecycleBin": false,
		"IDs":            []map[string]string{{"FileID": obj.GetID()}},
	}

	_, err := d.apiCall(ctx, "SetDeleteFile5", query, variables)
	return err
}

func (d *Degoo) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
	tmpF, err := file.CacheFullAndWriter(&up, nil)
	if err != nil {
		return err
	}

	parentID := dstDir.GetID()

	// Calculate the checksum for the file.
	checksum, err := d.checkSum(tmpF)
	if err != nil {
		return err
	}

	// 1. Get upload authorization via getBucketWriteAuth4.
	auths, err := d.getBucketWriteAuth4(ctx, file, parentID, checksum)
	if err != nil {
		return err
	}

	// 2. Upload file.
	// support rapid upload
	if auths.GetBucketWriteAuth4[0].Error != "Already exist!" {
		err = d.uploadS3(ctx, auths, tmpF, file, checksum)
		if err != nil {
			return err
		}
	}

	// 3. Register metadata with setUploadFile3.
	data, err := d.SetUploadFile3(ctx, file, parentID, checksum)
	if err != nil {
		return err
	}
	if !data.SetUploadFile3 {
		return fmt.Errorf("setUploadFile3 failed: %v", data)
	}
	return nil
}
```
drivers/degoo/meta.go (new file, 27 lines):
```go
package degoo

import (
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
)

type Addition struct {
	driver.RootID
	Username     string `json:"username" help:"Your Degoo account email"`
	Password     string `json:"password" help:"Your Degoo account password"`
	RefreshToken string `json:"refresh_token" help:"Refresh token for automatic token renewal, obtained automatically"`
	AccessToken  string `json:"access_token" help:"Access token for Degoo API, obtained automatically"`
}

var config = driver.Config{
	Name:              "Degoo",
	LocalSort:         true,
	DefaultRoot:       "0",
	NoOverwriteUpload: true,
}

func init() {
	op.RegisterDriver(func() driver.Driver {
		return &Degoo{}
	})
}
```
drivers/degoo/types.go (new file, 110 lines):
```go
package degoo

import (
	"encoding/json"
)

// DegooLoginRequest represents the login request body.
type DegooLoginRequest struct {
	GenerateToken bool   `json:"GenerateToken"`
	Username      string `json:"Username"`
	Password      string `json:"Password"`
}

// DegooLoginResponse represents a successful login response.
type DegooLoginResponse struct {
	Token        string `json:"Token"`
	RefreshToken string `json:"RefreshToken"`
}

// DegooAccessTokenRequest represents the token refresh request body.
type DegooAccessTokenRequest struct {
	RefreshToken string `json:"RefreshToken"`
}

// DegooAccessTokenResponse represents the token refresh response.
type DegooAccessTokenResponse struct {
	AccessToken string `json:"AccessToken"`
}

// DegooFileItem represents a Degoo file or folder.
type DegooFileItem struct {
	ID                   string `json:"ID"`
	ParentID             string `json:"ParentID"`
	Name                 string `json:"Name"`
	Category             int    `json:"Category"`
	Size                 string `json:"Size"`
	URL                  string `json:"URL"`
	CreationTime         string `json:"CreationTime"`
	LastModificationTime string `json:"LastModificationTime"`
	LastUploadTime       string `json:"LastUploadTime"`
	MetadataID           string `json:"MetadataID"`
	DeviceID             int64  `json:"DeviceID"`
	FilePath             string `json:"FilePath"`
	IsInRecycleBin       bool   `json:"IsInRecycleBin"`
}

type DegooErrors struct {
	Path      []string    `json:"path"`
	Data      interface{} `json:"data"`
	ErrorType string      `json:"errorType"`
	ErrorInfo interface{} `json:"errorInfo"`
	Message   string      `json:"message"`
}

// DegooGraphqlResponse is the common structure for GraphQL API responses.
type DegooGraphqlResponse struct {
	Data   json.RawMessage `json:"data"`
	Errors []DegooErrors   `json:"errors,omitempty"`
}

// DegooGetChildren5Data is the data field for getFileChildren5.
type DegooGetChildren5Data struct {
	GetFileChildren5 struct {
		Items     []DegooFileItem `json:"Items"`
		NextToken string          `json:"NextToken"`
	} `json:"getFileChildren5"`
}

// DegooGetOverlay4Data is the data field for getOverlay4.
type DegooGetOverlay4Data struct {
	GetOverlay4 DegooFileItem `json:"getOverlay4"`
}

// DegooFileRenameInfo represents a file rename operation.
type DegooFileRenameInfo struct {
	ID      string `json:"ID"`
	NewName string `json:"NewName"`
}

// DegooFileIDs represents a list of file IDs for move operations.
type DegooFileIDs struct {
	FileIDs []string `json:"FileIDs"`
}

// DegooGetBucketWriteAuth4Data is the data field for GetBucketWriteAuth4.
type DegooGetBucketWriteAuth4Data struct {
	GetBucketWriteAuth4 []struct {
		AuthData struct {
			PolicyBase64 string `json:"PolicyBase64"`
			Signature    string `json:"Signature"`
			BaseURL      string `json:"BaseURL"`
			KeyPrefix    string `json:"KeyPrefix"`
			AccessKey    struct {
				Key   string `json:"Key"`
				Value string `json:"Value"`
			} `json:"AccessKey"`
			ACL            string `json:"ACL"`
			AdditionalBody []struct {
				Key   string `json:"Key"`
				Value string `json:"Value"`
			} `json:"AdditionalBody"`
		} `json:"AuthData"`
		Error interface{} `json:"Error"`
	} `json:"getBucketWriteAuth4"`
}

// DegooSetUploadFile3Data is the data field for SetUploadFile3.
type DegooSetUploadFile3Data struct {
	SetUploadFile3 bool `json:"setUploadFile3"`
}
```
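Because `DegooGraphqlResponse.Data` is a `json.RawMessage`, responses are decoded in two stages: the common envelope first, then the raw data into the operation-specific struct. A self-contained sketch using trimmed copies of the types above (the `decodeSetUploadFile3` helper is illustrative, not from the driver):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copies of the driver's response types.
type DegooErrors struct {
	ErrorType string `json:"errorType"`
	Message   string `json:"message"`
}

type DegooGraphqlResponse struct {
	Data   json.RawMessage `json:"data"` // left raw for a second-stage decode
	Errors []DegooErrors   `json:"errors,omitempty"`
}

type DegooSetUploadFile3Data struct {
	SetUploadFile3 bool `json:"setUploadFile3"`
}

// decodeSetUploadFile3 shows the two-stage decode: envelope, then payload.
func decodeSetUploadFile3(raw []byte) (bool, error) {
	var resp DegooGraphqlResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		return false, err
	}
	if len(resp.Errors) > 0 {
		return false, fmt.Errorf("degoo api error: %s", resp.Errors[0].Message)
	}
	var data DegooSetUploadFile3Data
	if err := json.Unmarshal(resp.Data, &data); err != nil {
		return false, err
	}
	return data.SetUploadFile3, nil
}

func main() {
	ok, err := decodeSetUploadFile3([]byte(`{"data":{"setUploadFile3":true}}`))
	fmt.Println(ok, err) // true <nil>
}
```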
drivers/degoo/upload.go (new file, 198 lines):
```go
package degoo

import (
	"bytes"
	"context"
	"crypto/sha1"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"strconv"
	"strings"

	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)

func (d *Degoo) getBucketWriteAuth4(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooGetBucketWriteAuth4Data, error) {
	const query = `query GetBucketWriteAuth4(
    $Token: String!
    $ParentID: String!
    $StorageUploadInfos: [StorageUploadInfo2]
  ) {
    getBucketWriteAuth4(
      Token: $Token
      ParentID: $ParentID
      StorageUploadInfos: $StorageUploadInfos
    ) {
      AuthData {
        PolicyBase64
        Signature
        BaseURL
        KeyPrefix
        AccessKey {
          Key
          Value
        }
        ACL
        AdditionalBody {
          Key
          Value
        }
      }
      Error
    }
  }`

	variables := map[string]interface{}{
		"Token":    d.AccessToken,
		"ParentID": parentID,
		"StorageUploadInfos": []map[string]string{{
			"FileName": file.GetName(),
			"Checksum": checksum,
			"Size":     strconv.FormatInt(file.GetSize(), 10),
		}}}

	data, err := d.apiCall(ctx, "GetBucketWriteAuth4", query, variables)
	if err != nil {
		return nil, err
	}

	var resp DegooGetBucketWriteAuth4Data
	err = json.Unmarshal(data, &resp)
	if err != nil {
		return nil, err
	}

	return &resp, nil
}

// checkSum calculates the SHA1-based checksum for the Degoo upload API.
func (d *Degoo) checkSum(file io.Reader) (string, error) {
	seed := []byte{13, 7, 2, 2, 15, 40, 75, 117, 13, 10, 19, 16, 29, 23, 3, 36}
	hasher := sha1.New()
	hasher.Write(seed)

	if _, err := utils.CopyWithBuffer(hasher, file); err != nil {
		return "", err
	}

	cs := hasher.Sum(nil)

	csBytes := []byte{10, byte(len(cs))}
	csBytes = append(csBytes, cs...)
	csBytes = append(csBytes, 16, 0)

	return strings.ReplaceAll(base64.StdEncoding.EncodeToString(csBytes), "/", "_"), nil
}

func (d *Degoo) uploadS3(ctx context.Context, auths *DegooGetBucketWriteAuth4Data, tmpF model.File, file model.FileStreamer, checksum string) error {
	a := auths.GetBucketWriteAuth4[0].AuthData

	_, err := tmpF.Seek(0, io.SeekStart)
	if err != nil {
		return err
	}

	ext := utils.Ext(file.GetName())
	key := fmt.Sprintf("%s%s/%s.%s", a.KeyPrefix, ext, checksum, ext)

	var b bytes.Buffer
	w := multipart.NewWriter(&b)
	err = w.WriteField("key", key)
	if err != nil {
		return err
	}
	err = w.WriteField("acl", a.ACL)
	if err != nil {
		return err
	}
	err = w.WriteField("policy", a.PolicyBase64)
	if err != nil {
		return err
	}
	err = w.WriteField("signature", a.Signature)
	if err != nil {
		return err
	}
	err = w.WriteField(a.AccessKey.Key, a.AccessKey.Value)
	if err != nil {
		return err
	}
	for _, additional := range a.AdditionalBody {
		err = w.WriteField(additional.Key, additional.Value)
		if err != nil {
			return err
		}
	}
	err = w.WriteField("Content-Type", "")
	if err != nil {
		return err
	}

	_, err = w.CreateFormFile("file", key)
	if err != nil {
		return err
	}

	headSize := b.Len()
	err = w.Close()
	if err != nil {
		return err
	}
	head := bytes.NewReader(b.Bytes()[:headSize])
	tail := bytes.NewReader(b.Bytes()[headSize:])

	rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, tmpF, tail))
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, a.BaseURL, rateLimitedRd)
	if err != nil {
		return err
	}
	req.Header.Add("ngsw-bypass", "1")
	req.Header.Add("Content-Type", w.FormDataContentType())

	res, err := d.client.Do(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusNoContent {
		return fmt.Errorf("upload failed with status code %d", res.StatusCode)
	}
	return nil
}

var _ driver.Driver = (*Degoo)(nil)

func (d *Degoo) SetUploadFile3(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooSetUploadFile3Data, error) {
	const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) {
    setUploadFile3(Token: $Token, FileInfos: $FileInfos)
  }`

	variables := map[string]interface{}{
		"Token": d.AccessToken,
		"FileInfos": []map[string]string{{
			"Checksum":     checksum,
			"CreationTime": strconv.FormatInt(file.CreateTime().UnixMilli(), 10),
			"Name":         file.GetName(),
			"ParentID":     parentID,
			"Size":         strconv.FormatInt(file.GetSize(), 10),
		}}}

	data, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
	if err != nil {
		return nil, err
	}

	var resp DegooSetUploadFile3Data
	err = json.Unmarshal(data, &resp)
	if err != nil {
		return nil, err
	}

	return &resp, nil
}
```
drivers/degoo/util.go (new file, 462 lines; truncated below):
```go
package degoo

import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
)

// Thanks to https://github.com/bernd-wechner/Degoo for API research.

const (
	// API endpoints
	loginURL       = "https://rest-api.degoo.com/login"
	accessTokenURL = "https://rest-api.degoo.com/access-token/v2"
	apiURL         = "https://production-appsync.degoo.com/graphql"

	// API configuration
	apiKey         = "da2-vs6twz5vnjdavpqndtbzg3prra"
	folderChecksum = "CgAQAg"

	// Token management
	tokenRefreshThreshold = 5 * time.Minute

	// Rate limiting
	minRequestInterval = 1 * time.Second

	// Error messages
	errRateLimited  = "rate limited (429), please try again later"
	errUnauthorized = "unauthorized access"
)

var (
	// Global rate limiting - protects against concurrent API calls
	lastRequestTime time.Time
	requestMutex    sync.Mutex
)

// JWTPayload is the JWT payload structure used for token expiration checking.
type JWTPayload struct {
	UserID string `json:"userID"`
	Exp    int64  `json:"exp"`
	Iat    int64  `json:"iat"`
}

// Rate limiting helper functions

// applyRateLimit ensures a minimum interval between API requests.
func applyRateLimit() {
	requestMutex.Lock()
	defer requestMutex.Unlock()

	if !lastRequestTime.IsZero() {
		if elapsed := time.Since(lastRequestTime); elapsed < minRequestInterval {
			time.Sleep(minRequestInterval - elapsed)
		}
	}
	lastRequestTime = time.Now()
}

// HTTP request helper functions

// createJSONRequest creates a new HTTP request with a JSON body.
func createJSONRequest(ctx context.Context, method, url string, body interface{}) (*http.Request, error) {
	jsonBody, err := json.Marshal(body)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal request body: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, method, url, bytes.NewBuffer(jsonBody))
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}

	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", base.UserAgent)
	return req, nil
}

// checkHTTPResponse checks for common HTTP error conditions.
func checkHTTPResponse(resp *http.Response, operation string) error {
	if resp.StatusCode == http.StatusTooManyRequests {
		return fmt.Errorf("%s %s", operation, errRateLimited)
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s failed: %s", operation, resp.Status)
	}
	return nil
}

// isTokenExpired checks if the JWT token is expired or will expire soon.
func (d *Degoo) isTokenExpired() bool {
	if d.AccessToken == "" {
		return true
	}

	payload, err := extractJWTPayload(d.AccessToken)
	if err != nil {
		return true // Invalid token format
	}

	// Check if the token expires within the threshold
	expireTime := time.Unix(payload.Exp, 0)
	return time.Now().Add(tokenRefreshThreshold).After(expireTime)
}

// extractJWTPayload extracts and parses the JWT payload.
func extractJWTPayload(token string) (*JWTPayload, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("invalid JWT format")
	}

	// Decode the payload (second part)
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, fmt.Errorf("failed to decode JWT payload: %w", err)
	}

	var jwtPayload JWTPayload
	if err := json.Unmarshal(payload, &jwtPayload); err != nil {
		return nil, fmt.Errorf("failed to parse JWT payload: %w", err)
	}

	return &jwtPayload, nil
}
```
```go
// refreshToken attempts to refresh the access token using the refresh token.
func (d *Degoo) refreshToken(ctx context.Context) error {
	if d.RefreshToken == "" {
		return fmt.Errorf("no refresh token available")
	}

	// Create request
	tokenReq := DegooAccessTokenRequest{RefreshToken: d.RefreshToken}
	req, err := createJSONRequest(ctx, "POST", accessTokenURL, tokenReq)
	if err != nil {
		return fmt.Errorf("failed to create refresh token request: %w", err)
	}

	// Execute request
	resp, err := d.client.Do(req)
	if err != nil {
		return fmt.Errorf("refresh token request failed: %w", err)
	}
	defer resp.Body.Close()

	// Check response
	if err := checkHTTPResponse(resp, "refresh token"); err != nil {
		return err
	}

	var accessTokenResp DegooAccessTokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&accessTokenResp); err != nil {
		return fmt.Errorf("failed to parse access token response: %w", err)
	}

	if accessTokenResp.AccessToken == "" {
		return fmt.Errorf("empty access token received")
	}

	d.AccessToken = accessTokenResp.AccessToken
	// Save the updated token to storage
	op.MustSaveDriverStorage(d)

	return nil
}

// ensureValidToken ensures we have a valid, non-expired token.
func (d *Degoo) ensureValidToken(ctx context.Context) error {
	// Check if the token is expired or will expire soon
	if d.isTokenExpired() {
		// Try to refresh first if we have a refresh token
		if d.RefreshToken != "" {
			if refreshErr := d.refreshToken(ctx); refreshErr == nil {
				return nil // Successfully refreshed
			} else {
				// If refresh failed, fall back to full login
				fmt.Printf("Token refresh failed, falling back to full login: %v\n", refreshErr)
			}
		}

		// Perform full login
		if d.Username != "" && d.Password != "" {
			return d.login(ctx)
		}
	}

	return nil
}

// login performs the login process and retrieves the access token.
func (d *Degoo) login(ctx context.Context) error {
	if d.Username == "" || d.Password == "" {
		return fmt.Errorf("username or password not provided")
	}

	creds := DegooLoginRequest{
		GenerateToken: true,
		Username:      d.Username,
		Password:      d.Password,
	}

	jsonCreds, err := json.Marshal(creds)
	if err != nil {
		return fmt.Errorf("failed to serialize login credentials: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, "POST", loginURL, bytes.NewBuffer(jsonCreds))
	if err != nil {
		return fmt.Errorf("failed to create login request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", base.UserAgent)
	req.Header.Set("Origin", "https://app.degoo.com")

	resp, err := d.client.Do(req)
	if err != nil {
		return fmt.Errorf("login request failed: %w", err)
	}
	defer resp.Body.Close()

	// Handle rate limiting (429 Too Many Requests)
	if resp.StatusCode == http.StatusTooManyRequests {
		return fmt.Errorf("login rate limited (429), please try again later")
	}

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("login failed: %s", resp.Status)
	}

	var loginResp DegooLoginResponse
	if err := json.NewDecoder(resp.Body).Decode(&loginResp); err != nil {
		return fmt.Errorf("failed to parse login response: %w", err)
	}

	if loginResp.RefreshToken != "" {
		tokenReq := DegooAccessTokenRequest{RefreshToken: loginResp.RefreshToken}
		jsonTokenReq, err := json.Marshal(tokenReq)
		if err != nil {
			return fmt.Errorf("failed to serialize access token request: %w", err)
		}

		tokenReqHTTP, err := http.NewRequestWithContext(ctx, "POST", accessTokenURL, bytes.NewBuffer(jsonTokenReq))
		if err != nil {
			return fmt.Errorf("failed to create access token request: %w", err)
		}

		tokenReqHTTP.Header.Set("User-Agent", base.UserAgent)

		tokenResp, err := d.client.Do(tokenReqHTTP)
		if err != nil {
			return fmt.Errorf("failed to get access token: %w", err)
		}
		defer tokenResp.Body.Close()

		var accessTokenResp DegooAccessTokenResponse
		if err := json.NewDecoder(tokenResp.Body).Decode(&accessTokenResp); err != nil {
			return fmt.Errorf("failed to parse access token response: %w", err)
		}
		d.AccessToken = accessTokenResp.AccessToken
		d.RefreshToken = loginResp.RefreshToken // Save refresh token
	} else if loginResp.Token != "" {
		d.AccessToken = loginResp.Token
		d.RefreshToken = "" // Direct token, no refresh token available
	} else {
		return fmt.Errorf("login failed, no valid token returned")
	}

	// Save the updated tokens to storage
	op.MustSaveDriverStorage(d)

	return nil
}

// apiCall performs a Degoo GraphQL API request.
func (d *Degoo) apiCall(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
	// Apply rate limiting
	applyRateLimit()

	// Ensure we have a valid token before making the API call
	if err := d.ensureValidToken(ctx); err != nil {
		return nil, fmt.Errorf("failed to ensure valid token: %w", err)
	}

	// Update the Token in variables if it exists (after a potential refresh)
	d.updateTokenInVariables(variables)

	return d.executeGraphQLRequest(ctx, operationName, query, variables)
}

// updateTokenInVariables updates the Token field in GraphQL variables.
func (d *Degoo) updateTokenInVariables(variables map[string]interface{}) {
	if variables != nil {
		if _, hasToken := variables["Token"]; hasToken {
			variables["Token"] = d.AccessToken
		}
	}
}

// executeGraphQLRequest executes a GraphQL request with retry logic.
func (d *Degoo) executeGraphQLRequest(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
	reqBody := map[string]interface{}{
		"operationName": operationName,
		"query":         query,
		"variables":     variables,
	}

	// Create and configure request
	req, err := createJSONRequest(ctx, "POST", apiURL, reqBody)
	if err != nil {
		return nil, err
	}

	// Set Degoo-specific headers
	req.Header.Set("x-api-key", apiKey)
	if d.AccessToken != "" {
		req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", d.AccessToken))
```
|
||||||
|
}
|
||||||
|
|
||||||
|
// Execute request
|
||||||
|
resp, err := d.client.Do(req)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("GraphQL API request failed: %w", err)
|
||||||
|
}
|
||||||
|
defer resp.Body.Close()
|
||||||
|
|
||||||
|
// Check for HTTP errors
|
||||||
|
if err := checkHTTPResponse(resp, "GraphQL API"); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Parse GraphQL response
|
||||||
|
var degooResp DegooGraphqlResponse
|
||||||
|
if err := json.NewDecoder(resp.Body).Decode(°ooResp); err != nil {
|
||||||
|
return nil, fmt.Errorf("failed to decode GraphQL response: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Handle GraphQL errors
|
||||||
|
if len(degooResp.Errors) > 0 {
|
||||||
|
return d.handleGraphQLError(ctx, degooResp.Errors[0], operationName, query, variables)
|
||||||
|
}
|
||||||
|
|
||||||
|
return degooResp.Data, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// handleGraphQLError handles GraphQL-level errors with retry logic
|
||||||
|
func (d *Degoo) handleGraphQLError(ctx context.Context, gqlError DegooErrors, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
|
||||||
|
if gqlError.ErrorType == "Unauthorized" {
|
||||||
|
// Re-login and retry
|
||||||
|
if err := d.login(ctx); err != nil {
|
||||||
|
return nil, fmt.Errorf("%s, login failed: %w", errUnauthorized, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update token in variables and retry
|
||||||
|
d.updateTokenInVariables(variables)
|
||||||
|
return d.apiCall(ctx, operationName, query, variables)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil, fmt.Errorf("GraphQL API error: %s", gqlError.Message)
|
||||||
|
}
|
||||||
|
|
||||||
|
// humanReadableTimes converts Degoo timestamps to Go time.Time.
|
||||||
|
func humanReadableTimes(creation, modification, upload string) (cTime, mTime, uTime time.Time) {
|
||||||
|
cTime, _ = time.Parse(time.RFC3339, creation)
|
||||||
|
if modification != "" {
|
||||||
|
modMillis, _ := strconv.ParseInt(modification, 10, 64)
|
||||||
|
mTime = time.Unix(0, modMillis*int64(time.Millisecond))
|
||||||
|
}
|
||||||
|
if upload != "" {
|
||||||
|
upMillis, _ := strconv.ParseInt(upload, 10, 64)
|
||||||
|
uTime = time.Unix(0, upMillis*int64(time.Millisecond))
|
||||||
|
}
|
||||||
|
return cTime, mTime, uTime
|
||||||
|
}
|
||||||
|
|
||||||
|
// getDevices fetches and caches top-level devices and folders.
|
||||||
|
func (d *Degoo) getDevices(ctx context.Context) error {
|
||||||
|
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ParentID } NextToken } }`
|
||||||
|
variables := map[string]interface{}{
|
||||||
|
"Token": d.AccessToken,
|
||||||
|
"ParentID": "0",
|
||||||
|
"Limit": 10,
|
||||||
|
"Order": 3,
|
||||||
|
}
|
||||||
|
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
var resp DegooGetChildren5Data
|
||||||
|
if err := json.Unmarshal(data, &resp); err != nil {
|
||||||
|
return fmt.Errorf("failed to parse device list: %w", err)
|
||||||
|
}
|
||||||
|
if d.RootFolderID == "0" {
|
||||||
|
if len(resp.GetFileChildren5.Items) > 0 {
|
||||||
|
d.RootFolderID = resp.GetFileChildren5.Items[0].ParentID
|
||||||
|
}
|
||||||
|
op.MustSaveDriverStorage(d)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// getAllFileChildren5 fetches all children of a directory with pagination.
|
||||||
|
func (d *Degoo) getAllFileChildren5(ctx context.Context, parentID string) ([]DegooFileItem, error) {
|
||||||
|
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime FilePath IsInRecycleBin DeviceID MetadataID } NextToken } }`
|
||||||
|
var allItems []DegooFileItem
|
||||||
|
nextToken := ""
|
||||||
|
for {
|
||||||
|
variables := map[string]interface{}{
|
||||||
|
"Token": d.AccessToken,
|
||||||
|
"ParentID": parentID,
|
||||||
|
"Limit": 1000,
|
||||||
|
"Order": 3,
|
||||||
|
}
|
||||||
|
if nextToken != "" {
|
||||||
|
variables["NextToken"] = nextToken
|
||||||
|
}
|
||||||
|
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
var resp DegooGetChildren5Data
|
||||||
|
if err := json.Unmarshal(data, &resp); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
allItems = append(allItems, resp.GetFileChildren5.Items...)
|
||||||
|
if resp.GetFileChildren5.NextToken == "" {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
nextToken = resp.GetFileChildren5.NextToken
|
||||||
|
}
|
||||||
|
return allItems, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// getOverlay4 fetches metadata for a single item by ID.
|
||||||
|
func (d *Degoo) getOverlay4(ctx context.Context, id string) (DegooFileItem, error) {
|
||||||
|
const query = `query GetOverlay4($Token: String!, $ID: IDType!) { getOverlay4(Token: $Token, ID: $ID) { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime URL FilePath IsInRecycleBin DeviceID MetadataID } }`
|
||||||
|
variables := map[string]interface{}{
|
||||||
|
"Token": d.AccessToken,
|
||||||
|
"ID": map[string]string{
|
||||||
|
"FileID": id,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
data, err := d.apiCall(ctx, "GetOverlay4", query, variables)
|
||||||
|
if err != nil {
|
||||||
|
return DegooFileItem{}, err
|
||||||
|
}
|
||||||
|
var resp DegooGetOverlay4Data
|
||||||
|
if err := json.Unmarshal(data, &resp); err != nil {
|
||||||
|
return DegooFileItem{}, fmt.Errorf("failed to parse item metadata: %w", err)
|
||||||
|
}
|
||||||
|
return resp.GetOverlay4, nil
|
||||||
|
}
|
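The NextToken loop in getAllFileChildren5 is ordinary cursor pagination: request a page, append its items, and follow the returned cursor until it comes back empty. A minimal stdlib-only sketch of the same shape (the `page` type and `fetchAll` are illustrative names, not part of the Degoo API):

```go
package main

import "fmt"

// page mimics one paginated response: a batch of items plus an
// opaque cursor for the next batch ("" means no more pages).
type page struct {
	items     []int
	nextToken string
}

// fetchAll drains a paginated listing the way getAllFileChildren5
// does: request, append, follow the cursor until it is empty.
func fetchAll(fetch func(token string) page) []int {
	var all []int
	token := ""
	for {
		p := fetch(token)
		all = append(all, p.items...)
		if p.nextToken == "" {
			break
		}
		token = p.nextToken
	}
	return all
}

func main() {
	// Fake backend serving 25 items in pages of 10.
	data := make([]int, 25)
	for i := range data {
		data[i] = i
	}
	fetch := func(token string) page {
		start := 0
		fmt.Sscanf(token, "%d", &start) // empty token leaves start at 0
		end := start + 10
		if end > len(data) {
			end = len(data)
		}
		next := ""
		if end < len(data) {
			next = fmt.Sprintf("%d", end)
		}
		return page{items: data[start:end], nextToken: next}
	}
	fmt.Println(len(fetchAll(fetch))) // → 25
}
```

The cursor stays opaque to the caller, which is why the loop only compares it against the empty string rather than interpreting it.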
@@ -296,6 +296,23 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer
 		return nil, err
 	}
 	upToken := utils.Json.Get(res, "upToken").ToString()
+	if upToken == "-1" {
+		// Rapid (instant) upload is supported for this file
+		var resp UploadTokenRapidResp
+		err := utils.Json.Unmarshal(res, &resp)
+		if err != nil {
+			return nil, err
+		}
+		return &model.Object{
+			ID:       strconv.FormatInt(resp.Map.FileID, 10),
+			Name:     resp.Map.FileName,
+			Size:     s.GetSize(),
+			Modified: s.ModTime(),
+			Ctime:    s.CreateTime(),
+			IsFolder: false,
+			HashInfo: utils.NewHashInfo(utils.MD5, etag),
+		}, nil
+	}
 	now := time.Now()
 	key := fmt.Sprintf("disk/%d/%d/%d/%s/%016d", now.Year(), now.Month(), now.Day(), d.account, now.UnixMilli())
 	reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
@@ -32,6 +32,7 @@ func init() {
 			Name:        "ILanZou",
 			DefaultRoot: "0",
 			LocalSort:   true,
+			NoOverwriteUpload: true,
 		},
 		conf: Conf{
 			base: "https://api.ilanzou.com",
@@ -50,6 +51,7 @@ func init() {
 			Name:        "FeijiPan",
 			DefaultRoot: "0",
 			LocalSort:   true,
+			NoOverwriteUpload: true,
 		},
 		conf: Conf{
 			base: "https://api.feijipan.com",
@@ -43,6 +43,18 @@ type Part struct {
 	ETag string `json:"etag"`
 }
+
+type UploadTokenRapidResp struct {
+	Msg     string `json:"msg"`
+	Code    int    `json:"code"`
+	UpToken string `json:"upToken"`
+	Map     struct {
+		FileIconID int    `json:"fileIconId"`
+		FileName   string `json:"fileName"`
+		FileIcon   string `json:"fileIcon"`
+		FileID     int64  `json:"fileId"`
+	} `json:"map"`
+}
+
 type UploadResultResp struct {
 	Msg  string `json:"msg"`
 	Code int    `json:"code"`
@@ -149,13 +149,19 @@ func (d *QuarkOrUC) getTranscodingLink(file model.Obj) (*model.Link, error) {
 		return nil, err
 	}
+
+	for _, info := range resp.Data.VideoList {
+		if info.VideoInfo.URL != "" {
 			return &model.Link{
-				URL:           resp.Data.VideoList[0].VideoInfo.URL,
+				URL:           info.VideoInfo.URL,
-				ContentLength: resp.Data.VideoList[0].VideoInfo.Size,
+				ContentLength: info.VideoInfo.Size,
 				Concurrency:   3,
 				PartSize:      10 * utils.MB,
 			}, nil
 		}
+	}
+
+	return nil, errors.New("no link found")
+}
+
 func (d *QuarkOrUC) upPre(file model.FileStreamer, parentId string) (UpPreResp, error) {
 	now := time.Now()
@@ -228,13 +228,19 @@ func (d *QuarkUCTV) getTranscodingLink(ctx context.Context, file model.Obj) (*model.Link, error) {
 		return nil, err
 	}
+
+	for _, info := range fileLink.Data.VideoInfo {
+		if info.URL != "" {
 			return &model.Link{
-				URL:           fileLink.Data.VideoInfo[0].URL,
+				URL:           info.URL,
+				ContentLength: info.Size,
 				Concurrency:   3,
 				PartSize:      10 * utils.MB,
-				ContentLength: fileLink.Data.VideoInfo[0].Size,
 			}, nil
 		}
+	}
+
+	return nil, errors.New("no link found")
+}
+
 func (d *QuarkUCTV) getDownloadLink(ctx context.Context, file model.Obj) (*model.Link, error) {
 	var fileLink DownloadFileLink
drivers/teldrive/copy.go (new file, 137 lines)
@@ -0,0 +1,137 @@
package teldrive

import (
	"fmt"
	"net/http"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
	"github.com/go-resty/resty/v2"
	"golang.org/x/net/context"
	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/semaphore"
)

func NewCopyManager(ctx context.Context, concurrent int, d *Teldrive) *CopyManager {
	g, ctx := errgroup.WithContext(ctx)

	return &CopyManager{
		TaskChan: make(chan CopyTask, concurrent*2),
		Sem:      semaphore.NewWeighted(int64(concurrent)),
		G:        g,
		Ctx:      ctx,
		d:        d,
	}
}

func (cm *CopyManager) startWorkers() {
	workerCount := cap(cm.TaskChan) / 2
	for i := 0; i < workerCount; i++ {
		cm.G.Go(func() error {
			return cm.worker()
		})
	}
}

func (cm *CopyManager) worker() error {
	for {
		select {
		case task, ok := <-cm.TaskChan:
			if !ok {
				return nil
			}

			if err := cm.Sem.Acquire(cm.Ctx, 1); err != nil {
				return err
			}

			err := cm.processFile(task)
			cm.Sem.Release(1)

			if err != nil {
				return fmt.Errorf("task processing failed: %w", err)
			}

		case <-cm.Ctx.Done():
			return cm.Ctx.Err()
		}
	}
}

func (cm *CopyManager) generateTasks(ctx context.Context, srcObj, dstDir model.Obj) error {
	if srcObj.IsDir() {
		return cm.generateFolderTasks(ctx, srcObj, dstDir)
	}
	// add single file task directly
	select {
	case cm.TaskChan <- CopyTask{SrcObj: srcObj, DstDir: dstDir}:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (cm *CopyManager) generateFolderTasks(ctx context.Context, srcDir, dstDir model.Obj) error {
	objs, err := cm.d.List(ctx, srcDir, model.ListArgs{})
	if err != nil {
		return fmt.Errorf("failed to list directory %s: %w", srcDir.GetPath(), err)
	}

	err = cm.d.MakeDir(cm.Ctx, dstDir, srcDir.GetName())
	if err != nil || len(objs) == 0 {
		return err
	}
	newDstDir := &model.Object{
		ID:       dstDir.GetID(),
		Path:     dstDir.GetPath() + "/" + srcDir.GetName(),
		Name:     srcDir.GetName(),
		IsFolder: true,
	}

	for _, file := range objs {
		if utils.IsCanceled(ctx) {
			return ctx.Err()
		}

		srcFile := &model.Object{
			ID:       file.GetID(),
			Path:     srcDir.GetPath() + "/" + file.GetName(),
			Name:     file.GetName(),
			IsFolder: file.IsDir(),
		}

		// Recursively generate tasks for subdirectories
		if err := cm.generateTasks(ctx, srcFile, newDstDir); err != nil {
			return err
		}
	}

	return nil
}

func (cm *CopyManager) processFile(task CopyTask) error {
	return cm.copySingleFile(cm.Ctx, task.SrcObj, task.DstDir)
}

func (cm *CopyManager) copySingleFile(ctx context.Context, srcObj, dstDir model.Obj) error {
	// `override copy mode` should delete the existing file
	if obj, err := cm.d.getFile(dstDir.GetPath(), srcObj.GetName(), srcObj.IsDir()); err == nil {
		if err := cm.d.Remove(ctx, obj); err != nil {
			return fmt.Errorf("failed to remove existing file: %w", err)
		}
	}

	// Do copy
	return cm.d.request(http.MethodPost, "/api/files/{id}/copy", func(req *resty.Request) {
		req.SetPathParam("id", srcObj.GetID())
		req.SetBody(base.Json{
			"newName":     srcObj.GetName(),
			"destination": dstDir.GetPath(),
		})
	}, nil)
}
drivers/teldrive/driver.go (new file, 217 lines)
@@ -0,0 +1,217 @@
package teldrive

import (
	"context"
	"fmt"
	"math"
	"net/http"
	"net/url"
	"strings"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/errs"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
	"github.com/go-resty/resty/v2"
	"github.com/google/uuid"
)

type Teldrive struct {
	model.Storage
	Addition
}

func (d *Teldrive) Config() driver.Config {
	return config
}

func (d *Teldrive) GetAddition() driver.Additional {
	return &d.Addition
}

func (d *Teldrive) Init(ctx context.Context) error {
	d.Address = strings.TrimSuffix(d.Address, "/")
	if d.Cookie == "" || !strings.HasPrefix(d.Cookie, "access_token=") {
		return fmt.Errorf("cookie must start with 'access_token='")
	}
	if d.UploadConcurrency == 0 {
		d.UploadConcurrency = 4
	}
	if d.ChunkSize == 0 {
		d.ChunkSize = 10
	}

	op.MustSaveDriverStorage(d)
	return nil
}

func (d *Teldrive) Drop(ctx context.Context) error {
	return nil
}

func (d *Teldrive) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	var listResp ListResp
	err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
		req.SetQueryParams(map[string]string{
			"path":  dir.GetPath(),
			"limit": "1000", // override default 500, TODO pagination
		})
	}, &listResp)
	if err != nil {
		return nil, err
	}

	return utils.SliceConvert(listResp.Items, func(src Object) (model.Obj, error) {
		return &model.Object{
			ID:   src.ID,
			Name: src.Name,
			Size: func() int64 {
				if src.Type == "folder" {
					return 0
				}
				return src.Size
			}(),
			IsFolder: src.Type == "folder",
			Modified: src.UpdatedAt,
		}, nil
	})
}

func (d *Teldrive) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	if d.UseShareLink {
		shareObj, err := d.getShareFileById(file.GetID())
		if err != nil || shareObj == nil {
			if err := d.createShareFile(file.GetID()); err != nil {
				return nil, err
			}
			shareObj, err = d.getShareFileById(file.GetID())
			if err != nil {
				return nil, err
			}
		}
		return &model.Link{
			URL: d.Address + "/api/shares/" + url.PathEscape(shareObj.Id) + "/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
		}, nil
	}
	return &model.Link{
		URL: d.Address + "/api/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
		Header: http.Header{
			"Cookie": {d.Cookie},
		},
	}, nil
}

func (d *Teldrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
	return d.request(http.MethodPost, "/api/files/mkdir", func(req *resty.Request) {
		req.SetBody(map[string]interface{}{
			"path": parentDir.GetPath() + "/" + dirName,
		})
	}, nil)
}

func (d *Teldrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
	body := base.Json{
		"ids":               []string{srcObj.GetID()},
		"destinationParent": dstDir.GetID(),
	}
	return d.request(http.MethodPost, "/api/files/move", func(req *resty.Request) {
		req.SetBody(body)
	}, nil)
}

func (d *Teldrive) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
	body := base.Json{
		"name": newName,
	}
	return d.request(http.MethodPatch, "/api/files/{id}", func(req *resty.Request) {
		req.SetPathParam("id", srcObj.GetID())
		req.SetBody(body)
	}, nil)
}

func (d *Teldrive) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
	copyConcurrentLimit := 4
	copyManager := NewCopyManager(ctx, copyConcurrentLimit, d)
	copyManager.startWorkers()
	copyManager.G.Go(func() error {
		defer close(copyManager.TaskChan)
		return copyManager.generateTasks(ctx, srcObj, dstDir)
	})
	return copyManager.G.Wait()
}

func (d *Teldrive) Remove(ctx context.Context, obj model.Obj) error {
	body := base.Json{
		"ids": []string{obj.GetID()},
	}
	return d.request(http.MethodPost, "/api/files/delete", func(req *resty.Request) {
		req.SetBody(body)
	}, nil)
}

func (d *Teldrive) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
	fileId := uuid.New().String()
	chunkSizeInMB := d.ChunkSize
	chunkSize := chunkSizeInMB * 1024 * 1024 // Convert MB to bytes
	totalSize := file.GetSize()
	totalParts := int(math.Ceil(float64(totalSize) / float64(chunkSize)))
	maxRetried := 3

	// delete the upload task when finished or failed
	defer func() {
		_ = d.request(http.MethodDelete, "/api/uploads/{id}", func(req *resty.Request) {
			req.SetPathParam("id", fileId)
		}, nil)
	}()

	if obj, err := d.getFile(dstDir.GetPath(), file.GetName(), file.IsDir()); err == nil {
		if err = d.Remove(ctx, obj); err != nil {
			return err
		}
	}
	// start the upload process
	if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
		req.SetPathParam("id", fileId)
	}, nil); err != nil {
		return err
	}
	if totalSize == 0 {
		return d.touch(file.GetName(), dstDir.GetPath())
	}

	if totalParts <= 1 {
		return d.doSingleUpload(ctx, dstDir, file, up, totalParts, chunkSize, fileId)
	}

	return d.doMultiUpload(ctx, dstDir, file, up, maxRetried, totalParts, chunkSize, fileId)
}

func (d *Teldrive) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
	// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
	return nil, errs.NotImplement
}

func (d *Teldrive) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
	// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
	return nil, errs.NotImplement
}

func (d *Teldrive) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
	// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
	return nil, errs.NotImplement
}

func (d *Teldrive) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
	// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
	// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
	// return errs.NotImplement to use an internal archive tool
	return nil, errs.NotImplement
}

//func (d *Teldrive) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
//	return nil, errs.NotSupport
//}

var _ driver.Driver = (*Teldrive)(nil)
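Put splits the stream into ChunkSize-MiB parts and computes the part count with float math.Ceil. The same count in pure integer arithmetic, with the zero-size special case Put handles via touch (`partCount` is an illustrative helper, not in the driver):

```go
package main

import "fmt"

// partCount reproduces the split used in Put: a file of totalSize
// bytes uploaded in chunkSizeMiB-MiB parts, using integer ceiling
// division instead of math.Ceil on floats.
func partCount(totalSize, chunkSizeMiB int64) int64 {
	chunk := chunkSizeMiB * 1024 * 1024
	if totalSize == 0 {
		return 0 // empty files are created with touch, no parts uploaded
	}
	return (totalSize + chunk - 1) / chunk
}

func main() {
	fmt.Println(partCount(25*1024*1024, 10)) // 25 MiB in 10 MiB chunks → 3
}
```

Avoiding the float round-trip also sidesteps precision loss for files larger than 2^53 bytes, which `int(math.Ceil(float64(n)))` cannot represent exactly.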
drivers/teldrive/meta.go (new file, 26 lines)
@@ -0,0 +1,26 @@
package teldrive

import (
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
)

type Addition struct {
	driver.RootPath
	Address           string `json:"url" required:"true"`
	Cookie            string `json:"cookie" type:"string" required:"true" help:"access_token=xxx"`
	UseShareLink      bool   `json:"use_share_link" type:"bool" default:"false" help:"Create share link when getting link to support 302. If disabled, you need to enable web proxy."`
	ChunkSize         int64  `json:"chunk_size" type:"number" default:"10" help:"Chunk size in MiB"`
	UploadConcurrency int64  `json:"upload_concurrency" type:"number" default:"4" help:"Concurrency upload requests"`
}

var config = driver.Config{
	Name:        "Teldrive",
	DefaultRoot: "/",
}

func init() {
	op.RegisterDriver(func() driver.Driver {
		return &Teldrive{}
	})
}
drivers/teldrive/types.go (new file, 77 lines)
@@ -0,0 +1,77 @@
package teldrive

import (
	"context"
	"time"

	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/internal/stream"
	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/semaphore"
)

type ErrResp struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

type Object struct {
	ID        string    `json:"id"`
	Name      string    `json:"name"`
	Type      string    `json:"type"`
	MimeType  string    `json:"mimeType"`
	Category  string    `json:"category,omitempty"`
	ParentId  string    `json:"parentId"`
	Size      int64     `json:"size"`
	Encrypted bool      `json:"encrypted"`
	UpdatedAt time.Time `json:"updatedAt"`
}

type ListResp struct {
	Items []Object `json:"items"`
	Meta  struct {
		Count       int `json:"count"`
		TotalPages  int `json:"totalPages"`
		CurrentPage int `json:"currentPage"`
	} `json:"meta"`
}

type FilePart struct {
	Name      string `json:"name"`
	PartId    int    `json:"partId"`
	PartNo    int    `json:"partNo"`
	ChannelId int    `json:"channelId"`
	Size      int    `json:"size"`
	Encrypted bool   `json:"encrypted"`
	Salt      string `json:"salt"`
}

type chunkTask struct {
	chunkIdx  int
	fileName  string
	chunkSize int64
	reader    *stream.SectionReader
	ss        *stream.StreamSectionReader
}

type CopyManager struct {
	TaskChan chan CopyTask
	Sem      *semaphore.Weighted
	G        *errgroup.Group
	Ctx      context.Context
	d        *Teldrive
}

type CopyTask struct {
	SrcObj model.Obj
	DstDir model.Obj
}

type ShareObj struct {
	Id        string    `json:"id"`
	Protected bool      `json:"protected"`
	UserId    int       `json:"userId"`
	Type      string    `json:"type"`
	Name      string    `json:"name"`
	ExpiresAt time.Time `json:"expiresAt"`
}
drivers/teldrive/upload.go (new file, 373 lines)
@@ -0,0 +1,373 @@
package teldrive

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"strconv"
	"sync"
	"time"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/internal/stream"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
	"github.com/avast/retry-go"
	"github.com/go-resty/resty/v2"
	"github.com/pkg/errors"
	"golang.org/x/net/context"
	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/semaphore"
)

// touch creates an empty file
func (d *Teldrive) touch(name, path string) error {
	uploadBody := base.Json{
		"name": name,
		"type": "file",
		"path": path,
	}
	if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
		req.SetBody(uploadBody)
	}, nil); err != nil {
		return err
	}

	return nil
}

func (d *Teldrive) createFileOnUploadSuccess(name, id, path string, uploadedFileParts []FilePart, totalSize int64) error {
	remoteFileParts, err := d.getFilePart(id)
	if err != nil {
		return err
	}
	// check if the uploaded file parts match the remote file parts
	if len(remoteFileParts) != len(uploadedFileParts) {
		return fmt.Errorf("[Teldrive] file parts count mismatch: expected %d, got %d", len(uploadedFileParts), len(remoteFileParts))
	}
	formatParts := make([]base.Json, 0)
	for _, p := range remoteFileParts {
		formatParts = append(formatParts, base.Json{
			"id":   p.PartId,
			"salt": p.Salt,
		})
	}
	uploadBody := base.Json{
		"name":  name,
		"type":  "file",
		"path":  path,
		"parts": formatParts,
		"size":  totalSize,
	}
	// create file here
	if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
		req.SetBody(uploadBody)
	}, nil); err != nil {
		return err
	}

	return nil
}

func (d *Teldrive) checkFilePartExist(fileId string, partId int) (FilePart, error) {
	var uploadedParts []FilePart
	var filePart FilePart

	if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
		req.SetPathParam("id", fileId)
	}, &uploadedParts); err != nil {
		return filePart, err
	}

	for _, part := range uploadedParts {
		if part.PartId == partId {
			return part, nil
		}
	}

	return filePart, nil
}

func (d *Teldrive) getFilePart(fileId string) ([]FilePart, error) {
	var uploadedParts []FilePart
	if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
		req.SetPathParam("id", fileId)
	}, &uploadedParts); err != nil {
		return nil, err
	}

	return uploadedParts, nil
}

func (d *Teldrive) singleUploadRequest(fileId string, callback base.ReqCallback, resp interface{}) error {
	url := d.Address + "/api/uploads/" + fileId
	client := resty.New().SetTimeout(0)

	ctx := context.Background()

	req := client.R().
		SetContext(ctx)
	req.SetHeader("Cookie", d.Cookie)
	req.SetHeader("Content-Type", "application/octet-stream")
|
||||||
|
req.SetContentLength(true)
|
||||||
|
req.AddRetryCondition(func(r *resty.Response, err error) bool {
|
||||||
|
return false
|
||||||
|
})
|
||||||
|
if callback != nil {
|
||||||
|
callback(req)
|
||||||
|
}
|
||||||
|
if resp != nil {
|
||||||
|
req.SetResult(resp)
|
||||||
|
}
|
||||||
|
var e ErrResp
|
||||||
|
req.SetError(&e)
|
||||||
|
_req, err := req.Execute(http.MethodPost, url)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if _req.IsError() {
|
||||||
|
return &e
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *Teldrive) doSingleUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
	totalParts int, chunkSize int64, fileId string) error {

	totalSize := file.GetSize()
	var fileParts []FilePart
	var uploaded int64 = 0
	ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
	if err != nil {
		return err
	}

	for uploaded < totalSize {
		if utils.IsCanceled(ctx) {
			return ctx.Err()
		}
		curChunkSize := min(totalSize-uploaded, chunkSize)
		rd, err := ss.GetSectionReader(uploaded, curChunkSize)
		if err != nil {
			return err
		}
		filePart := &FilePart{}
		if err := retry.Do(func() error {

			if _, err := rd.Seek(0, io.SeekStart); err != nil {
				return err
			}

			if err := d.singleUploadRequest(fileId, func(req *resty.Request) {
				uploadParams := map[string]string{
					"partName": func() string {
						digits := len(fmt.Sprintf("%d", totalParts))
						return file.GetName() + fmt.Sprintf(".%0*d", digits, 1)
					}(),
					"partNo":   strconv.Itoa(1),
					"fileName": file.GetName(),
				}
				req.SetQueryParams(uploadParams)
				req.SetBody(driver.NewLimitedUploadStream(ctx, rd))
				req.SetHeader("Content-Length", strconv.FormatInt(curChunkSize, 10))
			}, filePart); err != nil {
				return err
			}

			return nil
		},
			retry.Attempts(3),
			retry.DelayType(retry.BackOffDelay),
			retry.Delay(time.Second)); err != nil {
			return err
		}

		if filePart.Name != "" {
			fileParts = append(fileParts, *filePart)
			uploaded += curChunkSize
			up(float64(uploaded) / float64(totalSize))
			ss.FreeSectionReader(rd)
		}

	}

	return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
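Both upload paths derive a part's name by zero-padding the part number to the decimal width of `totalParts`, via `fmt.Sprintf(".%0*d", digits, partNo)`. A minimal sketch of the scheme (`partName` is a hypothetical helper, not part of the driver):

```go
package main

import "fmt"

// partName zero-pads the part index to the decimal width of totalParts,
// mirroring the driver's fmt.Sprintf(".%0*d", digits, partNo) call.
func partName(fileName string, totalParts, partNo int) string {
	digits := len(fmt.Sprintf("%d", totalParts)) // width of the largest part number
	return fileName + fmt.Sprintf(".%0*d", digits, partNo)
}

func main() {
	fmt.Println(partName("video.mkv", 373, 7))  // video.mkv.007
	fmt.Println(partName("video.mkv", 373, 42)) // video.mkv.042
	fmt.Println(partName("video.mkv", 9, 3))    // video.mkv.3
}
```

Padding to a fixed width keeps part names in lexicographic order equal to numeric order, which is why the remote listing can be matched up part-for-part later.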
func (d *Teldrive) doMultiUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
	maxRetried, totalParts int, chunkSize int64, fileId string) error {

	concurrent := d.UploadConcurrency
	g, ctx := errgroup.WithContext(ctx)
	sem := semaphore.NewWeighted(int64(concurrent))
	chunkChan := make(chan chunkTask, concurrent*2)
	resultChan := make(chan FilePart, concurrent)
	totalSize := file.GetSize()

	ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
	if err != nil {
		return err
	}
	ssLock := sync.Mutex{}
	g.Go(func() error {
		defer close(chunkChan)

		chunkIdx := 0
		for chunkIdx < totalParts {
			select {
			case <-ctx.Done():
				return ctx.Err()
			default:
			}

			offset := int64(chunkIdx) * chunkSize
			curChunkSize := min(totalSize-offset, chunkSize)

			ssLock.Lock()
			reader, err := ss.GetSectionReader(offset, curChunkSize)
			ssLock.Unlock()

			if err != nil {
				return err
			}
			task := chunkTask{
				chunkIdx:  chunkIdx + 1,
				chunkSize: curChunkSize,
				fileName:  file.GetName(),
				reader:    reader,
				ss:        ss,
			}
			// FreeSectionReader will be called in d.uploadSingleChunk
			select {
			case chunkChan <- task:
				chunkIdx++
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})
	for i := 0; i < int(concurrent); i++ {
		g.Go(func() error {
			for task := range chunkChan {
				if err := sem.Acquire(ctx, 1); err != nil {
					return err
				}

				filePart, err := d.uploadSingleChunk(ctx, fileId, task, totalParts, maxRetried)
				sem.Release(1)

				if err != nil {
					return fmt.Errorf("upload chunk %d failed: %w", task.chunkIdx, err)
				}

				select {
				case resultChan <- *filePart:
				case <-ctx.Done():
					return ctx.Err()
				}
			}
			return nil
		})
	}
	var fileParts []FilePart
	var collectErr error
	collectDone := make(chan struct{})

	go func() {
		defer close(collectDone)
		fileParts = make([]FilePart, 0, totalParts)

		done := make(chan error, 1)
		go func() {
			done <- g.Wait()
			close(resultChan)
		}()

		for {
			select {
			case filePart, ok := <-resultChan:
				if !ok {
					collectErr = <-done
					return
				}
				fileParts = append(fileParts, filePart)
			case err := <-done:
				collectErr = err
				return
			}
		}
	}()

	<-collectDone

	if collectErr != nil {
		return fmt.Errorf("multi-upload failed: %w", collectErr)
	}
	sort.Slice(fileParts, func(i, j int) bool {
		return fileParts[i].PartNo < fileParts[j].PartNo
	})

	return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
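doMultiUpload wires together a producer goroutine feeding `chunkChan`, a fixed pool of workers gated by a weighted semaphore, and a collector that drains `resultChan` and finally sorts parts by `PartNo`. A stdlib-only sketch of the same producer/worker/collector shape, with a buffered channel standing in for x/sync's semaphore and squaring standing in for a chunk upload (all names illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

type result struct{ idx, val int }

// processAll fans tasks out to `concurrent` workers and collects results,
// sorting by index at the end just as doMultiUpload sorts parts by PartNo.
func processAll(n, concurrent int) []result {
	taskCh := make(chan int)
	resCh := make(chan result, concurrent)
	sem := make(chan struct{}, concurrent) // buffered channel as a weighted semaphore

	var wg sync.WaitGroup
	for i := 0; i < concurrent; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for idx := range taskCh {
				sem <- struct{}{}                         // acquire
				resCh <- result{idx: idx, val: idx * idx} // stand-in for uploading a chunk
				<-sem                                     // release
			}
		}()
	}
	go func() {
		for i := 0; i < n; i++ {
			taskCh <- i // producer
		}
		close(taskCh)
		wg.Wait()
		close(resCh) // signals the collector that all workers are done
	}()

	var out []result
	for r := range resCh { // collector
		out = append(out, r)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].idx < out[j].idx })
	return out
}

func main() {
	for _, r := range processAll(5, 3) {
		fmt.Println(r.idx, r.val)
	}
}
```

Completion order is nondeterministic, which is exactly why the driver must sort by `PartNo` before registering the parts.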
func (d *Teldrive) uploadSingleChunk(ctx context.Context, fileId string, task chunkTask, totalParts, maxRetried int) (*FilePart, error) {
	filePart := &FilePart{}
	retryCount := 0
	defer task.ss.FreeSectionReader(task.reader)

	for {
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		default:
		}

		if existingPart, err := d.checkFilePartExist(fileId, task.chunkIdx); err == nil && existingPart.Name != "" {
			return &existingPart, nil
		}

		err := d.singleUploadRequest(fileId, func(req *resty.Request) {
			uploadParams := map[string]string{
				"partName": func() string {
					digits := len(fmt.Sprintf("%d", totalParts))
					return task.fileName + fmt.Sprintf(".%0*d", digits, task.chunkIdx)
				}(),
				"partNo":   strconv.Itoa(task.chunkIdx),
				"fileName": task.fileName,
			}
			req.SetQueryParams(uploadParams)
			req.SetBody(driver.NewLimitedUploadStream(ctx, task.reader))
			req.SetHeader("Content-Length", strconv.Itoa(int(task.chunkSize)))
		}, filePart)

		if err == nil {
			return filePart, nil
		}

		if retryCount >= maxRetried {
			return nil, fmt.Errorf("upload failed after %d retries: %w", maxRetried, err)
		}

		if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
			continue
		}

		retryCount++
		utils.Log.Errorf("[Teldrive] upload error: %v, retrying %d times", err, retryCount)

		backoffDuration := time.Duration(retryCount*retryCount) * time.Second
		if backoffDuration > 30*time.Second {
			backoffDuration = 30 * time.Second
		}

		select {
		case <-time.After(backoffDuration):
		case <-ctx.Done():
			return nil, ctx.Err()
		}
	}
}
109
drivers/teldrive/util.go
Normal file
@ -0,0 +1,109 @@
package teldrive

import (
	"fmt"
	"net/http"
	"time"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/go-resty/resty/v2"
)

// helpers not defined in the Driver interface

func (d *Teldrive) request(method string, pathname string, callback base.ReqCallback, resp interface{}) error {
	url := d.Address + pathname
	req := base.RestyClient.R()
	req.SetHeader("Cookie", d.Cookie)
	if callback != nil {
		callback(req)
	}
	if resp != nil {
		req.SetResult(resp)
	}
	var e ErrResp
	req.SetError(&e)
	_req, err := req.Execute(method, url)
	if err != nil {
		return err
	}

	if _req.IsError() {
		return &e
	}
	return nil
}

func (d *Teldrive) getFile(path, name string, isFolder bool) (model.Obj, error) {
	resp := &ListResp{}
	err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
		req.SetQueryParams(map[string]string{
			"path": path,
			"name": name,
			"type": func() string {
				if isFolder {
					return "folder"
				}
				return "file"
			}(),
			"operation": "find",
		})
	}, resp)
	if err != nil {
		return nil, err
	}
	if len(resp.Items) == 0 {
		return nil, fmt.Errorf("file not found: %s/%s", path, name)
	}
	obj := resp.Items[0]
	return &model.Object{
		ID:       obj.ID,
		Name:     obj.Name,
		Size:     obj.Size,
		IsFolder: obj.Type == "folder",
	}, nil
}

func (err *ErrResp) Error() string {
	if err == nil {
		return ""
	}

	return fmt.Sprintf("[Teldrive] message:%s Error code:%d", err.Message, err.Code)
}

func (d *Teldrive) createShareFile(fileId string) error {
	var errResp ErrResp
	if err := d.request(http.MethodPost, "/api/files/{id}/share", func(req *resty.Request) {
		req.SetPathParam("id", fileId)
		req.SetBody(base.Json{
			"expiresAt": getDateTime(),
		})
	}, &errResp); err != nil {
		return err
	}

	if errResp.Message != "" {
		return &errResp
	}

	return nil
}

func (d *Teldrive) getShareFileById(fileId string) (*ShareObj, error) {
	var shareObj ShareObj
	if err := d.request(http.MethodGet, "/api/files/{id}/share", func(req *resty.Request) {
		req.SetPathParam("id", fileId)
	}, &shareObj); err != nil {
		return nil, err
	}

	return &shareObj, nil
}

func getDateTime() string {
	now := time.Now().UTC()
	formattedWithMs := now.Add(time.Hour * 1).Format("2006-01-02T15:04:05.000Z")
	return formattedWithMs
}
@ -36,5 +36,6 @@ func (d *Wopan) getSpaceType() string {

 // 20230607214351
 func getTime(str string) (time.Time, error) {
-	return time.Parse("20060102150405", str)
+	loc := time.FixedZone("UTC+8", 8*60*60)
+	return time.ParseInLocation("20060102150405", str, loc)
 }
@ -5,9 +5,23 @@ umask ${UMASK}
 if [ "$1" = "version" ]; then
   ./openlist version
 else
+  # Check that the current user has write and execute permissions on ./data
+  # 检查当前用户是否有当前目录的写和执行权限
+  if [ -d ./data ]; then
+    if ! [ -w ./data ] || ! [ -x ./data ]; then
+      cat <<EOF
+Error: Current user does not have write and/or execute permissions for the ./data directory: $(pwd)/data
+Please visit https://doc.oplist.org/guide/installation/docker#for-version-after-v4-1-0 for more information.
+错误:当前用户没有 ./data 目录($(pwd)/data)的写和/或执行权限。
+请访问 https://doc.oplist.org/guide/installation/docker#v4-1-0-%E4%BB%A5%E5%90%8E%E7%89%88%E6%9C%AC 获取更多信息。
+Exiting...
+EOF
+      exit 1
+    fi
+  fi
   # Define the target directory path for aria2 service
   ARIA2_DIR="/opt/service/start/aria2"

   if [ "$RUN_ARIA2" = "true" ]; then
     # If aria2 should run and target directory doesn't exist, copy it
     if [ ! -d "$ARIA2_DIR" ]; then
8
go.mod
@ -11,7 +11,7 @@ require (
 	github.com/OpenListTeam/times v0.1.0
 	github.com/OpenListTeam/wopan-sdk-go v0.1.5
 	github.com/ProtonMail/go-crypto v1.3.0
-	github.com/SheltonZhu/115driver v1.1.0
+	github.com/SheltonZhu/115driver v1.1.1
 	github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
 	github.com/avast/retry-go v3.0.0+incompatible
 	github.com/aws/aws-sdk-go v1.55.7
@ -41,7 +41,7 @@ require (
 	github.com/hirochachacha/go-smb2 v1.1.0
 	github.com/ipfs/go-ipfs-api v0.7.0
 	github.com/itsHenry35/gofakes3 v0.0.8
-	github.com/jlaffaye/ftp v0.2.1-0.20240918233326-1b970516f5d3
+	github.com/jlaffaye/ftp v0.2.1-0.20250831012827-3f092e051c94
 	github.com/json-iterator/go v1.1.12
 	github.com/kdomanski/iso9660 v0.4.0
 	github.com/maruel/natural v1.1.1
@ -58,7 +58,7 @@ require (
 	github.com/sirupsen/logrus v1.9.3
 	github.com/spf13/afero v1.14.0
 	github.com/spf13/cobra v1.9.1
-	github.com/stretchr/testify v1.10.0
+	github.com/stretchr/testify v1.11.1
 	github.com/t3rm1n4l/go-mega v0.0.0-20241213151442-a19cff0ec7b5
 	github.com/u2takey/ffmpeg-go v0.5.0
 	github.com/upyun/go-sdk/v3 v3.0.4
@ -254,7 +254,7 @@ require (
 	github.com/yusufpapurcu/wmi v1.2.4 // indirect
 	go.etcd.io/bbolt v1.4.0 // indirect
 	golang.org/x/arch v0.18.0 // indirect
-	golang.org/x/sync v0.16.0 // indirect
+	golang.org/x/sync v0.16.0
 	golang.org/x/sys v0.34.0 // indirect
 	golang.org/x/term v0.33.0 // indirect
 	golang.org/x/text v0.27.0
8
go.sum
@ -59,8 +59,8 @@ github.com/RoaringBitmap/roaring/v2 v2.4.5 h1:uGrrMreGjvAtTBobc0g5IrW1D5ldxDQYe2
 github.com/RoaringBitmap/roaring/v2 v2.4.5/go.mod h1:FiJcsfkGje/nZBZgCu0ZxCPOKD/hVXDS2dXi7/eUFE0=
 github.com/STARRY-S/zip v0.2.1 h1:pWBd4tuSGm3wtpoqRZZ2EAwOmcHK6XFf7bU9qcJXyFg=
 github.com/STARRY-S/zip v0.2.1/go.mod h1:xNvshLODWtC4EJ702g7cTYn13G53o1+X9BWnPFpcWV4=
-github.com/SheltonZhu/115driver v1.1.0 h1:kA8Vtu5JVWqqJFiTF06+HDb9zVEO6ZSdyjV5HsGx7Wg=
-github.com/SheltonZhu/115driver v1.1.0/go.mod h1:rKvNd4Y4OkXv1TMbr/SKjGdcvMQxh6AW5Tw9w0CJb7E=
+github.com/SheltonZhu/115driver v1.1.1 h1:9EMhe2ZJflGiAaZbYInw2jqxTcqZNF+DtVDsEy70aFU=
+github.com/SheltonZhu/115driver v1.1.1/go.mod h1:rKvNd4Y4OkXv1TMbr/SKjGdcvMQxh6AW5Tw9w0CJb7E=
 github.com/abbot/go-http-auth v0.4.0 h1:QjmvZ5gSC7jm3Zg54DqWE/T5m1t2AfDu6QlXJT0EVT0=
 github.com/abbot/go-http-auth v0.4.0/go.mod h1:Cz6ARTIzApMJDzh5bRMSUou6UMSp0IEXg9km/ci7TJM=
 github.com/aead/ecdh v0.2.0 h1:pYop54xVaq/CEREFEcukHRZfTdjiWvYIsZDXXrBapQQ=
@ -402,6 +402,8 @@ github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
 github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
 github.com/jlaffaye/ftp v0.2.1-0.20240918233326-1b970516f5d3 h1:ZxO6Qr2GOXPdcW80Mcn3nemvilMPvpWqxrNfK2ZnNNs=
 github.com/jlaffaye/ftp v0.2.1-0.20240918233326-1b970516f5d3/go.mod h1:dvLUr/8Fs9a2OBrEnCC5duphbkz/k/mSy5OkXg3PAgI=
+github.com/jlaffaye/ftp v0.2.1-0.20250831012827-3f092e051c94 h1:sBUrMD4Gx91zDgzTqPCr3FqFs2+3wWX7lyUYIP/isuA=
+github.com/jlaffaye/ftp v0.2.1-0.20250831012827-3f092e051c94/go.mod h1:H1+whwD0Qe3YOunlXIWhh3rlvzW5cZfkMDYGQPg+KAM=
 github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
 github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
 github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
@ -620,6 +622,8 @@ github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
 github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
 github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
 github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
 github.com/t3rm1n4l/go-mega v0.0.0-20241213151442-a19cff0ec7b5 h1:Sa+sR8aaAMFwxhXWENEnE6ZpqhZ9d7u1RT2722Rw6hc=
 github.com/t3rm1n4l/go-mega v0.0.0-20241213151442-a19cff0ec7b5/go.mod h1:UdZiFUFu6e2WjjtjxivwXWcwc1N/8zgbkBR9QNucUOY=
 github.com/taruti/bytepool v0.0.0-20160310082835-5e3a9ea56543 h1:6Y51mutOvRGRx6KqyMNo//xk8B8o6zW9/RVmy1VamOs=
@ -162,7 +162,7 @@ func InitialSettings() []model.SettingItem {
 		{Key: conf.OcrApi, Value: "https://openlistteam-ocr-api-server.hf.space/ocr/file/json", MigrationValue: "https://api.example.com/ocr/file/json", Type: conf.TypeString, Group: model.GLOBAL}, // TODO: This can be replace by a community-hosted endpoint, see https://github.com/OpenListTeam/ocr_api_server
 		{Key: conf.FilenameCharMapping, Value: `{"/": "|"}`, Type: conf.TypeText, Group: model.GLOBAL},
 		{Key: conf.ForwardDirectLinkParams, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL},
-		{Key: conf.IgnoreDirectLinkParams, Value: "sign,openlist_ts", Type: conf.TypeString, Group: model.GLOBAL},
+		{Key: conf.IgnoreDirectLinkParams, Value: "sign,openlist_ts,raw", Type: conf.TypeString, Group: model.GLOBAL},
 		{Key: conf.WebauthnLoginEnabled, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
 		{Key: conf.SharePreview, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
 		{Key: conf.ShareArchivePreview, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
@ -114,9 +114,8 @@ func proxy(c *gin.Context, link *model.Link, file model.Obj, proxyRange bool) {
 		link = common.ProxyRange(c, link, file.GetSize())
 	}
 	Writer := &common.WrittenResponseWriter{ResponseWriter: c.Writer}
-	raw, _ := strconv.ParseBool(c.DefaultQuery("raw", "false"))
 	//优先处理md文件
-	if utils.Ext(file.GetName()) == "md" && setting.GetBool(conf.FilterReadMeScripts) && !raw {
+	if utils.Ext(file.GetName()) == "md" && setting.GetBool(conf.FilterReadMeScripts) {
 		buf := bytes.NewBuffer(make([]byte, 0, file.GetSize()))
 		w := &common.InterceptResponseWriter{ResponseWriter: Writer, Writer: buf}
 		err = common.Proxy(w, c.Request, link, file)
@ -195,6 +195,7 @@ func SharingArchiveList(c *gin.Context, req *ArchiveListReq) {
 func SharingDown(c *gin.Context) {
 	sid := c.Request.Context().Value(conf.SharingIDKey).(string)
 	path := c.Request.Context().Value(conf.PathKey).(string)
+	path = utils.FixAndCleanPath(path)
 	pwd := c.Query("pwd")
 	s, err := op.GetSharingById(sid)
 	if err == nil {
@ -219,6 +220,13 @@ func SharingDown(c *gin.Context) {
 		return
 	}
 	if setting.GetBool(conf.ShareForceProxy) || common.ShouldProxy(storage, stdpath.Base(actualPath)) {
+		if _, ok := c.GetQuery("d"); !ok {
+			if url := common.GenerateDownProxyURL(storage.GetStorage(), unwrapPath); url != "" {
+				c.Redirect(302, url)
+				_ = countAccess(c.ClientIP(), s)
+				return
+			}
+		}
 		link, obj, err := op.Link(c.Request.Context(), storage, actualPath, model.LinkArgs{
 			Header: c.Request.Header,
 			Type:   c.Query("type"),
@ -252,6 +260,7 @@ func SharingArchiveExtract(c *gin.Context) {
 	}
 	sid := c.Request.Context().Value(conf.SharingIDKey).(string)
 	path := c.Request.Context().Value(conf.PathKey).(string)
+	path = utils.FixAndCleanPath(path)
 	pwd := c.Query("pwd")
 	innerPath := utils.FixAndCleanPath(c.Query("inner"))
 	archivePass := c.Query("pass")
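SharingDown and SharingArchiveExtract now normalize the request path before any lookup. `utils.FixAndCleanPath` is OpenList's own helper; the stdlib `path.Clean` shows the core effect of rooting the path and collapsing dot segments (`fixAndClean` below is a hypothetical stand-in, not the actual helper):

```go
package main

import (
	"fmt"
	"path"
)

// fixAndClean is a hypothetical stand-in: root the path, then collapse
// "." and ".." segments so a crafted path cannot escape the share root.
func fixAndClean(p string) string {
	return path.Clean("/" + p)
}

func main() {
	fmt.Println(fixAndClean("a/b/../c"))   // /a/c
	fmt.Println(fixAndClean("../../etc"))  // /etc
	fmt.Println(fixAndClean("./docs//x/")) // /docs/x
}
```

Because the path is rooted first, leading `..` segments are swallowed by `Clean` instead of walking above the root — the usual defense against path-traversal on share endpoints.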