Compare commits


1 Commit

Author SHA1 Message Date
0793b6e754 fix(deps): update module github.com/golang-jwt/jwt/v4 to v5 2025-08-13 17:32:21 +00:00
81 changed files with 451 additions and 4643 deletions

View File

@ -1,56 +0,0 @@
<!--
Provide a general summary of your changes in the Title above.
The PR title must start with `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, or `chore(): `. For example: `feat(component): add new feature`.
If it spans multiple components, use the main component as the prefix and enumerate in the title, describe in the body.
-->
<!--
在上方标题中提供您更改的总体摘要。
PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`
如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。
-->
## Description / 描述
<!-- Describe your changes in detail -->
<!-- 详细描述您的更改 -->
## Motivation and Context / 背景
<!-- Why is this change required? What problem does it solve? -->
<!-- 为什么需要此更改?它解决了什么问题? -->
<!-- If it fixes an open issue, please link to the issue here. -->
<!-- 如果修复了一个打开的issue请在此处链接到该issue -->
Closes #XXXX
<!-- or -->
<!-- 或者 -->
Relates to #XXXX
## How Has This Been Tested? / 测试
<!-- Please describe in detail how you tested your changes. -->
<!-- 请详细描述您如何测试更改 -->
## Checklist / 检查清单
<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!-- 检查以下所有要点,并在所有适用的框中打`x` -->
<!-- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
<!-- 如果您对其中任何一项不确定,请不要犹豫提问。我们会帮助您! -->
- [ ] I have read the [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) document.
我已阅读 [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) 文档。
- [ ] I have formatted my code with `go fmt` or [prettier](https://prettier.io/).
我已使用 `go fmt` 或 [prettier](https://prettier.io/) 格式化提交的代码。
- [ ] I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
我已为此 PR 添加了适当的标签(如无权限或需要的标签不存在,请在描述中说明,管理员将后续处理)。
- [ ] I have requested review from relevant code authors using the "Request review" feature when applicable.
我已在适当情况下使用"Request review"功能请求相关代码作者进行审查。
- [ ] I have updated the repository accordingly (if it's needed).
我已相应更新了相关仓库(若适用)。
- [ ] [OpenList-Frontend](https://github.com/OpenListTeam/OpenList-Frontend) #XXXX
- [ ] [OpenList-Docs](https://github.com/OpenListTeam/OpenList-Docs) #XXXX

View File

@ -1,38 +0,0 @@
name: Sync to Gitee
on:
push:
branches:
- main
workflow_dispatch:
jobs:
sync:
runs-on: ubuntu-latest
name: Sync GitHub to Gitee
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup SSH
run: |
mkdir -p ~/.ssh
echo "${{ secrets.GITEE_SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan gitee.com >> ~/.ssh/known_hosts
- name: Create single commit and push
run: |
git config user.name "GitHub Actions"
git config user.email "actions@github.com"
# Create a new branch
git checkout --orphan new-main
git add .
git commit -m "Sync from GitHub: $(date)"
# Add Gitee remote and force push
git remote add gitee ${{ vars.GITEE_REPO_URL }}
git push --force gitee new-main:main

View File

@ -2,76 +2,106 @@
 ## Setup your machine
-`OpenList` is written in [Go](https://golang.org/) and [SolidJS](https://www.solidjs.com/).
+`OpenList` is written in [Go](https://golang.org/) and [React](https://reactjs.org/).
 Prerequisites:
 - [git](https://git-scm.com)
-- [Go 1.24+](https://golang.org/doc/install)
+- [Go 1.20+](https://golang.org/doc/install)
 - [gcc](https://gcc.gnu.org/)
 - [nodejs](https://nodejs.org/)
-## Cloning a fork
-Fork and clone `OpenList` and `OpenList-Frontend` anywhere:
-```shell
-$ git clone https://github.com/<your-username>/OpenList.git
-$ git clone --recurse-submodules https://github.com/<your-username>/OpenList-Frontend.git
-```
-## Creating a branch
-Create a new branch from the `main` branch, with an appropriate name.
-```shell
-$ git checkout -b <branch-name>
-```
+Clone `OpenList` and `OpenList-Frontend` anywhere:
+```shell
+$ git clone https://github.com/OpenListTeam/OpenList.git
+$ git clone --recurse-submodules https://github.com/OpenListTeam/OpenList-Frontend.git
+```
+You should switch to the `main` branch for development.
 ## Preview your change
 ### backend
 ```shell
 $ go run main.go
 ```
 ### frontend
 ```shell
 $ pnpm dev
 ```
 ## Add a new driver
 Copy `drivers/template` folder and rename it, and follow the comments in it.
 ## Create a commit
 Commit messages should be well formatted, and to make that "standardized".
-Submit your pull request. For PR titles, follow [Conventional Commits](https://www.conventionalcommits.org).
-https://github.com/OpenListTeam/OpenList/issues/376
-It's suggested to sign your commits. See: [How to sign commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
+### Commit Message Format
+Each commit message consists of a **header**, a **body** and a **footer**. The header has a special
+format that includes a **type**, a **scope** and a **subject**:
+```
+<type>(<scope>): <subject>
+<BLANK LINE>
+<body>
+<BLANK LINE>
+<footer>
+```
+The **header** is mandatory and the **scope** of the header is optional.
+Any line of the commit message cannot be longer than 100 characters! This allows the message to be easier
+to read on GitHub as well as in various git tools.
+### Revert
+If the commit reverts a previous commit, it should begin with `revert: `, followed by the header
+of the reverted commit.
+In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit
+being reverted.
+### Type
+Must be one of the following:
+* **feat**: A new feature
+* **fix**: A bug fix
+* **docs**: Documentation only changes
+* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing
+semi-colons, etc)
+* **refactor**: A code change that neither fixes a bug nor adds a feature
+* **perf**: A code change that improves performance
+* **test**: Adding missing or correcting existing tests
+* **build**: Affects project builds or dependency modifications
+* **revert**: Restore the previous commit
+* **ci**: Continuous integration of related file modifications
+* **chore**: Changes to the build process or auxiliary tools and libraries such as documentation
+generation
+* **release**: Release a new version
+### Scope
+The scope could be anything specifying place of the commit change. For example `$location`,
+`$browser`, `$compile`, `$rootScope`, `ngHref`, `ngClick`, `ngView`, etc...
+You can use `*` when the change affects more than a single scope.
+### Subject
+The subject contains succinct description of the change:
+* use the imperative, present tense: "change" not "changed" nor "changes"
+* don't capitalize first letter
+* no dot (.) at the end
+### Body
+Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes".
+The body should include the motivation for the change and contrast this with previous behavior.
+### Footer
+The footer should contain any information about **Breaking Changes** and is also the place to
+[reference GitHub issues that this commit closes](https://help.github.com/articles/closing-issues-via-commit-messages/).
+**Breaking Changes** should start with the word `BREAKING CHANGE:` with a space or two newlines.
+The rest of the commit message is then used for this.
 ## Submit a pull request
-Please make sure your code has been formatted with `go fmt` or [prettier](https://prettier.io/) before submitting.
-Push your branch to your `openlist` fork and open a pull request against the `main` branch.
-## Merge your pull request
-Your pull request will be merged after review.
-At least 1 approving review is required by reviewers with write access. You can also request a review from maintainers.
-## Delete your branch
-(Optional) After your pull request is merged, you can delete your branch.
----
-Thank you for your contribution! Let's make OpenList better together!
+Push your branch to your `openlist` fork and open a pull request against the
+`main` branch.
+Please wait for the maintainer to merge your pull request after review.
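
As a worked example of the commit-message format documented above (the scope and issue number are placeholders, not taken from this repository):

```
feat(component): add new feature

Explain the motivation for the change and contrast it with the previous behavior.

Closes #1234
```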

View File

@ -20,12 +20,11 @@ ARG GID=1001
 WORKDIR /opt/openlist/
-RUN addgroup -g ${GID} ${USER} && \
-    adduser -D -u ${UID} -G ${USER} ${USER} && \
-    mkdir -p /opt/openlist/data
-COPY --from=builder --chmod=755 --chown=${UID}:${GID} /app/bin/openlist ./
-COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
+COPY --chmod=755 --from=builder /app/bin/openlist ./
+COPY --chmod=755 entrypoint.sh /entrypoint.sh
+RUN adduser -u ${UID} -g ${GID} -h /opt/openlist/data -D -s /bin/sh ${USER} \
+    && chown -R ${UID}:${GID} /opt \
+    && chown -R ${UID}:${GID} /entrypoint.sh
 USER ${USER}
 RUN /entrypoint.sh version

View File

@ -10,12 +10,12 @@ ARG GID=1001
 WORKDIR /opt/openlist/
-RUN addgroup -g ${GID} ${USER} && \
-    adduser -D -u ${UID} -G ${USER} ${USER} && \
-    mkdir -p /opt/openlist/data
-COPY --chmod=755 --chown=${UID}:${GID} /build/${TARGETPLATFORM}/openlist ./
-COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
+COPY --chmod=755 /build/${TARGETPLATFORM}/openlist ./
+COPY --chmod=755 entrypoint.sh /entrypoint.sh
+RUN adduser -u ${UID} -g ${GID} -h /opt/openlist/data -D -s /bin/sh ${USER} \
+    && chown -R ${UID}:${GID} /opt \
+    && chown -R ${UID}:${GID} /entrypoint.sh
 USER ${USER}
 RUN /entrypoint.sh version

View File

@ -6,9 +6,10 @@ services:
     ports:
       - '5244:5244'
       - '5245:5245'
-    user: '0:0'
     environment:
+      - PUID=0
+      - PGID=0
       - UMASK=022
-      - TZ=Asia/Shanghai
+      - TZ=UTC
     container_name: openlist
     image: 'openlistteam/openlist:latest'

View File

@ -1,60 +1,43 @@
 package _115

 import (
-    "errors"
-
     "github.com/OpenListTeam/OpenList/v4/drivers/base"
-    "github.com/OpenListTeam/OpenList/v4/pkg/utils"
     driver115 "github.com/SheltonZhu/115driver/pkg/driver"
     log "github.com/sirupsen/logrus"
 )

 var (
     md5Salt = "Qclm8MGWUv59TnrR0XPg"
-    appVer  = "35.6.0.3"
+    appVer  = "27.0.5.7"
 )

-func (d *Pan115) getAppVersion() (string, error) {
-    result := VersionResp{}
-    res, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
+func (d *Pan115) getAppVersion() ([]driver115.AppVersion, error) {
+    result := driver115.VersionResp{}
+    resp, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
+    err = driver115.CheckErr(err, &result, resp)
     if err != nil {
-        return "", err
+        return nil, err
     }
-    err = utils.Json.Unmarshal(res.Body(), &result)
-    if err != nil {
-        return "", err
-    }
-    if len(result.Error) > 0 {
-        return "", errors.New(result.Error)
-    }
-    return result.Data.Win.Version, nil
+    return result.Data.GetAppVersions(), nil
 }

 func (d *Pan115) getAppVer() string {
-    ver, err := d.getAppVersion()
+    // todo add some cache
+    vers, err := d.getAppVersion()
     if err != nil {
         log.Warnf("[115] get app version failed: %v", err)
         return appVer
     }
-    if len(ver) > 0 {
-        return ver
+    for _, ver := range vers {
+        if ver.AppName == "win" {
+            return ver.Version
+        }
     }
     return appVer
 }

 func (d *Pan115) initAppVer() {
     appVer = d.getAppVer()
-    log.Debugf("use app version: %v", appVer)
 }
-
-type VersionResp struct {
-    Error string   `json:"error,omitempty"`
-    Data  Versions `json:"data"`
-}
-
-type Versions struct {
-    Win Version `json:"win"`
-}
-
-type Version struct {
-    Version string `json:"version_code"`
-}

View File

@ -17,7 +17,6 @@ import (
 type Open123 struct {
     model.Storage
     Addition
-    UID uint64
 }

 func (d *Open123) Config() driver.Config {
@ -70,45 +69,13 @@ func (d *Open123) List(ctx context.Context, dir model.Obj, args model.ListArgs)
 func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
     fileId, _ := strconv.ParseInt(file.GetID(), 10, 64)

-    if d.DirectLink {
-        res, err := d.getDirectLink(fileId)
-        if err != nil {
-            return nil, err
-        }
-        if d.DirectLinkPrivateKey == "" {
-            duration := 365 * 24 * time.Hour // 缓存1年
-            return &model.Link{
-                URL:        res.Data.URL,
-                Expiration: &duration,
-            }, nil
-        }
-        uid, err := d.getUID()
-        if err != nil {
-            return nil, err
-        }
-        duration := time.Duration(d.DirectLinkValidDuration) * time.Minute
-        newURL, err := d.SignURL(res.Data.URL, d.DirectLinkPrivateKey,
-            uid, duration)
-        if err != nil {
-            return nil, err
-        }
-        return &model.Link{
-            URL:        newURL,
-            Expiration: &duration,
-        }, nil
-    }
-
     res, err := d.getDownloadInfo(fileId)
     if err != nil {
         return nil, err
     }

-    return &model.Link{URL: res.Data.DownloadUrl}, nil
+    link := model.Link{URL: res.Data.DownloadUrl}
+    return &link, nil
 }

 func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {

View File

@ -23,11 +23,6 @@ type Addition struct {
     // 上传线程数
     UploadThread int `json:"UploadThread" type:"number" default:"3" help:"the threads of upload"`
-    // 使用直链
-    DirectLink              bool   `json:"DirectLink" type:"bool" default:"false" required:"false" help:"use direct link when download file"`
-    DirectLinkPrivateKey    string `json:"DirectLinkPrivateKey" required:"false" help:"private key for direct link, if URL authentication is enabled"`
-    DirectLinkValidDuration int64  `json:"DirectLinkValidDuration" type:"number" default:"30" required:"false" help:"minutes, if URL authentication is enabled"`
     driver.RootID
 }

View File

@ -127,19 +127,19 @@ type RefreshTokenResp struct {
 type UserInfoResp struct {
     BaseResp
     Data struct {
-        UID uint64 `json:"uid"`
-        // Username       string `json:"username"`
-        // DisplayName    string `json:"displayName"`
-        // HeadImage      string `json:"headImage"`
-        // Passport       string `json:"passport"`
-        // Mail           string `json:"mail"`
-        // SpaceUsed      int64  `json:"spaceUsed"`
-        // SpacePermanent int64  `json:"spacePermanent"`
-        // SpaceTemp      int64  `json:"spaceTemp"`
-        // SpaceTempExpr  int64  `json:"spaceTempExpr"`
-        // Vip            bool   `json:"vip"`
-        // DirectTraffic  int64  `json:"directTraffic"`
-        // IsHideUID      bool   `json:"isHideUID"`
+        UID            int64  `json:"uid"`
+        Username       string `json:"username"`
+        DisplayName    string `json:"displayName"`
+        HeadImage      string `json:"headImage"`
+        Passport       string `json:"passport"`
+        Mail           string `json:"mail"`
+        SpaceUsed      int64  `json:"spaceUsed"`
+        SpacePermanent int64  `json:"spacePermanent"`
+        SpaceTemp      int64  `json:"spaceTemp"`
+        SpaceTempExpr  string `json:"spaceTempExpr"`
+        Vip            bool   `json:"vip"`
+        DirectTraffic  int64  `json:"directTraffic"`
+        IsHideUID      bool   `json:"isHideUID"`
     } `json:"data"`
 }
@ -158,13 +158,6 @@ type DownloadInfoResp struct {
     } `json:"data"`
 }

-type DirectLinkResp struct {
-    BaseResp
-    Data struct {
-        URL string `json:"url"`
-    } `json:"data"`
-}
-
 // 创建文件V2返回
 type UploadCreateResp struct {
     BaseResp

View File

@ -70,8 +70,6 @@ func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createRes
         var reader *stream.SectionReader
         var rateLimitedRd io.Reader
         sliceMD5 := ""
-        // 表单
-        b := bytes.NewBuffer(make([]byte, 0, 2048))
         threadG.GoWithLifecycle(errgroup.Lifecycle{
             Before: func(ctx context.Context) error {
                 if reader == nil {
@ -86,6 +84,7 @@ func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createRes
                     if err != nil {
                         return err
                     }
+                    rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
                 }
                 return nil
             },
@ -93,8 +92,9 @@ func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createRes
                 // 重置分片reader位置因为HashReader、上一次失败已经读取到分片EOF
                 reader.Seek(0, io.SeekStart)

-                b.Reset()
-                w := multipart.NewWriter(b)
+                // 创建表单数据
+                var b bytes.Buffer
+                w := multipart.NewWriter(&b)

                 // 添加表单字段
                 err = w.WriteField("preuploadID", createResp.Data.PreuploadID)
                 if err != nil {
@ -109,20 +109,21 @@ func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, createRes
                     return err
                 }

                 // 写入文件内容
-                _, err = w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", file.GetName(), partNumber))
+                fw, err := w.CreateFormFile("slice", fmt.Sprintf("%s.part%d", file.GetName(), partNumber))
+                if err != nil {
+                    return err
+                }
+                _, err = utils.CopyWithBuffer(fw, rateLimitedRd)
                 if err != nil {
                     return err
                 }
-                headSize := b.Len()
                 err = w.Close()
                 if err != nil {
                     return err
                 }
-                head := bytes.NewReader(b.Bytes()[:headSize])
-                tail := bytes.NewReader(b.Bytes()[headSize:])
-                rateLimitedRd = driver.NewLimitedUploadStream(ctx, io.MultiReader(head, reader, tail))

                 // 创建请求并设置header
-                req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadDomain+"/upload/v2/file/slice", rateLimitedRd)
+                req, err := http.NewRequestWithContext(ctx, http.MethodPost, uploadDomain+"/upload/v2/file/slice", &b)
                 if err != nil {
                     return err
                 }
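
The "-" side of this hunk streams each slice instead of buffering it: the multipart form's head and tail are rendered into a small buffer, and the file reader is spliced between them with `io.MultiReader`. A minimal, self-contained sketch of that pattern (the helper name, field names, and URL below are illustrative, not the driver's API):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"strings"
)

// buildStreamingForm renders the multipart "head" (field headers) and "tail"
// (closing boundary) into a buffer, then chains head + part + tail so the
// part body itself is never copied into memory.
func buildStreamingForm(fieldName, fileName string, part io.Reader) (io.Reader, string, error) {
	var b bytes.Buffer
	w := multipart.NewWriter(&b)
	if _, err := w.CreateFormFile(fieldName, fileName); err != nil {
		return nil, "", err
	}
	headSize := b.Len() // bytes written so far form the head
	if err := w.Close(); err != nil {
		return nil, "", err
	}
	head := bytes.NewReader(b.Bytes()[:headSize])
	tail := bytes.NewReader(b.Bytes()[headSize:])
	return io.MultiReader(head, part, tail), w.FormDataContentType(), nil
}

func main() {
	body, contentType, err := buildStreamingForm("slice", "example.part1", strings.NewReader("payload"))
	if err != nil {
		panic(err)
	}
	// The combined reader is produced lazily as the request body is read.
	req, err := http.NewRequest(http.MethodPost, "https://example.com/upload", body) // placeholder URL
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", contentType)
	fmt.Println(req.Header.Get("Content-Type"))
}
```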

View File

@ -1,20 +1,15 @@
package _123_open package _123_open
import ( import (
"crypto/md5"
"encoding/json" "encoding/json"
"errors" "errors"
"fmt"
"net/http" "net/http"
"net/url"
"strconv" "strconv"
"strings"
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
"github.com/google/uuid"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
@ -25,8 +20,7 @@ var ( //不同情况下获取的AccessTokenQPS限制不同 如下模块化易于
RefreshToken = InitApiInfo(Api+"/api/v1/oauth2/access_token", 1) RefreshToken = InitApiInfo(Api+"/api/v1/oauth2/access_token", 1)
UserInfo = InitApiInfo(Api+"/api/v1/user/info", 1) UserInfo = InitApiInfo(Api+"/api/v1/user/info", 1)
FileList = InitApiInfo(Api+"/api/v2/file/list", 3) FileList = InitApiInfo(Api+"/api/v2/file/list", 3)
DownloadInfo = InitApiInfo(Api+"/api/v1/file/download_info", 5) DownloadInfo = InitApiInfo(Api+"/api/v1/file/download_info", 0)
DirectLink = InitApiInfo(Api+"/api/v1/direct-link/url", 5)
Mkdir = InitApiInfo(Api+"/upload/v1/file/mkdir", 2) Mkdir = InitApiInfo(Api+"/upload/v1/file/mkdir", 2)
Move = InitApiInfo(Api+"/api/v1/file/move", 1) Move = InitApiInfo(Api+"/api/v1/file/move", 1)
Rename = InitApiInfo(Api+"/api/v1/file/name", 1) Rename = InitApiInfo(Api+"/api/v1/file/name", 1)
@ -86,24 +80,8 @@ func (d *Open123) Request(apiInfo *ApiInfo, method string, callback base.ReqCall
} }
func (d *Open123) flushAccessToken() error { func (d *Open123) flushAccessToken() error {
if d.ClientID != "" { if d.Addition.ClientID != "" {
if d.RefreshToken != "" { if d.Addition.ClientSecret != "" {
var resp RefreshTokenResp
_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
req.SetQueryParam("client_id", d.ClientID)
if d.ClientSecret != "" {
req.SetQueryParam("client_secret", d.ClientSecret)
}
req.SetQueryParam("grant_type", "refresh_token")
req.SetQueryParam("refresh_token", d.RefreshToken)
}, &resp)
if err != nil {
return err
}
d.AccessToken = resp.AccessToken
d.RefreshToken = resp.RefreshToken
op.MustSaveDriverStorage(d)
} else if d.ClientSecret != "" {
var resp AccessTokenResp var resp AccessTokenResp
_, err := d.Request(AccessToken, http.MethodPost, func(req *resty.Request) { _, err := d.Request(AccessToken, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{
@ -116,38 +94,24 @@ func (d *Open123) flushAccessToken() error {
} }
d.AccessToken = resp.Data.AccessToken d.AccessToken = resp.Data.AccessToken
op.MustSaveDriverStorage(d) op.MustSaveDriverStorage(d)
} else if d.Addition.RefreshToken != "" {
var resp RefreshTokenResp
_, err := d.Request(RefreshToken, http.MethodPost, func(req *resty.Request) {
req.SetQueryParam("client_id", d.ClientID)
req.SetQueryParam("grant_type", "refresh_token")
req.SetQueryParam("refresh_token", d.Addition.RefreshToken)
}, &resp)
if err != nil {
return err
}
d.AccessToken = resp.AccessToken
d.RefreshToken = resp.RefreshToken
op.MustSaveDriverStorage(d)
} }
} }
return nil return nil
} }
func (d *Open123) SignURL(originURL, privateKey string, uid uint64, validDuration time.Duration) (newURL string, err error) {
// 生成Unix时间戳
ts := time.Now().Add(validDuration).Unix()
// 生成随机数建议使用UUID不能包含中划线-
rand := strings.ReplaceAll(uuid.New().String(), "-", "")
// 解析URL
objURL, err := url.Parse(originURL)
if err != nil {
return "", err
}
// 待签名字符串格式path-timestamp-rand-uid-privateKey
unsignedStr := fmt.Sprintf("%s-%d-%s-%d-%s", objURL.Path, ts, rand, uid, privateKey)
md5Hash := md5.Sum([]byte(unsignedStr))
// 生成鉴权参数格式timestamp-rand-uid-md5hash
authKey := fmt.Sprintf("%d-%s-%d-%x", ts, rand, uid, md5Hash)
// 添加鉴权参数到URL查询参数
v := objURL.Query()
v.Add("auth_key", authKey)
objURL.RawQuery = v.Encode()
return objURL.String(), nil
}
func (d *Open123) getUserInfo() (*UserInfoResp, error) { func (d *Open123) getUserInfo() (*UserInfoResp, error) {
var resp UserInfoResp var resp UserInfoResp
@ -158,18 +122,6 @@ func (d *Open123) getUserInfo() (*UserInfoResp, error) {
return &resp, nil return &resp, nil
} }
func (d *Open123) getUID() (uint64, error) {
if d.UID != 0 {
return d.UID, nil
}
resp, err := d.getUserInfo()
if err != nil {
return 0, err
}
d.UID = resp.Data.UID
return resp.Data.UID, nil
}
func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) { func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) {
var resp FileListResp var resp FileListResp
@ -207,21 +159,6 @@ func (d *Open123) getDownloadInfo(fileId int64) (*DownloadInfoResp, error) {
return &resp, nil return &resp, nil
} }
func (d *Open123) getDirectLink(fileId int64) (*DirectLinkResp, error) {
var resp DirectLinkResp
_, err := d.Request(DirectLink, http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"fileID": strconv.FormatInt(fileId, 10),
})
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) mkdir(parentID int64, name string) error { func (d *Open123) mkdir(parentID int64, name string) error {
_, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) { _, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{

View File

@ -534,15 +534,16 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if size > partSize { if size > partSize {
part = (size + partSize - 1) / partSize part = (size + partSize - 1) / partSize
} }
// 生成所有 partInfos
partInfos := make([]PartInfo, 0, part) partInfos := make([]PartInfo, 0, part)
for i := int64(0); i < part; i++ { for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) { if utils.IsCanceled(ctx) {
return ctx.Err() return ctx.Err()
} }
start := i * partSize start := i * partSize
byteSize := min(size-start, partSize) byteSize := size - start
if byteSize > partSize {
byteSize = partSize
}
partNumber := i + 1 partNumber := i + 1
partInfo := PartInfo{ partInfo := PartInfo{
PartNumber: partNumber, PartNumber: partNumber,
@ -590,20 +591,17 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
// resp.Data.RapidUpload: true 支持快传,但此处直接检测是否返回分片的上传地址 // resp.Data.RapidUpload: true 支持快传,但此处直接检测是否返回分片的上传地址
// 快传的情况下同样需要手动处理冲突 // 快传的情况下同样需要手动处理冲突
if resp.Data.PartInfos != nil { if resp.Data.PartInfos != nil {
// Progress // 读取前100个分片的上传地址
p := driver.NewProgress(size, up) uploadPartInfos := resp.Data.PartInfos
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// 先上传前100个分片 // 获取后续分片的上传地址
err = d.uploadPersonalParts(ctx, partInfos, resp.Data.PartInfos, rateLimited, p) for i := 101; i < len(partInfos); i += 100 {
if err != nil { end := i + 100
return err if end > len(partInfos) {
end = len(partInfos)
} }
// 如果还有剩余分片,分批获取上传地址并上传
for i := 100; i < len(partInfos); i += 100 {
end := min(i+100, len(partInfos))
batchPartInfos := partInfos[i:end] batchPartInfos := partInfos[i:end]
moredata := base.Json{ moredata := base.Json{
"fileId": resp.Data.FileId, "fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId, "uploadId": resp.Data.UploadId,
@ -619,13 +617,44 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil { if err != nil {
return err return err
} }
err = d.uploadPersonalParts(ctx, partInfos, moreresp.Data.PartInfos, rateLimited, p) uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
}
// Progress
p := driver.NewProgress(size, up)
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// 上传所有分片
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
limitReader := io.LimitReader(rateLimited, partSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
if err != nil { if err != nil {
return err return err
} }
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(partSize))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = partSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("[139] uploaded: %+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
} }
// 全部分片上传完毕后complete
data = base.Json{ data = base.Json{
"contentHash": fullHash, "contentHash": fullHash,
"contentHashAlgorithm": "SHA256", "contentHashAlgorithm": "SHA256",

View File

@ -1,11 +1,9 @@
 package _139

 import (
-    "context"
     "encoding/base64"
     "errors"
     "fmt"
-    "io"
     "net/http"
     "net/url"
     "path"
@ -15,7 +13,6 @@ import (
"time" "time"
"github.com/OpenListTeam/OpenList/v4/drivers/base" "github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
@ -626,47 +623,3 @@ func (d *Yun139) getPersonalCloudHost() string {
     }
     return d.PersonalCloudHost
 }
-
-func (d *Yun139) uploadPersonalParts(ctx context.Context, partInfos []PartInfo, uploadPartInfos []PersonalPartInfo, rateLimited *driver.RateLimitReader, p *driver.Progress) error {
-    // 确保数组以 PartNumber 从小到大排序
-    sort.Slice(uploadPartInfos, func(i, j int) bool {
-        return uploadPartInfos[i].PartNumber < uploadPartInfos[j].PartNumber
-    })
-    for _, uploadPartInfo := range uploadPartInfos {
-        index := uploadPartInfo.PartNumber - 1
-        if index < 0 || index >= len(partInfos) {
-            return fmt.Errorf("invalid PartNumber %d: index out of bounds (partInfos length: %d)", uploadPartInfo.PartNumber, len(partInfos))
-        }
-        partSize := partInfos[index].PartSize
-        log.Debugf("[139] uploading part %+v/%+v", index, len(partInfos))
-        limitReader := io.LimitReader(rateLimited, partSize)
-        r := io.TeeReader(limitReader, p)
-        req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadPartInfo.UploadUrl, r)
-        if err != nil {
-            return err
-        }
-        req.Header.Set("Content-Type", "application/octet-stream")
-        req.Header.Set("Content-Length", fmt.Sprint(partSize))
-        req.Header.Set("Origin", "https://yun.139.com")
-        req.Header.Set("Referer", "https://yun.139.com/")
-        req.ContentLength = partSize
-        err = func() error {
-            res, err := base.HttpClient.Do(req)
-            if err != nil {
-                return err
-            }
-            defer res.Body.Close()
-            log.Debugf("[139] uploaded: %+v", res)
-            if res.StatusCode != http.StatusOK {
-                body, _ := io.ReadAll(res.Body)
-                return fmt.Errorf("unexpected status code: %d, body: %s", res.StatusCode, string(body))
-            }
-            return nil
-        }()
-        if err != nil {
-            return err
-        }
-    }
-    return nil
-}

View File

@ -131,7 +131,6 @@ func (y *Cloud189TV) put(ctx context.Context, url string, headers map[string]str
         }
     }

-    // 请求完成后http.Client会Close Request.Body
     resp, err := base.HttpClient.Do(req)
     if err != nil {
         return nil, err
@ -334,10 +333,6 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
     // 网盘中不存在该文件,开始上传
     status := GetUploadFileStatusResp{CreateUploadFileResp: *uploadInfo}
-    // driver.RateLimitReader会尝试Close底层的reader
-    // 但这里的tempFile是一个*os.FileClose后就没法继续读了
-    // 所以这里用io.NopCloser包一层
-    rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
     for status.GetSize() < file.GetSize() && status.FileDataExists != 1 {
         if utils.IsCanceled(ctx) {
             return nil, ctx.Err()
@ -355,7 +350,7 @@ func (y *Cloud189TV) OldUpload(ctx context.Context, dstDir model.Obj, file model
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId) header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
} }
_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimitedRd, isFamily) _, err := y.put(ctx, status.FileUploadUrl, header, true, tempFile, isFamily)
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" { if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
return nil, err return nil, err
} }

View File

@ -472,16 +472,14 @@ func (y *Cloud189PC) refreshSession() (err error) {
 // 普通上传
 // 无法上传大小为0的文件
 func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
-    // 文件大小
-    fileSize := file.GetSize()
-    // 分片大小,不得为文件大小
-    sliceSize := partSize(fileSize)
+    size := file.GetSize()
+    sliceSize := min(size, partSize(size))

     params := Params{
         "parentFolderId": dstDir.GetID(),
         "fileName":       url.QueryEscape(file.GetName()),
-        "fileSize":       fmt.Sprint(fileSize),
-        "sliceSize":      fmt.Sprint(sliceSize), // 必须为特定分片大小
+        "fileSize":       fmt.Sprint(file.GetSize()),
+        "sliceSize":      fmt.Sprint(sliceSize),
         "lazyCheck":      "1",
     }
@ -514,10 +512,10 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
         retry.DelayType(retry.BackOffDelay))

     count := 1
-    if fileSize > sliceSize {
-        count = int((fileSize + sliceSize - 1) / sliceSize)
+    if size > sliceSize {
+        count = int((size + sliceSize - 1) / sliceSize)
     }
-    lastPartSize := fileSize % sliceSize
+    lastPartSize := size % sliceSize
     if lastPartSize == 0 {
         lastPartSize = sliceSize
     }
@ -537,9 +535,9 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
             break
         }
         offset := int64((i)-1) * sliceSize
-        partSize := sliceSize
+        size := sliceSize
         if i == count {
-            partSize = lastPartSize
+            size = lastPartSize
         }
         partInfo := ""
         var reader *stream.SectionReader
@ -548,14 +546,14 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
             Before: func(ctx context.Context) error {
                 if reader == nil {
                     var err error
-                    reader, err = ss.GetSectionReader(offset, partSize)
+                    reader, err = ss.GetSectionReader(offset, size)
                     if err != nil {
                         return err
                     }
                     silceMd5.Reset()
                     w, err := utils.CopyWithBuffer(writers, reader)
-                    if w != partSize {
-                        return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", partSize, w, err)
+                    if w != size {
+                        return fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", size, w, err)
                     }
                     // 计算块md5并进行hex和base64编码
                     md5Bytes := silceMd5.Sum(nil)
@ -597,7 +595,7 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
         fileMd5Hex = strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
     }
     sliceMd5Hex := fileMd5Hex
-    if fileSize > sliceSize {
+    if file.GetSize() > sliceSize {
         sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
     }
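
Both sides of this hunk compute the slice layout the same way: ceiling division for the slice count, and the remainder (or a full slice) for the last part. A small standalone sketch with illustrative numbers (the helper name is ours, not the driver's):

```go
package main

import "fmt"

// chunkLayout reproduces the arithmetic used above: ceiling division for the
// number of slices, and the remainder (or a full slice) for the last one.
func chunkLayout(size, sliceSize int64) (count, lastPartSize int64) {
	count = 1
	if size > sliceSize {
		count = (size + sliceSize - 1) / sliceSize
	}
	lastPartSize = size % sliceSize
	if lastPartSize == 0 {
		lastPartSize = sliceSize
	}
	return count, lastPartSize
}

func main() {
	// e.g. a 10 MiB file with 4 MiB slices -> 3 slices, the last one 2 MiB
	count, last := chunkLayout(10<<20, 4<<20)
	fmt.Println(count, last) // 3 2097152
}
```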

View File

@ -23,7 +23,6 @@ import (
_ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve" _ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve"
_ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve_v4" _ "github.com/OpenListTeam/OpenList/v4/drivers/cloudreve_v4"
_ "github.com/OpenListTeam/OpenList/v4/drivers/crypt" _ "github.com/OpenListTeam/OpenList/v4/drivers/crypt"
_ "github.com/OpenListTeam/OpenList/v4/drivers/degoo"
_ "github.com/OpenListTeam/OpenList/v4/drivers/doubao" _ "github.com/OpenListTeam/OpenList/v4/drivers/doubao"
_ "github.com/OpenListTeam/OpenList/v4/drivers/doubao_share" _ "github.com/OpenListTeam/OpenList/v4/drivers/doubao_share"
_ "github.com/OpenListTeam/OpenList/v4/drivers/dropbox" _ "github.com/OpenListTeam/OpenList/v4/drivers/dropbox"
@ -49,7 +48,6 @@ import (
_ "github.com/OpenListTeam/OpenList/v4/drivers/onedrive_app" _ "github.com/OpenListTeam/OpenList/v4/drivers/onedrive_app"
_ "github.com/OpenListTeam/OpenList/v4/drivers/onedrive_sharelink" _ "github.com/OpenListTeam/OpenList/v4/drivers/onedrive_sharelink"
_ "github.com/OpenListTeam/OpenList/v4/drivers/openlist" _ "github.com/OpenListTeam/OpenList/v4/drivers/openlist"
_ "github.com/OpenListTeam/OpenList/v4/drivers/openlist_share"
_ "github.com/OpenListTeam/OpenList/v4/drivers/pikpak" _ "github.com/OpenListTeam/OpenList/v4/drivers/pikpak"
_ "github.com/OpenListTeam/OpenList/v4/drivers/pikpak_share" _ "github.com/OpenListTeam/OpenList/v4/drivers/pikpak_share"
_ "github.com/OpenListTeam/OpenList/v4/drivers/quark_open" _ "github.com/OpenListTeam/OpenList/v4/drivers/quark_open"
@ -61,7 +59,6 @@ import (
_ "github.com/OpenListTeam/OpenList/v4/drivers/smb" _ "github.com/OpenListTeam/OpenList/v4/drivers/smb"
_ "github.com/OpenListTeam/OpenList/v4/drivers/strm" _ "github.com/OpenListTeam/OpenList/v4/drivers/strm"
_ "github.com/OpenListTeam/OpenList/v4/drivers/teambition" _ "github.com/OpenListTeam/OpenList/v4/drivers/teambition"
_ "github.com/OpenListTeam/OpenList/v4/drivers/teldrive"
_ "github.com/OpenListTeam/OpenList/v4/drivers/terabox" _ "github.com/OpenListTeam/OpenList/v4/drivers/terabox"
_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder" _ "github.com/OpenListTeam/OpenList/v4/drivers/thunder"
_ "github.com/OpenListTeam/OpenList/v4/drivers/thunder_browser" _ "github.com/OpenListTeam/OpenList/v4/drivers/thunder_browser"

View File

@ -21,8 +21,6 @@ type CloudreveV4 struct {
     model.Storage
     Addition
     ref *CloudreveV4
-    AccessExpires  string
-    RefreshExpires string
 }

 func (d *CloudreveV4) Config() driver.Config {
@ -46,17 +44,13 @@ func (d *CloudreveV4) Init(ctx context.Context) error {
     if d.ref != nil {
         return nil
     }
-    if d.canLogin() {
-        return d.login()
-    }
-    if d.RefreshToken != "" {
-        return d.refreshToken()
-    }
-    if d.AccessToken == "" {
-        return errors.New("no way to authenticate. At least AccessToken is required")
-    }
-    // ensure AccessToken is valid
-    return d.parseJWT(d.AccessToken, &AccessJWT{})
+    if d.AccessToken == "" && d.RefreshToken != "" {
+        return d.refreshToken()
+    }
+    if d.Username != "" {
+        return d.login()
+    }
+    return nil
 }
func (d *CloudreveV4) InitReference(storage driver.Driver) error { func (d *CloudreveV4) InitReference(storage driver.Driver) error {

View File

@ -66,27 +66,11 @@ type CaptchaResp struct {
     Ticket string `json:"ticket"`
 }

-type AccessJWT struct {
-    TokenType string `json:"token_type"`
-    Sub       string `json:"sub"`
-    Exp       int64  `json:"exp"`
-    Nbf       int64  `json:"nbf"`
-}
-
-type RefreshJWT struct {
-    TokenType   string `json:"token_type"`
-    Sub         string `json:"sub"`
-    Exp         int    `json:"exp"`
-    Nbf         int    `json:"nbf"`
-    StateHash   string `json:"state_hash"`
-    RootTokenID string `json:"root_token_id"`
-}
-
 type Token struct {
     AccessToken    string `json:"access_token"`
     RefreshToken   string `json:"refresh_token"`
-    AccessExpires  string `json:"access_expires"`
-    RefreshExpires string `json:"refresh_expires"`
+    AccessExpires  time.Time `json:"access_expires"`
+    RefreshExpires time.Time `json:"refresh_expires"`
 }

 type TokenResponse struct {

View File

@ -28,15 +28,6 @@ import (
// do others that not defined in Driver interface // do others that not defined in Driver interface
const (
CodeLoginRequired = http.StatusUnauthorized
CodeCredentialInvalid = 40020 // Failed to issue token
)
var (
ErrorIssueToken = errors.New("failed to issue token")
)
func (d *CloudreveV4) getUA() string { func (d *CloudreveV4) getUA() string {
if d.CustomUA != "" { if d.CustomUA != "" {
return d.CustomUA return d.CustomUA
@ -48,23 +39,6 @@ func (d *CloudreveV4) request(method string, path string, callback base.ReqCallb
if d.ref != nil { if d.ref != nil {
return d.ref.request(method, path, callback, out) return d.ref.request(method, path, callback, out)
} }
// ensure token
if d.isTokenExpired() {
err := d.refreshToken()
if err != nil {
return err
}
}
return d._request(method, path, callback, out)
}
func (d *CloudreveV4) _request(method string, path string, callback base.ReqCallback, out any) error {
if d.ref != nil {
return d.ref._request(method, path, callback, out)
}
u := d.Address + "/api/v4" + path u := d.Address + "/api/v4" + path
req := base.RestyClient.R() req := base.RestyClient.R()
req.SetHeaders(map[string]string{ req.SetHeaders(map[string]string{
@ -91,17 +65,15 @@ func (d *CloudreveV4) _request(method string, path string, callback base.ReqCall
} }
if r.Code != 0 { if r.Code != 0 {
if r.Code == CodeLoginRequired && d.canLogin() && path != "/session/token/refresh" { if r.Code == 401 && d.RefreshToken != "" && path != "/session/token/refresh" {
err = d.login() // try to refresh token
err = d.refreshToken()
if err != nil { if err != nil {
return err return err
} }
return d.request(method, path, callback, out) return d.request(method, path, callback, out)
} }
if r.Code == CodeCredentialInvalid { return errors.New(r.Msg)
return ErrorIssueToken
}
return fmt.Errorf("%d: %s", r.Code, r.Msg)
} }
if out != nil && r.Data != nil { if out != nil && r.Data != nil {
@ -119,18 +91,14 @@ func (d *CloudreveV4) _request(method string, path string, callback base.ReqCall
return nil return nil
} }
func (d *CloudreveV4) canLogin() bool {
return d.Username != "" && d.Password != ""
}
func (d *CloudreveV4) login() error { func (d *CloudreveV4) login() error {
var siteConfig SiteLoginConfigResp var siteConfig SiteLoginConfigResp
err := d._request(http.MethodGet, "/site/config/login", nil, &siteConfig) err := d.request(http.MethodGet, "/site/config/login", nil, &siteConfig)
if err != nil { if err != nil {
return err return err
} }
var prepareLogin PrepareLoginResp var prepareLogin PrepareLoginResp
err = d._request(http.MethodGet, "/session/prepare?email="+d.Addition.Username, nil, &prepareLogin) err = d.request(http.MethodGet, "/session/prepare?email="+d.Addition.Username, nil, &prepareLogin)
if err != nil { if err != nil {
return err return err
} }
@ -160,7 +128,7 @@ func (d *CloudreveV4) doLogin(needCaptcha bool) error {
} }
if needCaptcha { if needCaptcha {
var config BasicConfigResp var config BasicConfigResp
err = d._request(http.MethodGet, "/site/config/basic", nil, &config) err = d.request(http.MethodGet, "/site/config/basic", nil, &config)
if err != nil { if err != nil {
return err return err
} }
@ -168,7 +136,7 @@ func (d *CloudreveV4) doLogin(needCaptcha bool) error {
return fmt.Errorf("captcha type %s not support", config.CaptchaType) return fmt.Errorf("captcha type %s not support", config.CaptchaType)
} }
var captcha CaptchaResp var captcha CaptchaResp
err = d._request(http.MethodGet, "/site/captcha", nil, &captcha) err = d.request(http.MethodGet, "/site/captcha", nil, &captcha)
if err != nil { if err != nil {
return err return err
} }
@ -194,22 +162,20 @@ func (d *CloudreveV4) doLogin(needCaptcha bool) error {
loginBody["captcha"] = captchaCode loginBody["captcha"] = captchaCode
} }
var token TokenResponse var token TokenResponse
err = d._request(http.MethodPost, "/session/token", func(req *resty.Request) { err = d.request(http.MethodPost, "/session/token", func(req *resty.Request) {
req.SetBody(loginBody) req.SetBody(loginBody)
}, &token) }, &token)
if err != nil { if err != nil {
return err return err
} }
d.AccessToken, d.RefreshToken = token.Token.AccessToken, token.Token.RefreshToken d.AccessToken, d.RefreshToken = token.Token.AccessToken, token.Token.RefreshToken
d.AccessExpires, d.RefreshExpires = token.Token.AccessExpires, token.Token.RefreshExpires
op.MustSaveDriverStorage(d) op.MustSaveDriverStorage(d)
return nil return nil
} }
func (d *CloudreveV4) refreshToken() error { func (d *CloudreveV4) refreshToken() error {
// if no refresh token, try to login if possible
if d.RefreshToken == "" { if d.RefreshToken == "" {
if d.canLogin() { if d.Username != "" {
err := d.login() err := d.login()
if err != nil { if err != nil {
return fmt.Errorf("cannot login to get refresh token, error: %s", err) return fmt.Errorf("cannot login to get refresh token, error: %s", err)
@ -217,127 +183,20 @@ func (d *CloudreveV4) refreshToken() error {
} }
return nil return nil
} }
// parse jwt to check if refresh token is valid
var jwt RefreshJWT
err := d.parseJWT(d.RefreshToken, &jwt)
if err != nil {
// if refresh token is invalid, try to login if possible
if d.canLogin() {
return d.login()
}
d.GetStorage().SetStatus(fmt.Sprintf("Invalid RefreshToken: %s", err.Error()))
op.MustSaveDriverStorage(d)
return fmt.Errorf("invalid refresh token: %w", err)
}
// do refresh token
var token Token var token Token
err = d._request(http.MethodPost, "/session/token/refresh", func(req *resty.Request) { err := d.request(http.MethodPost, "/session/token/refresh", func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{
"refresh_token": d.RefreshToken, "refresh_token": d.RefreshToken,
}) })
}, &token) }, &token)
if err != nil { if err != nil {
if errors.Is(err, ErrorIssueToken) {
if d.canLogin() {
// try to login again
return d.login()
}
d.GetStorage().SetStatus("This session is no longer valid")
op.MustSaveDriverStorage(d)
return ErrorIssueToken
}
return err return err
} }
d.AccessToken, d.RefreshToken = token.AccessToken, token.RefreshToken d.AccessToken, d.RefreshToken = token.AccessToken, token.RefreshToken
d.AccessExpires, d.RefreshExpires = token.AccessExpires, token.RefreshExpires
op.MustSaveDriverStorage(d) op.MustSaveDriverStorage(d)
return nil return nil
} }
func (d *CloudreveV4) parseJWT(token string, jwt any) error {
split := strings.Split(token, ".")
if len(split) != 3 {
return fmt.Errorf("invalid token length: %d, ensure the token is a valid JWT", len(split))
}
data, err := base64.RawURLEncoding.DecodeString(split[1])
if err != nil {
return fmt.Errorf("invalid token encoding: %w, ensure the token is a valid JWT", err)
}
err = json.Unmarshal(data, &jwt)
if err != nil {
return fmt.Errorf("invalid token content: %w, ensure the token is a valid JWT", err)
}
return nil
}
// check if token is expired
// https://github.com/cloudreve/frontend/blob/ddfacc1c31c49be03beb71de4cc114c8811038d6/src/session/index.ts#L177-L200
func (d *CloudreveV4) isTokenExpired() bool {
if d.RefreshToken == "" {
// login again if username and password is set
if d.canLogin() {
return true
}
// no refresh token, cannot refresh
return false
}
if d.AccessToken == "" {
return true
}
var (
err error
expires time.Time
)
// check if token is expired
if d.AccessExpires != "" {
// use expires field if possible to prevent timezone issue
// only available after login or refresh token
// 2025-08-28T02:43:07.645109985+08:00
expires, err = time.Parse(time.RFC3339Nano, d.AccessExpires)
if err != nil {
return false
}
} else {
// fallback to parse jwt
// if failed, disable the storage
var jwt AccessJWT
err = d.parseJWT(d.AccessToken, &jwt)
if err != nil {
d.GetStorage().SetStatus(fmt.Sprintf("Invalid AccessToken: %s", err.Error()))
op.MustSaveDriverStorage(d)
return false
}
// may be have timezone issue
expires = time.Unix(jwt.Exp, 0)
}
// add a 10 minutes safe margin
ddl := time.Now().Add(10 * time.Minute)
if expires.Before(ddl) {
// current access token expired, check if refresh token is expired
// warning: cannot parse refresh token from jwt, because the exp field is not standard
if d.RefreshExpires != "" {
refreshExpires, err := time.Parse(time.RFC3339Nano, d.RefreshExpires)
if err != nil {
return false
}
if refreshExpires.Before(time.Now()) {
// This session is no longer valid
if d.canLogin() {
// try to login again
return true
}
d.GetStorage().SetStatus("This session is no longer valid")
op.MustSaveDriverStorage(d)
return false
}
}
return true
}
return false
}
func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error { func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0 var finish int64 = 0
var chunk int = 0 var chunk int = 0

View File

@ -1,203 +0,0 @@
package degoo
import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
type Degoo struct {
model.Storage
Addition
client *http.Client
}
func (d *Degoo) Config() driver.Config {
return config
}
func (d *Degoo) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Degoo) Init(ctx context.Context) error {
d.client = base.HttpClient
// Ensure we have a valid token (will login if needed or refresh if expired)
if err := d.ensureValidToken(ctx); err != nil {
return fmt.Errorf("failed to initialize token: %w", err)
}
return d.getDevices(ctx)
}
func (d *Degoo) Drop(ctx context.Context) error {
return nil
}
func (d *Degoo) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
items, err := d.getAllFileChildren5(ctx, dir.GetID())
if err != nil {
return nil, err
}
return utils.MustSliceConvert(items, func(s DegooFileItem) model.Obj {
isFolder := s.Category == 2 || s.Category == 1 || s.Category == 10
createTime, modTime, _ := humanReadableTimes(s.CreationTime, s.LastModificationTime, s.LastUploadTime)
size, err := strconv.ParseInt(s.Size, 10, 64)
if err != nil {
size = 0 // Default to 0 if size parsing fails
}
return &model.Object{
ID: s.ID,
Path: s.FilePath,
Name: s.Name,
Size: size,
Modified: modTime,
Ctime: createTime,
IsFolder: isFolder,
}
}), nil
}
func (d *Degoo) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
item, err := d.getOverlay4(ctx, file.GetID())
if err != nil {
return nil, err
}
return &model.Link{URL: item.URL}, nil
}
func (d *Degoo) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
// This is done by calling the setUploadFile3 API with a special checksum and size.
const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) { setUploadFile3(Token: $Token, FileInfos: $FileInfos) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileInfos": []map[string]interface{}{
{
"Checksum": folderChecksum,
"Name": dirName,
"CreationTime": time.Now().UnixMilli(),
"ParentID": parentDir.GetID(),
"Size": 0,
},
},
}
_, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
if err != nil {
return err
}
return nil
}
func (d *Degoo) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
const query = `mutation SetMoveFile($Token: String!, $Copy: Boolean, $NewParentID: String!, $FileIDs: [String]!) { setMoveFile(Token: $Token, Copy: $Copy, NewParentID: $NewParentID, FileIDs: $FileIDs) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"Copy": false,
"NewParentID": dstDir.GetID(),
"FileIDs": []string{srcObj.GetID()},
}
_, err := d.apiCall(ctx, "SetMoveFile", query, variables)
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Degoo) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
const query = `mutation SetRenameFile($Token: String!, $FileRenames: [FileRenameInfo]!) { setRenameFile(Token: $Token, FileRenames: $FileRenames) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileRenames": []DegooFileRenameInfo{
{
ID: srcObj.GetID(),
NewName: newName,
},
},
}
_, err := d.apiCall(ctx, "SetRenameFile", query, variables)
if err != nil {
return err
}
return nil
}
func (d *Degoo) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// Copy is not implemented, Degoo API does not support direct copy.
return nil, errs.NotImplement
}
func (d *Degoo) Remove(ctx context.Context, obj model.Obj) error {
// Remove deletes a file or folder (moves to trash).
const query = `mutation SetDeleteFile5($Token: String!, $IsInRecycleBin: Boolean!, $IDs: [IDType]!) { setDeleteFile5(Token: $Token, IsInRecycleBin: $IsInRecycleBin, IDs: $IDs) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"IsInRecycleBin": false,
"IDs": []map[string]string{{"FileID": obj.GetID()}},
}
_, err := d.apiCall(ctx, "SetDeleteFile5", query, variables)
return err
}
func (d *Degoo) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
tmpF, err := file.CacheFullAndWriter(&up, nil)
if err != nil {
return err
}
parentID := dstDir.GetID()
// Calculate the checksum for the file.
checksum, err := d.checkSum(tmpF)
if err != nil {
return err
}
// 1. Get upload authorization via getBucketWriteAuth4.
auths, err := d.getBucketWriteAuth4(ctx, file, parentID, checksum)
if err != nil {
return err
}
// 2. Upload file.
// support rapid upload
if auths.GetBucketWriteAuth4[0].Error != "Already exist!" {
err = d.uploadS3(ctx, auths, tmpF, file, checksum)
if err != nil {
return err
}
}
// 3. Register metadata with setUploadFile3.
data, err := d.SetUploadFile3(ctx, file, parentID, checksum)
if err != nil {
return err
}
if !data.SetUploadFile3 {
return fmt.Errorf("setUploadFile3 failed: %v", data)
}
return nil
}

View File

@ -1,27 +0,0 @@
package degoo
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootID
Username string `json:"username" help:"Your Degoo account email"`
Password string `json:"password" help:"Your Degoo account password"`
RefreshToken string `json:"refresh_token" help:"Refresh token for automatic token renewal, obtained automatically"`
AccessToken string `json:"access_token" help:"Access token for Degoo API, obtained automatically"`
}
var config = driver.Config{
Name: "Degoo",
LocalSort: true,
DefaultRoot: "0",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Degoo{}
})
}

View File

@ -1,110 +0,0 @@
package degoo
import (
"encoding/json"
)
// DegooLoginRequest represents the login request body.
type DegooLoginRequest struct {
GenerateToken bool `json:"GenerateToken"`
Username string `json:"Username"`
Password string `json:"Password"`
}
// DegooLoginResponse represents a successful login response.
type DegooLoginResponse struct {
Token string `json:"Token"`
RefreshToken string `json:"RefreshToken"`
}
// DegooAccessTokenRequest represents the token refresh request body.
type DegooAccessTokenRequest struct {
RefreshToken string `json:"RefreshToken"`
}
// DegooAccessTokenResponse represents the token refresh response.
type DegooAccessTokenResponse struct {
AccessToken string `json:"AccessToken"`
}
// DegooFileItem represents a Degoo file or folder.
type DegooFileItem struct {
ID string `json:"ID"`
ParentID string `json:"ParentID"`
Name string `json:"Name"`
Category int `json:"Category"`
Size string `json:"Size"`
URL string `json:"URL"`
CreationTime string `json:"CreationTime"`
LastModificationTime string `json:"LastModificationTime"`
LastUploadTime string `json:"LastUploadTime"`
MetadataID string `json:"MetadataID"`
DeviceID int64 `json:"DeviceID"`
FilePath string `json:"FilePath"`
IsInRecycleBin bool `json:"IsInRecycleBin"`
}
type DegooErrors struct {
Path []string `json:"path"`
Data interface{} `json:"data"`
ErrorType string `json:"errorType"`
ErrorInfo interface{} `json:"errorInfo"`
Message string `json:"message"`
}
// DegooGraphqlResponse is the common structure for GraphQL API responses.
type DegooGraphqlResponse struct {
Data json.RawMessage `json:"data"`
Errors []DegooErrors `json:"errors,omitempty"`
}
// DegooGetChildren5Data is the data field for getFileChildren5.
type DegooGetChildren5Data struct {
GetFileChildren5 struct {
Items []DegooFileItem `json:"Items"`
NextToken string `json:"NextToken"`
} `json:"getFileChildren5"`
}
// DegooGetOverlay4Data is the data field for getOverlay4.
type DegooGetOverlay4Data struct {
GetOverlay4 DegooFileItem `json:"getOverlay4"`
}
// DegooFileRenameInfo represents a file rename operation.
type DegooFileRenameInfo struct {
ID string `json:"ID"`
NewName string `json:"NewName"`
}
// DegooFileIDs represents a list of file IDs for move operations.
type DegooFileIDs struct {
FileIDs []string `json:"FileIDs"`
}
// DegooGetBucketWriteAuth4Data is the data field for GetBucketWriteAuth4.
type DegooGetBucketWriteAuth4Data struct {
GetBucketWriteAuth4 []struct {
AuthData struct {
PolicyBase64 string `json:"PolicyBase64"`
Signature string `json:"Signature"`
BaseURL string `json:"BaseURL"`
KeyPrefix string `json:"KeyPrefix"`
AccessKey struct {
Key string `json:"Key"`
Value string `json:"Value"`
} `json:"AccessKey"`
ACL string `json:"ACL"`
AdditionalBody []struct {
Key string `json:"Key"`
Value string `json:"Value"`
} `json:"AdditionalBody"`
} `json:"AuthData"`
Error interface{} `json:"Error"`
} `json:"getBucketWriteAuth4"`
}
// DegooSetUploadFile3Data is the data field for SetUploadFile3.
type DegooSetUploadFile3Data struct {
SetUploadFile3 bool `json:"setUploadFile3"`
}

View File

@ -1,198 +0,0 @@
package degoo
import (
"bytes"
"context"
"crypto/sha1"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http"
"strconv"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
func (d *Degoo) getBucketWriteAuth4(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooGetBucketWriteAuth4Data, error) {
const query = `query GetBucketWriteAuth4(
$Token: String!
$ParentID: String!
$StorageUploadInfos: [StorageUploadInfo2]
) {
getBucketWriteAuth4(
Token: $Token
ParentID: $ParentID
StorageUploadInfos: $StorageUploadInfos
) {
AuthData {
PolicyBase64
Signature
BaseURL
KeyPrefix
AccessKey {
Key
Value
}
ACL
AdditionalBody {
Key
Value
}
}
Error
}
}`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": parentID,
"StorageUploadInfos": []map[string]string{{
"FileName": file.GetName(),
"Checksum": checksum,
"Size": strconv.FormatInt(file.GetSize(), 10),
}}}
data, err := d.apiCall(ctx, "GetBucketWriteAuth4", query, variables)
if err != nil {
return nil, err
}
var resp DegooGetBucketWriteAuth4Data
err = json.Unmarshal(data, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
// checkSum calculates the SHA1-based checksum for Degoo upload API.
func (d *Degoo) checkSum(file io.Reader) (string, error) {
seed := []byte{13, 7, 2, 2, 15, 40, 75, 117, 13, 10, 19, 16, 29, 23, 3, 36}
hasher := sha1.New()
hasher.Write(seed)
if _, err := utils.CopyWithBuffer(hasher, file); err != nil {
return "", err
}
cs := hasher.Sum(nil)
csBytes := []byte{10, byte(len(cs))}
csBytes = append(csBytes, cs...)
csBytes = append(csBytes, 16, 0)
return strings.ReplaceAll(base64.StdEncoding.EncodeToString(csBytes), "/", "_"), nil
}
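// Usage sketch (illustrative, not part of the original file): the returned
// string is what the driver passes as the "Checksum" value to
// getBucketWriteAuth4 and SetUploadFile3. Here data, ctx, file and parentID
// stand in for the caller's values:
//
//	cs, err := d.checkSum(bytes.NewReader(data))
//	if err == nil {
//		auths, err := d.getBucketWriteAuth4(ctx, file, parentID, cs)
//		// ...
//		_ = auths
//	}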
func (d *Degoo) uploadS3(ctx context.Context, auths *DegooGetBucketWriteAuth4Data, tmpF model.File, file model.FileStreamer, checksum string) error {
a := auths.GetBucketWriteAuth4[0].AuthData
_, err := tmpF.Seek(0, io.SeekStart)
if err != nil {
return err
}
ext := utils.Ext(file.GetName())
key := fmt.Sprintf("%s%s/%s.%s", a.KeyPrefix, ext, checksum, ext)
var b bytes.Buffer
w := multipart.NewWriter(&b)
err = w.WriteField("key", key)
if err != nil {
return err
}
err = w.WriteField("acl", a.ACL)
if err != nil {
return err
}
err = w.WriteField("policy", a.PolicyBase64)
if err != nil {
return err
}
err = w.WriteField("signature", a.Signature)
if err != nil {
return err
}
err = w.WriteField(a.AccessKey.Key, a.AccessKey.Value)
if err != nil {
return err
}
for _, additional := range a.AdditionalBody {
err = w.WriteField(additional.Key, additional.Value)
if err != nil {
return err
}
}
err = w.WriteField("Content-Type", "")
if err != nil {
return err
}
_, err = w.CreateFormFile("file", key)
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, tmpF, tail))
req, err := http.NewRequestWithContext(ctx, http.MethodPost, a.BaseURL, rateLimitedRd)
if err != nil {
return err
}
req.Header.Add("ngsw-bypass", "1")
req.Header.Add("Content-Type", w.FormDataContentType())
res, err := d.client.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
return fmt.Errorf("upload failed with status code %d", res.StatusCode)
}
return nil
}
var _ driver.Driver = (*Degoo)(nil)
func (d *Degoo) SetUploadFile3(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooSetUploadFile3Data, error) {
const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) {
setUploadFile3(Token: $Token, FileInfos: $FileInfos)
}`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileInfos": []map[string]string{{
"Checksum": checksum,
"CreationTime": strconv.FormatInt(file.CreateTime().UnixMilli(), 10),
"Name": file.GetName(),
"ParentID": parentID,
"Size": strconv.FormatInt(file.GetSize(), 10),
}}}
data, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
if err != nil {
return nil, err
}
var resp DegooSetUploadFile3Data
err = json.Unmarshal(data, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
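// ---------------------------------------------------------------------------
// Illustrative, self-contained sketch (not part of the diff above) of the
// multipart streaming trick uploadS3 relies on: write every form field plus an
// empty file part into a buffer, split that buffer at the point where the file
// bytes belong, and stream head + payload + tail so the payload itself is
// never buffered in memory. Field values below are hypothetical.
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"strings"
)

func main() {
	var b bytes.Buffer
	w := multipart.NewWriter(&b)

	// Form fields standing in for the S3 policy fields.
	_ = w.WriteField("key", "prefix/example.txt")
	_ = w.WriteField("acl", "private")

	// Create the (empty) file part; its content will be spliced in later.
	_, _ = w.CreateFormFile("file", "example.txt")
	headSize := b.Len() // everything up to here precedes the file bytes
	_ = w.Close()       // writes the closing multipart boundary ("tail")

	head := bytes.NewReader(b.Bytes()[:headSize])
	tail := bytes.NewReader(b.Bytes()[headSize:])
	payload := strings.NewReader("file contents streamed from disk or network")

	// In the driver this MultiReader becomes the HTTP request body; here we
	// just measure it to show the three pieces concatenate cleanly.
	body := io.MultiReader(head, payload, tail)
	n, _ := io.Copy(io.Discard, body)
	fmt.Println("total multipart body bytes:", n, "content type:", w.FormDataContentType())
}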

View File

@ -1,462 +0,0 @@
package degoo
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
// Thanks to https://github.com/bernd-wechner/Degoo for API research.
const (
// API endpoints
loginURL = "https://rest-api.degoo.com/login"
accessTokenURL = "https://rest-api.degoo.com/access-token/v2"
apiURL = "https://production-appsync.degoo.com/graphql"
// API configuration
apiKey = "da2-vs6twz5vnjdavpqndtbzg3prra"
folderChecksum = "CgAQAg"
// Token management
tokenRefreshThreshold = 5 * time.Minute
// Rate limiting
minRequestInterval = 1 * time.Second
// Error messages
errRateLimited = "rate limited (429), please try again later"
errUnauthorized = "unauthorized access"
)
var (
// Global rate limiting - protects against concurrent API calls
lastRequestTime time.Time
requestMutex sync.Mutex
)
// JWT payload structure for token expiration checking
type JWTPayload struct {
UserID string `json:"userID"`
Exp int64 `json:"exp"`
Iat int64 `json:"iat"`
}
// Rate limiting helper functions
// applyRateLimit ensures minimum interval between API requests
func applyRateLimit() {
requestMutex.Lock()
defer requestMutex.Unlock()
if !lastRequestTime.IsZero() {
if elapsed := time.Since(lastRequestTime); elapsed < minRequestInterval {
time.Sleep(minRequestInterval - elapsed)
}
}
lastRequestTime = time.Now()
}
// HTTP request helper functions
// createJSONRequest creates a new HTTP request with JSON body
func createJSONRequest(ctx context.Context, method, url string, body interface{}) (*http.Request, error) {
jsonBody, err := json.Marshal(body)
if err != nil {
return nil, fmt.Errorf("failed to marshal request body: %w", err)
}
req, err := http.NewRequestWithContext(ctx, method, url, bytes.NewBuffer(jsonBody))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", base.UserAgent)
return req, nil
}
// checkHTTPResponse checks for common HTTP error conditions
func checkHTTPResponse(resp *http.Response, operation string) error {
if resp.StatusCode == http.StatusTooManyRequests {
return fmt.Errorf("%s %s", operation, errRateLimited)
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("%s failed: %s", operation, resp.Status)
}
return nil
}
// isTokenExpired checks if the JWT token is expired or will expire soon
func (d *Degoo) isTokenExpired() bool {
if d.AccessToken == "" {
return true
}
payload, err := extractJWTPayload(d.AccessToken)
if err != nil {
return true // Invalid token format
}
// Check if token expires within the threshold
expireTime := time.Unix(payload.Exp, 0)
return time.Now().Add(tokenRefreshThreshold).After(expireTime)
}
// extractJWTPayload extracts and parses JWT payload
func extractJWTPayload(token string) (*JWTPayload, error) {
parts := strings.Split(token, ".")
if len(parts) != 3 {
return nil, fmt.Errorf("invalid JWT format")
}
// Decode the payload (second part)
payload, err := base64.RawURLEncoding.DecodeString(parts[1])
if err != nil {
return nil, fmt.Errorf("failed to decode JWT payload: %w", err)
}
var jwtPayload JWTPayload
if err := json.Unmarshal(payload, &jwtPayload); err != nil {
return nil, fmt.Errorf("failed to parse JWT payload: %w", err)
}
return &jwtPayload, nil
}
// refreshToken attempts to refresh the access token using the refresh token
func (d *Degoo) refreshToken(ctx context.Context) error {
if d.RefreshToken == "" {
return fmt.Errorf("no refresh token available")
}
// Create request
tokenReq := DegooAccessTokenRequest{RefreshToken: d.RefreshToken}
req, err := createJSONRequest(ctx, "POST", accessTokenURL, tokenReq)
if err != nil {
return fmt.Errorf("failed to create refresh token request: %w", err)
}
// Execute request
resp, err := d.client.Do(req)
if err != nil {
return fmt.Errorf("refresh token request failed: %w", err)
}
defer resp.Body.Close()
// Check response
if err := checkHTTPResponse(resp, "refresh token"); err != nil {
return err
}
var accessTokenResp DegooAccessTokenResponse
if err := json.NewDecoder(resp.Body).Decode(&accessTokenResp); err != nil {
return fmt.Errorf("failed to parse access token response: %w", err)
}
if accessTokenResp.AccessToken == "" {
return fmt.Errorf("empty access token received")
}
d.AccessToken = accessTokenResp.AccessToken
// Save the updated token to storage
op.MustSaveDriverStorage(d)
return nil
}
// ensureValidToken ensures we have a valid, non-expired token
func (d *Degoo) ensureValidToken(ctx context.Context) error {
// Check if token is expired or will expire soon
if d.isTokenExpired() {
// Try to refresh token first if we have a refresh token
if d.RefreshToken != "" {
if refreshErr := d.refreshToken(ctx); refreshErr == nil {
return nil // Successfully refreshed
} else {
// If refresh failed, fall back to full login
fmt.Printf("Token refresh failed, falling back to full login: %v\n", refreshErr)
}
}
// Perform full login
if d.Username != "" && d.Password != "" {
return d.login(ctx)
}
}
return nil
}
// login performs the login process and retrieves the access token.
func (d *Degoo) login(ctx context.Context) error {
if d.Username == "" || d.Password == "" {
return fmt.Errorf("username or password not provided")
}
creds := DegooLoginRequest{
GenerateToken: true,
Username: d.Username,
Password: d.Password,
}
jsonCreds, err := json.Marshal(creds)
if err != nil {
return fmt.Errorf("failed to serialize login credentials: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "POST", loginURL, bytes.NewBuffer(jsonCreds))
if err != nil {
return fmt.Errorf("failed to create login request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", base.UserAgent)
req.Header.Set("Origin", "https://app.degoo.com")
resp, err := d.client.Do(req)
if err != nil {
return fmt.Errorf("login request failed: %w", err)
}
defer resp.Body.Close()
// Handle rate limiting (429 Too Many Requests)
if resp.StatusCode == http.StatusTooManyRequests {
return fmt.Errorf("login rate limited (429), please try again later")
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("login failed: %s", resp.Status)
}
var loginResp DegooLoginResponse
if err := json.NewDecoder(resp.Body).Decode(&loginResp); err != nil {
return fmt.Errorf("failed to parse login response: %w", err)
}
if loginResp.RefreshToken != "" {
tokenReq := DegooAccessTokenRequest{RefreshToken: loginResp.RefreshToken}
jsonTokenReq, err := json.Marshal(tokenReq)
if err != nil {
return fmt.Errorf("failed to serialize access token request: %w", err)
}
tokenReqHTTP, err := http.NewRequestWithContext(ctx, "POST", accessTokenURL, bytes.NewBuffer(jsonTokenReq))
if err != nil {
return fmt.Errorf("failed to create access token request: %w", err)
}
tokenReqHTTP.Header.Set("User-Agent", base.UserAgent)
tokenResp, err := d.client.Do(tokenReqHTTP)
if err != nil {
return fmt.Errorf("failed to get access token: %w", err)
}
defer tokenResp.Body.Close()
var accessTokenResp DegooAccessTokenResponse
if err := json.NewDecoder(tokenResp.Body).Decode(&accessTokenResp); err != nil {
return fmt.Errorf("failed to parse access token response: %w", err)
}
d.AccessToken = accessTokenResp.AccessToken
d.RefreshToken = loginResp.RefreshToken // Save refresh token
} else if loginResp.Token != "" {
d.AccessToken = loginResp.Token
d.RefreshToken = "" // Direct token, no refresh token available
} else {
return fmt.Errorf("login failed, no valid token returned")
}
// Save the updated tokens to storage
op.MustSaveDriverStorage(d)
return nil
}
// apiCall performs a Degoo GraphQL API request.
func (d *Degoo) apiCall(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
// Apply rate limiting
applyRateLimit()
// Ensure we have a valid token before making the API call
if err := d.ensureValidToken(ctx); err != nil {
return nil, fmt.Errorf("failed to ensure valid token: %w", err)
}
// Update the Token in variables if it exists (after potential refresh)
d.updateTokenInVariables(variables)
return d.executeGraphQLRequest(ctx, operationName, query, variables)
}
// updateTokenInVariables updates the Token field in GraphQL variables
func (d *Degoo) updateTokenInVariables(variables map[string]interface{}) {
if variables != nil {
if _, hasToken := variables["Token"]; hasToken {
variables["Token"] = d.AccessToken
}
}
}
// executeGraphQLRequest executes a GraphQL request with retry logic
func (d *Degoo) executeGraphQLRequest(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
reqBody := map[string]interface{}{
"operationName": operationName,
"query": query,
"variables": variables,
}
// Create and configure request
req, err := createJSONRequest(ctx, "POST", apiURL, reqBody)
if err != nil {
return nil, err
}
// Set Degoo-specific headers
req.Header.Set("x-api-key", apiKey)
if d.AccessToken != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", d.AccessToken))
}
// Execute request
resp, err := d.client.Do(req)
if err != nil {
return nil, fmt.Errorf("GraphQL API request failed: %w", err)
}
defer resp.Body.Close()
// Check for HTTP errors
if err := checkHTTPResponse(resp, "GraphQL API"); err != nil {
return nil, err
}
// Parse GraphQL response
var degooResp DegooGraphqlResponse
if err := json.NewDecoder(resp.Body).Decode(&degooResp); err != nil {
return nil, fmt.Errorf("failed to decode GraphQL response: %w", err)
}
// Handle GraphQL errors
if len(degooResp.Errors) > 0 {
return d.handleGraphQLError(ctx, degooResp.Errors[0], operationName, query, variables)
}
return degooResp.Data, nil
}
// handleGraphQLError handles GraphQL-level errors with retry logic
func (d *Degoo) handleGraphQLError(ctx context.Context, gqlError DegooErrors, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
if gqlError.ErrorType == "Unauthorized" {
// Re-login and retry
if err := d.login(ctx); err != nil {
return nil, fmt.Errorf("%s, login failed: %w", errUnauthorized, err)
}
// Update token in variables and retry
d.updateTokenInVariables(variables)
return d.apiCall(ctx, operationName, query, variables)
}
return nil, fmt.Errorf("GraphQL API error: %s", gqlError.Message)
}
// humanReadableTimes converts Degoo timestamps to Go time.Time.
func humanReadableTimes(creation, modification, upload string) (cTime, mTime, uTime time.Time) {
cTime, _ = time.Parse(time.RFC3339, creation)
if modification != "" {
modMillis, _ := strconv.ParseInt(modification, 10, 64)
mTime = time.Unix(0, modMillis*int64(time.Millisecond))
}
if upload != "" {
upMillis, _ := strconv.ParseInt(upload, 10, 64)
uTime = time.Unix(0, upMillis*int64(time.Millisecond))
}
return cTime, mTime, uTime
}
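// Worked example (illustrative, not in the original file): Degoo mixes two
// encodings. CreationTime arrives as RFC3339, e.g. "2024-01-02T03:04:05Z",
// while LastModificationTime / LastUploadTime arrive as millisecond Unix
// epochs carried in strings, e.g. "1704164645000" for the same instant; the
// latter are parsed with strconv.ParseInt and converted via
// time.Unix(0, ms*int64(time.Millisecond)).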
// getDevices fetches and caches top-level devices and folders.
func (d *Degoo) getDevices(ctx context.Context) error {
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ParentID } NextToken } }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": "0",
"Limit": 10,
"Order": 3,
}
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
if err != nil {
return err
}
var resp DegooGetChildren5Data
if err := json.Unmarshal(data, &resp); err != nil {
return fmt.Errorf("failed to parse device list: %w", err)
}
if d.RootFolderID == "0" {
if len(resp.GetFileChildren5.Items) > 0 {
d.RootFolderID = resp.GetFileChildren5.Items[0].ParentID
}
op.MustSaveDriverStorage(d)
}
return nil
}
// getAllFileChildren5 fetches all children of a directory with pagination.
func (d *Degoo) getAllFileChildren5(ctx context.Context, parentID string) ([]DegooFileItem, error) {
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime FilePath IsInRecycleBin DeviceID MetadataID } NextToken } }`
var allItems []DegooFileItem
nextToken := ""
for {
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": parentID,
"Limit": 1000,
"Order": 3,
}
if nextToken != "" {
variables["NextToken"] = nextToken
}
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
if err != nil {
return nil, err
}
var resp DegooGetChildren5Data
if err := json.Unmarshal(data, &resp); err != nil {
return nil, err
}
allItems = append(allItems, resp.GetFileChildren5.Items...)
if resp.GetFileChildren5.NextToken == "" {
break
}
nextToken = resp.GetFileChildren5.NextToken
}
return allItems, nil
}
// getOverlay4 fetches metadata for a single item by ID.
func (d *Degoo) getOverlay4(ctx context.Context, id string) (DegooFileItem, error) {
const query = `query GetOverlay4($Token: String!, $ID: IDType!) { getOverlay4(Token: $Token, ID: $ID) { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime URL FilePath IsInRecycleBin DeviceID MetadataID } }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ID": map[string]string{
"FileID": id,
},
}
data, err := d.apiCall(ctx, "GetOverlay4", query, variables)
if err != nil {
return DegooFileItem{}, err
}
var resp DegooGetOverlay4Data
if err := json.Unmarshal(data, &resp); err != nil {
return DegooFileItem{}, fmt.Errorf("failed to parse item metadata: %w", err)
}
return resp.GetOverlay4, nil
}
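// ---------------------------------------------------------------------------
// Illustrative, self-contained sketch (not part of the diff above) of the
// expiry check implemented by isTokenExpired/extractJWTPayload: split the JWT,
// base64url-decode the middle segment, and compare "exp" against now plus a
// refresh threshold. The token built in main is a fake, unsigned sample.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

func tokenExpiresSoon(token string, threshold time.Duration) (bool, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return true, fmt.Errorf("invalid JWT format")
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return true, err
	}
	var claims struct {
		Exp int64 `json:"exp"`
	}
	if err := json.Unmarshal(raw, &claims); err != nil {
		return true, err
	}
	return time.Now().Add(threshold).After(time.Unix(claims.Exp, 0)), nil
}

func main() {
	// Payload expires in one minute; with a five-minute refresh threshold the
	// driver would refresh (or re-login) now rather than risk a 401 mid-call.
	payload, _ := json.Marshal(map[string]int64{"exp": time.Now().Add(time.Minute).Unix()})
	token := "header." + base64.RawURLEncoding.EncodeToString(payload) + ".signature"
	soon, err := tokenExpiresSoon(token, 5*time.Minute)
	fmt.Println(soon, err) // true <nil>
}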

View File

@ -13,7 +13,7 @@ type Addition struct {
 	ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
 	AccessToken  string
 	RefreshToken string `json:"refresh_token" required:"true"`
-	RootNamespaceId string `json:"RootNamespaceId" required:"false"`
+	RootNamespaceId string
 }
 
 var config = driver.Config{

View File

@ -175,13 +175,6 @@ func (d *Dropbox) finishUploadSession(ctx context.Context, toPath string, offset
 	}
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
-	if d.RootNamespaceId != "" {
-		apiPathRootJson, err := d.buildPathRootHeader()
-		if err != nil {
-			return err
-		}
-		req.Header.Set("Dropbox-API-Path-Root", apiPathRootJson)
-	}
 
 	uploadFinishArgs := UploadFinishArgs{
 		Commit: struct {
@ -226,13 +219,6 @@ func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
 	}
 	req.Header.Set("Content-Type", "application/octet-stream")
 	req.Header.Set("Authorization", "Bearer "+d.AccessToken)
-	if d.RootNamespaceId != "" {
-		apiPathRootJson, err := d.buildPathRootHeader()
-		if err != nil {
-			return "", err
-		}
-		req.Header.Set("Dropbox-API-Path-Root", apiPathRootJson)
-	}
 	req.Header.Set("Dropbox-API-Arg", "{\"close\":false}")
 
 	res, err := base.HttpClient.Do(req)
@ -247,11 +233,3 @@ func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
 	_ = res.Body.Close()
 	return sessionId, nil
 }
-
-func (d *Dropbox) buildPathRootHeader() (string, error) {
-	return utils.Json.MarshalToString(map[string]interface{}{
-		".tag": "root",
-		"root": d.RootNamespaceId,
-	})
-}

View File

@ -296,23 +296,6 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreame
 		return nil, err
 	}
 	upToken := utils.Json.Get(res, "upToken").ToString()
-	if upToken == "-1" {
-		// Rapid upload (秒传) is supported
-		var resp UploadTokenRapidResp
-		err := utils.Json.Unmarshal(res, &resp)
-		if err != nil {
-			return nil, err
-		}
-		return &model.Object{
-			ID:       strconv.FormatInt(resp.Map.FileID, 10),
-			Name:     resp.Map.FileName,
-			Size:     s.GetSize(),
-			Modified: s.ModTime(),
-			Ctime:    s.CreateTime(),
-			IsFolder: false,
-			HashInfo: utils.NewHashInfo(utils.MD5, etag),
-		}, nil
-	}
 	now := time.Now()
 	key := fmt.Sprintf("disk/%d/%d/%d/%s/%016d", now.Year(), now.Month(), now.Day(), d.account, now.UnixMilli())
 	reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{

View File

@ -32,7 +32,6 @@ func init() {
 			Name:        "ILanZou",
 			DefaultRoot: "0",
 			LocalSort:   true,
-			NoOverwriteUpload: true,
 		},
 		conf: Conf{
 			base: "https://api.ilanzou.com",
@ -51,7 +50,6 @@ func init() {
 			Name:        "FeijiPan",
 			DefaultRoot: "0",
 			LocalSort:   true,
-			NoOverwriteUpload: true,
 		},
 		conf: Conf{
 			base: "https://api.feijipan.com",

View File

@ -43,18 +43,6 @@ type Part struct {
 	ETag string `json:"etag"`
 }
 
-type UploadTokenRapidResp struct {
-	Msg     string `json:"msg"`
-	Code    int    `json:"code"`
-	UpToken string `json:"upToken"`
-	Map     struct {
-		FileIconID int    `json:"fileIconId"`
-		FileName   string `json:"fileName"`
-		FileIcon   string `json:"fileIcon"`
-		FileID     int64  `json:"fileId"`
-	} `json:"map"`
-}
-
 type UploadResultResp struct {
 	Msg  string `json:"msg"`
 	Code int    `json:"code"`

View File

@ -1,181 +0,0 @@
package openlist_share
import (
"context"
"fmt"
"net/http"
"net/url"
stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/go-resty/resty/v2"
)
type OpenListShare struct {
model.Storage
Addition
serverArchivePreview bool
}
func (d *OpenListShare) Config() driver.Config {
return config
}
func (d *OpenListShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *OpenListShare) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var settings common.Resp[map[string]string]
_, _, err := d.request("/public/settings", http.MethodGet, func(req *resty.Request) {
req.SetResult(&settings)
})
if err != nil {
return err
}
d.serverArchivePreview = settings.Data["share_archive_preview"] == "true"
return nil
}
func (d *OpenListShare) Drop(ctx context.Context) error {
return nil
}
func (d *OpenListShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), dir.GetPath()),
Password: d.Pwd,
Refresh: false,
})
})
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *OpenListShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, file.GetPath()))
u := fmt.Sprintf("%s/sd%s?pwd=%s", d.Address, path, d.Pwd)
return &model.Link{URL: u}, nil
}
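// Example (illustrative, hypothetical values): with Address "https://demo.example",
// ShareId "abc123", Pwd "p" and a file at "/docs/readme.md", the link above
// resolves to "https://demo.example/sd/abc123/docs/readme.md?pwd=p".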
func (d *OpenListShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
Password: d.Pwd,
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *OpenListShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
Password: d.Pwd,
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *OpenListShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, obj.GetPath()))
u := fmt.Sprintf("%s/sad%s?pwd=%s&inner=%s&pass=%s",
d.Address,
path,
d.Pwd,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password))
return &model.Link{URL: u}, nil
}
var _ driver.Driver = (*OpenListShare)(nil)

View File

@ -1,27 +0,0 @@
package openlist_share
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
ShareId string `json:"sid" required:"true"`
Pwd string `json:"pwd"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{
Name: "OpenListShare",
LocalSort: true,
NoUpload: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &OpenListShare{}
})
}

View File

@ -1,111 +0,0 @@
package openlist_share
import (
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
type ListReq struct {
model.PageReq
Path string `json:"path" form:"path"`
Password string `json:"password" form:"password"`
Refresh bool `json:"refresh"`
}
type ObjResp struct {
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfo string `json:"hashinfo"`
}
type FsListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
Readme string `json:"readme"`
Write bool `json:"write"`
Provider string `json:"provider"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}

View File

@ -1,32 +0,0 @@
package openlist_share
import (
"fmt"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
func (d *OpenListShare) request(api, method string, callback base.ReqCallback) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
if callback != nil {
callback(req)
}
res, err := req.Execute(method, url)
if err != nil {
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
if res.StatusCode() >= 400 {
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
return nil, code, fmt.Errorf("request failed, code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), 200, nil
}

View File

@ -149,19 +149,13 @@ func (d *QuarkOrUC) getTranscodingLink(file model.Obj) (*model.Link, error) {
 		return nil, err
 	}
 
-	for _, info := range resp.Data.VideoList {
-		if info.VideoInfo.URL != "" {
-			return &model.Link{
-				URL:           info.VideoInfo.URL,
-				ContentLength: info.VideoInfo.Size,
-				Concurrency:   3,
-				PartSize:      10 * utils.MB,
-			}, nil
-		}
-	}
-	return nil, errors.New("no link found")
+	return &model.Link{
+		URL:           resp.Data.VideoList[0].VideoInfo.URL,
+		ContentLength: resp.Data.VideoList[0].VideoInfo.Size,
+		Concurrency:   3,
+		PartSize:      10 * utils.MB,
+	}, nil
 }
 
 func (d *QuarkOrUC) upPre(file model.FileStreamer, parentId string) (UpPreResp, error) {
 	now := time.Now()

View File

@ -228,19 +228,13 @@ func (d *QuarkUCTV) getTranscodingLink(ctx context.Context, file model.Obj) (*mo
 		return nil, err
 	}
 
-	for _, info := range fileLink.Data.VideoInfo {
-		if info.URL != "" {
-			return &model.Link{
-				URL:           info.URL,
-				ContentLength: info.Size,
-				Concurrency:   3,
-				PartSize:      10 * utils.MB,
-			}, nil
-		}
-	}
-	return nil, errors.New("no link found")
+	return &model.Link{
+		URL:           fileLink.Data.VideoInfo[0].URL,
+		Concurrency:   3,
+		PartSize:      10 * utils.MB,
+		ContentLength: fileLink.Data.VideoInfo[0].Size,
+	}, nil
 }
 
 func (d *QuarkUCTV) getDownloadLink(ctx context.Context, file model.Obj) (*model.Link, error) {
 	var fileLink DownloadFileLink

View File

@ -1,137 +0,0 @@
package teldrive
import (
"fmt"
"net/http"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"golang.org/x/net/context"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
func NewCopyManager(ctx context.Context, concurrent int, d *Teldrive) *CopyManager {
g, ctx := errgroup.WithContext(ctx)
return &CopyManager{
TaskChan: make(chan CopyTask, concurrent*2),
Sem: semaphore.NewWeighted(int64(concurrent)),
G: g,
Ctx: ctx,
d: d,
}
}
func (cm *CopyManager) startWorkers() {
workerCount := cap(cm.TaskChan) / 2
for i := 0; i < workerCount; i++ {
cm.G.Go(func() error {
return cm.worker()
})
}
}
func (cm *CopyManager) worker() error {
for {
select {
case task, ok := <-cm.TaskChan:
if !ok {
return nil
}
if err := cm.Sem.Acquire(cm.Ctx, 1); err != nil {
return err
}
var err error
err = cm.processFile(task)
cm.Sem.Release(1)
if err != nil {
return fmt.Errorf("task processing failed: %w", err)
}
case <-cm.Ctx.Done():
return cm.Ctx.Err()
}
}
}
func (cm *CopyManager) generateTasks(ctx context.Context, srcObj, dstDir model.Obj) error {
if srcObj.IsDir() {
return cm.generateFolderTasks(ctx, srcObj, dstDir)
} else {
// add single file task directly
select {
case cm.TaskChan <- CopyTask{SrcObj: srcObj, DstDir: dstDir}:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
}
func (cm *CopyManager) generateFolderTasks(ctx context.Context, srcDir, dstDir model.Obj) error {
objs, err := cm.d.List(ctx, srcDir, model.ListArgs{})
if err != nil {
return fmt.Errorf("failed to list directory %s: %w", srcDir.GetPath(), err)
}
err = cm.d.MakeDir(cm.Ctx, dstDir, srcDir.GetName())
if err != nil || len(objs) == 0 {
return err
}
newDstDir := &model.Object{
ID: dstDir.GetID(),
Path: dstDir.GetPath() + "/" + srcDir.GetName(),
Name: srcDir.GetName(),
IsFolder: true,
}
for _, file := range objs {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
srcFile := &model.Object{
ID: file.GetID(),
Path: srcDir.GetPath() + "/" + file.GetName(),
Name: file.GetName(),
IsFolder: file.IsDir(),
}
// Recursively generate tasks for child objects
if err := cm.generateTasks(ctx, srcFile, newDstDir); err != nil {
return err
}
}
return nil
}
func (cm *CopyManager) processFile(task CopyTask) error {
return cm.copySingleFile(cm.Ctx, task.SrcObj, task.DstDir)
}
func (cm *CopyManager) copySingleFile(ctx context.Context, srcObj, dstDir model.Obj) error {
// `override copy mode` should delete the existing file
if obj, err := cm.d.getFile(dstDir.GetPath(), srcObj.GetName(), srcObj.IsDir()); err == nil {
if err := cm.d.Remove(ctx, obj); err != nil {
return fmt.Errorf("failed to remove existing file: %w", err)
}
}
// Do copy
return cm.d.request(http.MethodPost, "/api/files/{id}/copy", func(req *resty.Request) {
req.SetPathParam("id", srcObj.GetID())
req.SetBody(base.Json{
"newName": srcObj.GetName(),
"destination": dstDir.GetPath(),
})
}, nil)
}
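// ---------------------------------------------------------------------------
// Illustrative, self-contained sketch (not part of the diff above) of the
// bounded worker-pool pattern CopyManager uses: a task channel sized at
// 2*concurrency, a weighted semaphore capping in-flight work, and an errgroup
// that cancels everything on the first failure.
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/semaphore"
)

func main() {
	const concurrent = 3
	g, ctx := errgroup.WithContext(context.Background())
	sem := semaphore.NewWeighted(concurrent)
	tasks := make(chan int, concurrent*2)

	// Producer: enqueue tasks, closing the channel when done (or on cancel).
	g.Go(func() error {
		defer close(tasks)
		for i := 0; i < 10; i++ {
			select {
			case tasks <- i:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	// Workers: drain the channel, holding one semaphore slot per task.
	for w := 0; w < concurrent; w++ {
		g.Go(func() error {
			for task := range tasks {
				if err := sem.Acquire(ctx, 1); err != nil {
					return err
				}
				fmt.Println("processing task", task)
				sem.Release(1)
			}
			return nil
		})
	}

	if err := g.Wait(); err != nil {
		fmt.Println("copy failed:", err)
	}
}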

View File

@ -1,217 +0,0 @@
package teldrive
import (
"context"
"fmt"
"math"
"net/http"
"net/url"
"strings"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Teldrive struct {
model.Storage
Addition
}
func (d *Teldrive) Config() driver.Config {
return config
}
func (d *Teldrive) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Teldrive) Init(ctx context.Context) error {
d.Address = strings.TrimSuffix(d.Address, "/")
if d.Cookie == "" || !strings.HasPrefix(d.Cookie, "access_token=") {
return fmt.Errorf("cookie must start with 'access_token='")
}
if d.UploadConcurrency == 0 {
d.UploadConcurrency = 4
}
if d.ChunkSize == 0 {
d.ChunkSize = 10
}
op.MustSaveDriverStorage(d)
return nil
}
func (d *Teldrive) Drop(ctx context.Context) error {
return nil
}
func (d *Teldrive) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var listResp ListResp
err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"path": dir.GetPath(),
"limit": "1000", // overide default 500, TODO pagination
})
}, &listResp)
if err != nil {
return nil, err
}
return utils.SliceConvert(listResp.Items, func(src Object) (model.Obj, error) {
return &model.Object{
ID: src.ID,
Name: src.Name,
Size: func() int64 {
if src.Type == "folder" {
return 0
}
return src.Size
}(),
IsFolder: src.Type == "folder",
Modified: src.UpdatedAt,
}, nil
})
}
func (d *Teldrive) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.UseShareLink {
shareObj, err := d.getShareFileById(file.GetID())
if err != nil || shareObj == nil {
if err := d.createShareFile(file.GetID()); err != nil {
return nil, err
}
shareObj, err = d.getShareFileById(file.GetID())
if err != nil {
return nil, err
}
}
return &model.Link{
URL: d.Address + "/api/shares/" + url.PathEscape(shareObj.Id) + "/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
}, nil
}
return &model.Link{
URL: d.Address + "/api/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
Header: http.Header{
"Cookie": {d.Cookie},
},
}, nil
}
func (d *Teldrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPost, "/api/files/mkdir", func(req *resty.Request) {
req.SetBody(map[string]interface{}{
"path": parentDir.GetPath() + "/" + dirName,
})
}, nil)
}
func (d *Teldrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
body := base.Json{
"ids": []string{srcObj.GetID()},
"destinationParent": dstDir.GetID(),
}
return d.request(http.MethodPost, "/api/files/move", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
body := base.Json{
"name": newName,
}
return d.request(http.MethodPatch, "/api/files/{id}", func(req *resty.Request) {
req.SetPathParam("id", srcObj.GetID())
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
copyConcurrentLimit := 4
copyManager := NewCopyManager(ctx, copyConcurrentLimit, d)
copyManager.startWorkers()
copyManager.G.Go(func() error {
defer close(copyManager.TaskChan)
return copyManager.generateTasks(ctx, srcObj, dstDir)
})
return copyManager.G.Wait()
}
func (d *Teldrive) Remove(ctx context.Context, obj model.Obj) error {
body := base.Json{
"ids": []string{obj.GetID()},
}
return d.request(http.MethodPost, "/api/files/delete", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
fileId := uuid.New().String()
chunkSizeInMB := d.ChunkSize
chunkSize := chunkSizeInMB * 1024 * 1024 // Convert MB to bytes
totalSize := file.GetSize()
totalParts := int(math.Ceil(float64(totalSize) / float64(chunkSize)))
maxRetried := 3
// delete the upload task when finished or failed
defer func() {
_ = d.request(http.MethodDelete, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, nil)
}()
if obj, err := d.getFile(dstDir.GetPath(), file.GetName(), file.IsDir()); err == nil {
if err = d.Remove(ctx, obj); err != nil {
return err
}
}
// start the upload process
if err := d.request(http.MethodGet, "/api/uploads/fileId", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, nil); err != nil {
return err
}
if totalSize == 0 {
return d.touch(file.GetName(), dstDir.GetPath())
}
if totalParts <= 1 {
return d.doSingleUpload(ctx, dstDir, file, up, totalParts, chunkSize, fileId)
}
return d.doMultiUpload(ctx, dstDir, file, up, maxRetried, totalParts, chunkSize, fileId)
}
func (d *Teldrive) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Teldrive) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Teldrive)(nil)

View File

@ -1,26 +0,0 @@
package teldrive
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
Cookie string `json:"cookie" type:"string" required:"true" help:"access_token=xxx"`
UseShareLink bool `json:"use_share_link" type:"bool" default:"false" help:"Create share link when getting link to support 302. If disabled, you need to enable web proxy."`
ChunkSize int64 `json:"chunk_size" type:"number" default:"10" help:"Chunk size in MiB"`
UploadConcurrency int64 `json:"upload_concurrency" type:"number" default:"4" help:"Concurrent upload requests"`
}
var config = driver.Config{
Name: "Teldrive",
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Teldrive{}
})
}

View File

@ -1,77 +0,0 @@
package teldrive
import (
"context"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
type ErrResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
type Object struct {
ID string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MimeType string `json:"mimeType"`
Category string `json:"category,omitempty"`
ParentId string `json:"parentId"`
Size int64 `json:"size"`
Encrypted bool `json:"encrypted"`
UpdatedAt time.Time `json:"updatedAt"`
}
type ListResp struct {
Items []Object `json:"items"`
Meta struct {
Count int `json:"count"`
TotalPages int `json:"totalPages"`
CurrentPage int `json:"currentPage"`
} `json:"meta"`
}
type FilePart struct {
Name string `json:"name"`
PartId int `json:"partId"`
PartNo int `json:"partNo"`
ChannelId int `json:"channelId"`
Size int `json:"size"`
Encrypted bool `json:"encrypted"`
Salt string `json:"salt"`
}
type chunkTask struct {
chunkIdx int
fileName string
chunkSize int64
reader *stream.SectionReader
ss *stream.StreamSectionReader
}
type CopyManager struct {
TaskChan chan CopyTask
Sem *semaphore.Weighted
G *errgroup.Group
Ctx context.Context
d *Teldrive
}
type CopyTask struct {
SrcObj model.Obj
DstDir model.Obj
}
type ShareObj struct {
Id string `json:"id"`
Protected bool `json:"protected"`
UserId int `json:"userId"`
Type string `json:"type"`
Name string `json:"name"`
ExpiresAt time.Time `json:"expiresAt"`
}

View File

@ -1,373 +0,0 @@
package teldrive
import (
"fmt"
"io"
"net/http"
"sort"
"strconv"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/pkg/errors"
"golang.org/x/net/context"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
// create empty file
func (d *Teldrive) touch(name, path string) error {
uploadBody := base.Json{
"name": name,
"type": "file",
"path": path,
}
if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
req.SetBody(uploadBody)
}, nil); err != nil {
return err
}
return nil
}
func (d *Teldrive) createFileOnUploadSuccess(name, id, path string, uploadedFileParts []FilePart, totalSize int64) error {
remoteFileParts, err := d.getFilePart(id)
if err != nil {
return err
}
// check if the uploaded file parts match the remote file parts
if len(remoteFileParts) != len(uploadedFileParts) {
return fmt.Errorf("[Teldrive] file parts count mismatch: expected %d, got %d", len(uploadedFileParts), len(remoteFileParts))
}
formatParts := make([]base.Json, 0)
for _, p := range remoteFileParts {
formatParts = append(formatParts, base.Json{
"id": p.PartId,
"salt": p.Salt,
})
}
uploadBody := base.Json{
"name": name,
"type": "file",
"path": path,
"parts": formatParts,
"size": totalSize,
}
// create file here
if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
req.SetBody(uploadBody)
}, nil); err != nil {
return err
}
return nil
}
func (d *Teldrive) checkFilePartExist(fileId string, partId int) (FilePart, error) {
var uploadedParts []FilePart
var filePart FilePart
if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &uploadedParts); err != nil {
return filePart, err
}
for _, part := range uploadedParts {
if part.PartId == partId {
return part, nil
}
}
return filePart, nil
}
func (d *Teldrive) getFilePart(fileId string) ([]FilePart, error) {
var uploadedParts []FilePart
if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &uploadedParts); err != nil {
return nil, err
}
return uploadedParts, nil
}
func (d *Teldrive) singleUploadRequest(fileId string, callback base.ReqCallback, resp interface{}) error {
url := d.Address + "/api/uploads/" + fileId
client := resty.New().SetTimeout(0)
ctx := context.Background()
req := client.R().
SetContext(ctx)
req.SetHeader("Cookie", d.Cookie)
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.AddRetryCondition(func(r *resty.Response, err error) bool {
return false
})
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e ErrResp
req.SetError(&e)
_req, err := req.Execute(http.MethodPost, url)
if err != nil {
return err
}
if _req.IsError() {
return &e
}
return nil
}
func (d *Teldrive) doSingleUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
totalParts int, chunkSize int64, fileId string) error {
totalSize := file.GetSize()
var fileParts []FilePart
var uploaded int64 = 0
ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
if err != nil {
return err
}
for uploaded < totalSize {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
curChunkSize := min(totalSize-uploaded, chunkSize)
rd, err := ss.GetSectionReader(uploaded, curChunkSize)
if err != nil {
return err
}
filePart := &FilePart{}
if err := retry.Do(func() error {
if _, err := rd.Seek(0, io.SeekStart); err != nil {
return err
}
if err := d.singleUploadRequest(fileId, func(req *resty.Request) {
uploadParams := map[string]string{
"partName": func() string {
digits := len(fmt.Sprintf("%d", totalParts))
return file.GetName() + fmt.Sprintf(".%0*d", digits, 1)
}(),
"partNo": strconv.Itoa(1),
"fileName": file.GetName(),
}
req.SetQueryParams(uploadParams)
req.SetBody(driver.NewLimitedUploadStream(ctx, rd))
req.SetHeader("Content-Length", strconv.FormatInt(curChunkSize, 10))
}, filePart); err != nil {
return err
}
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second)); err != nil {
return err
}
if filePart.Name != "" {
fileParts = append(fileParts, *filePart)
uploaded += curChunkSize
up(float64(uploaded) / float64(totalSize))
ss.FreeSectionReader(rd)
}
}
return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
func (d *Teldrive) doMultiUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
maxRetried, totalParts int, chunkSize int64, fileId string) error {
concurrent := d.UploadConcurrency
g, ctx := errgroup.WithContext(ctx)
sem := semaphore.NewWeighted(int64(concurrent))
chunkChan := make(chan chunkTask, concurrent*2)
resultChan := make(chan FilePart, concurrent)
totalSize := file.GetSize()
ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
if err != nil {
return err
}
ssLock := sync.Mutex{}
g.Go(func() error {
defer close(chunkChan)
chunkIdx := 0
for chunkIdx < totalParts {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
offset := int64(chunkIdx) * chunkSize
curChunkSize := min(totalSize-offset, chunkSize)
ssLock.Lock()
reader, err := ss.GetSectionReader(offset, curChunkSize)
ssLock.Unlock()
if err != nil {
return err
}
task := chunkTask{
chunkIdx: chunkIdx + 1,
chunkSize: curChunkSize,
fileName: file.GetName(),
reader: reader,
ss: ss,
}
// FreeSectionReader will be called in d.uploadSingleChunk
select {
case chunkChan <- task:
chunkIdx++
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
for i := 0; i < int(concurrent); i++ {
g.Go(func() error {
for task := range chunkChan {
if err := sem.Acquire(ctx, 1); err != nil {
return err
}
filePart, err := d.uploadSingleChunk(ctx, fileId, task, totalParts, maxRetried)
sem.Release(1)
if err != nil {
return fmt.Errorf("upload chunk %d failed: %w", task.chunkIdx, err)
}
select {
case resultChan <- *filePart:
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
}
var fileParts []FilePart
var collectErr error
collectDone := make(chan struct{})
go func() {
defer close(collectDone)
fileParts = make([]FilePart, 0, totalParts)
done := make(chan error, 1)
go func() {
done <- g.Wait()
close(resultChan)
}()
for {
select {
case filePart, ok := <-resultChan:
if !ok {
collectErr = <-done
return
}
fileParts = append(fileParts, filePart)
case err := <-done:
collectErr = err
return
}
}
}()
<-collectDone
if collectErr != nil {
return fmt.Errorf("multi-upload failed: %w", collectErr)
}
sort.Slice(fileParts, func(i, j int) bool {
return fileParts[i].PartNo < fileParts[j].PartNo
})
return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
func (d *Teldrive) uploadSingleChunk(ctx context.Context, fileId string, task chunkTask, totalParts, maxRetried int) (*FilePart, error) {
filePart := &FilePart{}
retryCount := 0
defer task.ss.FreeSectionReader(task.reader)
for {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
if existingPart, err := d.checkFilePartExist(fileId, task.chunkIdx); err == nil && existingPart.Name != "" {
return &existingPart, nil
}
err := d.singleUploadRequest(fileId, func(req *resty.Request) {
uploadParams := map[string]string{
"partName": func() string {
digits := len(fmt.Sprintf("%d", totalParts))
return task.fileName + fmt.Sprintf(".%0*d", digits, task.chunkIdx)
}(),
"partNo": strconv.Itoa(task.chunkIdx),
"fileName": task.fileName,
}
req.SetQueryParams(uploadParams)
req.SetBody(driver.NewLimitedUploadStream(ctx, task.reader))
req.SetHeader("Content-Length", strconv.Itoa(int(task.chunkSize)))
}, filePart)
if err == nil {
return filePart, nil
}
if retryCount >= maxRetried {
return nil, fmt.Errorf("upload failed after %d retries: %w", maxRetried, err)
}
if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
continue
}
retryCount++
utils.Log.Errorf("[Teldrive] upload error: %v, retrying %d times", err, retryCount)
backoffDuration := time.Duration(retryCount*retryCount) * time.Second
if backoffDuration > 30*time.Second {
backoffDuration = 30 * time.Second
}
select {
case <-time.After(backoffDuration):
case <-ctx.Done():
return nil, ctx.Err()
}
}
}
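// ---------------------------------------------------------------------------
// Illustrative, self-contained sketch (not part of the diff above) of the
// retry backoff used in uploadSingleChunk: wait retryCount*retryCount seconds,
// capped at 30 seconds.
package main

import (
	"fmt"
	"time"
)

func backoff(retryCount int) time.Duration {
	d := time.Duration(retryCount*retryCount) * time.Second
	if d > 30*time.Second {
		d = 30 * time.Second
	}
	return d
}

func main() {
	for retry := 1; retry <= 7; retry++ {
		fmt.Printf("retry %d -> wait %s\n", retry, backoff(retry))
	}
	// retry 1 -> 1s, 2 -> 4s, 3 -> 9s, 4 -> 16s, 5 -> 25s, 6 and up -> 30s cap
}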

View File

@ -1,109 +0,0 @@
package teldrive
import (
"fmt"
"net/http"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/go-resty/resty/v2"
)
// do others that not defined in Driver interface
func (d *Teldrive) request(method string, pathname string, callback base.ReqCallback, resp interface{}) error {
url := d.Address + pathname
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e ErrResp
req.SetError(&e)
_req, err := req.Execute(method, url)
if err != nil {
return err
}
if _req.IsError() {
return &e
}
return nil
}
func (d *Teldrive) getFile(path, name string, isFolder bool) (model.Obj, error) {
resp := &ListResp{}
err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"path": path,
"name": name,
"type": func() string {
if isFolder {
return "folder"
}
return "file"
}(),
"operation": "find",
})
}, resp)
if err != nil {
return nil, err
}
if len(resp.Items) == 0 {
return nil, fmt.Errorf("file not found: %s/%s", path, name)
}
obj := resp.Items[0]
return &model.Object{
ID: obj.ID,
Name: obj.Name,
Size: obj.Size,
IsFolder: obj.Type == "folder",
}, err
}
func (err *ErrResp) Error() string {
if err == nil {
return ""
}
return fmt.Sprintf("[Teldrive] message:%s Error code:%d", err.Message, err.Code)
}
func (d *Teldrive) createShareFile(fileId string) error {
var errResp ErrResp
if err := d.request(http.MethodPost, "/api/files/{id}/share", func(req *resty.Request) {
req.SetPathParam("id", fileId)
req.SetBody(base.Json{
"expiresAt": getDateTime(),
})
}, &errResp); err != nil {
return err
}
if errResp.Message != "" {
return &errResp
}
return nil
}
func (d *Teldrive) getShareFileById(fileId string) (*ShareObj, error) {
var shareObj ShareObj
if err := d.request(http.MethodGet, "/api/files/{id}/share", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &shareObj); err != nil {
return nil, err
}
return &shareObj, nil
}
func getDateTime() string {
now := time.Now().UTC()
formattedWithMs := now.Add(time.Hour * 1).Format("2006-01-02T15:04:05.000Z")
return formattedWithMs
}

View File

@ -132,7 +132,7 @@ func (d *Terabox) Remove(ctx context.Context, obj model.Obj) error {
 func (d *Terabox) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
 	resp, err := base.RestyClient.R().
 		SetContext(ctx).
-		Get("https://" + d.url_domain_prefix + "-data.terabox.com/rest/2.0/pcs/file?method=locateupload")
+		Get("https://d.terabox.com/rest/2.0/pcs/file?method=locateupload")
 	if err != nil {
 		return err
 	}

View File

@ -36,6 +36,5 @@ func (d *Wopan) getSpaceType() string {
 
 // 20230607214351
 func getTime(str string) (time.Time, error) {
-	loc := time.FixedZone("UTC+8", 8*60*60)
-	return time.ParseInLocation("20060102150405", str, loc)
+	return time.Parse("20060102150405", str)
 }

View File

@ -5,35 +5,26 @@ umask ${UMASK}
if [ "$1" = "version" ]; then if [ "$1" = "version" ]; then
./openlist version ./openlist version
else else
# Check file of /opt/openlist/data permissions for current user # Define the target directory path for openlist service
# 检查当前用户是否有当前目录的写和执行权限 OPENLIST_DIR="/opt/service/start/openlist"
if [ -d ./data ]; then if [ ! -d "$OPENLIST_DIR" ]; then
if ! [ -w ./data ] || ! [ -x ./data ]; then cp -r /opt/service/stop/openlist "$OPENLIST_DIR" 2>/dev/null
cat <<EOF
Error: Current user does not have write and/or execute permissions for the ./data directory: $(pwd)/data
Please visit https://doc.oplist.org/guide/installation/docker#for-version-after-v4-1-0 for more information.
错误:当前用户没有 ./data 目录($(pwd)/data的写和/或执行权限。
请访问 https://doc.oplist.org/guide/installation/docker#v4-1-0-%E4%BB%A5%E5%90%8E%E7%89%88%E6%9C%AC 获取更多信息。
Exiting...
EOF
exit 1
fi fi
fi
# Define the target directory path for aria2 service # Define the target directory path for aria2 service
ARIA2_DIR="/opt/service/start/aria2" ARIA2_DIR="/opt/service/start/aria2"
if [ "$RUN_ARIA2" = "true" ]; then if [ "$RUN_ARIA2" = "true" ]; then
# If aria2 should run and target directory doesn't exist, copy it # If aria2 should run and target directory doesn't exist, copy it
if [ ! -d "$ARIA2_DIR" ]; then if [ ! -d "$ARIA2_DIR" ]; then
mkdir -p "$ARIA2_DIR" mkdir -p "$ARIA2_DIR"
cp -r /opt/service/stop/aria2/* "$ARIA2_DIR" 2>/dev/null cp -r /opt/service/stop/aria2/* "$ARIA2_DIR" 2>/dev/null
fi fi
runsvdir /opt/service/start &
else else
# If aria2 should NOT run and target directory exists, remove it # If aria2 should NOT run and target directory exists, remove it
if [ -d "$ARIA2_DIR" ]; then if [ -d "$ARIA2_DIR" ]; then
rm -rf "$ARIA2_DIR" rm -rf "$ARIA2_DIR"
fi fi
fi fi
exec ./openlist server --no-prefix
exec runsvdir /opt/service/start
fi fi
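
The entrypoint above refuses to start when the runtime user cannot write to `./data`. For reference, a rough Go equivalent of that probe (not part of the project; it simply attempts to create and remove a file in the directory instead of inspecting permission bits):

```go
package main

import (
	"fmt"
	"os"
)

// canWrite reports whether the current user can create files inside dir.
func canWrite(dir string) bool {
	f, err := os.CreateTemp(dir, ".perm-probe-*")
	if err != nil {
		return false
	}
	name := f.Name()
	f.Close()
	os.Remove(name)
	return true
}

func main() {
	if !canWrite("./data") {
		fmt.Fprintln(os.Stderr, "current user cannot write to ./data")
		os.Exit(1)
	}
}
```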

go.mod (4 changed lines)

@ -11,7 +11,7 @@ require (
github.com/OpenListTeam/times v0.1.0 github.com/OpenListTeam/times v0.1.0
github.com/OpenListTeam/wopan-sdk-go v0.1.5 github.com/OpenListTeam/wopan-sdk-go v0.1.5
github.com/ProtonMail/go-crypto v1.3.0 github.com/ProtonMail/go-crypto v1.3.0
github.com/SheltonZhu/115driver v1.1.1 github.com/SheltonZhu/115driver v1.1.0
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
github.com/avast/retry-go v3.0.0+incompatible github.com/avast/retry-go v3.0.0+incompatible
github.com/aws/aws-sdk-go v1.55.7 github.com/aws/aws-sdk-go v1.55.7
@ -35,6 +35,7 @@ require (
github.com/go-resty/resty/v2 v2.16.5 github.com/go-resty/resty/v2 v2.16.5
github.com/go-webauthn/webauthn v0.13.4 github.com/go-webauthn/webauthn v0.13.4
github.com/golang-jwt/jwt/v4 v4.5.2 github.com/golang-jwt/jwt/v4 v4.5.2
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/google/uuid v1.6.0 github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3 github.com/gorilla/websocket v1.5.3
github.com/hekmon/transmissionrpc/v3 v3.0.0 github.com/hekmon/transmissionrpc/v3 v3.0.0
@ -178,7 +179,6 @@ require (
github.com/go-sql-driver/mysql v1.7.0 // indirect github.com/go-sql-driver/mysql v1.7.0 // indirect
github.com/go-webauthn/x v0.1.23 // indirect github.com/go-webauthn/x v0.1.23 // indirect
github.com/goccy/go-json v0.10.5 // indirect github.com/goccy/go-json v0.10.5 // indirect
github.com/golang-jwt/jwt/v5 v5.2.3 // indirect
github.com/golang/protobuf v1.5.4 // indirect github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect github.com/golang/snappy v0.0.4 // indirect
github.com/google/go-tpm v0.9.5 // indirect github.com/google/go-tpm v0.9.5 // indirect
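
The go.mod hunk promotes `github.com/golang-jwt/jwt/v5` to a direct dependency (at v5.3.0), matching the commit message. A hedged sketch of issuing and verifying a token against the v5 import path; the subject, lifetime, and secret are invented, and the calls used (`NewWithClaims`, `ParseWithClaims`, `RegisteredClaims`, `WithValidMethods`) exist under both the v4 and v5 module paths, so this style of usage migrates largely by changing the import:

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// hypothetical secret; real callers would load it from configuration.
var secret = []byte("change-me")

func issue(subject string) (string, error) {
	claims := jwt.RegisteredClaims{
		Subject:   subject,
		IssuedAt:  jwt.NewNumericDate(time.Now()),
		ExpiresAt: jwt.NewNumericDate(time.Now().Add(48 * time.Hour)),
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(secret)
}

func verify(tokenString string) (*jwt.RegisteredClaims, error) {
	claims := &jwt.RegisteredClaims{}
	_, err := jwt.ParseWithClaims(tokenString, claims,
		func(t *jwt.Token) (interface{}, error) { return secret, nil },
		jwt.WithValidMethods([]string{"HS256"}))
	if err != nil {
		return nil, err
	}
	return claims, nil
}

func main() {
	tok, _ := issue("admin")
	claims, err := verify(tok)
	fmt.Println(claims.Subject, err) // admin <nil>
}
```

The larger v5 difference is in custom-claims validation: v5 validates through the optional `ClaimsValidator` interface rather than a `Valid()` method on the claims type, which is where a real migration needs the most attention.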

go.sum (6 changed lines)

@ -59,8 +59,8 @@ github.com/RoaringBitmap/roaring/v2 v2.4.5 h1:uGrrMreGjvAtTBobc0g5IrW1D5ldxDQYe2
github.com/RoaringBitmap/roaring/v2 v2.4.5/go.mod h1:FiJcsfkGje/nZBZgCu0ZxCPOKD/hVXDS2dXi7/eUFE0= github.com/RoaringBitmap/roaring/v2 v2.4.5/go.mod h1:FiJcsfkGje/nZBZgCu0ZxCPOKD/hVXDS2dXi7/eUFE0=
github.com/STARRY-S/zip v0.2.1 h1:pWBd4tuSGm3wtpoqRZZ2EAwOmcHK6XFf7bU9qcJXyFg= github.com/STARRY-S/zip v0.2.1 h1:pWBd4tuSGm3wtpoqRZZ2EAwOmcHK6XFf7bU9qcJXyFg=
github.com/STARRY-S/zip v0.2.1/go.mod h1:xNvshLODWtC4EJ702g7cTYn13G53o1+X9BWnPFpcWV4= github.com/STARRY-S/zip v0.2.1/go.mod h1:xNvshLODWtC4EJ702g7cTYn13G53o1+X9BWnPFpcWV4=
github.com/SheltonZhu/115driver v1.1.1 h1:9EMhe2ZJflGiAaZbYInw2jqxTcqZNF+DtVDsEy70aFU= github.com/SheltonZhu/115driver v1.1.0 h1:kA8Vtu5JVWqqJFiTF06+HDb9zVEO6ZSdyjV5HsGx7Wg=
github.com/SheltonZhu/115driver v1.1.1/go.mod h1:rKvNd4Y4OkXv1TMbr/SKjGdcvMQxh6AW5Tw9w0CJb7E= github.com/SheltonZhu/115driver v1.1.0/go.mod h1:rKvNd4Y4OkXv1TMbr/SKjGdcvMQxh6AW5Tw9w0CJb7E=
github.com/abbot/go-http-auth v0.4.0 h1:QjmvZ5gSC7jm3Zg54DqWE/T5m1t2AfDu6QlXJT0EVT0= github.com/abbot/go-http-auth v0.4.0 h1:QjmvZ5gSC7jm3Zg54DqWE/T5m1t2AfDu6QlXJT0EVT0=
github.com/abbot/go-http-auth v0.4.0/go.mod h1:Cz6ARTIzApMJDzh5bRMSUou6UMSp0IEXg9km/ci7TJM= github.com/abbot/go-http-auth v0.4.0/go.mod h1:Cz6ARTIzApMJDzh5bRMSUou6UMSp0IEXg9km/ci7TJM=
github.com/aead/ecdh v0.2.0 h1:pYop54xVaq/CEREFEcukHRZfTdjiWvYIsZDXXrBapQQ= github.com/aead/ecdh v0.2.0 h1:pYop54xVaq/CEREFEcukHRZfTdjiWvYIsZDXXrBapQQ=
@ -306,6 +306,8 @@ github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXe
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.3 h1:kkGXqQOBSDDWRhWNXTFpqGSCMyh/PLnqUvMGJPDJDs0= github.com/golang-jwt/jwt/v5 v5.2.3 h1:kkGXqQOBSDDWRhWNXTFpqGSCMyh/PLnqUvMGJPDJDs0=
github.com/golang-jwt/jwt/v5 v5.2.3/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= github.com/golang-jwt/jwt/v5 v5.2.3/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=


@ -77,10 +77,6 @@ func InitConfig() {
log.Fatalf("update config struct error: %+v", err) log.Fatalf("update config struct error: %+v", err)
} }
} }
if !conf.Conf.Force {
confFromEnv()
}
if conf.Conf.MaxConcurrency > 0 { if conf.Conf.MaxConcurrency > 0 {
net.DefaultConcurrencyLimit = &net.ConcurrencyLimit{Limit: conf.Conf.MaxConcurrency} net.DefaultConcurrencyLimit = &net.ConcurrencyLimit{Limit: conf.Conf.MaxConcurrency}
} }
@ -96,31 +92,25 @@ func InitConfig() {
conf.MaxBufferLimit = conf.Conf.MaxBufferLimit * utils.MB conf.MaxBufferLimit = conf.Conf.MaxBufferLimit * utils.MB
} }
log.Infof("max buffer limit: %dMB", conf.MaxBufferLimit/utils.MB) log.Infof("max buffer limit: %dMB", conf.MaxBufferLimit/utils.MB)
if conf.Conf.MmapThreshold > 0 { if !conf.Conf.Force {
conf.MmapThreshold = conf.Conf.MmapThreshold * utils.MB confFromEnv()
} else {
conf.MmapThreshold = 0
} }
log.Infof("mmap threshold: %dMB", conf.Conf.MmapThreshold)
if len(conf.Conf.Log.Filter.Filters) == 0 { if len(conf.Conf.Log.Filter.Filters) == 0 {
conf.Conf.Log.Filter.Enable = false conf.Conf.Log.Filter.Enable = false
} }
// convert abs path // convert abs path
convertAbsPath := func(path *string) { convertAbsPath := func(path *string) {
if *path != "" && !filepath.IsAbs(*path) { if !filepath.IsAbs(*path) {
*path = filepath.Join(pwd, *path) *path = filepath.Join(pwd, *path)
} }
} }
convertAbsPath(&conf.Conf.Database.DBFile)
convertAbsPath(&conf.Conf.Scheme.CertFile)
convertAbsPath(&conf.Conf.Scheme.KeyFile)
convertAbsPath(&conf.Conf.Scheme.UnixFile)
convertAbsPath(&conf.Conf.Log.Name)
convertAbsPath(&conf.Conf.TempDir) convertAbsPath(&conf.Conf.TempDir)
convertAbsPath(&conf.Conf.BleveDir) convertAbsPath(&conf.Conf.BleveDir)
convertAbsPath(&conf.Conf.Log.Name)
convertAbsPath(&conf.Conf.Database.DBFile)
if conf.Conf.DistDir != "" {
convertAbsPath(&conf.Conf.DistDir) convertAbsPath(&conf.Conf.DistDir)
}
err := os.MkdirAll(conf.Conf.TempDir, 0o777) err := os.MkdirAll(conf.Conf.TempDir, 0o777)
if err != nil { if err != nil {
log.Fatalf("create temp dir error: %+v", err) log.Fatalf("create temp dir error: %+v", err)
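
Both sides of the hunk above normalize configured paths against the working directory before use; the left column additionally skips empty values so an unset path does not silently become the working directory itself. A standalone sketch of that helper (the field values are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	pwd, _ := os.Getwd()

	// Joins a relative path onto the working directory, leaving absolute
	// paths and empty (unset) values untouched.
	convertAbsPath := func(path *string) {
		if *path != "" && !filepath.IsAbs(*path) {
			*path = filepath.Join(pwd, *path)
		}
	}

	dbFile := "data/data.db" // hypothetical config value
	certFile := ""           // unset: stays empty instead of becoming pwd itself
	convertAbsPath(&dbFile)
	convertAbsPath(&certFile)
	fmt.Println(dbFile, certFile)
}
```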


@ -111,7 +111,6 @@ func InitialSettings() []model.SettingItem {
{Key: conf.Favicon, Value: "https://res.oplist.org/logo/logo.svg", MigrationValue: "https://cdn.oplist.org/gh/OpenListTeam/Logo@main/logo.svg", Type: conf.TypeString, Group: model.STYLE}, {Key: conf.Favicon, Value: "https://res.oplist.org/logo/logo.svg", MigrationValue: "https://cdn.oplist.org/gh/OpenListTeam/Logo@main/logo.svg", Type: conf.TypeString, Group: model.STYLE},
{Key: conf.MainColor, Value: "#1890ff", Type: conf.TypeString, Group: model.STYLE}, {Key: conf.MainColor, Value: "#1890ff", Type: conf.TypeString, Group: model.STYLE},
{Key: "home_icon", Value: "🏠", Type: conf.TypeString, Group: model.STYLE}, {Key: "home_icon", Value: "🏠", Type: conf.TypeString, Group: model.STYLE},
{Key: "share_icon", Value: "🎁", Type: conf.TypeString, Group: model.STYLE},
{Key: "home_container", Value: "max_980px", Type: conf.TypeSelect, Options: "max_980px,hope_container", Group: model.STYLE}, {Key: "home_container", Value: "max_980px", Type: conf.TypeSelect, Options: "max_980px,hope_container", Group: model.STYLE},
{Key: "settings_layout", Value: "list", Type: conf.TypeSelect, Options: "list,responsive", Group: model.STYLE}, {Key: "settings_layout", Value: "list", Type: conf.TypeSelect, Options: "list,responsive", Group: model.STYLE},
// preview settings // preview settings
@ -162,12 +161,8 @@ func InitialSettings() []model.SettingItem {
{Key: conf.OcrApi, Value: "https://openlistteam-ocr-api-server.hf.space/ocr/file/json", MigrationValue: "https://api.example.com/ocr/file/json", Type: conf.TypeString, Group: model.GLOBAL}, // TODO: This can be replace by a community-hosted endpoint, see https://github.com/OpenListTeam/ocr_api_server {Key: conf.OcrApi, Value: "https://openlistteam-ocr-api-server.hf.space/ocr/file/json", MigrationValue: "https://api.example.com/ocr/file/json", Type: conf.TypeString, Group: model.GLOBAL}, // TODO: This can be replace by a community-hosted endpoint, see https://github.com/OpenListTeam/ocr_api_server
{Key: conf.FilenameCharMapping, Value: `{"/": "|"}`, Type: conf.TypeText, Group: model.GLOBAL}, {Key: conf.FilenameCharMapping, Value: `{"/": "|"}`, Type: conf.TypeText, Group: model.GLOBAL},
{Key: conf.ForwardDirectLinkParams, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL}, {Key: conf.ForwardDirectLinkParams, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL},
{Key: conf.IgnoreDirectLinkParams, Value: "sign,openlist_ts,raw", Type: conf.TypeString, Group: model.GLOBAL}, {Key: conf.IgnoreDirectLinkParams, Value: "sign,openlist_ts", Type: conf.TypeString, Group: model.GLOBAL},
{Key: conf.WebauthnLoginEnabled, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC}, {Key: conf.WebauthnLoginEnabled, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
{Key: conf.SharePreview, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
{Key: conf.ShareArchivePreview, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
{Key: conf.ShareForceProxy, Value: "true", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PRIVATE},
{Key: conf.ShareSummaryContent, Value: "@{{creator}} shared {{#each files}}{{#if @first}}\"{{filename this}}\"{{/if}}{{#if @last}}{{#unless (eq @index 0)}} and {{@index}} more files{{/unless}}{{/if}}{{/each}} from {{site_title}}: {{base_url}}/@s/{{id}}{{#if pwd}} , the share code is {{pwd}}{{/if}}{{#if expires}}, please access before {{dateLocaleString expires}}.{{/if}}", Type: conf.TypeText, Group: model.GLOBAL, Flag: model.PUBLIC},
// single settings // single settings
{Key: conf.Token, Value: token, Type: conf.TypeString, Group: model.SINGLE, Flag: model.PRIVATE}, {Key: conf.Token, Value: token, Type: conf.TypeString, Group: model.SINGLE, Flag: model.PRIVATE},


@ -33,8 +33,8 @@ func initUser() {
		Role:     model.ADMIN,
		BasePath: "/",
		Authn:    "[]",
-		// 0(can see hidden) - 8(webdav read) & 12(can read archives) - 14(can share)
-		Permission: 0x71FF,
+		// 0(can see hidden) - 7(can remove) & 12(can read archives) - 13(can decompress archives)
+		Permission: 0x31FF,
	}
	if err := op.CreateUser(admin); err != nil {
		panic(err)
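
The two admin permission masks in this hunk are plain bit sets: 0x71FF covers bits 0-8 plus 12-14, while 0x31FF stops at bit 13 and therefore lacks the bit-14 "can share" flag that `CanShare` tests further down in the diff. A short worked check (the constants are copied from the hunk, the helper name is made up):

```go
package main

import "fmt"

func hasBit(perm int32, bit uint) bool {
	return (perm>>bit)&1 == 1
}

func main() {
	const withShare = 0x71FF    // 0b0111_0001_1111_1111: bits 0-8, 12, 13, 14
	const withoutShare = 0x31FF // 0b0011_0001_1111_1111: bits 0-8, 12, 13

	fmt.Println(hasBit(withShare, 14))    // true: bit 14 ("can share") is set
	fmt.Println(hasBit(withoutShare, 14)) // false
	fmt.Println(hasBit(withoutShare, 13)) // true: "can decompress" is still set
}
```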


@ -120,7 +120,6 @@ type Config struct {
Log LogConfig `json:"log" envPrefix:"LOG_"` Log LogConfig `json:"log" envPrefix:"LOG_"`
DelayedStart int `json:"delayed_start" env:"DELAYED_START"` DelayedStart int `json:"delayed_start" env:"DELAYED_START"`
MaxBufferLimit int `json:"max_buffer_limitMB" env:"MAX_BUFFER_LIMIT_MB"` MaxBufferLimit int `json:"max_buffer_limitMB" env:"MAX_BUFFER_LIMIT_MB"`
MmapThreshold int `json:"mmap_thresholdMB" env:"MMAP_THRESHOLD_MB"`
MaxConnections int `json:"max_connections" env:"MAX_CONNECTIONS"` MaxConnections int `json:"max_connections" env:"MAX_CONNECTIONS"`
MaxConcurrency int `json:"max_concurrency" env:"MAX_CONCURRENCY"` MaxConcurrency int `json:"max_concurrency" env:"MAX_CONCURRENCY"`
TlsInsecureSkipVerify bool `json:"tls_insecure_skip_verify" env:"TLS_INSECURE_SKIP_VERIFY"` TlsInsecureSkipVerify bool `json:"tls_insecure_skip_verify" env:"TLS_INSECURE_SKIP_VERIFY"`
@ -177,7 +176,6 @@ func DefaultConfig(dataDir string) *Config {
}, },
}, },
MaxBufferLimit: -1, MaxBufferLimit: -1,
MmapThreshold: 4,
MaxConnections: 0, MaxConnections: 0,
MaxConcurrency: 64, MaxConcurrency: 64,
TlsInsecureSkipVerify: true, TlsInsecureSkipVerify: true,


@ -33,7 +33,6 @@ const (
PreviewArchivesByDefault = "preview_archives_by_default" PreviewArchivesByDefault = "preview_archives_by_default"
ReadMeAutoRender = "readme_autorender" ReadMeAutoRender = "readme_autorender"
FilterReadMeScripts = "filter_readme_scripts" FilterReadMeScripts = "filter_readme_scripts"
// global // global
HideFiles = "hide_files" HideFiles = "hide_files"
CustomizeHead = "customize_head" CustomizeHead = "customize_head"
@ -46,10 +45,6 @@ const (
ForwardDirectLinkParams = "forward_direct_link_params" ForwardDirectLinkParams = "forward_direct_link_params"
IgnoreDirectLinkParams = "ignore_direct_link_params" IgnoreDirectLinkParams = "ignore_direct_link_params"
WebauthnLoginEnabled = "webauthn_login_enabled" WebauthnLoginEnabled = "webauthn_login_enabled"
SharePreview = "share_preview"
ShareArchivePreview = "share_archive_preview"
ShareForceProxy = "share_force_proxy"
ShareSummaryContent = "share_summary_content"
// index // index
SearchIndex = "search_index" SearchIndex = "search_index"
@ -172,5 +167,4 @@ const (
RequestHeaderKey RequestHeaderKey
UserAgentKey UserAgentKey
PathKey PathKey
SharingIDKey
) )


@ -25,10 +25,7 @@ var PrivacyReg []*regexp.Regexp
var ( var (
// StoragesLoaded loaded success if empty // StoragesLoaded loaded success if empty
StoragesLoaded = false StoragesLoaded = false
// 单个Buffer最大限制
MaxBufferLimit = 16 * 1024 * 1024 MaxBufferLimit = 16 * 1024 * 1024
// 超过该阈值的Buffer将使用 mmap 分配,可主动释放内存
MmapThreshold = 4 * 1024 * 1024
) )
var ( var (
RawIndexHtml string RawIndexHtml string


@ -12,7 +12,7 @@ var db *gorm.DB
func Init(d *gorm.DB) {
	db = d
-	err := AutoMigrate(new(model.Storage), new(model.User), new(model.Meta), new(model.SettingItem), new(model.SearchNode), new(model.TaskItem), new(model.SSHPublicKey), new(model.SharingDB))
+	err := AutoMigrate(new(model.Storage), new(model.User), new(model.Meta), new(model.SettingItem), new(model.SearchNode), new(model.TaskItem), new(model.SSHPublicKey))
	if err != nil {
		log.Fatalf("failed migrate database: %s", err.Error())
	}


@ -1,62 +0,0 @@
package db
import (
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
"github.com/pkg/errors"
)
func GetSharingById(id string) (*model.SharingDB, error) {
s := model.SharingDB{ID: id}
if err := db.Where(s).First(&s).Error; err != nil {
return nil, errors.Wrapf(err, "failed get sharing")
}
return &s, nil
}
func GetSharings(pageIndex, pageSize int) (sharings []model.SharingDB, count int64, err error) {
sharingDB := db.Model(&model.SharingDB{})
if err := sharingDB.Count(&count).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get sharings count")
}
if err := sharingDB.Order(columnName("id")).Offset((pageIndex - 1) * pageSize).Limit(pageSize).Find(&sharings).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get find sharings")
}
return sharings, count, nil
}
func GetSharingsByCreatorId(creator uint, pageIndex, pageSize int) (sharings []model.SharingDB, count int64, err error) {
sharingDB := db.Model(&model.SharingDB{})
cond := model.SharingDB{CreatorId: creator}
if err := sharingDB.Where(cond).Count(&count).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get sharings count")
}
if err := sharingDB.Where(cond).Order(columnName("id")).Offset((pageIndex - 1) * pageSize).Limit(pageSize).Find(&sharings).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get find sharings")
}
return sharings, count, nil
}
func CreateSharing(s *model.SharingDB) (string, error) {
id := random.String(8)
for len(id) < 12 {
old := model.SharingDB{
ID: id,
}
if err := db.Where(old).First(&old).Error; err != nil {
s.ID = id
return id, errors.WithStack(db.Create(s).Error)
}
id += random.String(1)
}
return "", errors.New("failed find valid id")
}
func UpdateSharing(s *model.SharingDB) error {
return errors.WithStack(db.Save(s).Error)
}
func DeleteSharingById(id string) error {
s := model.SharingDB{ID: id}
return errors.WithStack(db.Where(s).Delete(&s).Error)
}
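
`CreateSharing` above draws an 8-character random ID and, on every collision, grows it by one more random character before giving up at 12. The same strategy in isolation, with a made-up `exists` lookup standing in for the database query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

const chars = "abcdefghijklmnopqrstuvwxyz0123456789"

func randString(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = chars[rand.Intn(len(chars))]
	}
	return string(b)
}

// newID mirrors the grow-on-collision strategy: try an 8-char ID, and on
// each collision extend it by one random character, up to 12 characters.
func newID(exists func(string) bool) (string, error) {
	id := randString(8)
	for len(id) < 12 {
		if !exists(id) {
			return id, nil
		}
		id += randString(1)
	}
	return "", errors.New("failed to find a free id")
}

func main() {
	taken := map[string]bool{} // hypothetical stand-in for the sharings table
	id, err := newID(func(s string) bool { return taken[s] })
	fmt.Println(id, err)
}
```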


@ -23,10 +23,6 @@ var (
	UnknownArchiveFormat = errors.New("unknown archive format")
	WrongArchivePassword = errors.New("wrong archive password")
	DriverExtractNotSupported = errors.New("driver extraction not supported")
-	WrongShareCode = errors.New("wrong share code")
-	InvalidSharing = errors.New("invalid sharing")
-	SharingNotFound = errors.New("sharing not found")
)

// NewErr wrap constant error with an extra message


@ -168,7 +168,7 @@ func GetStorage(path string, args *GetStoragesArgs) (driver.Driver, error) {
func Other(ctx context.Context, args model.FsOtherArgs) (interface{}, error) {
	res, err := other(ctx, args)
	if err != nil {
-		log.Errorf("failed get other %s: %+v", args.Path, err)
+		log.Errorf("failed remove %s: %+v", args.Path, err)
	}
	return res, err
}


@ -77,26 +77,6 @@ type ArchiveDecompressArgs struct {
PutIntoNewDir bool PutIntoNewDir bool
} }
type SharingListArgs struct {
Refresh bool
Pwd string
}
type SharingArchiveMetaArgs struct {
ArchiveMetaArgs
Pwd string
}
type SharingArchiveListArgs struct {
ArchiveListArgs
Pwd string
}
type SharingLinkArgs struct {
Pwd string
LinkArgs
}
type RangeReaderIF interface { type RangeReaderIF interface {
RangeRead(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) RangeRead(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error)
} }


@ -1,47 +0,0 @@
package model
import "time"
type SharingDB struct {
ID string `json:"id" gorm:"type:char(12);primaryKey"`
FilesRaw string `json:"-" gorm:"type:text"`
Expires *time.Time `json:"expires"`
Pwd string `json:"pwd"`
Accessed int `json:"accessed"`
MaxAccessed int `json:"max_accessed"`
CreatorId uint `json:"-"`
Disabled bool `json:"disabled"`
Remark string `json:"remark"`
Readme string `json:"readme" gorm:"type:text"`
Header string `json:"header" gorm:"type:text"`
Sort
}
type Sharing struct {
*SharingDB
Files []string `json:"files"`
Creator *User `json:"-"`
}
func (s *Sharing) Valid() bool {
if s.Disabled {
return false
}
if s.MaxAccessed > 0 && s.Accessed >= s.MaxAccessed {
return false
}
if len(s.Files) == 0 {
return false
}
if !s.Creator.CanShare() {
return false
}
if s.Expires != nil && !s.Expires.IsZero() && s.Expires.Before(time.Now()) {
return false
}
return true
}
func (s *Sharing) Verify(pwd string) bool {
return s.Pwd == "" || s.Pwd == pwd
}


@ -54,7 +54,6 @@ type User struct {
	// 11: ftp/sftp write
	// 12: can read archives
	// 13: can decompress archives
-	// 14: can share
	Permission int32 `json:"permission"`
	OtpSecret string `json:"-"`
	SsoID string `json:"sso_id"` // unique by sso platform
@ -146,10 +145,6 @@ func (u *User) CanDecompress() bool {
	return (u.Permission>>13)&1 == 1
}

-func (u *User) CanShare() bool {
-	return (u.Permission>>14)&1 == 1
-}
-
func (u *User) JoinPath(reqPath string) (string, error) {
	return utils.JoinBasePath(u.BasePath, reqPath)
}


@ -1,6 +1,7 @@
package net package net
import ( import (
"bytes"
"context" "context"
"errors" "errors"
"fmt" "fmt"
@ -14,7 +15,6 @@ import (
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/rclone/rclone/lib/mmap"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/aws/aws-sdk-go/aws/awsutil" "github.com/aws/aws-sdk-go/aws/awsutil"
@ -255,10 +255,7 @@ func (d *downloader) sendChunkTask(newConcurrency bool) error {
finalSize += firstSize - minSize finalSize += firstSize - minSize
} }
} }
err := buf.Reset(int(finalSize)) buf.Reset(int(finalSize))
if err != nil {
return err
}
ch := chunk{ ch := chunk{
start: d.pos, start: d.pos,
size: finalSize, size: finalSize,
@ -648,13 +645,11 @@ func (mr MultiReadCloser) Close() error {
} }
type Buf struct { type Buf struct {
buffer *bytes.Buffer
size int //expected size size int //expected size
ctx context.Context ctx context.Context
offR int off int
offW int
rw sync.Mutex rw sync.Mutex
buf []byte
mmap bool
readSignal chan struct{} readSignal chan struct{}
readPending bool readPending bool
@ -663,62 +658,54 @@ type Buf struct {
// NewBuf is a buffer that can have 1 read & 1 write at the same time. // NewBuf is a buffer that can have 1 read & 1 write at the same time.
// when read is faster write, immediately feed data to read after written // when read is faster write, immediately feed data to read after written
func NewBuf(ctx context.Context, maxSize int) *Buf { func NewBuf(ctx context.Context, maxSize int) *Buf {
br := &Buf{ return &Buf{
ctx: ctx, ctx: ctx,
buffer: bytes.NewBuffer(make([]byte, 0, maxSize)),
size: maxSize, size: maxSize,
readSignal: make(chan struct{}, 1), readSignal: make(chan struct{}, 1),
} }
if conf.MmapThreshold > 0 && maxSize >= conf.MmapThreshold {
m, err := mmap.Alloc(maxSize)
if err == nil {
br.buf = m
br.mmap = true
return br
} }
} func (br *Buf) Reset(size int) {
br.buf = make([]byte, maxSize)
return br
}
func (br *Buf) Reset(size int) error {
br.rw.Lock() br.rw.Lock()
defer br.rw.Unlock() defer br.rw.Unlock()
if br.buf == nil { if br.buffer == nil {
return io.ErrClosedPipe return
}
if size > cap(br.buf) {
return fmt.Errorf("reset size %d exceeds max size %d", size, cap(br.buf))
} }
br.buffer.Reset()
br.size = size br.size = size
br.offR = 0 br.off = 0
br.offW = 0
return nil
} }
func (br *Buf) Read(p []byte) (int, error) { func (br *Buf) Read(p []byte) (n int, err error) {
if err := br.ctx.Err(); err != nil { if err := br.ctx.Err(); err != nil {
return 0, err return 0, err
} }
if len(p) == 0 { if len(p) == 0 {
return 0, nil return 0, nil
} }
if br.offR >= br.size { if br.off >= br.size {
return 0, io.EOF return 0, io.EOF
} }
for { for {
br.rw.Lock() br.rw.Lock()
if br.buf == nil { if br.buffer != nil {
br.rw.Unlock() n, err = br.buffer.Read(p)
return 0, io.ErrClosedPipe } else {
err = io.ErrClosedPipe
} }
if err != nil && err != io.EOF {
if br.offW < br.offR {
br.rw.Unlock() br.rw.Unlock()
return 0, io.ErrUnexpectedEOF return
}
if n > 0 {
br.off += n
br.rw.Unlock()
return n, nil
} }
if br.offW == br.offR {
br.readPending = true br.readPending = true
br.rw.Unlock() br.rw.Unlock()
// n==0, err==io.EOF
select { select {
case <-br.ctx.Done(): case <-br.ctx.Done():
return 0, br.ctx.Err() return 0, br.ctx.Err()
@ -729,34 +716,18 @@ func (br *Buf) Read(p []byte) (int, error) {
continue continue
} }
} }
n := copy(p, br.buf[br.offR:br.offW])
br.offR += n
br.rw.Unlock()
if n < len(p) && br.offR >= br.size {
return n, io.EOF
}
return n, nil
}
} }
func (br *Buf) Write(p []byte) (int, error) { func (br *Buf) Write(p []byte) (n int, err error) {
if err := br.ctx.Err(); err != nil { if err := br.ctx.Err(); err != nil {
return 0, err return 0, err
} }
if len(p) == 0 {
return 0, nil
}
br.rw.Lock() br.rw.Lock()
defer br.rw.Unlock() defer br.rw.Unlock()
if br.buf == nil { if br.buffer == nil {
return 0, io.ErrClosedPipe return 0, io.ErrClosedPipe
} }
if br.offW >= br.size { n, err = br.buffer.Write(p)
return 0, io.ErrShortWrite
}
n := copy(br.buf[br.offW:], p[:min(br.size-br.offW, len(p))])
br.offW += n
if br.readPending { if br.readPending {
br.readPending = false br.readPending = false
select { select {
@ -764,21 +735,12 @@ func (br *Buf) Write(p []byte) (int, error) {
default: default:
} }
} }
if n < len(p) { return
return n, io.ErrShortWrite
}
return n, nil
} }
func (br *Buf) Close() error { func (br *Buf) Close() {
br.rw.Lock() br.rw.Lock()
defer br.rw.Unlock() defer br.rw.Unlock()
var err error br.buffer = nil
if br.mmap {
err = mmap.Free(br.buf)
br.mmap = false
}
br.buf = nil
close(br.readSignal) close(br.readSignal)
return err
} }
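
One side of the `Buf` hunk above backs large buffers with `github.com/rclone/rclone/lib/mmap` once they cross a configured threshold, so the memory can be handed back to the OS on `Close` instead of waiting for the Go garbage collector. A reduced sketch of that allocation pattern (the threshold and wrapper type here are illustrative, not the project's):

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/lib/mmap"
)

const mmapThreshold = 4 * 1024 * 1024 // hypothetical 4 MiB cut-off

type buffer struct {
	buf    []byte
	mmaped bool
}

func newBuffer(size int) *buffer {
	if size >= mmapThreshold {
		if b, err := mmap.Alloc(size); err == nil {
			// Anonymous mapping: freed back to the OS explicitly in Close.
			return &buffer{buf: b, mmaped: true}
		}
		// Fall back to the heap if the mapping fails.
	}
	return &buffer{buf: make([]byte, size)}
}

func (b *buffer) Close() error {
	var err error
	if b.mmaped {
		err = mmap.Free(b.buf)
	}
	b.buf = nil
	return err
}

func main() {
	b := newBuffer(16 * 1024 * 1024)
	fmt.Println(len(b.buf), b.mmaped)
	fmt.Println(b.Close())
}
```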


@ -486,18 +486,12 @@ func Rename(ctx context.Context, storage driver.Driver, srcPath, dstName string,
updateCacheObj(storage, srcDirPath, srcRawObj, model.WrapObjName(newObj)) updateCacheObj(storage, srcDirPath, srcRawObj, model.WrapObjName(newObj))
} else if !utils.IsBool(lazyCache...) { } else if !utils.IsBool(lazyCache...) {
DeleteCache(storage, srcDirPath) DeleteCache(storage, srcDirPath)
if srcRawObj.IsDir() {
ClearCache(storage, srcPath)
}
} }
} }
case driver.Rename: case driver.Rename:
err = s.Rename(ctx, srcObj, dstName) err = s.Rename(ctx, srcObj, dstName)
if err == nil && !utils.IsBool(lazyCache...) { if err == nil && !utils.IsBool(lazyCache...) {
DeleteCache(storage, srcDirPath) DeleteCache(storage, srcDirPath)
if srcRawObj.IsDir() {
ClearCache(storage, srcPath)
}
} }
default: default:
return errs.NotImplement return errs.NotImplement


@ -1,139 +0,0 @@
package op
import (
"fmt"
stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/db"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/go-cache"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
func makeJoined(sdb []model.SharingDB) []model.Sharing {
creator := make(map[uint]*model.User)
return utils.MustSliceConvert(sdb, func(s model.SharingDB) model.Sharing {
var c *model.User
var ok bool
if c, ok = creator[s.CreatorId]; !ok {
var err error
if c, err = GetUserById(s.CreatorId); err != nil {
c = nil
} else {
creator[s.CreatorId] = c
}
}
var files []string
if err := utils.Json.UnmarshalFromString(s.FilesRaw, &files); err != nil {
files = make([]string, 0)
}
return model.Sharing{
SharingDB: &s,
Files: files,
Creator: c,
}
})
}
var sharingCache = cache.NewMemCache(cache.WithShards[*model.Sharing](8))
var sharingG singleflight.Group[*model.Sharing]
func GetSharingById(id string, refresh ...bool) (*model.Sharing, error) {
if !utils.IsBool(refresh...) {
if sharing, ok := sharingCache.Get(id); ok {
log.Debugf("use cache when get sharing %s", id)
return sharing, nil
}
}
sharing, err, _ := sharingG.Do(id, func() (*model.Sharing, error) {
s, err := db.GetSharingById(id)
if err != nil {
return nil, errors.WithMessagef(err, "failed get sharing [%s]", id)
}
creator, err := GetUserById(s.CreatorId)
if err != nil {
return nil, errors.WithMessagef(err, "failed get sharing creator [%s]", id)
}
var files []string
if err = utils.Json.UnmarshalFromString(s.FilesRaw, &files); err != nil {
files = make([]string, 0)
}
return &model.Sharing{
SharingDB: s,
Files: files,
Creator: creator,
}, nil
})
return sharing, err
}
func GetSharings(pageIndex, pageSize int) ([]model.Sharing, int64, error) {
s, cnt, err := db.GetSharings(pageIndex, pageSize)
if err != nil {
return nil, 0, errors.WithStack(err)
}
return makeJoined(s), cnt, nil
}
func GetSharingsByCreatorId(userId uint, pageIndex, pageSize int) ([]model.Sharing, int64, error) {
s, cnt, err := db.GetSharingsByCreatorId(userId, pageIndex, pageSize)
if err != nil {
return nil, 0, errors.WithStack(err)
}
return makeJoined(s), cnt, nil
}
func GetSharingUnwrapPath(sharing *model.Sharing, path string) (unwrapPath string, err error) {
if len(sharing.Files) == 0 {
return "", errors.New("cannot get actual path of an invalid sharing")
}
if len(sharing.Files) == 1 {
return stdpath.Join(sharing.Files[0], path), nil
}
path = utils.FixAndCleanPath(path)[1:]
if len(path) == 0 {
return "", errors.New("cannot get actual path of a sharing root path")
}
mapPath := ""
child, rest, _ := strings.Cut(path, "/")
for _, c := range sharing.Files {
if child == stdpath.Base(c) {
mapPath = c
break
}
}
if mapPath == "" {
return "", fmt.Errorf("failed find child [%s] of sharing [%s]", child, sharing.ID)
}
return stdpath.Join(mapPath, rest), nil
}
func CreateSharing(sharing *model.Sharing) (id string, err error) {
sharing.CreatorId = sharing.Creator.ID
sharing.FilesRaw, err = utils.Json.MarshalToString(utils.MustSliceConvert(sharing.Files, utils.FixAndCleanPath))
if err != nil {
return "", errors.WithStack(err)
}
return db.CreateSharing(sharing.SharingDB)
}
func UpdateSharing(sharing *model.Sharing, skipMarshal ...bool) (err error) {
if !utils.IsBool(skipMarshal...) {
sharing.CreatorId = sharing.Creator.ID
sharing.FilesRaw, err = utils.Json.MarshalToString(utils.MustSliceConvert(sharing.Files, utils.FixAndCleanPath))
if err != nil {
return errors.WithStack(err)
}
}
sharingCache.Del(sharing.ID)
return db.UpdateSharing(sharing.SharingDB)
}
func DeleteSharing(sid string) error {
sharingCache.Del(sid)
return db.DeleteSharingById(sid)
}
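
`GetSharingUnwrapPath` above maps a request path inside a share onto the real storage path: a single-root share simply prefixes its only root, while a multi-root share matches the first path segment against the base name of one of its roots. A compact illustration with invented paths and the error handling stripped out:

```go
package main

import (
	"fmt"
	stdpath "path"
	"strings"
)

// unwrap reproduces the mapping logic for illustration only.
func unwrap(roots []string, reqPath string) string {
	if len(roots) == 1 {
		return stdpath.Join(roots[0], reqPath)
	}
	child, rest, _ := strings.Cut(strings.TrimPrefix(reqPath, "/"), "/")
	for _, root := range roots {
		if stdpath.Base(root) == child {
			return stdpath.Join(root, rest)
		}
	}
	return ""
}

func main() {
	// Hypothetical share exposing two roots from different storages.
	roots := []string{"/onedrive/photos", "/local/docs"}
	fmt.Println(unwrap(roots, "/photos/2024/a.jpg")) // /onedrive/photos/2024/a.jpg
	fmt.Println(unwrap(roots, "/docs/readme.md"))    // /local/docs/readme.md

	// A single-root share maps the request path directly under that root.
	fmt.Println(unwrap([]string{"/onedrive/photos"}, "/2024/a.jpg")) // /onedrive/photos/2024/a.jpg
}
```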


@ -1,65 +0,0 @@
package sharing
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func archiveMeta(ctx context.Context, sid, path string, args model.SharingArchiveMetaArgs) (*model.Sharing, *model.ArchiveMetaProvider, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.GetArchiveMeta(ctx, storage, actualPath, args.ArchiveMetaArgs)
return sharing, obj, err
}
return nil, nil, errors.New("cannot get sharing root archive meta")
}
func archiveList(ctx context.Context, sid, path string, args model.SharingArchiveListArgs) (*model.Sharing, []model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.ListArchive(ctx, storage, actualPath, args.ArchiveListArgs)
return sharing, obj, err
}
return nil, nil, errors.New("cannot get sharing root archive list")
}


@ -1,60 +0,0 @@
package sharing
import (
"context"
stdpath "path"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func get(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
if unwrapPath != "/" {
virtualFiles := op.GetStorageVirtualFilesByPath(stdpath.Dir(unwrapPath))
for _, f := range virtualFiles {
if f.GetName() == stdpath.Base(unwrapPath) {
return sharing, f, nil
}
}
} else {
return sharing, &model.Object{
Name: sid,
Size: 0,
Modified: time.Time{},
IsFolder: true,
}, nil
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.Get(ctx, storage, actualPath)
return sharing, obj, err
}
return sharing, &model.Object{
Name: sid,
Size: 0,
Modified: time.Time{},
IsFolder: true,
}, nil
}


@ -1,46 +0,0 @@
package sharing
import (
"context"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/pkg/errors"
)
func link(ctx context.Context, sid, path string, args *LinkArgs) (*model.Sharing, *model.Link, model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.SharingListArgs.Refresh)
if err != nil {
return nil, nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing link")
}
l, obj, err := op.Link(ctx, storage, actualPath, args.LinkArgs)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing link")
}
if l.URL != "" && !strings.HasPrefix(l.URL, "http://") && !strings.HasPrefix(l.URL, "https://") {
l.URL = common.GetApiUrl(ctx) + l.URL
}
return sharing, l, obj, nil
}
return nil, nil, nil, errors.New("cannot get sharing root link")
}


@ -1,83 +0,0 @@
package sharing
import (
"context"
stdpath "path"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func list(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, []model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
virtualFiles := op.GetStorageVirtualFilesByPath(unwrapPath)
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil && len(virtualFiles) == 0 {
return nil, nil, errors.WithMessage(err, "failed list sharing")
}
var objs []model.Obj
if storage != nil {
objs, err = op.List(ctx, storage, actualPath, model.ListArgs{
Refresh: args.Refresh,
ReqPath: stdpath.Join(sid, path),
})
if err != nil && len(virtualFiles) == 0 {
return nil, nil, errors.WithMessage(err, "failed list sharing")
}
}
om := model.NewObjMerge()
objs = om.Merge(objs, virtualFiles...)
model.SortFiles(objs, sharing.OrderBy, sharing.OrderDirection)
model.ExtractFolder(objs, sharing.ExtractFolder)
return sharing, objs, nil
}
objs := make([]model.Obj, 0, len(sharing.Files))
for _, f := range sharing.Files {
if f != "/" {
isVf := false
virtualFiles := op.GetStorageVirtualFilesByPath(stdpath.Dir(f))
for _, vf := range virtualFiles {
if vf.GetName() == stdpath.Base(f) {
objs = append(objs, vf)
isVf = true
break
}
}
if isVf {
continue
}
} else {
continue
}
storage, actualPath, err := op.GetStorageAndActualPath(f)
if err != nil {
continue
}
obj, err := op.Get(ctx, storage, actualPath)
if err != nil {
continue
}
objs = append(objs, obj)
}
model.SortFiles(objs, sharing.OrderBy, sharing.OrderDirection)
model.ExtractFolder(objs, sharing.ExtractFolder)
return sharing, objs, nil
}


@ -1,58 +0,0 @@
package sharing
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/model"
log "github.com/sirupsen/logrus"
)
func List(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, []model.Obj, error) {
sharing, res, err := list(ctx, sid, path, args)
if err != nil {
log.Errorf("failed list sharing %s/%s: %+v", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func Get(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, model.Obj, error) {
sharing, res, err := get(ctx, sid, path, args)
if err != nil {
log.Warnf("failed get sharing %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func ArchiveMeta(ctx context.Context, sid, path string, args model.SharingArchiveMetaArgs) (*model.Sharing, *model.ArchiveMetaProvider, error) {
sharing, res, err := archiveMeta(ctx, sid, path, args)
if err != nil {
log.Warnf("failed get sharing archive meta %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func ArchiveList(ctx context.Context, sid, path string, args model.SharingArchiveListArgs) (*model.Sharing, []model.Obj, error) {
sharing, res, err := archiveList(ctx, sid, path, args)
if err != nil {
log.Warnf("failed list sharing archive %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
type LinkArgs struct {
model.SharingListArgs
model.LinkArgs
}
func Link(ctx context.Context, sid, path string, args *LinkArgs) (*model.Sharing, *model.Link, model.Obj, error) {
sharing, res, file, err := link(ctx, sid, path, args)
if err != nil {
log.Errorf("failed get sharing link %s/%s: %+v", sid, path, err)
return nil, nil, nil, err
}
return sharing, res, file, nil
}


@ -15,7 +15,6 @@ import (
"github.com/OpenListTeam/OpenList/v4/pkg/buffer" "github.com/OpenListTeam/OpenList/v4/pkg/buffer"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/rclone/rclone/lib/mmap"
"go4.org/readerutil" "go4.org/readerutil"
) )
@ -61,12 +60,8 @@ func (f *FileStream) IsForceStreamUpload() bool {
} }
func (f *FileStream) Close() error { func (f *FileStream) Close() error {
if f.peekBuff != nil {
f.peekBuff.Reset()
f.peekBuff = nil
}
var err1, err2 error var err1, err2 error
err1 = f.Closers.Close() err1 = f.Closers.Close()
if errors.Is(err1, os.ErrClosed) { if errors.Is(err1, os.ErrClosed) {
err1 = nil err1 = nil
@ -79,6 +74,10 @@ func (f *FileStream) Close() error {
f.tmpFile = nil f.tmpFile = nil
} }
} }
if f.peekBuff != nil {
f.peekBuff.Reset()
f.peekBuff = nil
}
return errors.Join(err1, err2) return errors.Join(err1, err2)
} }
@ -195,19 +194,7 @@ func (f *FileStream) cache(maxCacheSize int64) (model.File, error) {
f.oriReader = f.Reader f.oriReader = f.Reader
} }
bufSize := maxCacheSize - int64(f.peekBuff.Len()) bufSize := maxCacheSize - int64(f.peekBuff.Len())
var buf []byte buf := make([]byte, bufSize)
if conf.MmapThreshold > 0 && bufSize >= int64(conf.MmapThreshold) {
m, err := mmap.Alloc(int(bufSize))
if err == nil {
f.Add(utils.CloseFunc(func() error {
return mmap.Free(m)
}))
buf = m
}
}
if buf == nil {
buf = make([]byte, bufSize)
}
n, err := io.ReadFull(f.oriReader, buf) n, err := io.ReadFull(f.oriReader, buf)
if bufSize != int64(n) { if bufSize != int64(n) {
return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", bufSize, n, err) return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", bufSize, n, err)


@ -7,13 +7,11 @@ import (
"io" "io"
"testing" "testing"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/v4/pkg/http_range"
) )
func TestFileStream_RangeRead(t *testing.T) { func TestFileStream_RangeRead(t *testing.T) {
conf.MaxBufferLimit = 16 * 1024 * 1024
type args struct { type args struct {
httpRange http_range.Range httpRange http_range.Range
} }
@ -73,7 +71,7 @@ func TestFileStream_RangeRead(t *testing.T) {
} }
}) })
} }
t.Run("after", func(t *testing.T) { t.Run("after check", func(t *testing.T) {
if f.GetFile() == nil { if f.GetFile() == nil {
t.Error("not cached") t.Error("not cached")
} }


@ -8,14 +8,13 @@ import (
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
"sync"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/model" "github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/net" "github.com/OpenListTeam/OpenList/v4/internal/net"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range" "github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/pool"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/rclone/rclone/lib/mmap"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
@ -154,49 +153,26 @@ func CacheFullAndHash(stream model.FileStreamer, up *model.UpdateProgress, hashT
type StreamSectionReader struct { type StreamSectionReader struct {
file model.FileStreamer file model.FileStreamer
off int64 off int64
bufPool *pool.Pool[[]byte] bufPool *sync.Pool
} }
func NewStreamSectionReader(file model.FileStreamer, maxBufferSize int, up *model.UpdateProgress) (*StreamSectionReader, error) { func NewStreamSectionReader(file model.FileStreamer, maxBufferSize int, up *model.UpdateProgress) (*StreamSectionReader, error) {
ss := &StreamSectionReader{file: file} ss := &StreamSectionReader{file: file}
if file.GetFile() != nil { if file.GetFile() == nil {
return ss, nil
}
maxBufferSize = min(maxBufferSize, int(file.GetSize())) maxBufferSize = min(maxBufferSize, int(file.GetSize()))
if maxBufferSize > conf.MaxBufferLimit { if maxBufferSize > conf.MaxBufferLimit {
_, err := file.CacheFullAndWriter(up, nil) _, err := file.CacheFullAndWriter(up, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return ss, nil
}
if conf.MmapThreshold > 0 && maxBufferSize >= conf.MmapThreshold {
ss.bufPool = &pool.Pool[[]byte]{
New: func() []byte {
buf, err := mmap.Alloc(maxBufferSize)
if err == nil {
file.Add(utils.CloseFunc(func() error {
return mmap.Free(buf)
}))
} else { } else {
buf = make([]byte, maxBufferSize) ss.bufPool = &sync.Pool{
} New: func() any {
return buf
},
}
} else {
ss.bufPool = &pool.Pool[[]byte]{
New: func() []byte {
return make([]byte, maxBufferSize) return make([]byte, maxBufferSize)
}, },
} }
} }
}
file.Add(utils.CloseFunc(func() error {
ss.bufPool.Reset()
return nil
}))
return ss, nil return ss, nil
} }
@ -208,7 +184,7 @@ func (ss *StreamSectionReader) GetSectionReader(off, length int64) (*SectionRead
if off != ss.off { if off != ss.off {
return nil, fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off) return nil, fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
} }
tempBuf := ss.bufPool.Get() tempBuf := ss.bufPool.Get().([]byte)
buf = tempBuf[:length] buf = tempBuf[:length]
n, err := io.ReadFull(ss.file, buf) n, err := io.ReadFull(ss.file, buf)
if int64(n) != length { if int64(n) != length {


@ -1,37 +0,0 @@
package pool
import "sync"
type Pool[T any] struct {
New func() T
MaxCap int
cache []T
mu sync.Mutex
}
func (p *Pool[T]) Get() T {
p.mu.Lock()
defer p.mu.Unlock()
if len(p.cache) == 0 {
return p.New()
}
item := p.cache[len(p.cache)-1]
p.cache = p.cache[:len(p.cache)-1]
return item
}
func (p *Pool[T]) Put(item T) {
p.mu.Lock()
defer p.mu.Unlock()
if p.MaxCap == 0 || len(p.cache) < int(p.MaxCap) {
p.cache = append(p.cache, item)
}
}
func (p *Pool[T]) Reset() {
p.mu.Lock()
defer p.mu.Unlock()
clear(p.cache)
p.cache = nil
}
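
This small generic pool replaces `sync.Pool` in the streaming code: `Get` returns a `T` without a type assertion, `Put` keeps at most `MaxCap` idle items, and `Reset` drops everything so mmap-backed buffers can be released deterministically. A usage sketch (the buffer size and cap are invented; it assumes building against the side of the diff where `pkg/pool` is present):

```go
package main

import (
	"fmt"

	"github.com/OpenListTeam/OpenList/v4/pkg/pool"
)

func main() {
	// A pool of fixed-size upload buffers, as in the stream section reader.
	bufPool := &pool.Pool[[]byte]{
		New:    func() []byte { return make([]byte, 4*1024*1024) },
		MaxCap: 4, // keep at most four idle buffers
	}

	buf := bufPool.Get() // typed: no .([]byte) assertion as with sync.Pool
	copy(buf, "chunk data")
	bufPool.Put(buf) // return it for the next section

	fmt.Println(len(bufPool.Get())) // 4194304

	// Reset drops all cached items; the stream code calls this on Close so
	// mmap-backed buffers can be freed promptly.
	bufPool.Reset()
}
```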


@ -2,9 +2,6 @@ package common
import ( import (
"context" "context"
"fmt"
"html"
"net/http"
"strings" "strings"
"github.com/OpenListTeam/OpenList/v4/cmd/flags" "github.com/OpenListTeam/OpenList/v4/cmd/flags"
@ -41,41 +38,6 @@ func ErrorResp(c *gin.Context, err error, code int, l ...bool) {
//c.Abort() //c.Abort()
} }
// ErrorPage is used to return error page HTML.
// It also returns standard HTTP status code.
// @param l: if true, log error
func ErrorPage(c *gin.Context, err error, code int, l ...bool) {
if len(l) > 0 && l[0] {
if flags.Debug || flags.Dev {
log.Errorf("%+v", err)
} else {
log.Errorf("%v", err)
}
}
codes := fmt.Sprintf("%d %s", code, http.StatusText(code))
html := fmt.Sprintf(`<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="color-scheme" content="dark light" />
<meta name="robots" content="noindex" />
<title>%s</title>
</head>
<body>
<h1>%s</h1>
<hr>
<p>%s</p>
</body>
</html>`,
codes, codes, html.EscapeString(hidePrivacy(err.Error())))
c.Data(code, "text/html; charset=utf-8", []byte(html))
c.Abort()
}
func ErrorWithDataResp(c *gin.Context, err error, code int, data interface{}, l ...bool) { func ErrorWithDataResp(c *gin.Context, err error, code int, data interface{}, l ...bool) {
if len(l) > 0 && l[0] { if len(l) > 0 && l[0] {
if flags.Debug || flags.Dev { if flags.Debug || flags.Dev {


@ -3,9 +3,9 @@ package handles
import ( import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io"
stdpath "path" stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/task"
"github.com/OpenListTeam/OpenList/v4/internal/archive/tool" "github.com/OpenListTeam/OpenList/v4/internal/archive/tool"
"github.com/OpenListTeam/OpenList/v4/internal/conf" "github.com/OpenListTeam/OpenList/v4/internal/conf"
@ -15,7 +15,6 @@ import (
"github.com/OpenListTeam/OpenList/v4/internal/op" "github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting" "github.com/OpenListTeam/OpenList/v4/internal/setting"
"github.com/OpenListTeam/OpenList/v4/internal/sign" "github.com/OpenListTeam/OpenList/v4/internal/sign"
"github.com/OpenListTeam/OpenList/v4/internal/task"
"github.com/OpenListTeam/OpenList/v4/pkg/utils" "github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common" "github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
@ -72,26 +71,13 @@ func toContentResp(objs []model.ObjTree) []ArchiveContentResp {
return ret return ret
} }
func FsArchiveMetaSplit(c *gin.Context) { func FsArchiveMeta(c *gin.Context) {
var req ArchiveMetaReq var req ArchiveMetaReq
if err := c.ShouldBind(&req); err != nil { if err := c.ShouldBind(&req); err != nil {
common.ErrorResp(c, err, 400) common.ErrorResp(c, err, 400)
return return
} }
if strings.HasPrefix(req.Path, "/@s") {
req.Path = strings.TrimPrefix(req.Path, "/@s")
SharingArchiveMeta(c, &req)
return
}
user := c.Request.Context().Value(conf.UserKey).(*model.User) user := c.Request.Context().Value(conf.UserKey).(*model.User)
if user.IsGuest() && user.Disabled {
common.ErrorStrResp(c, "Guest user is disabled, login please", 401)
return
}
FsArchiveMeta(c, &req, user)
}
func FsArchiveMeta(c *gin.Context, req *ArchiveMetaReq, user *model.User) {
if !user.CanReadArchives() { if !user.CanReadArchives() {
common.ErrorResp(c, errs.PermissionDenied, 403) common.ErrorResp(c, errs.PermissionDenied, 403)
return return
@ -156,27 +142,19 @@ type ArchiveListReq struct {
InnerPath string `json:"inner_path" form:"inner_path"` InnerPath string `json:"inner_path" form:"inner_path"`
} }
func FsArchiveListSplit(c *gin.Context) { type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}
func FsArchiveList(c *gin.Context) {
var req ArchiveListReq var req ArchiveListReq
if err := c.ShouldBind(&req); err != nil { if err := c.ShouldBind(&req); err != nil {
common.ErrorResp(c, err, 400) common.ErrorResp(c, err, 400)
return return
} }
req.Validate() req.Validate()
if strings.HasPrefix(req.Path, "/@s") {
req.Path = strings.TrimPrefix(req.Path, "/@s")
SharingArchiveList(c, &req)
return
}
user := c.Request.Context().Value(conf.UserKey).(*model.User) user := c.Request.Context().Value(conf.UserKey).(*model.User)
if user.IsGuest() && user.Disabled {
common.ErrorStrResp(c, "Guest user is disabled, login please", 401)
return
}
FsArchiveList(c, &req, user)
}
func FsArchiveList(c *gin.Context, req *ArchiveListReq, user *model.User) {
if !user.CanReadArchives() { if !user.CanReadArchives() {
common.ErrorResp(c, errs.PermissionDenied, 403) common.ErrorResp(c, errs.PermissionDenied, 403)
return return
@ -223,7 +201,7 @@ func FsArchiveList(c *gin.Context, req *ArchiveListReq, user *model.User) {
ret, _ := utils.SliceConvert(objs, func(src model.Obj) (ObjResp, error) { ret, _ := utils.SliceConvert(objs, func(src model.Obj) (ObjResp, error) {
return toObjsRespWithoutSignAndThumb(src), nil return toObjsRespWithoutSignAndThumb(src), nil
}) })
common.SuccessResp(c, common.PageResp{ common.SuccessResp(c, ArchiveListResp{
Content: ret, Content: ret,
Total: int64(total), Total: int64(total),
}) })
@ -320,7 +298,7 @@ func ArchiveDown(c *gin.Context) {
filename := stdpath.Base(innerPath) filename := stdpath.Base(innerPath)
storage, err := fs.GetStorage(archiveRawPath, &fs.GetStoragesArgs{}) storage, err := fs.GetStorage(archiveRawPath, &fs.GetStoragesArgs{})
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
if common.ShouldProxy(storage, filename) { if common.ShouldProxy(storage, filename) {
@ -340,7 +318,7 @@ func ArchiveDown(c *gin.Context) {
InnerPath: innerPath, InnerPath: innerPath,
}) })
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
redirect(c, link) redirect(c, link)
@ -354,7 +332,7 @@ func ArchiveProxy(c *gin.Context) {
filename := stdpath.Base(innerPath) filename := stdpath.Base(innerPath)
storage, err := fs.GetStorage(archiveRawPath, &fs.GetStoragesArgs{}) storage, err := fs.GetStorage(archiveRawPath, &fs.GetStoragesArgs{})
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
if canProxy(storage, filename) { if canProxy(storage, filename) {
@ -370,34 +348,16 @@ func ArchiveProxy(c *gin.Context) {
InnerPath: innerPath, InnerPath: innerPath,
}) })
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
proxy(c, link, file, storage.GetStorage().ProxyRange) proxy(c, link, file, storage.GetStorage().ProxyRange)
} else { } else {
common.ErrorPage(c, errors.New("proxy not allowed"), 403) common.ErrorStrResp(c, "proxy not allowed", 403)
return return
} }
} }
func proxyInternalExtract(c *gin.Context, rc io.ReadCloser, size int64, fileName string) {
defer func() {
if err := rc.Close(); err != nil {
log.Errorf("failed to close file streamer, %v", err)
}
}()
headers := map[string]string{
"Referrer-Policy": "no-referrer",
"Cache-Control": "max-age=0, no-cache, no-store, must-revalidate",
}
headers["Content-Disposition"] = utils.GenerateContentDisposition(fileName)
contentType := c.Request.Header.Get("Content-Type")
if contentType == "" {
contentType = utils.GetMimeType(fileName)
}
c.DataFromReader(200, size, contentType, rc, headers)
}
func ArchiveInternalExtract(c *gin.Context) { func ArchiveInternalExtract(c *gin.Context) {
archiveRawPath := c.Request.Context().Value(conf.PathKey).(string) archiveRawPath := c.Request.Context().Value(conf.PathKey).(string)
innerPath := utils.FixAndCleanPath(c.Query("inner")) innerPath := utils.FixAndCleanPath(c.Query("inner"))
@ -413,11 +373,25 @@ func ArchiveInternalExtract(c *gin.Context) {
InnerPath: innerPath, InnerPath: innerPath,
}) })
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
defer func() {
if err := rc.Close(); err != nil {
log.Errorf("failed to close file streamer, %v", err)
}
}()
headers := map[string]string{
"Referrer-Policy": "no-referrer",
"Cache-Control": "max-age=0, no-cache, no-store, must-revalidate",
}
fileName := stdpath.Base(innerPath) fileName := stdpath.Base(innerPath)
proxyInternalExtract(c, rc, size, fileName) headers["Content-Disposition"] = utils.GenerateContentDisposition(fileName)
contentType := c.Request.Header.Get("Content-Type")
if contentType == "" {
contentType = utils.GetMimeType(fileName)
}
c.DataFromReader(200, size, contentType, rc, headers)
} }
func ArchiveExtensions(c *gin.Context) { func ArchiveExtensions(c *gin.Context) {


@ -26,7 +26,7 @@ func Down(c *gin.Context) {
filename := stdpath.Base(rawPath) filename := stdpath.Base(rawPath)
storage, err := fs.GetStorage(rawPath, &fs.GetStoragesArgs{}) storage, err := fs.GetStorage(rawPath, &fs.GetStoragesArgs{})
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
if common.ShouldProxy(storage, filename) { if common.ShouldProxy(storage, filename) {
@ -40,7 +40,7 @@ func Down(c *gin.Context) {
Redirect: true, Redirect: true,
}) })
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
redirect(c, link) redirect(c, link)
@ -52,7 +52,7 @@ func Proxy(c *gin.Context) {
filename := stdpath.Base(rawPath) filename := stdpath.Base(rawPath)
storage, err := fs.GetStorage(rawPath, &fs.GetStoragesArgs{}) storage, err := fs.GetStorage(rawPath, &fs.GetStoragesArgs{})
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
if canProxy(storage, filename) { if canProxy(storage, filename) {
@ -67,12 +67,12 @@ func Proxy(c *gin.Context) {
Type: c.Query("type"), Type: c.Query("type"),
}) })
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
proxy(c, link, file, storage.GetStorage().ProxyRange) proxy(c, link, file, storage.GetStorage().ProxyRange)
} else { } else {
common.ErrorPage(c, errors.New("proxy not allowed"), 403) common.ErrorStrResp(c, "proxy not allowed", 403)
return return
} }
} }
@ -89,7 +89,7 @@ func redirect(c *gin.Context, link *model.Link) {
} }
link.URL, err = utils.InjectQuery(link.URL, query) link.URL, err = utils.InjectQuery(link.URL, query)
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
} }
@ -106,7 +106,7 @@ func proxy(c *gin.Context, link *model.Link, file model.Obj, proxyRange bool) {
} }
link.URL, err = utils.InjectQuery(link.URL, query) link.URL, err = utils.InjectQuery(link.URL, query)
if err != nil { if err != nil {
common.ErrorPage(c, err, 500) common.ErrorResp(c, err, 500)
return return
} }
} }
@@ -114,8 +114,9 @@ func proxy(c *gin.Context, link *model.Link, file model.Obj, proxyRange bool) {
 		link = common.ProxyRange(c, link, file.GetSize())
 	}
 	Writer := &common.WrittenResponseWriter{ResponseWriter: c.Writer}
-	raw, _ := strconv.ParseBool(c.DefaultQuery("raw", "false"))
-	if utils.Ext(file.GetName()) == "md" && setting.GetBool(conf.FilterReadMeScripts) && !raw { // handle md files first
+	if utils.Ext(file.GetName()) == "md" && setting.GetBool(conf.FilterReadMeScripts) {
 		buf := bytes.NewBuffer(make([]byte, 0, file.GetSize()))
 		w := &common.InterceptResponseWriter{ResponseWriter: Writer, Writer: buf}
 		err = common.Proxy(w, c.Request, link, file)
@@ -148,9 +149,9 @@ func proxy(c *gin.Context, link *model.Link, file model.Obj, proxyRange bool) {
 		log.Errorf("%s %s local proxy error: %+v", c.Request.Method, c.Request.URL.Path, err)
 	} else {
 		if statusCode, ok := errors.Unwrap(err).(net.ErrorHttpStatusCode); ok {
-			common.ErrorPage(c, err, int(statusCode), true)
+			common.ErrorResp(c, err, int(statusCode), true)
 		} else {
-			common.ErrorPage(c, err, 500, true)
+			common.ErrorResp(c, err, 500, true)
 		}
 	}
 }

View File

@@ -56,27 +56,14 @@ type FsListResp struct {
 	Provider string `json:"provider"`
 }

-func FsListSplit(c *gin.Context) {
+func FsList(c *gin.Context) {
 	var req ListReq
 	if err := c.ShouldBind(&req); err != nil {
 		common.ErrorResp(c, err, 400)
 		return
 	}
 	req.Validate()
-	if strings.HasPrefix(req.Path, "/@s") {
-		req.Path = strings.TrimPrefix(req.Path, "/@s")
-		SharingList(c, &req)
-		return
-	}
 	user := c.Request.Context().Value(conf.UserKey).(*model.User)
-	if user.IsGuest() && user.Disabled {
-		common.ErrorStrResp(c, "Guest user is disabled, login please", 401)
-		return
-	}
-	FsList(c, &req, user)
-}
-
-func FsList(c *gin.Context, req *ListReq, user *model.User) {
 	reqPath, err := user.JoinPath(req.Path)
 	if err != nil {
 		common.ErrorResp(c, err, 403)
@@ -256,26 +243,13 @@ type FsGetResp struct {
 	Related []ObjResp `json:"related"`
 }

-func FsGetSplit(c *gin.Context) {
+func FsGet(c *gin.Context) {
 	var req FsGetReq
 	if err := c.ShouldBind(&req); err != nil {
 		common.ErrorResp(c, err, 400)
 		return
 	}
-	if strings.HasPrefix(req.Path, "/@s") {
-		req.Path = strings.TrimPrefix(req.Path, "/@s")
-		SharingGet(c, &req)
-		return
-	}
 	user := c.Request.Context().Value(conf.UserKey).(*model.User)
-	if user.IsGuest() && user.Disabled {
-		common.ErrorStrResp(c, "Guest user is disabled, login please", 401)
-		return
-	}
-	FsGet(c, &req, user)
-}
-
-func FsGet(c *gin.Context, req *FsGetReq, user *model.User) {
 	reqPath, err := user.JoinPath(req.Path)
 	if err != nil {
 		common.ErrorResp(c, err, 403)

View File

@@ -1,577 +0,0 @@
package handles
import (
"fmt"
stdpath "path"
"strings"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting"
"github.com/OpenListTeam/OpenList/v4/internal/sharing"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/OpenListTeam/go-cache"
"github.com/gin-gonic/gin"
"github.com/pkg/errors"
)
func SharingGet(c *gin.Context, req *FsGetReq) {
sid, path, _ := strings.Cut(strings.TrimPrefix(req.Path, "/"), "/")
if sid == "" {
common.ErrorStrResp(c, "invalid share id", 400)
return
}
s, obj, err := sharing.Get(c.Request.Context(), sid, path, model.SharingListArgs{
Refresh: false,
Pwd: req.Password,
})
if dealError(c, err) {
return
}
_ = countAccess(c.ClientIP(), s)
fakePath := fmt.Sprintf("/%s/%s", sid, path)
url := ""
if !obj.IsDir() {
url = fmt.Sprintf("%s/sd%s", common.GetApiUrl(c), utils.EncodePath(fakePath, true))
if s.Pwd != "" {
url += "?pwd=" + s.Pwd
}
}
thumb, _ := model.GetThumb(obj)
common.SuccessResp(c, FsGetResp{
ObjResp: ObjResp{
Id: "",
Path: fakePath,
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: "",
Type: utils.GetFileType(obj.GetName()),
Thumb: thumb,
},
RawURL: url,
Readme: s.Readme,
Header: s.Header,
Provider: "unknown",
Related: nil,
})
}
func SharingList(c *gin.Context, req *ListReq) {
sid, path, _ := strings.Cut(strings.TrimPrefix(req.Path, "/"), "/")
if sid == "" {
common.ErrorStrResp(c, "invalid share id", 400)
return
}
s, objs, err := sharing.List(c.Request.Context(), sid, path, model.SharingListArgs{
Refresh: req.Refresh,
Pwd: req.Password,
})
if dealError(c, err) {
return
}
_ = countAccess(c.ClientIP(), s)
fakePath := fmt.Sprintf("/%s/%s", sid, path)
total, objs := pagination(objs, &req.PageReq)
common.SuccessResp(c, FsListResp{
Content: utils.MustSliceConvert(objs, func(obj model.Obj) ObjResp {
thumb, _ := model.GetThumb(obj)
return ObjResp{
Id: "",
Path: stdpath.Join(fakePath, obj.GetName()),
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: "",
Thumb: thumb,
Type: utils.GetObjType(obj.GetName(), obj.IsDir()),
}
}),
Total: int64(total),
Readme: s.Readme,
Header: s.Header,
Write: false,
Provider: "unknown",
})
}
func SharingArchiveMeta(c *gin.Context, req *ArchiveMetaReq) {
if !setting.GetBool(conf.ShareArchivePreview) {
common.ErrorStrResp(c, "sharing archives previewing is not allowed", 403)
return
}
sid, path, _ := strings.Cut(strings.TrimPrefix(req.Path, "/"), "/")
if sid == "" {
common.ErrorStrResp(c, "invalid share id", 400)
return
}
archiveArgs := model.ArchiveArgs{
LinkArgs: model.LinkArgs{
Header: c.Request.Header,
Type: c.Query("type"),
},
Password: req.ArchivePass,
}
s, ret, err := sharing.ArchiveMeta(c.Request.Context(), sid, path, model.SharingArchiveMetaArgs{
ArchiveMetaArgs: model.ArchiveMetaArgs{
ArchiveArgs: archiveArgs,
Refresh: req.Refresh,
},
Pwd: req.Password,
})
if dealError(c, err) {
return
}
_ = countAccess(c.ClientIP(), s)
fakePath := fmt.Sprintf("/%s/%s", sid, path)
url := fmt.Sprintf("%s/sad%s", common.GetApiUrl(c), utils.EncodePath(fakePath, true))
if s.Pwd != "" {
url += "?pwd=" + s.Pwd
}
common.SuccessResp(c, ArchiveMetaResp{
Comment: ret.GetComment(),
IsEncrypted: ret.IsEncrypted(),
Content: toContentResp(ret.GetTree()),
Sort: ret.Sort,
RawURL: url,
Sign: "",
})
}
func SharingArchiveList(c *gin.Context, req *ArchiveListReq) {
if !setting.GetBool(conf.ShareArchivePreview) {
common.ErrorStrResp(c, "sharing archives previewing is not allowed", 403)
return
}
sid, path, _ := strings.Cut(strings.TrimPrefix(req.Path, "/"), "/")
if sid == "" {
common.ErrorStrResp(c, "invalid share id", 400)
return
}
innerArgs := model.ArchiveInnerArgs{
ArchiveArgs: model.ArchiveArgs{
LinkArgs: model.LinkArgs{
Header: c.Request.Header,
Type: c.Query("type"),
},
Password: req.ArchivePass,
},
InnerPath: utils.FixAndCleanPath(req.InnerPath),
}
s, objs, err := sharing.ArchiveList(c.Request.Context(), sid, path, model.SharingArchiveListArgs{
ArchiveListArgs: model.ArchiveListArgs{
ArchiveInnerArgs: innerArgs,
Refresh: req.Refresh,
},
Pwd: req.Password,
})
if dealError(c, err) {
return
}
_ = countAccess(c.ClientIP(), s)
total, objs := pagination(objs, &req.PageReq)
ret, _ := utils.SliceConvert(objs, func(src model.Obj) (ObjResp, error) {
return toObjsRespWithoutSignAndThumb(src), nil
})
common.SuccessResp(c, common.PageResp{
Content: ret,
Total: int64(total),
})
}
func SharingDown(c *gin.Context) {
sid := c.Request.Context().Value(conf.SharingIDKey).(string)
path := c.Request.Context().Value(conf.PathKey).(string)
path = utils.FixAndCleanPath(path)
pwd := c.Query("pwd")
s, err := op.GetSharingById(sid)
if err == nil {
if !s.Valid() {
err = errs.InvalidSharing
} else if !s.Verify(pwd) {
err = errs.WrongShareCode
} else if len(s.Files) != 1 && path == "/" {
err = errors.New("cannot get sharing root link")
}
}
if dealErrorPage(c, err) {
return
}
unwrapPath, err := op.GetSharingUnwrapPath(s, path)
if err != nil {
common.ErrorPage(c, errors.New("failed get sharing unwrap path"), 500)
return
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if dealErrorPage(c, err) {
return
}
if setting.GetBool(conf.ShareForceProxy) || common.ShouldProxy(storage, stdpath.Base(actualPath)) {
if _, ok := c.GetQuery("d"); !ok {
if url := common.GenerateDownProxyURL(storage.GetStorage(), unwrapPath); url != "" {
c.Redirect(302, url)
_ = countAccess(c.ClientIP(), s)
return
}
}
link, obj, err := op.Link(c.Request.Context(), storage, actualPath, model.LinkArgs{
Header: c.Request.Header,
Type: c.Query("type"),
})
if err != nil {
common.ErrorPage(c, errors.WithMessage(err, "failed get sharing link"), 500)
return
}
_ = countAccess(c.ClientIP(), s)
proxy(c, link, obj, storage.GetStorage().ProxyRange)
} else {
link, _, err := op.Link(c.Request.Context(), storage, actualPath, model.LinkArgs{
IP: c.ClientIP(),
Header: c.Request.Header,
Type: c.Query("type"),
Redirect: true,
})
if err != nil {
common.ErrorPage(c, errors.WithMessage(err, "failed get sharing link"), 500)
return
}
_ = countAccess(c.ClientIP(), s)
redirect(c, link)
}
}
func SharingArchiveExtract(c *gin.Context) {
if !setting.GetBool(conf.ShareArchivePreview) {
common.ErrorPage(c, errors.New("sharing archives previewing is not allowed"), 403)
return
}
sid := c.Request.Context().Value(conf.SharingIDKey).(string)
path := c.Request.Context().Value(conf.PathKey).(string)
path = utils.FixAndCleanPath(path)
pwd := c.Query("pwd")
innerPath := utils.FixAndCleanPath(c.Query("inner"))
archivePass := c.Query("pass")
s, err := op.GetSharingById(sid)
if err == nil {
if !s.Valid() {
err = errs.InvalidSharing
} else if !s.Verify(pwd) {
err = errs.WrongShareCode
} else if len(s.Files) != 1 && path == "/" {
err = errors.New("cannot extract sharing root")
}
}
if dealErrorPage(c, err) {
return
}
unwrapPath, err := op.GetSharingUnwrapPath(s, path)
if err != nil {
common.ErrorPage(c, errors.New("failed get sharing unwrap path"), 500)
return
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if dealErrorPage(c, err) {
return
}
args := model.ArchiveInnerArgs{
ArchiveArgs: model.ArchiveArgs{
LinkArgs: model.LinkArgs{
Header: c.Request.Header,
Type: c.Query("type"),
},
Password: archivePass,
},
InnerPath: innerPath,
}
if _, ok := storage.(driver.ArchiveReader); ok {
if setting.GetBool(conf.ShareForceProxy) || common.ShouldProxy(storage, stdpath.Base(actualPath)) {
link, obj, err := op.DriverExtract(c.Request.Context(), storage, actualPath, args)
if dealErrorPage(c, err) {
return
}
proxy(c, link, obj, storage.GetStorage().ProxyRange)
} else {
args.Redirect = true
link, _, err := op.DriverExtract(c.Request.Context(), storage, actualPath, args)
if dealErrorPage(c, err) {
return
}
redirect(c, link)
}
} else {
rc, size, err := op.InternalExtract(c.Request.Context(), storage, actualPath, args)
if dealErrorPage(c, err) {
return
}
fileName := stdpath.Base(innerPath)
proxyInternalExtract(c, rc, size, fileName)
}
}
func dealError(c *gin.Context, err error) bool {
if err == nil {
return false
} else if errors.Is(err, errs.SharingNotFound) {
common.ErrorStrResp(c, "the share does not exist", 500)
} else if errors.Is(err, errs.InvalidSharing) {
common.ErrorStrResp(c, "the share has expired or is no longer valid", 500)
} else if errors.Is(err, errs.WrongShareCode) {
common.ErrorResp(c, err, 403)
} else if errors.Is(err, errs.WrongArchivePassword) {
common.ErrorResp(c, err, 202)
} else {
common.ErrorResp(c, err, 500)
}
return true
}
func dealErrorPage(c *gin.Context, err error) bool {
if err == nil {
return false
} else if errors.Is(err, errs.SharingNotFound) {
common.ErrorPage(c, errors.New("the share does not exist"), 500)
} else if errors.Is(err, errs.InvalidSharing) {
common.ErrorPage(c, errors.New("the share has expired or is no longer valid"), 500)
} else if errors.Is(err, errs.WrongShareCode) {
common.ErrorPage(c, err, 403)
} else if errors.Is(err, errs.WrongArchivePassword) {
common.ErrorPage(c, err, 202)
} else {
common.ErrorPage(c, err, 500)
}
return true
}
type SharingResp struct {
*model.Sharing
CreatorName string `json:"creator"`
CreatorRole int `json:"creator_role"`
}
func GetSharing(c *gin.Context) {
sid := c.Query("id")
user := c.Request.Context().Value(conf.UserKey).(*model.User)
s, err := op.GetSharingById(sid)
if err != nil || (!user.IsAdmin() && s.Creator.ID != user.ID) {
common.ErrorStrResp(c, "sharing not found", 404)
return
}
common.SuccessResp(c, SharingResp{
Sharing: s,
CreatorName: s.Creator.Username,
CreatorRole: s.Creator.Role,
})
}
func ListSharings(c *gin.Context) {
var req model.PageReq
if err := c.ShouldBind(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
req.Validate()
user := c.Request.Context().Value(conf.UserKey).(*model.User)
var sharings []model.Sharing
var total int64
var err error
if user.IsAdmin() {
sharings, total, err = op.GetSharings(req.Page, req.PerPage)
} else {
sharings, total, err = op.GetSharingsByCreatorId(user.ID, req.Page, req.PerPage)
}
if err != nil {
common.ErrorResp(c, err, 500, true)
return
}
common.SuccessResp(c, common.PageResp{
Content: utils.MustSliceConvert(sharings, func(s model.Sharing) SharingResp {
return SharingResp{
Sharing: &s,
CreatorName: s.Creator.Username,
CreatorRole: s.Creator.Role,
}
}),
Total: total,
})
}
type CreateSharingReq struct {
Files []string `json:"files"`
Expires *time.Time `json:"expires"`
Pwd string `json:"pwd"`
MaxAccessed int `json:"max_accessed"`
Disabled bool `json:"disabled"`
Remark string `json:"remark"`
Readme string `json:"readme"`
Header string `json:"header"`
model.Sort
}
type UpdateSharingReq struct {
ID string `json:"id"`
Accessed int `json:"accessed"`
CreateSharingReq
}
func UpdateSharing(c *gin.Context) {
var req UpdateSharingReq
if err := c.ShouldBind(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
if len(req.Files) == 0 || (len(req.Files) == 1 && req.Files[0] == "") {
common.ErrorStrResp(c, "must add at least 1 object", 400)
return
}
user := c.Request.Context().Value(conf.UserKey).(*model.User)
if !user.CanShare() {
common.ErrorStrResp(c, "permission denied", 403)
return
}
for i, s := range req.Files {
s = utils.FixAndCleanPath(s)
req.Files[i] = s
if !user.IsAdmin() && !strings.HasPrefix(s, user.BasePath) {
common.ErrorStrResp(c, fmt.Sprintf("permission denied to share path [%s]", s), 500)
return
}
}
s, err := op.GetSharingById(req.ID)
if err != nil || (!user.IsAdmin() && s.CreatorId != user.ID) {
common.ErrorStrResp(c, "sharing not found", 404)
return
}
s.Files = req.Files
s.Expires = req.Expires
s.Pwd = req.Pwd
s.Accessed = req.Accessed
s.MaxAccessed = req.MaxAccessed
s.Disabled = req.Disabled
s.Sort = req.Sort
s.Header = req.Header
s.Readme = req.Readme
s.Remark = req.Remark
if err = op.UpdateSharing(s); err != nil {
common.ErrorResp(c, err, 500)
} else {
common.SuccessResp(c, SharingResp{
Sharing: s,
CreatorName: s.Creator.Username,
CreatorRole: s.Creator.Role,
})
}
}
func CreateSharing(c *gin.Context) {
var req CreateSharingReq
var err error
if err = c.ShouldBind(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
if len(req.Files) == 0 || (len(req.Files) == 1 && req.Files[0] == "") {
common.ErrorStrResp(c, "must add at least 1 object", 400)
return
}
user := c.Request.Context().Value(conf.UserKey).(*model.User)
if !user.CanShare() {
common.ErrorStrResp(c, "permission denied", 403)
return
}
for i, s := range req.Files {
s = utils.FixAndCleanPath(s)
req.Files[i] = s
if !user.IsAdmin() && !strings.HasPrefix(s, user.BasePath) {
common.ErrorStrResp(c, fmt.Sprintf("permission denied to share path [%s]", s), 500)
return
}
}
s := &model.Sharing{
SharingDB: &model.SharingDB{
Expires: req.Expires,
Pwd: req.Pwd,
Accessed: 0,
MaxAccessed: req.MaxAccessed,
Disabled: req.Disabled,
Sort: req.Sort,
Remark: req.Remark,
Readme: req.Readme,
Header: req.Header,
},
Files: req.Files,
Creator: user,
}
var id string
if id, err = op.CreateSharing(s); err != nil {
common.ErrorResp(c, err, 500)
} else {
s.ID = id
common.SuccessResp(c, SharingResp{
Sharing: s,
CreatorName: s.Creator.Username,
CreatorRole: s.Creator.Role,
})
}
}
func DeleteSharing(c *gin.Context) {
sid := c.Query("id")
user := c.Request.Context().Value(conf.UserKey).(*model.User)
s, err := op.GetSharingById(sid)
if err != nil || (!user.IsAdmin() && s.CreatorId != user.ID) {
common.ErrorResp(c, err, 404)
return
}
if err = op.DeleteSharing(sid); err != nil {
common.ErrorResp(c, err, 500)
} else {
common.SuccessResp(c)
}
}
func SetEnableSharing(disable bool) func(ctx *gin.Context) {
return func(c *gin.Context) {
sid := c.Query("id")
user := c.Request.Context().Value(conf.UserKey).(*model.User)
s, err := op.GetSharingById(sid)
if err != nil || (!user.IsAdmin() && s.CreatorId != user.ID) {
common.ErrorStrResp(c, "sharing not found", 404)
return
}
s.Disabled = disable
if err = op.UpdateSharing(s, true); err != nil {
common.ErrorResp(c, err, 500)
} else {
common.SuccessResp(c)
}
}
}
var (
AccessCache = cache.NewMemCache[interface{}]()
AccessCountDelay = 30 * time.Minute
)
func countAccess(ip string, s *model.Sharing) error {
key := fmt.Sprintf("%s:%s", s.ID, ip)
_, ok := AccessCache.Get(key)
if !ok {
AccessCache.Set(key, struct{}{}, cache.WithEx[interface{}](AccessCountDelay))
s.Accessed += 1
return op.UpdateSharing(s, true)
}
return nil
}

View File

@@ -14,8 +14,7 @@ import (
 // Auth is a middleware that checks if the user is logged in.
 // if token is empty, set user to guest
-func Auth(allowDisabledGuest bool) func(c *gin.Context) {
-	return func(c *gin.Context) {
+func Auth(c *gin.Context) {
 	token := c.GetHeader("Authorization")
 	if subtle.ConstantTimeCompare([]byte(token), []byte(setting.GetStr(conf.Token))) == 1 {
 		admin, err := op.GetAdmin()
@@ -36,7 +35,7 @@ func Auth(allowDisabledGuest bool) func(c *gin.Context) {
 			c.Abort()
 			return
 		}
-		if !allowDisabledGuest && guest.Disabled {
+		if guest.Disabled {
 			common.ErrorStrResp(c, "Guest user is disabled, login please", 401)
 			c.Abort()
 			return
@@ -73,7 +72,6 @@ func Auth(allowDisabledGuest bool) func(c *gin.Context) {
 	log.Debugf("use login token: %+v", user)
 	c.Next()
 }
-}

 func Authn(c *gin.Context) {
 	token := c.GetHeader("Authorization")

View File

@@ -15,19 +15,14 @@ import (
 	"github.com/pkg/errors"
 )

-func PathParse(c *gin.Context) {
-	rawPath := parsePath(c.Param("path"))
-	common.GinWithValue(c, conf.PathKey, rawPath)
-	c.Next()
-}
-
 func Down(verifyFunc func(string, string) error) func(c *gin.Context) {
 	return func(c *gin.Context) {
-		rawPath := c.Request.Context().Value(conf.PathKey).(string)
+		rawPath := parsePath(c.Param("path"))
+		common.GinWithValue(c, conf.PathKey, rawPath)
 		meta, err := op.GetNearestMeta(rawPath)
 		if err != nil {
 			if !errors.Is(errors.Cause(err), errs.MetaNotFound) {
-				common.ErrorPage(c, err, 500, true)
+				common.ErrorResp(c, err, 500, true)
 				return
 			}
 		}
@@ -37,7 +32,7 @@ func Down(verifyFunc func(string, string) error) func(c *gin.Context) {
 		s := c.Query("sign")
 		err = verifyFunc(rawPath, strings.TrimSuffix(s, "/"))
 		if err != nil {
-			common.ErrorPage(c, err, 401)
+			common.ErrorResp(c, err, 401)
 			c.Abort()
 			return
 		}

View File

@@ -1,18 +0,0 @@
package middlewares
import (
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/gin-gonic/gin"
)
func SharingIdParse(c *gin.Context) {
sid := c.Param("sid")
common.GinWithValue(c, conf.SharingIDKey, sid)
c.Next()
}
func EmptyPathParse(c *gin.Context) {
common.GinWithValue(c, conf.PathKey, "/")
c.Next()
}

View File

@@ -44,29 +44,20 @@ func Init(e *gin.Engine) {
 	downloadLimiter := middlewares.DownloadRateLimiter(stream.ClientDownloadLimit)
 	signCheck := middlewares.Down(sign.Verify)
-	g.GET("/d/*path", middlewares.PathParse, signCheck, downloadLimiter, handles.Down)
-	g.GET("/p/*path", middlewares.PathParse, signCheck, downloadLimiter, handles.Proxy)
-	g.HEAD("/d/*path", middlewares.PathParse, signCheck, handles.Down)
-	g.HEAD("/p/*path", middlewares.PathParse, signCheck, handles.Proxy)
+	g.GET("/d/*path", signCheck, downloadLimiter, handles.Down)
+	g.GET("/p/*path", signCheck, downloadLimiter, handles.Proxy)
+	g.HEAD("/d/*path", signCheck, handles.Down)
+	g.HEAD("/p/*path", signCheck, handles.Proxy)
 	archiveSignCheck := middlewares.Down(sign.VerifyArchive)
-	g.GET("/ad/*path", middlewares.PathParse, archiveSignCheck, downloadLimiter, handles.ArchiveDown)
-	g.GET("/ap/*path", middlewares.PathParse, archiveSignCheck, downloadLimiter, handles.ArchiveProxy)
-	g.GET("/ae/*path", middlewares.PathParse, archiveSignCheck, downloadLimiter, handles.ArchiveInternalExtract)
-	g.HEAD("/ad/*path", middlewares.PathParse, archiveSignCheck, handles.ArchiveDown)
-	g.HEAD("/ap/*path", middlewares.PathParse, archiveSignCheck, handles.ArchiveProxy)
-	g.HEAD("/ae/*path", middlewares.PathParse, archiveSignCheck, handles.ArchiveInternalExtract)
-	g.GET("/sd/:sid", middlewares.EmptyPathParse, middlewares.SharingIdParse, downloadLimiter, handles.SharingDown)
-	g.GET("/sd/:sid/*path", middlewares.PathParse, middlewares.SharingIdParse, downloadLimiter, handles.SharingDown)
-	g.HEAD("/sd/:sid", middlewares.EmptyPathParse, middlewares.SharingIdParse, handles.SharingDown)
-	g.HEAD("/sd/:sid/*path", middlewares.PathParse, middlewares.SharingIdParse, handles.SharingDown)
-	g.GET("/sad/:sid", middlewares.EmptyPathParse, middlewares.SharingIdParse, downloadLimiter, handles.SharingArchiveExtract)
-	g.GET("/sad/:sid/*path", middlewares.PathParse, middlewares.SharingIdParse, downloadLimiter, handles.SharingArchiveExtract)
-	g.HEAD("/sad/:sid", middlewares.EmptyPathParse, middlewares.SharingIdParse, handles.SharingArchiveExtract)
-	g.HEAD("/sad/:sid/*path", middlewares.PathParse, middlewares.SharingIdParse, handles.SharingArchiveExtract)
+	g.GET("/ad/*path", archiveSignCheck, downloadLimiter, handles.ArchiveDown)
+	g.GET("/ap/*path", archiveSignCheck, downloadLimiter, handles.ArchiveProxy)
+	g.GET("/ae/*path", archiveSignCheck, downloadLimiter, handles.ArchiveInternalExtract)
+	g.HEAD("/ad/*path", archiveSignCheck, handles.ArchiveDown)
+	g.HEAD("/ap/*path", archiveSignCheck, handles.ArchiveProxy)
+	g.HEAD("/ae/*path", archiveSignCheck, handles.ArchiveInternalExtract)

 	api := g.Group("/api")
-	auth := api.Group("", middlewares.Auth(false))
+	auth := api.Group("", middlewares.Auth)
 	webauthn := api.Group("/authn", middlewares.Authn)
 	api.POST("/auth/login", handles.Login)
@@ -102,9 +93,7 @@ func Init(e *gin.Engine) {
 	public.Any("/archive_extensions", handles.ArchiveExtensions)

 	_fs(auth.Group("/fs"))
-	fsAndShare(api.Group("/fs", middlewares.Auth(true)))
 	_task(auth.Group("/task", middlewares.AuthNotGuest))
-	_sharing(auth.Group("/share", middlewares.AuthNotGuest))
 	admin(auth.Group("/admin", middlewares.AuthAdmin))
 	if flags.Debug || flags.Dev {
 		debug(g.Group("/debug"))
@@ -180,16 +169,10 @@ func admin(g *gin.RouterGroup) {
 	index.GET("/progress", middlewares.SearchIndex, handles.GetProgress)
 }

-func fsAndShare(g *gin.RouterGroup) {
-	g.Any("/list", handles.FsListSplit)
-	g.Any("/get", handles.FsGetSplit)
-	a := g.Group("/archive")
-	a.Any("/meta", handles.FsArchiveMetaSplit)
-	a.Any("/list", handles.FsArchiveListSplit)
-}
-
 func _fs(g *gin.RouterGroup) {
+	g.Any("/list", handles.FsList)
 	g.Any("/search", middlewares.SearchIndex, handles.Search)
+	g.Any("/get", handles.FsGet)
 	g.Any("/other", handles.FsOther)
 	g.Any("/dirs", handles.FsDirs)
 	g.POST("/mkdir", handles.FsMkdir)
@@ -209,23 +192,16 @@ func _fs(g *gin.RouterGroup) {
 	// g.POST("/add_qbit", handles.AddQbittorrent)
 	// g.POST("/add_transmission", handles.SetTransmission)
 	g.POST("/add_offline_download", handles.AddOfflineDownload)
-	g.POST("/archive/decompress", handles.FsArchiveDecompress)
+	a := g.Group("/archive")
+	a.Any("/meta", handles.FsArchiveMeta)
+	a.Any("/list", handles.FsArchiveList)
+	a.POST("/decompress", handles.FsArchiveDecompress)
 }

 func _task(g *gin.RouterGroup) {
 	handles.SetupTaskRoute(g)
 }

-func _sharing(g *gin.RouterGroup) {
-	g.Any("/list", handles.ListSharings)
-	g.GET("/get", handles.GetSharing)
-	g.POST("/create", handles.CreateSharing)
-	g.POST("/update", handles.UpdateSharing)
-	g.POST("/delete", handles.DeleteSharing)
-	g.POST("/enable", handles.SetEnableSharing(false))
-	g.POST("/disable", handles.SetEnableSharing(true))
-}
-
 func Cors(r *gin.Engine) {
 	config := cors.DefaultConfig()
 	// config.AllowAllOrigins = true