Compare commits


11 Commits
beta ... next

Author SHA1 Message Date
55d3827dee add(interface): driver&mamage 2025-08-14 22:16:19 +08:00
1fbc9427df add(interface): driver&mamage 2025-08-14 22:16:01 +08:00
bb3d139a47 add(interface): driver&mamage 2025-08-14 21:59:44 +08:00
d227ab85d6 add(trunk): base interface 2025-08-14 21:44:34 +08:00
5342ae96d0 add(trunk): base interface 2025-08-14 21:39:00 +08:00
273e15a050 add(trunk): base interface 2025-08-14 21:30:18 +08:00
13aad2c2fa add(trunk): base interface 2025-08-14 19:56:43 +08:00
368dc65a6e feat: Implement plugin architecture with gRPC support
- Added driver initialization for gRPC plugins in internal/bootstrap/driver.go.
- Introduced configuration structures and protobuf definitions for driver plugins in proto/driver/config.proto and proto/driver/driver.proto.
- Implemented gRPC server and client interfaces for driver plugins in shared/driver/grpc.go.
- Created common response handling utilities in server/common/common.go and server/common/resp.go.
- Developed plugin registration endpoint in server/handles/plugin.go.
- Added test cases for plugin functionality in shared/driver/plugin_test.go.
- Defined plugin reattachment configuration model in shared/model/plugin.go.
2025-08-13 19:04:38 +08:00
8b4b6ba970 feat(config): enhance configuration management and add CORS support
feat(server): implement server initialization with context and graceful shutdown
feat(utils): add utility functions for file and JSON operations
refactor(conf): restructure configuration types and improve default settings
2025-08-13 10:03:22 +08:00
4d28e838ce feat(cmd): initialize command structure and configuration management 2025-08-12 22:15:25 +08:00
3930d4789a add(trunk): next branch 2025-08-12 21:20:33 +08:00
748 changed files with 4814 additions and 9152 deletions

View File

@ -1,56 +0,0 @@
<!--
Provide a general summary of your changes in the Title above.
The PR title must start with one of `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, or `chore(): `. For example: `feat(component): add new feature`.
If it spans multiple components, use the main component as the prefix, enumerate them in the title, and describe the details in the body.
-->
<!--
在上方标题中提供您更改的总体摘要。
PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`
如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。
-->
## Description / 描述
<!-- Describe your changes in detail -->
<!-- 详细描述您的更改 -->
## Motivation and Context / 背景
<!-- Why is this change required? What problem does it solve? -->
<!-- 为什么需要此更改?它解决了什么问题? -->
<!-- If it fixes an open issue, please link to the issue here. -->
<!-- 如果修复了一个打开的issue请在此处链接到该issue -->
Closes #XXXX
<!-- or -->
<!-- 或者 -->
Relates to #XXXX
## How Has This Been Tested? / 测试
<!-- Please describe in detail how you tested your changes. -->
<!-- 请详细描述您如何测试更改 -->
## Checklist / 检查清单
<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!-- 检查以下所有要点,并在所有适用的框中打`x` -->
<!-- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
<!-- 如果您对其中任何一项不确定,请不要犹豫提问。我们会帮助您! -->
- [ ] I have read the [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) document.
我已阅读 [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) 文档。
- [ ] I have formatted my code with `go fmt` or [prettier](https://prettier.io/).
我已使用 `go fmt` 或 [prettier](https://prettier.io/) 格式化提交的代码。
- [ ] I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
我已为此 PR 添加了适当的标签(如无权限或需要的标签不存在,请在描述中说明,管理员将后续处理)。
- [ ] I have requested review from relevant code authors using the "Request review" feature when applicable.
我已在适当情况下使用"Request review"功能请求相关代码作者进行审查。
- [ ] I have updated the repository accordingly (if needed).
我已相应更新了相关仓库(若适用)。
- [ ] [OpenList-Frontend](https://github.com/OpenListTeam/OpenList-Frontend) #XXXX
- [ ] [OpenList-Docs](https://github.com/OpenListTeam/OpenList-Docs) #XXXX

View File

@ -1,38 +0,0 @@
name: Sync to Gitee
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  sync:
    runs-on: ubuntu-latest
    name: Sync GitHub to Gitee
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.GITEE_SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan gitee.com >> ~/.ssh/known_hosts
      - name: Create single commit and push
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          # Create a new branch
          git checkout --orphan new-main
          git add .
          git commit -m "Sync from GitHub: $(date)"
          # Add Gitee remote and force push
          git remote add gitee ${{ vars.GITEE_REPO_URL }}
          git push --force gitee new-main:main

View File

@ -1,77 +0,0 @@
# Contributing
## Setup your machine
`OpenList` is written in [Go](https://golang.org/) and [SolidJS](https://www.solidjs.com/).
Prerequisites:
- [git](https://git-scm.com)
- [Go 1.24+](https://golang.org/doc/install)
- [gcc](https://gcc.gnu.org/)
- [nodejs](https://nodejs.org/)
## Cloning a fork
Fork and clone `OpenList` and `OpenList-Frontend` anywhere:
```shell
$ git clone https://github.com/<your-username>/OpenList.git
$ git clone --recurse-submodules https://github.com/<your-username>/OpenList-Frontend.git
```
## Creating a branch
Create a new branch from the `main` branch, with an appropriate name.
```shell
$ git checkout -b <branch-name>
```
## Preview your change
### backend
```shell
$ go run main.go
```
### frontend
```shell
$ pnpm dev
```
## Add a new driver
Copy the `drivers/template` folder, rename it, and follow the comments in it.
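For orientation, here is a minimal sketch of the shape such a driver takes, based on the drivers shown elsewhere in this diff (e.g. `Chunk`, `Degoo`); the package, type, and field names are hypothetical, and `drivers/template` remains the authoritative reference:
```go
package mydriver

import (
	"context"

	"github.com/OpenListTeam/OpenList/v4/internal/driver"
	"github.com/OpenListTeam/OpenList/v4/internal/errs"
	"github.com/OpenListTeam/OpenList/v4/internal/model"
	"github.com/OpenListTeam/OpenList/v4/internal/op"
)

// Addition holds the driver-specific options shown on the storage form.
type Addition struct {
	driver.RootPath
	Token string `json:"token" type:"string" required:"true"`
}

// MyDriver is a hypothetical driver; real drivers implement the full
// driver.Driver interface (see drivers/template for the required methods).
type MyDriver struct {
	model.Storage
	Addition
}

var config = driver.Config{
	Name:        "MyDriver",
	DefaultRoot: "/",
}

func (d *MyDriver) Config() driver.Config          { return config }
func (d *MyDriver) GetAddition() driver.Additional { return &d.Addition }
func (d *MyDriver) Init(ctx context.Context) error { return nil }
func (d *MyDriver) Drop(ctx context.Context) error { return nil }

func (d *MyDriver) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	return nil, errs.NotImplement // fill in with calls to the remote API
}

func (d *MyDriver) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	return nil, errs.NotImplement // fill in with calls to the remote API
}

func init() {
	// Register a constructor so the core can instantiate the driver.
	op.RegisterDriver(func() driver.Driver {
		return &MyDriver{}
	})
}
```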
## Create a commit
Commit messages should be well formatted and standardized.
When you submit your pull request, follow [Conventional Commits](https://www.conventionalcommits.org) for the PR title.
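For example, a commit adding a new driver might look like this (the scope and message are illustrative):
```shell
$ git commit -m "feat(driver): add ExampleCloud driver"
```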
https://github.com/OpenListTeam/OpenList/issues/376
It's suggested to sign your commits. See: [How to sign commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
## Submit a pull request
Please make sure your code has been formatted with `go fmt` or [prettier](https://prettier.io/) before submitting.
Push your branch to your `openlist` fork and open a pull request against the `main` branch.
## Merge your pull request
Your pull request will be merged after review; please wait for a maintainer to review and merge it.
At least one approving review from a reviewer with write access is required. You can also request a review from maintainers.
## Delete your branch
(Optional) After your pull request is merged, you can delete your branch.
---
Thank you for your contribution! Let's make OpenList better together!

11
buf.gen.yaml Normal file
View File

@ -0,0 +1,11 @@
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/go:v1.36.7
    out: .
    opt:
      - paths=source_relative
  - plugin: buf.build/grpc/go:v1.5.1
    out: .
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
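With this configuration, the Go and gRPC stubs for the protobuf definitions (e.g. `proto/driver/driver.proto` from the commit above) would typically be regenerated with the `buf` CLI, assuming it is installed:
```shell
$ buf generate
```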

1
buf.yaml Normal file
View File

@ -0,0 +1 @@
version: v1

View File

@ -1,51 +1,42 @@
package cmd
import (
"os"
"path/filepath"
"strconv"
"context"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap/data"
"github.com/OpenListTeam/OpenList/v4/internal/db"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
log "github.com/sirupsen/logrus"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/OpenListTeam/OpenList/v5/internal/bootstrap"
"github.com/sirupsen/logrus"
)
func Init() {
func Init(ctx context.Context) {
if flags.Dev {
flags.Debug = true
}
initLogrus()
bootstrap.InitConfig()
bootstrap.Log()
bootstrap.InitDB()
data.InitData()
bootstrap.InitStreamLimit()
bootstrap.InitIndex()
bootstrap.InitUpgradePatch()
bootstrap.InitDriverPlugins()
}
func Release() {
db.Close()
}
var pid = -1
var pidFile string
func initDaemon() {
ex, err := os.Executable()
if err != nil {
log.Fatal(err)
}
exPath := filepath.Dir(ex)
_ = os.MkdirAll(filepath.Join(exPath, "daemon"), 0700)
pidFile = filepath.Join(exPath, "daemon/pid")
if utils.Exists(pidFile) {
bytes, err := os.ReadFile(pidFile)
if err != nil {
log.Fatal("failed to read pid file", err)
}
id, err := strconv.Atoi(string(bytes))
if err != nil {
log.Fatal("failed to parse pid data", err)
}
pid = id
func initLog(l *logrus.Logger) {
if flags.Debug {
l.SetLevel(logrus.DebugLevel)
l.SetReportCaller(true)
} else {
l.SetLevel(logrus.InfoLevel)
l.SetReportCaller(false)
}
}
func initLogrus() {
formatter := logrus.TextFormatter{
ForceColors: true,
EnvironmentOverrideColors: true,
TimestampFormat: "2006-01-02 15:04:05",
FullTimestamp: true,
}
logrus.SetFormatter(&formatter)
initLog(logrus.StandardLogger())
}

View File

@ -1,10 +1,40 @@
package flags
import (
"os"
"path/filepath"
"github.com/sirupsen/logrus"
)
var (
DataDir string
ConfigFile string
Debug bool
NoPrefix bool
Dev bool
ForceBinDir bool
LogStd bool
pwd string
)
// Program working directory
func PWD() string {
if pwd != "" {
return pwd
}
if ForceBinDir {
ex, err := os.Executable()
if err != nil {
logrus.Fatal(err)
}
pwd = filepath.Dir(ex)
return pwd
}
d, err := os.Getwd()
if err != nil {
logrus.Fatal(err)
}
pwd = d
return d
}

View File

@ -4,10 +4,7 @@ import (
"fmt"
"os"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
_ "github.com/OpenListTeam/OpenList/v4/drivers"
_ "github.com/OpenListTeam/OpenList/v4/internal/archive"
_ "github.com/OpenListTeam/OpenList/v4/internal/offline_download"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/spf13/cobra"
)
@ -27,10 +24,10 @@ func Execute() {
}
func init() {
RootCmd.PersistentFlags().StringVar(&flags.DataDir, "data", "data", "data folder")
RootCmd.PersistentFlags().StringVarP(&flags.ConfigFile, "config", "c", "data/config.json", "config file")
RootCmd.PersistentFlags().BoolVar(&flags.Debug, "debug", false, "start with debug mode")
RootCmd.PersistentFlags().BoolVar(&flags.NoPrefix, "no-prefix", false, "disable env prefix")
RootCmd.PersistentFlags().BoolVar(&flags.Dev, "dev", false, "start with dev mode")
RootCmd.PersistentFlags().BoolVar(&flags.ForceBinDir, "force-bin-dir", false, "Force to use the directory where the binary file is located as data directory")
RootCmd.PersistentFlags().BoolVar(&flags.LogStd, "log-std", false, "Force to log to std")
RootCmd.PersistentFlags().BoolVarP(&flags.ForceBinDir, "force-bin-dir", "f", false, "force to use the directory where the binary file is located as data directory")
RootCmd.PersistentFlags().BoolVar(&flags.LogStd, "log-std", false, "force to log to std")
}

View File

@ -13,15 +13,9 @@ import (
"syscall"
"time"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/fs"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server"
"github.com/OpenListTeam/OpenList/v4/server/middlewares"
"github.com/OpenListTeam/sftpd-openlist"
ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/OpenListTeam/OpenList/v5/internal/conf"
"github.com/OpenListTeam/OpenList/v5/server"
"github.com/gin-gonic/gin"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@ -35,220 +29,127 @@ var ServerCmd = &cobra.Command{
Short: "Start the server at the specified address",
Long: `Start the server at the specified address
the address is defined in config file`,
Run: func(cmd *cobra.Command, args []string) {
Init()
if conf.Conf.DelayedStart != 0 {
utils.Log.Infof("delayed start for %d seconds", conf.Conf.DelayedStart)
time.Sleep(time.Duration(conf.Conf.DelayedStart) * time.Second)
}
bootstrap.InitOfflineDownloadTools()
bootstrap.LoadStorages()
bootstrap.InitTaskManager()
if !flags.Debug && !flags.Dev {
Run: func(_ *cobra.Command, args []string) {
serverCtx, serverCancel := context.WithCancel(context.Background())
defer serverCancel()
Init(serverCtx)
if !flags.Debug {
gin.SetMode(gin.ReleaseMode)
}
r := gin.New()
// gin log
if conf.Conf.Log.Filter.Enable {
r.Use(middlewares.FilteredLogger())
} else {
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out))
}
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out))
r.Use(gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r)
var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c {
httpHandler = h2c.NewHandler(r, &http2.Server{})
}
var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 {
if conf.Conf.Scheme.HttpPort > 0 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
fmt.Printf("start HTTP server @ %s\n", httpBase)
utils.Log.Infof("start HTTP server @ %s", httpBase)
log.Infoln("start HTTP server", "@", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() {
err := httpSrv.ListenAndServe()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start http: %s", err.Error())
log.Errorln("start HTTP server", ":", err)
serverCancel()
}
}()
}
if conf.Conf.Scheme.HttpsPort != -1 {
if conf.Conf.Scheme.HttpsPort > 0 {
httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort)
fmt.Printf("start HTTPS server @ %s\n", httpsBase)
utils.Log.Infof("start HTTPS server @ %s", httpsBase)
log.Infoln("start HTTPS server", "@", httpsBase)
httpsSrv = &http.Server{Addr: httpsBase, Handler: r}
go func() {
err := httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start https: %s", err.Error())
log.Errorln("start HTTPS server", ":", err)
serverCancel()
}
}()
}
if conf.Conf.Scheme.UnixFile != "" {
fmt.Printf("start unix server @ %s\n", conf.Conf.Scheme.UnixFile)
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
log.Infoln("start Unix server", "@", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: httpHandler}
go func() {
listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
if err != nil {
utils.Log.Fatalf("failed to listen unix: %+v", err)
log.Errorln("start Unix server", ":", err)
serverCancel()
return
}
// set socket file permission
mode, err := strconv.ParseUint(conf.Conf.Scheme.UnixFilePerm, 8, 32)
if err != nil {
utils.Log.Errorf("failed to parse socket file permission: %+v", err)
log.Errorln("parse unix_file_perm", ":", err)
} else {
err = os.Chmod(conf.Conf.Scheme.UnixFile, os.FileMode(mode))
if err != nil {
utils.Log.Errorf("failed to chmod socket file: %+v", err)
log.Errorln("chmod socket file", ":", err)
}
}
err = unixSrv.Serve(listener)
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start unix: %s", err.Error())
log.Errorln("start Unix server", ":", err)
serverCancel()
}
}()
}
if conf.Conf.S3.Port != -1 && conf.Conf.S3.Enable {
s3r := gin.New()
s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.InitS3(s3r)
s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port)
fmt.Printf("start S3 server @ %s\n", s3Base)
utils.Log.Infof("start S3 server @ %s", s3Base)
go func() {
var err error
if conf.Conf.S3.SSL {
httpsSrv = &http.Server{Addr: s3Base, Handler: s3r}
err = httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
}
if !conf.Conf.S3.SSL {
httpSrv = &http.Server{Addr: s3Base, Handler: s3r}
err = httpSrv.ListenAndServe()
}
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start s3 server: %s", err.Error())
}
}()
}
var ftpDriver *server.FtpMainDriver
var ftpServer *ftpserver.FtpServer
if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable {
var err error
ftpDriver, err = server.NewMainDriver()
if err != nil {
utils.Log.Fatalf("failed to start ftp driver: %s", err.Error())
} else {
fmt.Printf("start ftp server on %s\n", conf.Conf.FTP.Listen)
utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen)
go func() {
ftpServer = ftpserver.NewFtpServer(ftpDriver)
err = ftpServer.ListenAndServe()
if err != nil {
utils.Log.Fatalf("problem ftp server listening: %s", err.Error())
}
}()
}
}
var sftpDriver *server.SftpDriver
var sftpServer *sftpd.SftpServer
if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable {
var err error
sftpDriver, err = server.NewSftpDriver()
if err != nil {
utils.Log.Fatalf("failed to start sftp driver: %s", err.Error())
} else {
fmt.Printf("start sftp server on %s", conf.Conf.SFTP.Listen)
utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen)
go func() {
sftpServer = sftpd.NewSftpServer(sftpDriver)
err = sftpServer.RunServer()
if err != nil {
utils.Log.Fatalf("problem sftp server listening: %s", err.Error())
}
}()
}
}
// Wait for interrupt signal to gracefully shutdown the server with
// a timeout of 1 second.
quit := make(chan os.Signal, 1)
// kill (no param) sends syscall.SIGTERM by default
// kill -2 is syscall.SIGINT
// kill -9 is syscall.SIGKILL, which cannot be caught, so there is no need to add it
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
utils.Log.Println("Shutdown server...")
fs.ArchiveContentUploadTaskManager.RemoveAll()
select {
case <-quit:
case <-serverCtx.Done():
}
log.Println("shutdown server...")
Release()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
quitCtx, quitCancel := context.WithTimeout(context.Background(), time.Second)
defer quitCancel()
var wg sync.WaitGroup
if conf.Conf.Scheme.HttpPort != -1 {
if httpSrv != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := httpSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTP server shutdown err: ", err)
if err := httpSrv.Shutdown(quitCtx); err != nil {
log.Errorln("shutdown HTTP server", ":", err)
}
}()
}
if conf.Conf.Scheme.HttpsPort != -1 {
if httpsSrv != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := httpsSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTPS server shutdown err: ", err)
if err := httpsSrv.Shutdown(quitCtx); err != nil {
log.Errorln("shutdown HTTPS server", ":", err)
}
}()
}
if conf.Conf.Scheme.UnixFile != "" {
if unixSrv != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := unixSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("Unix server shutdown err: ", err)
}
}()
}
if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable && ftpServer != nil && ftpDriver != nil {
wg.Add(1)
go func() {
defer wg.Done()
ftpDriver.Stop()
if err := ftpServer.Stop(); err != nil {
utils.Log.Fatal("FTP server shutdown err: ", err)
}
}()
}
if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable && sftpServer != nil && sftpDriver != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := sftpServer.Close(); err != nil {
utils.Log.Fatal("SFTP server shutdown err: ", err)
if err := unixSrv.Shutdown(quitCtx); err != nil {
log.Errorln("shutdown Unix server", ":", err)
}
}()
}
wg.Wait()
utils.Log.Println("Server exit")
log.Println("server exit")
},
}
func init() {
RootCmd.AddCommand(ServerCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// serverCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// serverCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
// OutOpenListInit exposes a function for starting the server externally

View File

@ -1,60 +0,0 @@
package _115
import (
"errors"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
driver115 "github.com/SheltonZhu/115driver/pkg/driver"
log "github.com/sirupsen/logrus"
)
var (
md5Salt = "Qclm8MGWUv59TnrR0XPg"
appVer = "35.6.0.3"
)
func (d *Pan115) getAppVersion() (string, error) {
result := VersionResp{}
res, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
if err != nil {
return "", err
}
err = utils.Json.Unmarshal(res.Body(), &result)
if err != nil {
return "", err
}
if len(result.Error) > 0 {
return "", errors.New(result.Error)
}
return result.Data.Win.Version, nil
}
func (d *Pan115) getAppVer() string {
ver, err := d.getAppVersion()
if err != nil {
log.Warnf("[115] get app version failed: %v", err)
return appVer
}
if len(ver) > 0 {
return ver
}
return appVer
}
func (d *Pan115) initAppVer() {
appVer = d.getAppVer()
log.Debugf("use app version: %v", appVer)
}
type VersionResp struct {
Error string `json:"error,omitempty"`
Data Versions `json:"data"`
}
type Versions struct {
Win Version `json:"win"`
}
type Version struct {
Version string `json:"version_code"`
}

View File

@ -1,488 +0,0 @@
package chunk
import (
"bytes"
"context"
"errors"
"fmt"
"io"
stdpath "path"
"strconv"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/fs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/sign"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
)
type Chunk struct {
model.Storage
Addition
}
func (d *Chunk) Config() driver.Config {
return config
}
func (d *Chunk) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Chunk) Init(ctx context.Context) error {
if d.PartSize <= 0 {
return errors.New("part size must be positive")
}
d.RemotePath = utils.FixAndCleanPath(d.RemotePath)
return nil
}
func (d *Chunk) Drop(ctx context.Context) error {
return nil
}
func (d *Chunk) Get(ctx context.Context, path string) (model.Obj, error) {
if utils.PathEqual(path, "/") {
return &model.Object{
Name: "Root",
IsFolder: true,
Path: "/",
}, nil
}
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return nil, err
}
remoteActualPath = stdpath.Join(remoteActualPath, path)
if remoteObj, err := op.Get(ctx, remoteStorage, remoteActualPath); err == nil {
return &model.Object{
Path: path,
Name: remoteObj.GetName(),
Size: remoteObj.GetSize(),
Modified: remoteObj.ModTime(),
IsFolder: remoteObj.IsDir(),
HashInfo: remoteObj.GetHash(),
}, nil
}
remoteActualDir, name := stdpath.Split(remoteActualPath)
chunkName := "[openlist_chunk]" + name
chunkObjs, err := op.List(ctx, remoteStorage, stdpath.Join(remoteActualDir, chunkName), model.ListArgs{})
if err != nil {
return nil, err
}
var totalSize int64 = 0
// chunk part 0 must exist
chunkSizes := []int64{-1}
h := make(map[*utils.HashType]string)
var first model.Obj
for _, o := range chunkObjs {
if o.IsDir() {
continue
}
if after, ok := strings.CutPrefix(o.GetName(), "hash_"); ok {
hn, value, ok := strings.Cut(strings.TrimSuffix(after, d.CustomExt), "_")
if ok {
ht, ok := utils.GetHashByName(hn)
if ok {
h[ht] = value
}
}
continue
}
idx, err := strconv.Atoi(strings.TrimSuffix(o.GetName(), d.CustomExt))
if err != nil {
continue
}
totalSize += o.GetSize()
if len(chunkSizes) > idx {
if idx == 0 {
first = o
}
chunkSizes[idx] = o.GetSize()
} else if len(chunkSizes) == idx {
chunkSizes = append(chunkSizes, o.GetSize())
} else {
newChunkSizes := make([]int64, idx+1)
copy(newChunkSizes, chunkSizes)
chunkSizes = newChunkSizes
chunkSizes[idx] = o.GetSize()
}
}
// check that chunk 0 is not -1, so empty files are supported
// if there is more than one chunk, the last one cannot be 0
// so only check the middle chunks for a size of 0
for i, l := 0, len(chunkSizes)-2; ; i++ {
if i == 0 {
if chunkSizes[i] == -1 {
return nil, fmt.Errorf("chunk part[%d] are missing", i)
}
} else if chunkSizes[i] == 0 {
return nil, fmt.Errorf("chunk part[%d] are missing", i)
}
if i >= l {
break
}
}
reqDir, _ := stdpath.Split(path)
objRes := chunkObject{
Object: model.Object{
Path: stdpath.Join(reqDir, chunkName),
Name: name,
Size: totalSize,
Modified: first.ModTime(),
Ctime: first.CreateTime(),
},
chunkSizes: chunkSizes,
}
if len(h) > 0 {
objRes.HashInfo = utils.NewHashInfoByMap(h)
}
return &objRes, nil
}
func (d *Chunk) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return nil, err
}
remoteActualDir := stdpath.Join(remoteActualPath, dir.GetPath())
remoteObjs, err := op.List(ctx, remoteStorage, remoteActualDir, model.ListArgs{
ReqPath: args.ReqPath,
Refresh: args.Refresh,
})
if err != nil {
return nil, err
}
result := make([]model.Obj, 0, len(remoteObjs))
for _, obj := range remoteObjs {
rawName := obj.GetName()
if obj.IsDir() {
if name, ok := strings.CutPrefix(rawName, "[openlist_chunk]"); ok {
chunkObjs, err := op.List(ctx, remoteStorage, stdpath.Join(remoteActualDir, rawName), model.ListArgs{
ReqPath: stdpath.Join(args.ReqPath, rawName),
Refresh: args.Refresh,
})
if err != nil {
return nil, err
}
totalSize := int64(0)
h := make(map[*utils.HashType]string)
first := obj
for _, o := range chunkObjs {
if o.IsDir() {
continue
}
if after, ok := strings.CutPrefix(strings.TrimSuffix(o.GetName(), d.CustomExt), "hash_"); ok {
hn, value, ok := strings.Cut(after, "_")
if ok {
ht, ok := utils.GetHashByName(hn)
if ok {
h[ht] = value
}
continue
}
}
idx, err := strconv.Atoi(strings.TrimSuffix(o.GetName(), d.CustomExt))
if err != nil {
continue
}
if idx == 0 {
first = o
}
totalSize += o.GetSize()
}
objRes := model.Object{
Name: name,
Size: totalSize,
Modified: first.ModTime(),
Ctime: first.CreateTime(),
}
if len(h) > 0 {
objRes.HashInfo = utils.NewHashInfoByMap(h)
}
if !d.Thumbnail {
result = append(result, &objRes)
} else {
thumbPath := stdpath.Join(args.ReqPath, ".thumbnails", name+".webp")
thumb := fmt.Sprintf("%s/d%s?sign=%s",
common.GetApiUrl(ctx),
utils.EncodePath(thumbPath, true),
sign.Sign(thumbPath))
result = append(result, &model.ObjThumb{
Object: objRes,
Thumbnail: model.Thumbnail{
Thumbnail: thumb,
},
})
}
continue
}
}
if !d.ShowHidden && strings.HasPrefix(rawName, ".") {
continue
}
thumb, ok := model.GetThumb(obj)
objRes := model.Object{
Name: rawName,
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}
if !ok {
result = append(result, &objRes)
} else {
result = append(result, &model.ObjThumb{
Object: objRes,
Thumbnail: model.Thumbnail{
Thumbnail: thumb,
},
})
}
}
return result, nil
}
func (d *Chunk) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return nil, err
}
chunkFile, ok := file.(*chunkObject)
remoteActualPath = stdpath.Join(remoteActualPath, file.GetPath())
if !ok {
l, _, err := op.Link(ctx, remoteStorage, remoteActualPath, args)
if err != nil {
return nil, err
}
resultLink := *l
resultLink.SyncClosers = utils.NewSyncClosers(l)
return &resultLink, nil
}
fileSize := chunkFile.GetSize()
mergedRrf := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
start := httpRange.Start
length := httpRange.Length
if length < 0 || start+length > fileSize {
length = fileSize - start
}
if length == 0 {
return io.NopCloser(strings.NewReader("")), nil
}
rs := make([]io.Reader, 0)
cs := make(utils.Closers, 0)
var (
rc io.ReadCloser
readFrom bool
)
for idx, chunkSize := range chunkFile.chunkSizes {
if readFrom {
l, o, err := op.Link(ctx, remoteStorage, stdpath.Join(remoteActualPath, d.getPartName(idx)), args)
if err != nil {
_ = cs.Close()
return nil, err
}
cs = append(cs, l)
chunkSize2 := l.ContentLength
if chunkSize2 <= 0 {
chunkSize2 = o.GetSize()
}
if chunkSize2 != chunkSize {
_ = cs.Close()
return nil, fmt.Errorf("chunk part[%d] size not match", idx)
}
rrf, err := stream.GetRangeReaderFromLink(chunkSize2, l)
if err != nil {
_ = cs.Close()
return nil, err
}
newLength := length - chunkSize2
if newLength >= 0 {
length = newLength
rc, err = rrf.RangeRead(ctx, http_range.Range{Length: -1})
} else {
rc, err = rrf.RangeRead(ctx, http_range.Range{Length: length})
}
if err != nil {
_ = cs.Close()
return nil, err
}
rs = append(rs, rc)
cs = append(cs, rc)
if newLength <= 0 {
return utils.ReadCloser{
Reader: io.MultiReader(rs...),
Closer: &cs,
}, nil
}
} else if newStart := start - chunkSize; newStart >= 0 {
start = newStart
} else {
l, o, err := op.Link(ctx, remoteStorage, stdpath.Join(remoteActualPath, d.getPartName(idx)), args)
if err != nil {
_ = cs.Close()
return nil, err
}
cs = append(cs, l)
chunkSize2 := l.ContentLength
if chunkSize2 <= 0 {
chunkSize2 = o.GetSize()
}
if chunkSize2 != chunkSize {
_ = cs.Close()
return nil, fmt.Errorf("chunk part[%d] size not match", idx)
}
rrf, err := stream.GetRangeReaderFromLink(chunkSize2, l)
if err != nil {
_ = cs.Close()
return nil, err
}
rc, err = rrf.RangeRead(ctx, http_range.Range{Start: start, Length: -1})
if err != nil {
_ = cs.Close()
return nil, err
}
length -= chunkSize2 - start
cs = append(cs, rc)
if length <= 0 {
return utils.ReadCloser{
Reader: rc,
Closer: &cs,
}, nil
}
rs = append(rs, rc)
readFrom = true
}
}
return nil, fmt.Errorf("invalid range: start=%d,length=%d,fileSize=%d", httpRange.Start, httpRange.Length, fileSize)
}
return &model.Link{
RangeReader: stream.RangeReaderFunc(mergedRrf),
}, nil
}
func (d *Chunk) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
path := stdpath.Join(d.RemotePath, parentDir.GetPath(), dirName)
return fs.MakeDir(ctx, path)
}
func (d *Chunk) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
src := stdpath.Join(d.RemotePath, srcObj.GetPath())
dst := stdpath.Join(d.RemotePath, dstDir.GetPath())
_, err := fs.Move(ctx, src, dst)
return err
}
func (d *Chunk) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
if _, ok := srcObj.(*chunkObject); ok {
newName = "[openlist_chunk]" + newName
}
return fs.Rename(ctx, stdpath.Join(d.RemotePath, srcObj.GetPath()), newName)
}
func (d *Chunk) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
dst := stdpath.Join(d.RemotePath, dstDir.GetPath())
src := stdpath.Join(d.RemotePath, srcObj.GetPath())
_, err := fs.Copy(ctx, src, dst)
return err
}
func (d *Chunk) Remove(ctx context.Context, obj model.Obj) error {
return fs.Remove(ctx, stdpath.Join(d.RemotePath, obj.GetPath()))
}
func (d *Chunk) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return err
}
if d.Thumbnail && dstDir.GetName() == ".thumbnails" {
return op.Put(ctx, remoteStorage, stdpath.Join(remoteActualPath, dstDir.GetPath()), file, up)
}
upReader := &driver.ReaderUpdatingProgress{
Reader: file,
UpdateProgress: up,
}
dst := stdpath.Join(remoteActualPath, dstDir.GetPath(), "[openlist_chunk]"+file.GetName())
if d.StoreHash {
for ht, value := range file.GetHash().All() {
_ = op.Put(ctx, remoteStorage, dst, &stream.FileStream{
Obj: &model.Object{
Name: fmt.Sprintf("hash_%s_%s%s", ht.Name, value, d.CustomExt),
Size: 1,
Modified: file.ModTime(),
},
Mimetype: "application/octet-stream",
Reader: bytes.NewReader([]byte{0}), // for compatibility with drivers that do not support empty files
}, nil, true)
}
}
fullPartCount := int(file.GetSize() / d.PartSize)
tailSize := file.GetSize() % d.PartSize
if tailSize == 0 && fullPartCount > 0 {
fullPartCount--
tailSize = d.PartSize
}
partIndex := 0
for partIndex < fullPartCount {
err = op.Put(ctx, remoteStorage, dst, &stream.FileStream{
Obj: &model.Object{
Name: d.getPartName(partIndex),
Size: d.PartSize,
Modified: file.ModTime(),
},
Mimetype: file.GetMimetype(),
Reader: io.LimitReader(upReader, d.PartSize),
}, nil, true)
if err != nil {
_ = op.Remove(ctx, remoteStorage, dst)
return err
}
partIndex++
}
err = op.Put(ctx, remoteStorage, dst, &stream.FileStream{
Obj: &model.Object{
Name: d.getPartName(fullPartCount),
Size: tailSize,
Modified: file.ModTime(),
},
Mimetype: file.GetMimetype(),
Reader: upReader,
}, nil)
if err != nil {
_ = op.Remove(ctx, remoteStorage, dst)
}
return err
}
func (d *Chunk) getPartName(part int) string {
return fmt.Sprintf("%d%s", part, d.CustomExt)
}
func (d *Chunk) GetDetails(ctx context.Context) (*model.StorageDetails, error) {
remoteStorage, err := fs.GetStorage(d.RemotePath, &fs.GetStoragesArgs{})
if err != nil {
return nil, errs.NotImplement
}
wd, ok := remoteStorage.(driver.WithDetails)
if !ok {
return nil, errs.NotImplement
}
remoteDetails, err := wd.GetDetails(ctx)
if err != nil {
return nil, err
}
return &model.StorageDetails{
DiskUsage: remoteDetails.DiskUsage,
}, nil
}
var _ driver.Driver = (*Chunk)(nil)

View File

@ -1,31 +0,0 @@
package chunk
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
RemotePath string `json:"remote_path" required:"true"`
PartSize int64 `json:"part_size" required:"true" type:"number" help:"bytes"`
CustomExt string `json:"custom_ext" type:"string"`
StoreHash bool `json:"store_hash" type:"bool" default:"true"`
Thumbnail bool `json:"thumbnail" required:"true" default:"false" help:"enable thumbnail which pre-generated under .thumbnails folder"`
ShowHidden bool `json:"show_hidden" default:"true" required:"false" help:"show hidden directories and files"`
}
var config = driver.Config{
Name: "Chunk",
LocalSort: true,
OnlyProxy: true,
NoCache: true,
DefaultRoot: "/",
NoLinkURL: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Chunk{}
})
}

View File

@ -1,8 +0,0 @@
package chunk
import "github.com/OpenListTeam/OpenList/v4/internal/model"
type chunkObject struct {
model.Object
chunkSizes []int64
}

View File

@ -1,230 +0,0 @@
package cnb_releases
import (
"bytes"
"context"
"fmt"
"io"
"mime/multipart"
"net/http"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
)
type CnbReleases struct {
model.Storage
Addition
ref *CnbReleases
}
func (d *CnbReleases) Config() driver.Config {
return config
}
func (d *CnbReleases) GetAddition() driver.Additional {
return &d.Addition
}
func (d *CnbReleases) Init(ctx context.Context) error {
return nil
}
func (d *CnbReleases) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*CnbReleases)
if ok {
d.ref = refStorage
return nil
}
return fmt.Errorf("ref: storage is not CnbReleases")
}
func (d *CnbReleases) Drop(ctx context.Context) error {
d.ref = nil
return nil
}
func (d *CnbReleases) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if dir.GetPath() == "/" {
// get all releases for root dir
var resp ReleaseList
err := d.Request(http.MethodGet, "/{repo}/-/releases", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
}, &resp)
if err != nil {
return nil, err
}
return utils.SliceConvert(resp, func(src Release) (model.Obj, error) {
name := src.Name
if d.UseTagName {
name = src.TagName
}
return &model.Object{
ID: src.ID,
Name: name,
Size: d.sumAssetsSize(src.Assets),
Ctime: src.CreatedAt,
Modified: src.UpdatedAt,
IsFolder: true,
}, nil
})
} else {
// get release info by release id
releaseID := dir.GetID()
if releaseID == "" {
return nil, errs.ObjectNotFound
}
var resp Release
err := d.Request(http.MethodGet, "/{repo}/-/releases/{release_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", releaseID)
}, &resp)
if err != nil {
return nil, err
}
return utils.SliceConvert(resp.Assets, func(src ReleaseAsset) (model.Obj, error) {
return &Object{
Object: model.Object{
ID: src.ID,
Path: src.Path,
Name: src.Name,
Size: src.Size,
Ctime: src.CreatedAt,
Modified: src.UpdatedAt,
IsFolder: false,
},
ParentID: dir.GetID(),
}, nil
})
}
}
func (d *CnbReleases) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
return &model.Link{
URL: "https://cnb.cool" + file.GetPath(),
}, nil
}
func (d *CnbReleases) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if parentDir.GetPath() == "/" {
// create a new release
branch := d.DefaultBranch
if branch == "" {
branch = "main" // fallback to "main" if not set
}
return d.Request(http.MethodPost, "/{repo}/-/releases", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetBody(base.Json{
"name": dirName,
"tag_name": dirName,
"target_commitish": branch,
})
}, nil)
}
return errs.NotImplement
}
func (d *CnbReleases) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotImplement
}
func (d *CnbReleases) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
if srcObj.IsDir() && !d.UseTagName {
return d.Request(http.MethodPatch, "/{repo}/-/releases/{release_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", srcObj.GetID())
req.SetFormData(map[string]string{
"name": newName,
})
}, nil)
}
return errs.NotImplement
}
func (d *CnbReleases) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotImplement
}
func (d *CnbReleases) Remove(ctx context.Context, obj model.Obj) error {
if obj.IsDir() {
return d.Request(http.MethodDelete, "/{repo}/-/releases/{release_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", obj.GetID())
}, nil)
}
if o, ok := obj.(*Object); ok {
return d.Request(http.MethodDelete, "/{repo}/-/releases/{release_id}/assets/{asset_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", o.ParentID)
req.SetPathParam("asset_id", obj.GetID())
}, nil)
} else {
return fmt.Errorf("unable to get release ID")
}
}
func (d *CnbReleases) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
// 1. get upload info
var resp ReleaseAssetUploadURL
err := d.Request(http.MethodPost, "/{repo}/-/releases/{release_id}/asset-upload-url", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", dstDir.GetID())
req.SetBody(base.Json{
"asset_name": file.GetName(),
"overwrite": true,
"size": file.GetSize(),
})
}, &resp)
if err != nil {
return err
}
// 2. upload file
// use multipart to create form file
var b bytes.Buffer
w := multipart.NewWriter(&b)
_, err = w.CreateFormFile("file", file.GetName())
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, file, tail))
// use net/http to upload file
ctxWithTimeout, cancel := context.WithTimeout(ctx, time.Duration(resp.ExpiresInSec+1)*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctxWithTimeout, http.MethodPost, resp.UploadURL, rateLimitedRd)
if err != nil {
return err
}
req.Header.Set("Content-Type", w.FormDataContentType())
req.Header.Set("User-Agent", base.UserAgent)
httpResp, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer httpResp.Body.Close()
if httpResp.StatusCode != http.StatusNoContent {
return fmt.Errorf("upload file failed: %s", httpResp.Status)
}
// 3. verify upload
return d.Request(http.MethodPost, resp.VerifyURL, nil, nil)
}
var _ driver.Driver = (*CnbReleases)(nil)

View File

@ -1,26 +0,0 @@
package cnb_releases
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Repo string `json:"repo" type:"string" required:"true"`
Token string `json:"token" type:"string" required:"true"`
UseTagName bool `json:"use_tag_name" type:"bool" default:"false" help:"Use tag name instead of release name"`
DefaultBranch string `json:"default_branch" type:"string" default:"main" help:"Default branch for new releases"`
}
var config = driver.Config{
Name: "CNB Releases",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &CnbReleases{}
})
}

View File

@ -1,100 +0,0 @@
package cnb_releases
import (
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
)
type Object struct {
model.Object
ParentID string
}
type TagList []Tag
type Tag struct {
Commit struct {
Author UserInfo `json:"author"`
Commit CommitObject `json:"commit"`
Committer UserInfo `json:"committer"`
Parents []CommitParent `json:"parents"`
Sha string `json:"sha"`
} `json:"commit"`
Name string `json:"name"`
Target string `json:"target"`
TargetType string `json:"target_type"`
Verification TagObjectVerification `json:"verification"`
}
type UserInfo struct {
Freeze bool `json:"freeze"`
Nickname string `json:"nickname"`
Username string `json:"username"`
}
type CommitObject struct {
Author Signature `json:"author"`
CommentCount int `json:"comment_count"`
Committer Signature `json:"committer"`
Message string `json:"message"`
Tree CommitObjectTree `json:"tree"`
Verification CommitObjectVerification `json:"verification"`
}
type Signature struct {
Date time.Time `json:"date"`
Email string `json:"email"`
Name string `json:"name"`
}
type CommitObjectTree struct {
Sha string `json:"sha"`
}
type CommitObjectVerification struct {
Payload string `json:"payload"`
Reason string `json:"reason"`
Signature string `json:"signature"`
Verified bool `json:"verified"`
VerifiedAt string `json:"verified_at"`
}
type CommitParent = CommitObjectTree
type TagObjectVerification = CommitObjectVerification
type ReleaseList []Release
type Release struct {
Assets []ReleaseAsset `json:"assets"`
Author UserInfo `json:"author"`
Body string `json:"body"`
CreatedAt time.Time `json:"created_at"`
Draft bool `json:"draft"`
ID string `json:"id"`
IsLatest bool `json:"is_latest"`
Name string `json:"name"`
Prerelease bool `json:"prerelease"`
PublishedAt time.Time `json:"published_at"`
TagCommitish string `json:"tag_commitish"`
TagName string `json:"tag_name"`
UpdatedAt time.Time `json:"updated_at"`
}
type ReleaseAsset struct {
ContentType string `json:"content_type"`
CreatedAt time.Time `json:"created_at"`
ID string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
Size int64 `json:"size"`
UpdatedAt time.Time `json:"updated_at"`
Uploader UserInfo `json:"uploader"`
}
type ReleaseAssetUploadURL struct {
UploadURL string `json:"upload_url"`
ExpiresInSec int `json:"expires_in_sec"`
VerifyURL string `json:"verify_url"`
}

View File

@ -1,58 +0,0 @@
package cnb_releases
import (
"encoding/json"
"fmt"
"net/http"
"strings"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
log "github.com/sirupsen/logrus"
)
// do others that not defined in Driver interface
func (d *CnbReleases) Request(method string, path string, callback base.ReqCallback, resp any) error {
if d.ref != nil {
return d.ref.Request(method, path, callback, resp)
}
var url string
if strings.HasPrefix(path, "http") {
url = path
} else {
url = "https://api.cnb.cool" + path
}
req := base.RestyClient.R()
req.SetHeader("Accept", "application/json")
req.SetAuthScheme("Bearer")
req.SetAuthToken(d.Token)
if callback != nil {
callback(req)
}
res, err := req.Execute(method, url)
log.Debugln(res.String())
if err != nil {
return err
}
if res.StatusCode() != http.StatusOK && res.StatusCode() != http.StatusCreated && res.StatusCode() != http.StatusNoContent {
return fmt.Errorf("failed to request %s, status code: %d, message: %s", url, res.StatusCode(), res.String())
}
if resp != nil {
err = json.Unmarshal(res.Body(), resp)
if err != nil {
return err
}
}
return nil
}
func (d *CnbReleases) sumAssetsSize(assets []ReleaseAsset) int64 {
var size int64
for _, asset := range assets {
size += asset.Size
}
return size
}

View File

@ -1,203 +0,0 @@
package degoo
import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
type Degoo struct {
model.Storage
Addition
client *http.Client
}
func (d *Degoo) Config() driver.Config {
return config
}
func (d *Degoo) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Degoo) Init(ctx context.Context) error {
d.client = base.HttpClient
// Ensure we have a valid token (will login if needed or refresh if expired)
if err := d.ensureValidToken(ctx); err != nil {
return fmt.Errorf("failed to initialize token: %w", err)
}
return d.getDevices(ctx)
}
func (d *Degoo) Drop(ctx context.Context) error {
return nil
}
func (d *Degoo) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
items, err := d.getAllFileChildren5(ctx, dir.GetID())
if err != nil {
return nil, err
}
return utils.MustSliceConvert(items, func(s DegooFileItem) model.Obj {
isFolder := s.Category == 2 || s.Category == 1 || s.Category == 10
createTime, modTime, _ := humanReadableTimes(s.CreationTime, s.LastModificationTime, s.LastUploadTime)
size, err := strconv.ParseInt(s.Size, 10, 64)
if err != nil {
size = 0 // Default to 0 if size parsing fails
}
return &model.Object{
ID: s.ID,
Path: s.FilePath,
Name: s.Name,
Size: size,
Modified: modTime,
Ctime: createTime,
IsFolder: isFolder,
}
}), nil
}
func (d *Degoo) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
item, err := d.getOverlay4(ctx, file.GetID())
if err != nil {
return nil, err
}
return &model.Link{URL: item.URL}, nil
}
func (d *Degoo) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
// This is done by calling the setUploadFile3 API with a special checksum and size.
const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) { setUploadFile3(Token: $Token, FileInfos: $FileInfos) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileInfos": []map[string]interface{}{
{
"Checksum": folderChecksum,
"Name": dirName,
"CreationTime": time.Now().UnixMilli(),
"ParentID": parentDir.GetID(),
"Size": 0,
},
},
}
_, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
if err != nil {
return err
}
return nil
}
func (d *Degoo) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
const query = `mutation SetMoveFile($Token: String!, $Copy: Boolean, $NewParentID: String!, $FileIDs: [String]!) { setMoveFile(Token: $Token, Copy: $Copy, NewParentID: $NewParentID, FileIDs: $FileIDs) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"Copy": false,
"NewParentID": dstDir.GetID(),
"FileIDs": []string{srcObj.GetID()},
}
_, err := d.apiCall(ctx, "SetMoveFile", query, variables)
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Degoo) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
const query = `mutation SetRenameFile($Token: String!, $FileRenames: [FileRenameInfo]!) { setRenameFile(Token: $Token, FileRenames: $FileRenames) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileRenames": []DegooFileRenameInfo{
{
ID: srcObj.GetID(),
NewName: newName,
},
},
}
_, err := d.apiCall(ctx, "SetRenameFile", query, variables)
if err != nil {
return err
}
return nil
}
func (d *Degoo) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// Copy is not implemented, Degoo API does not support direct copy.
return nil, errs.NotImplement
}
func (d *Degoo) Remove(ctx context.Context, obj model.Obj) error {
// Remove deletes a file or folder (moves to trash).
const query = `mutation SetDeleteFile5($Token: String!, $IsInRecycleBin: Boolean!, $IDs: [IDType]!) { setDeleteFile5(Token: $Token, IsInRecycleBin: $IsInRecycleBin, IDs: $IDs) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"IsInRecycleBin": false,
"IDs": []map[string]string{{"FileID": obj.GetID()}},
}
_, err := d.apiCall(ctx, "SetDeleteFile5", query, variables)
return err
}
func (d *Degoo) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
tmpF, err := file.CacheFullAndWriter(&up, nil)
if err != nil {
return err
}
parentID := dstDir.GetID()
// Calculate the checksum for the file.
checksum, err := d.checkSum(tmpF)
if err != nil {
return err
}
// 1. Get upload authorization via getBucketWriteAuth4.
auths, err := d.getBucketWriteAuth4(ctx, file, parentID, checksum)
if err != nil {
return err
}
// 2. Upload file.
// support rapid upload
if auths.GetBucketWriteAuth4[0].Error != "Already exist!" {
err = d.uploadS3(ctx, auths, tmpF, file, checksum)
if err != nil {
return err
}
}
// 3. Register metadata with setUploadFile3.
data, err := d.SetUploadFile3(ctx, file, parentID, checksum)
if err != nil {
return err
}
if !data.SetUploadFile3 {
return fmt.Errorf("setUploadFile3 failed: %v", data)
}
return nil
}

View File

@ -1,27 +0,0 @@
package degoo
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootID
Username string `json:"username" help:"Your Degoo account email"`
Password string `json:"password" help:"Your Degoo account password"`
RefreshToken string `json:"refresh_token" help:"Refresh token for automatic token renewal, obtained automatically"`
AccessToken string `json:"access_token" help:"Access token for Degoo API, obtained automatically"`
}
var config = driver.Config{
Name: "Degoo",
LocalSort: true,
DefaultRoot: "0",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Degoo{}
})
}

View File

@ -1,110 +0,0 @@
package degoo
import (
"encoding/json"
)
// DegooLoginRequest represents the login request body.
type DegooLoginRequest struct {
GenerateToken bool `json:"GenerateToken"`
Username string `json:"Username"`
Password string `json:"Password"`
}
// DegooLoginResponse represents a successful login response.
type DegooLoginResponse struct {
Token string `json:"Token"`
RefreshToken string `json:"RefreshToken"`
}
// DegooAccessTokenRequest represents the token refresh request body.
type DegooAccessTokenRequest struct {
RefreshToken string `json:"RefreshToken"`
}
// DegooAccessTokenResponse represents the token refresh response.
type DegooAccessTokenResponse struct {
AccessToken string `json:"AccessToken"`
}
// DegooFileItem represents a Degoo file or folder.
type DegooFileItem struct {
ID string `json:"ID"`
ParentID string `json:"ParentID"`
Name string `json:"Name"`
Category int `json:"Category"`
Size string `json:"Size"`
URL string `json:"URL"`
CreationTime string `json:"CreationTime"`
LastModificationTime string `json:"LastModificationTime"`
LastUploadTime string `json:"LastUploadTime"`
MetadataID string `json:"MetadataID"`
DeviceID int64 `json:"DeviceID"`
FilePath string `json:"FilePath"`
IsInRecycleBin bool `json:"IsInRecycleBin"`
}
type DegooErrors struct {
Path []string `json:"path"`
Data interface{} `json:"data"`
ErrorType string `json:"errorType"`
ErrorInfo interface{} `json:"errorInfo"`
Message string `json:"message"`
}
// DegooGraphqlResponse is the common structure for GraphQL API responses.
type DegooGraphqlResponse struct {
Data json.RawMessage `json:"data"`
Errors []DegooErrors `json:"errors,omitempty"`
}
// DegooGetChildren5Data is the data field for getFileChildren5.
type DegooGetChildren5Data struct {
GetFileChildren5 struct {
Items []DegooFileItem `json:"Items"`
NextToken string `json:"NextToken"`
} `json:"getFileChildren5"`
}
// DegooGetOverlay4Data is the data field for getOverlay4.
type DegooGetOverlay4Data struct {
GetOverlay4 DegooFileItem `json:"getOverlay4"`
}
// DegooFileRenameInfo represents a file rename operation.
type DegooFileRenameInfo struct {
ID string `json:"ID"`
NewName string `json:"NewName"`
}
// DegooFileIDs represents a list of file IDs for move operations.
type DegooFileIDs struct {
FileIDs []string `json:"FileIDs"`
}
// DegooGetBucketWriteAuth4Data is the data field for GetBucketWriteAuth4.
type DegooGetBucketWriteAuth4Data struct {
GetBucketWriteAuth4 []struct {
AuthData struct {
PolicyBase64 string `json:"PolicyBase64"`
Signature string `json:"Signature"`
BaseURL string `json:"BaseURL"`
KeyPrefix string `json:"KeyPrefix"`
AccessKey struct {
Key string `json:"Key"`
Value string `json:"Value"`
} `json:"AccessKey"`
ACL string `json:"ACL"`
AdditionalBody []struct {
Key string `json:"Key"`
Value string `json:"Value"`
} `json:"AdditionalBody"`
} `json:"AuthData"`
Error interface{} `json:"Error"`
} `json:"getBucketWriteAuth4"`
}
// DegooSetUploadFile3Data is the data field for SetUploadFile3.
type DegooSetUploadFile3Data struct {
SetUploadFile3 bool `json:"setUploadFile3"`
}

View File

@ -1,198 +0,0 @@
package degoo
import (
"bytes"
"context"
"crypto/sha1"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http"
"strconv"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
func (d *Degoo) getBucketWriteAuth4(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooGetBucketWriteAuth4Data, error) {
const query = `query GetBucketWriteAuth4(
$Token: String!
$ParentID: String!
$StorageUploadInfos: [StorageUploadInfo2]
) {
getBucketWriteAuth4(
Token: $Token
ParentID: $ParentID
StorageUploadInfos: $StorageUploadInfos
) {
AuthData {
PolicyBase64
Signature
BaseURL
KeyPrefix
AccessKey {
Key
Value
}
ACL
AdditionalBody {
Key
Value
}
}
Error
}
}`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": parentID,
"StorageUploadInfos": []map[string]string{{
"FileName": file.GetName(),
"Checksum": checksum,
"Size": strconv.FormatInt(file.GetSize(), 10),
}}}
data, err := d.apiCall(ctx, "GetBucketWriteAuth4", query, variables)
if err != nil {
return nil, err
}
var resp DegooGetBucketWriteAuth4Data
err = json.Unmarshal(data, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
// checkSum calculates the SHA1-based checksum for Degoo upload API.
func (d *Degoo) checkSum(file io.Reader) (string, error) {
seed := []byte{13, 7, 2, 2, 15, 40, 75, 117, 13, 10, 19, 16, 29, 23, 3, 36}
hasher := sha1.New()
hasher.Write(seed)
if _, err := utils.CopyWithBuffer(hasher, file); err != nil {
return "", err
}
cs := hasher.Sum(nil)
csBytes := []byte{10, byte(len(cs))}
csBytes = append(csBytes, cs...)
csBytes = append(csBytes, 16, 0)
return strings.ReplaceAll(base64.StdEncoding.EncodeToString(csBytes), "/", "_"), nil
}
func (d *Degoo) uploadS3(ctx context.Context, auths *DegooGetBucketWriteAuth4Data, tmpF model.File, file model.FileStreamer, checksum string) error {
a := auths.GetBucketWriteAuth4[0].AuthData
_, err := tmpF.Seek(0, io.SeekStart)
if err != nil {
return err
}
ext := utils.Ext(file.GetName())
key := fmt.Sprintf("%s%s/%s.%s", a.KeyPrefix, ext, checksum, ext)
var b bytes.Buffer
w := multipart.NewWriter(&b)
err = w.WriteField("key", key)
if err != nil {
return err
}
err = w.WriteField("acl", a.ACL)
if err != nil {
return err
}
err = w.WriteField("policy", a.PolicyBase64)
if err != nil {
return err
}
err = w.WriteField("signature", a.Signature)
if err != nil {
return err
}
err = w.WriteField(a.AccessKey.Key, a.AccessKey.Value)
if err != nil {
return err
}
for _, additional := range a.AdditionalBody {
err = w.WriteField(additional.Key, additional.Value)
if err != nil {
return err
}
}
err = w.WriteField("Content-Type", "")
if err != nil {
return err
}
_, err = w.CreateFormFile("file", key)
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, tmpF, tail))
req, err := http.NewRequestWithContext(ctx, http.MethodPost, a.BaseURL, rateLimitedRd)
if err != nil {
return err
}
req.Header.Add("ngsw-bypass", "1")
req.Header.Add("Content-Type", w.FormDataContentType())
res, err := d.client.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
return fmt.Errorf("upload failed with status code %d", res.StatusCode)
}
return nil
}
var _ driver.Driver = (*Degoo)(nil)
func (d *Degoo) SetUploadFile3(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooSetUploadFile3Data, error) {
const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) {
setUploadFile3(Token: $Token, FileInfos: $FileInfos)
}`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileInfos": []map[string]string{{
"Checksum": checksum,
"CreationTime": strconv.FormatInt(file.CreateTime().UnixMilli(), 10),
"Name": file.GetName(),
"ParentID": parentID,
"Size": strconv.FormatInt(file.GetSize(), 10),
}}}
data, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
if err != nil {
return nil, err
}
var resp DegooSetUploadFile3Data
err = json.Unmarshal(data, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}

View File

@ -1,462 +0,0 @@
package degoo
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
// Thanks to https://github.com/bernd-wechner/Degoo for API research.
const (
// API endpoints
loginURL = "https://rest-api.degoo.com/login"
accessTokenURL = "https://rest-api.degoo.com/access-token/v2"
apiURL = "https://production-appsync.degoo.com/graphql"
// API configuration
apiKey = "da2-vs6twz5vnjdavpqndtbzg3prra"
folderChecksum = "CgAQAg"
// Token management
tokenRefreshThreshold = 5 * time.Minute
// Rate limiting
minRequestInterval = 1 * time.Second
// Error messages
errRateLimited = "rate limited (429), please try again later"
errUnauthorized = "unauthorized access"
)
var (
// Global rate limiting - protects against concurrent API calls
lastRequestTime time.Time
requestMutex sync.Mutex
)
// JWT payload structure for token expiration checking
type JWTPayload struct {
UserID string `json:"userID"`
Exp int64 `json:"exp"`
Iat int64 `json:"iat"`
}
// Rate limiting helper functions
// applyRateLimit ensures minimum interval between API requests
func applyRateLimit() {
requestMutex.Lock()
defer requestMutex.Unlock()
if !lastRequestTime.IsZero() {
if elapsed := time.Since(lastRequestTime); elapsed < minRequestInterval {
time.Sleep(minRequestInterval - elapsed)
}
}
lastRequestTime = time.Now()
}
// HTTP request helper functions
// createJSONRequest creates a new HTTP request with JSON body
func createJSONRequest(ctx context.Context, method, url string, body interface{}) (*http.Request, error) {
jsonBody, err := json.Marshal(body)
if err != nil {
return nil, fmt.Errorf("failed to marshal request body: %w", err)
}
req, err := http.NewRequestWithContext(ctx, method, url, bytes.NewBuffer(jsonBody))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", base.UserAgent)
return req, nil
}
// checkHTTPResponse checks for common HTTP error conditions
func checkHTTPResponse(resp *http.Response, operation string) error {
if resp.StatusCode == http.StatusTooManyRequests {
return fmt.Errorf("%s %s", operation, errRateLimited)
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("%s failed: %s", operation, resp.Status)
}
return nil
}
// isTokenExpired checks if the JWT token is expired or will expire soon
func (d *Degoo) isTokenExpired() bool {
if d.AccessToken == "" {
return true
}
payload, err := extractJWTPayload(d.AccessToken)
if err != nil {
return true // Invalid token format
}
// Check if token expires within the threshold
expireTime := time.Unix(payload.Exp, 0)
return time.Now().Add(tokenRefreshThreshold).After(expireTime)
}
// extractJWTPayload extracts and parses JWT payload
func extractJWTPayload(token string) (*JWTPayload, error) {
parts := strings.Split(token, ".")
if len(parts) != 3 {
return nil, fmt.Errorf("invalid JWT format")
}
// Decode the payload (second part)
payload, err := base64.RawURLEncoding.DecodeString(parts[1])
if err != nil {
return nil, fmt.Errorf("failed to decode JWT payload: %w", err)
}
var jwtPayload JWTPayload
if err := json.Unmarshal(payload, &jwtPayload); err != nil {
return nil, fmt.Errorf("failed to parse JWT payload: %w", err)
}
return &jwtPayload, nil
}
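A small usage sketch for the JWT helpers above; the token value is assumed to be a standard three-part JWT such as the Degoo access token:

package degoo

import (
	"fmt"
	"time"
)

// printTokenExpiry is illustrative only.
func printTokenExpiry(token string) error {
	payload, err := extractJWTPayload(token)
	if err != nil {
		return err
	}
	exp := time.Unix(payload.Exp, 0)
	fmt.Printf("userID=%s expires=%s (in %s)\n",
		payload.UserID, exp.Format(time.RFC3339), time.Until(exp).Round(time.Second))
	return nil
}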
// refreshToken attempts to refresh the access token using the refresh token
func (d *Degoo) refreshToken(ctx context.Context) error {
if d.RefreshToken == "" {
return fmt.Errorf("no refresh token available")
}
// Create request
tokenReq := DegooAccessTokenRequest{RefreshToken: d.RefreshToken}
req, err := createJSONRequest(ctx, "POST", accessTokenURL, tokenReq)
if err != nil {
return fmt.Errorf("failed to create refresh token request: %w", err)
}
// Execute request
resp, err := d.client.Do(req)
if err != nil {
return fmt.Errorf("refresh token request failed: %w", err)
}
defer resp.Body.Close()
// Check response
if err := checkHTTPResponse(resp, "refresh token"); err != nil {
return err
}
var accessTokenResp DegooAccessTokenResponse
if err := json.NewDecoder(resp.Body).Decode(&accessTokenResp); err != nil {
return fmt.Errorf("failed to parse access token response: %w", err)
}
if accessTokenResp.AccessToken == "" {
return fmt.Errorf("empty access token received")
}
d.AccessToken = accessTokenResp.AccessToken
// Save the updated token to storage
op.MustSaveDriverStorage(d)
return nil
}
// ensureValidToken ensures we have a valid, non-expired token
func (d *Degoo) ensureValidToken(ctx context.Context) error {
// Check if token is expired or will expire soon
if d.isTokenExpired() {
// Try to refresh token first if we have a refresh token
if d.RefreshToken != "" {
if refreshErr := d.refreshToken(ctx); refreshErr == nil {
return nil // Successfully refreshed
} else {
// If refresh failed, fall back to full login
fmt.Printf("Token refresh failed, falling back to full login: %v\n", refreshErr)
}
}
// Perform full login
if d.Username != "" && d.Password != "" {
return d.login(ctx)
}
}
return nil
}
// login performs the login process and retrieves the access token.
func (d *Degoo) login(ctx context.Context) error {
if d.Username == "" || d.Password == "" {
return fmt.Errorf("username or password not provided")
}
creds := DegooLoginRequest{
GenerateToken: true,
Username: d.Username,
Password: d.Password,
}
jsonCreds, err := json.Marshal(creds)
if err != nil {
return fmt.Errorf("failed to serialize login credentials: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "POST", loginURL, bytes.NewBuffer(jsonCreds))
if err != nil {
return fmt.Errorf("failed to create login request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", base.UserAgent)
req.Header.Set("Origin", "https://app.degoo.com")
resp, err := d.client.Do(req)
if err != nil {
return fmt.Errorf("login request failed: %w", err)
}
defer resp.Body.Close()
// Handle rate limiting (429 Too Many Requests)
if resp.StatusCode == http.StatusTooManyRequests {
return fmt.Errorf("login rate limited (429), please try again later")
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("login failed: %s", resp.Status)
}
var loginResp DegooLoginResponse
if err := json.NewDecoder(resp.Body).Decode(&loginResp); err != nil {
return fmt.Errorf("failed to parse login response: %w", err)
}
if loginResp.RefreshToken != "" {
tokenReq := DegooAccessTokenRequest{RefreshToken: loginResp.RefreshToken}
jsonTokenReq, err := json.Marshal(tokenReq)
if err != nil {
return fmt.Errorf("failed to serialize access token request: %w", err)
}
tokenReqHTTP, err := http.NewRequestWithContext(ctx, "POST", accessTokenURL, bytes.NewBuffer(jsonTokenReq))
if err != nil {
return fmt.Errorf("failed to create access token request: %w", err)
}
tokenReqHTTP.Header.Set("User-Agent", base.UserAgent)
tokenResp, err := d.client.Do(tokenReqHTTP)
if err != nil {
return fmt.Errorf("failed to get access token: %w", err)
}
defer tokenResp.Body.Close()
var accessTokenResp DegooAccessTokenResponse
if err := json.NewDecoder(tokenResp.Body).Decode(&accessTokenResp); err != nil {
return fmt.Errorf("failed to parse access token response: %w", err)
}
d.AccessToken = accessTokenResp.AccessToken
d.RefreshToken = loginResp.RefreshToken // Save refresh token
} else if loginResp.Token != "" {
d.AccessToken = loginResp.Token
d.RefreshToken = "" // Direct token, no refresh token available
} else {
return fmt.Errorf("login failed, no valid token returned")
}
// Save the updated tokens to storage
op.MustSaveDriverStorage(d)
return nil
}
// apiCall performs a Degoo GraphQL API request.
func (d *Degoo) apiCall(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
// Apply rate limiting
applyRateLimit()
// Ensure we have a valid token before making the API call
if err := d.ensureValidToken(ctx); err != nil {
return nil, fmt.Errorf("failed to ensure valid token: %w", err)
}
// Update the Token in variables if it exists (after potential refresh)
d.updateTokenInVariables(variables)
return d.executeGraphQLRequest(ctx, operationName, query, variables)
}
// updateTokenInVariables updates the Token field in GraphQL variables
func (d *Degoo) updateTokenInVariables(variables map[string]interface{}) {
if variables != nil {
if _, hasToken := variables["Token"]; hasToken {
variables["Token"] = d.AccessToken
}
}
}
// executeGraphQLRequest executes a GraphQL request with retry logic
func (d *Degoo) executeGraphQLRequest(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
reqBody := map[string]interface{}{
"operationName": operationName,
"query": query,
"variables": variables,
}
// Create and configure request
req, err := createJSONRequest(ctx, "POST", apiURL, reqBody)
if err != nil {
return nil, err
}
// Set Degoo-specific headers
req.Header.Set("x-api-key", apiKey)
if d.AccessToken != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", d.AccessToken))
}
// Execute request
resp, err := d.client.Do(req)
if err != nil {
return nil, fmt.Errorf("GraphQL API request failed: %w", err)
}
defer resp.Body.Close()
// Check for HTTP errors
if err := checkHTTPResponse(resp, "GraphQL API"); err != nil {
return nil, err
}
// Parse GraphQL response
var degooResp DegooGraphqlResponse
if err := json.NewDecoder(resp.Body).Decode(&degooResp); err != nil {
return nil, fmt.Errorf("failed to decode GraphQL response: %w", err)
}
// Handle GraphQL errors
if len(degooResp.Errors) > 0 {
return d.handleGraphQLError(ctx, degooResp.Errors[0], operationName, query, variables)
}
return degooResp.Data, nil
}
// handleGraphQLError handles GraphQL-level errors with retry logic
func (d *Degoo) handleGraphQLError(ctx context.Context, gqlError DegooErrors, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
if gqlError.ErrorType == "Unauthorized" {
// Re-login and retry
if err := d.login(ctx); err != nil {
return nil, fmt.Errorf("%s, login failed: %w", errUnauthorized, err)
}
// Update token in variables and retry
d.updateTokenInVariables(variables)
return d.apiCall(ctx, operationName, query, variables)
}
return nil, fmt.Errorf("GraphQL API error: %s", gqlError.Message)
}
// humanReadableTimes converts Degoo timestamps to Go time.Time.
func humanReadableTimes(creation, modification, upload string) (cTime, mTime, uTime time.Time) {
cTime, _ = time.Parse(time.RFC3339, creation)
if modification != "" {
modMillis, _ := strconv.ParseInt(modification, 10, 64)
mTime = time.Unix(0, modMillis*int64(time.Millisecond))
}
if upload != "" {
upMillis, _ := strconv.ParseInt(upload, 10, 64)
uTime = time.Unix(0, upMillis*int64(time.Millisecond))
}
return cTime, mTime, uTime
}
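For reference, the expected input shapes are an RFC3339 creation time and millisecond-epoch strings for modification and upload; a sketch with made-up values:

package degoo

import "fmt"

// timestampSketch shows the three input formats; values are illustrative.
func timestampSketch() {
	c, m, u := humanReadableTimes(
		"2025-08-14T21:44:34Z", // creation: RFC3339
		"1755178000000",        // modification: ms since epoch
		"1755178100000",        // upload: ms since epoch
	)
	fmt.Println(c, m, u)
}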
// getDevices fetches and caches top-level devices and folders.
func (d *Degoo) getDevices(ctx context.Context) error {
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ParentID } NextToken } }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": "0",
"Limit": 10,
"Order": 3,
}
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
if err != nil {
return err
}
var resp DegooGetChildren5Data
if err := json.Unmarshal(data, &resp); err != nil {
return fmt.Errorf("failed to parse device list: %w", err)
}
if d.RootFolderID == "0" {
if len(resp.GetFileChildren5.Items) > 0 {
d.RootFolderID = resp.GetFileChildren5.Items[0].ParentID
}
op.MustSaveDriverStorage(d)
}
return nil
}
// getAllFileChildren5 fetches all children of a directory with pagination.
func (d *Degoo) getAllFileChildren5(ctx context.Context, parentID string) ([]DegooFileItem, error) {
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime FilePath IsInRecycleBin DeviceID MetadataID } NextToken } }`
var allItems []DegooFileItem
nextToken := ""
for {
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": parentID,
"Limit": 1000,
"Order": 3,
}
if nextToken != "" {
variables["NextToken"] = nextToken
}
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
if err != nil {
return nil, err
}
var resp DegooGetChildren5Data
if err := json.Unmarshal(data, &resp); err != nil {
return nil, err
}
allItems = append(allItems, resp.GetFileChildren5.Items...)
if resp.GetFileChildren5.NextToken == "" {
break
}
nextToken = resp.GetFileChildren5.NextToken
}
return allItems, nil
}
// getOverlay4 fetches metadata for a single item by ID.
func (d *Degoo) getOverlay4(ctx context.Context, id string) (DegooFileItem, error) {
const query = `query GetOverlay4($Token: String!, $ID: IDType!) { getOverlay4(Token: $Token, ID: $ID) { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime URL FilePath IsInRecycleBin DeviceID MetadataID } }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ID": map[string]string{
"FileID": id,
},
}
data, err := d.apiCall(ctx, "GetOverlay4", query, variables)
if err != nil {
return DegooFileItem{}, err
}
var resp DegooGetOverlay4Data
if err := json.Unmarshal(data, &resp); err != nil {
return DegooFileItem{}, fmt.Errorf("failed to parse item metadata: %w", err)
}
return resp.GetOverlay4, nil
}

View File

@ -1,29 +0,0 @@
//go:build !windows
package local
import (
"io/fs"
"strings"
"syscall"
"github.com/OpenListTeam/OpenList/v4/internal/model"
)
func isHidden(f fs.FileInfo, _ string) bool {
return strings.HasPrefix(f.Name(), ".")
}
func getDiskUsage(path string) (model.DiskUsage, error) {
var stat syscall.Statfs_t
err := syscall.Statfs(path, &stat)
if err != nil {
return model.DiskUsage{}, err
}
total := stat.Blocks * uint64(stat.Bsize)
free := stat.Bfree * uint64(stat.Bsize)
return model.DiskUsage{
TotalSpace: total,
FreeSpace: free,
}, nil
}
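A usage sketch for the helper above; the mount point is hypothetical:

package local

import "fmt"

// diskUsageSketch is illustrative only.
func diskUsageSketch() error {
	du, err := getDiskUsage("/srv/data") // hypothetical path
	if err != nil {
		return err
	}
	fmt.Printf("total=%d bytes, free=%d bytes\n", du.TotalSpace, du.FreeSpace)
	return nil
}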

View File

@ -1,51 +0,0 @@
//go:build windows
package local
import (
"errors"
"io/fs"
"path/filepath"
"syscall"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"golang.org/x/sys/windows"
)
func isHidden(f fs.FileInfo, fullPath string) bool {
filePath := filepath.Join(fullPath, f.Name())
namePtr, err := syscall.UTF16PtrFromString(filePath)
if err != nil {
return false
}
attrs, err := syscall.GetFileAttributes(namePtr)
if err != nil {
return false
}
return attrs&syscall.FILE_ATTRIBUTE_HIDDEN != 0
}
func getDiskUsage(path string) (model.DiskUsage, error) {
abs, err := filepath.Abs(path)
if err != nil {
return model.DiskUsage{}, err
}
root := filepath.VolumeName(abs)
if len(root) != 2 || root[1] != ':' {
return model.DiskUsage{}, errors.New("cannot get disk label")
}
var freeBytes, totalBytes, totalFreeBytes uint64
err = windows.GetDiskFreeSpaceEx(
windows.StringToUTF16Ptr(root),
&freeBytes,
&totalBytes,
&totalFreeBytes,
)
if err != nil {
return model.DiskUsage{}, err
}
return model.DiskUsage{
TotalSpace: totalBytes,
FreeSpace: freeBytes,
}, nil
}

View File

@ -1,181 +0,0 @@
package openlist_share
import (
"context"
"fmt"
"net/http"
"net/url"
stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/go-resty/resty/v2"
)
type OpenListShare struct {
model.Storage
Addition
serverArchivePreview bool
}
func (d *OpenListShare) Config() driver.Config {
return config
}
func (d *OpenListShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *OpenListShare) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var settings common.Resp[map[string]string]
_, _, err := d.request("/public/settings", http.MethodGet, func(req *resty.Request) {
req.SetResult(&settings)
})
if err != nil {
return err
}
d.serverArchivePreview = settings.Data["share_archive_preview"] == "true"
return nil
}
func (d *OpenListShare) Drop(ctx context.Context) error {
return nil
}
func (d *OpenListShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), dir.GetPath()),
Password: d.Pwd,
Refresh: false,
})
})
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *OpenListShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, file.GetPath()))
u := fmt.Sprintf("%s/sd%s?pwd=%s", d.Address, path, d.Pwd)
return &model.Link{URL: u}, nil
}
func (d *OpenListShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
Password: d.Pwd,
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *OpenListShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
Password: d.Pwd,
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *OpenListShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, obj.GetPath()))
u := fmt.Sprintf("%s/sad%s?pwd=%s&inner=%s&pass=%s",
d.Address,
path,
d.Pwd,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password))
return &model.Link{URL: u}, nil
}
var _ driver.Driver = (*OpenListShare)(nil)

View File

@ -1,27 +0,0 @@
package openlist_share
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
ShareId string `json:"sid" required:"true"`
Pwd string `json:"pwd"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{
Name: "OpenListShare",
LocalSort: true,
NoUpload: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &OpenListShare{}
})
}
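For illustration, a storage addition for this driver might look like the following; the address, share id, and password are made up:

package openlist_share

// exampleAddition is illustrative only; none of these values are real.
var exampleAddition = Addition{
	Address:           "https://openlist.example.com",
	ShareId:           "abc123",
	Pwd:               "secret",
	ForwardArchiveReq: true,
}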

View File

@ -1,111 +0,0 @@
package openlist_share
import (
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
type ListReq struct {
model.PageReq
Path string `json:"path" form:"path"`
Password string `json:"password" form:"password"`
Refresh bool `json:"refresh"`
}
type ObjResp struct {
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfo string `json:"hashinfo"`
}
type FsListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
Readme string `json:"readme"`
Write bool `json:"write"`
Provider string `json:"provider"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}

View File

@ -1,32 +0,0 @@
package openlist_share
import (
"fmt"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
func (d *OpenListShare) request(api, method string, callback base.ReqCallback) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
if callback != nil {
callback(req)
}
res, err := req.Execute(method, url)
if err != nil {
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
if res.StatusCode() >= 400 {
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
return nil, code, fmt.Errorf("request failed, code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), 200, nil
}

View File

@ -1,137 +0,0 @@
package teldrive
import (
"fmt"
"net/http"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"golang.org/x/net/context"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
func NewCopyManager(ctx context.Context, concurrent int, d *Teldrive) *CopyManager {
g, ctx := errgroup.WithContext(ctx)
return &CopyManager{
TaskChan: make(chan CopyTask, concurrent*2),
Sem: semaphore.NewWeighted(int64(concurrent)),
G: g,
Ctx: ctx,
d: d,
}
}
func (cm *CopyManager) startWorkers() {
workerCount := cap(cm.TaskChan) / 2
for i := 0; i < workerCount; i++ {
cm.G.Go(func() error {
return cm.worker()
})
}
}
func (cm *CopyManager) worker() error {
for {
select {
case task, ok := <-cm.TaskChan:
if !ok {
return nil
}
if err := cm.Sem.Acquire(cm.Ctx, 1); err != nil {
return err
}
err := cm.processFile(task)
cm.Sem.Release(1)
if err != nil {
return fmt.Errorf("task processing failed: %w", err)
}
case <-cm.Ctx.Done():
return cm.Ctx.Err()
}
}
}
func (cm *CopyManager) generateTasks(ctx context.Context, srcObj, dstDir model.Obj) error {
if srcObj.IsDir() {
return cm.generateFolderTasks(ctx, srcObj, dstDir)
} else {
// add single file task directly
select {
case cm.TaskChan <- CopyTask{SrcObj: srcObj, DstDir: dstDir}:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
}
func (cm *CopyManager) generateFolderTasks(ctx context.Context, srcDir, dstDir model.Obj) error {
objs, err := cm.d.List(ctx, srcDir, model.ListArgs{})
if err != nil {
return fmt.Errorf("failed to list directory %s: %w", srcDir.GetPath(), err)
}
err = cm.d.MakeDir(cm.Ctx, dstDir, srcDir.GetName())
if err != nil || len(objs) == 0 {
return err
}
newDstDir := &model.Object{
ID: dstDir.GetID(),
Path: dstDir.GetPath() + "/" + srcDir.GetName(),
Name: srcDir.GetName(),
IsFolder: true,
}
for _, file := range objs {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
srcFile := &model.Object{
ID: file.GetID(),
Path: srcDir.GetPath() + "/" + file.GetName(),
Name: file.GetName(),
IsFolder: file.IsDir(),
}
// Recursively generate tasks for child objects
if err := cm.generateTasks(ctx, srcFile, newDstDir); err != nil {
return err
}
}
return nil
}
func (cm *CopyManager) processFile(task CopyTask) error {
return cm.copySingleFile(cm.Ctx, task.SrcObj, task.DstDir)
}
func (cm *CopyManager) copySingleFile(ctx context.Context, srcObj, dstDir model.Obj) error {
// `override copy mode` should delete the existing file
if obj, err := cm.d.getFile(dstDir.GetPath(), srcObj.GetName(), srcObj.IsDir()); err == nil {
if err := cm.d.Remove(ctx, obj); err != nil {
return fmt.Errorf("failed to remove existing file: %w", err)
}
}
// Do copy
return cm.d.request(http.MethodPost, "/api/files/{id}/copy", func(req *resty.Request) {
req.SetPathParam("id", srcObj.GetID())
req.SetBody(base.Json{
"newName": srcObj.GetName(),
"destination": dstDir.GetPath(),
})
}, nil)
}

View File

@ -1,217 +0,0 @@
package teldrive
import (
"context"
"fmt"
"math"
"net/http"
"net/url"
"strings"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Teldrive struct {
model.Storage
Addition
}
func (d *Teldrive) Config() driver.Config {
return config
}
func (d *Teldrive) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Teldrive) Init(ctx context.Context) error {
d.Address = strings.TrimSuffix(d.Address, "/")
if d.Cookie == "" || !strings.HasPrefix(d.Cookie, "access_token=") {
return fmt.Errorf("cookie must start with 'access_token='")
}
if d.UploadConcurrency == 0 {
d.UploadConcurrency = 4
}
if d.ChunkSize == 0 {
d.ChunkSize = 10
}
op.MustSaveDriverStorage(d)
return nil
}
func (d *Teldrive) Drop(ctx context.Context) error {
return nil
}
func (d *Teldrive) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var listResp ListResp
err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"path": dir.GetPath(),
"limit": "1000", // overide default 500, TODO pagination
})
}, &listResp)
if err != nil {
return nil, err
}
return utils.SliceConvert(listResp.Items, func(src Object) (model.Obj, error) {
return &model.Object{
ID: src.ID,
Name: src.Name,
Size: func() int64 {
if src.Type == "folder" {
return 0
}
return src.Size
}(),
IsFolder: src.Type == "folder",
Modified: src.UpdatedAt,
}, nil
})
}
func (d *Teldrive) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.UseShareLink {
shareObj, err := d.getShareFileById(file.GetID())
if err != nil || shareObj == nil {
if err := d.createShareFile(file.GetID()); err != nil {
return nil, err
}
shareObj, err = d.getShareFileById(file.GetID())
if err != nil {
return nil, err
}
}
return &model.Link{
URL: d.Address + "/api/shares/" + url.PathEscape(shareObj.Id) + "/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
}, nil
}
return &model.Link{
URL: d.Address + "/api/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
Header: http.Header{
"Cookie": {d.Cookie},
},
}, nil
}
func (d *Teldrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPost, "/api/files/mkdir", func(req *resty.Request) {
req.SetBody(map[string]interface{}{
"path": parentDir.GetPath() + "/" + dirName,
})
}, nil)
}
func (d *Teldrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
body := base.Json{
"ids": []string{srcObj.GetID()},
"destinationParent": dstDir.GetID(),
}
return d.request(http.MethodPost, "/api/files/move", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
body := base.Json{
"name": newName,
}
return d.request(http.MethodPatch, "/api/files/{id}", func(req *resty.Request) {
req.SetPathParam("id", srcObj.GetID())
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
copyConcurrentLimit := 4
copyManager := NewCopyManager(ctx, copyConcurrentLimit, d)
copyManager.startWorkers()
copyManager.G.Go(func() error {
defer close(copyManager.TaskChan)
return copyManager.generateTasks(ctx, srcObj, dstDir)
})
return copyManager.G.Wait()
}
func (d *Teldrive) Remove(ctx context.Context, obj model.Obj) error {
body := base.Json{
"ids": []string{obj.GetID()},
}
return d.request(http.MethodPost, "/api/files/delete", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
fileId := uuid.New().String()
chunkSizeInMB := d.ChunkSize
chunkSize := chunkSizeInMB * 1024 * 1024 // Convert MB to bytes
totalSize := file.GetSize()
totalParts := int(math.Ceil(float64(totalSize) / float64(chunkSize)))
maxRetried := 3
// delete the upload task when finished or failed
defer func() {
_ = d.request(http.MethodDelete, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, nil)
}()
if obj, err := d.getFile(dstDir.GetPath(), file.GetName(), file.IsDir()); err == nil {
if err = d.Remove(ctx, obj); err != nil {
return err
}
}
// start the upload process
if err := d.request(http.MethodGet, "/api/uploads/fileId", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, nil); err != nil {
return err
}
if totalSize == 0 {
return d.touch(file.GetName(), dstDir.GetPath())
}
if totalParts <= 1 {
return d.doSingleUpload(ctx, dstDir, file, up, totalParts, chunkSize, fileId)
}
return d.doMultiUpload(ctx, dstDir, file, up, maxRetried, totalParts, chunkSize, fileId)
}
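A quick check of the chunking arithmetic above: with the default 10 MiB chunk size, a 25 MiB upload is split into three parts (the numbers are illustrative):

package teldrive

import (
	"fmt"
	"math"
)

// chunkMathSketch mirrors the totalParts calculation in Put.
func chunkMathSketch() {
	chunkSize := int64(10) * 1024 * 1024 // 10 MiB, the default
	totalSize := int64(25) * 1024 * 1024 // hypothetical 25 MiB file
	totalParts := int(math.Ceil(float64(totalSize) / float64(chunkSize)))
	fmt.Println(totalParts) // 3
}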
func (d *Teldrive) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Teldrive) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Teldrive)(nil)

View File

@ -1,26 +0,0 @@
package teldrive
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
Cookie string `json:"cookie" type:"string" required:"true" help:"access_token=xxx"`
UseShareLink bool `json:"use_share_link" type:"bool" default:"false" help:"Create share link when getting link to support 302. If disabled, you need to enable web proxy."`
ChunkSize int64 `json:"chunk_size" type:"number" default:"10" help:"Chunk size in MiB"`
UploadConcurrency int64 `json:"upload_concurrency" type:"number" default:"4" help:"Concurrent upload requests"`
}
var config = driver.Config{
Name: "Teldrive",
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Teldrive{}
})
}
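For illustration, a Teldrive addition could be configured as below; the URL and token are made up, and the cookie must keep the access_token= prefix required by Init:

package teldrive

// exampleAddition is illustrative only; none of these values are real.
var exampleAddition = Addition{
	Address:           "https://teldrive.example.com",
	Cookie:            "access_token=xxxx",
	UseShareLink:      false,
	ChunkSize:         10, // MiB per chunk
	UploadConcurrency: 4,
}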

View File

@ -1,77 +0,0 @@
package teldrive
import (
"context"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
type ErrResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
type Object struct {
ID string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MimeType string `json:"mimeType"`
Category string `json:"category,omitempty"`
ParentId string `json:"parentId"`
Size int64 `json:"size"`
Encrypted bool `json:"encrypted"`
UpdatedAt time.Time `json:"updatedAt"`
}
type ListResp struct {
Items []Object `json:"items"`
Meta struct {
Count int `json:"count"`
TotalPages int `json:"totalPages"`
CurrentPage int `json:"currentPage"`
} `json:"meta"`
}
type FilePart struct {
Name string `json:"name"`
PartId int `json:"partId"`
PartNo int `json:"partNo"`
ChannelId int `json:"channelId"`
Size int `json:"size"`
Encrypted bool `json:"encrypted"`
Salt string `json:"salt"`
}
type chunkTask struct {
chunkIdx int
fileName string
chunkSize int64
reader *stream.SectionReader
ss *stream.StreamSectionReader
}
type CopyManager struct {
TaskChan chan CopyTask
Sem *semaphore.Weighted
G *errgroup.Group
Ctx context.Context
d *Teldrive
}
type CopyTask struct {
SrcObj model.Obj
DstDir model.Obj
}
type ShareObj struct {
Id string `json:"id"`
Protected bool `json:"protected"`
UserId int `json:"userId"`
Type string `json:"type"`
Name string `json:"name"`
ExpiresAt time.Time `json:"expiresAt"`
}

View File

@ -1,373 +0,0 @@
package teldrive
import (
"fmt"
"io"
"net/http"
"sort"
"strconv"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/pkg/errors"
"golang.org/x/net/context"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
// create empty file
func (d *Teldrive) touch(name, path string) error {
uploadBody := base.Json{
"name": name,
"type": "file",
"path": path,
}
if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
req.SetBody(uploadBody)
}, nil); err != nil {
return err
}
return nil
}
func (d *Teldrive) createFileOnUploadSuccess(name, id, path string, uploadedFileParts []FilePart, totalSize int64) error {
remoteFileParts, err := d.getFilePart(id)
if err != nil {
return err
}
// check if the uploaded file parts match the remote file parts
if len(remoteFileParts) != len(uploadedFileParts) {
return fmt.Errorf("[Teldrive] file parts count mismatch: expected %d, got %d", len(uploadedFileParts), len(remoteFileParts))
}
formatParts := make([]base.Json, 0)
for _, p := range remoteFileParts {
formatParts = append(formatParts, base.Json{
"id": p.PartId,
"salt": p.Salt,
})
}
uploadBody := base.Json{
"name": name,
"type": "file",
"path": path,
"parts": formatParts,
"size": totalSize,
}
// create file here
if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
req.SetBody(uploadBody)
}, nil); err != nil {
return err
}
return nil
}
func (d *Teldrive) checkFilePartExist(fileId string, partId int) (FilePart, error) {
var uploadedParts []FilePart
var filePart FilePart
if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &uploadedParts); err != nil {
return filePart, err
}
for _, part := range uploadedParts {
if part.PartId == partId {
return part, nil
}
}
return filePart, nil
}
func (d *Teldrive) getFilePart(fileId string) ([]FilePart, error) {
var uploadedParts []FilePart
if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &uploadedParts); err != nil {
return nil, err
}
return uploadedParts, nil
}
func (d *Teldrive) singleUploadRequest(fileId string, callback base.ReqCallback, resp interface{}) error {
url := d.Address + "/api/uploads/" + fileId
client := resty.New().SetTimeout(0)
ctx := context.Background()
req := client.R().
SetContext(ctx)
req.SetHeader("Cookie", d.Cookie)
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.AddRetryCondition(func(r *resty.Response, err error) bool {
return false
})
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e ErrResp
req.SetError(&e)
_req, err := req.Execute(http.MethodPost, url)
if err != nil {
return err
}
if _req.IsError() {
return &e
}
return nil
}
func (d *Teldrive) doSingleUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
totalParts int, chunkSize int64, fileId string) error {
totalSize := file.GetSize()
var fileParts []FilePart
var uploaded int64 = 0
ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
if err != nil {
return err
}
for uploaded < totalSize {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
curChunkSize := min(totalSize-uploaded, chunkSize)
rd, err := ss.GetSectionReader(uploaded, curChunkSize)
if err != nil {
return err
}
filePart := &FilePart{}
if err := retry.Do(func() error {
if _, err := rd.Seek(0, io.SeekStart); err != nil {
return err
}
if err := d.singleUploadRequest(fileId, func(req *resty.Request) {
uploadParams := map[string]string{
"partName": func() string {
digits := len(fmt.Sprintf("%d", totalParts))
return file.GetName() + fmt.Sprintf(".%0*d", digits, 1)
}(),
"partNo": strconv.Itoa(1),
"fileName": file.GetName(),
}
req.SetQueryParams(uploadParams)
req.SetBody(driver.NewLimitedUploadStream(ctx, rd))
req.SetHeader("Content-Length", strconv.FormatInt(curChunkSize, 10))
}, filePart); err != nil {
return err
}
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second)); err != nil {
return err
}
if filePart.Name != "" {
fileParts = append(fileParts, *filePart)
uploaded += curChunkSize
up(float64(uploaded) / float64(totalSize))
ss.FreeSectionReader(rd)
}
}
return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
func (d *Teldrive) doMultiUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
maxRetried, totalParts int, chunkSize int64, fileId string) error {
concurrent := d.UploadConcurrency
g, ctx := errgroup.WithContext(ctx)
sem := semaphore.NewWeighted(int64(concurrent))
chunkChan := make(chan chunkTask, concurrent*2)
resultChan := make(chan FilePart, concurrent)
totalSize := file.GetSize()
ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
if err != nil {
return err
}
ssLock := sync.Mutex{}
g.Go(func() error {
defer close(chunkChan)
chunkIdx := 0
for chunkIdx < totalParts {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
offset := int64(chunkIdx) * chunkSize
curChunkSize := min(totalSize-offset, chunkSize)
ssLock.Lock()
reader, err := ss.GetSectionReader(offset, curChunkSize)
ssLock.Unlock()
if err != nil {
return err
}
task := chunkTask{
chunkIdx: chunkIdx + 1,
chunkSize: curChunkSize,
fileName: file.GetName(),
reader: reader,
ss: ss,
}
// FreeSectionReader will be called in d.uploadSingleChunk
select {
case chunkChan <- task:
chunkIdx++
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
for i := 0; i < int(concurrent); i++ {
g.Go(func() error {
for task := range chunkChan {
if err := sem.Acquire(ctx, 1); err != nil {
return err
}
filePart, err := d.uploadSingleChunk(ctx, fileId, task, totalParts, maxRetried)
sem.Release(1)
if err != nil {
return fmt.Errorf("upload chunk %d failed: %w", task.chunkIdx, err)
}
select {
case resultChan <- *filePart:
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
}
var fileParts []FilePart
var collectErr error
collectDone := make(chan struct{})
go func() {
defer close(collectDone)
fileParts = make([]FilePart, 0, totalParts)
done := make(chan error, 1)
go func() {
done <- g.Wait()
close(resultChan)
}()
for {
select {
case filePart, ok := <-resultChan:
if !ok {
collectErr = <-done
return
}
fileParts = append(fileParts, filePart)
case err := <-done:
collectErr = err
return
}
}
}()
<-collectDone
if collectErr != nil {
return fmt.Errorf("multi-upload failed: %w", collectErr)
}
sort.Slice(fileParts, func(i, j int) bool {
return fileParts[i].PartNo < fileParts[j].PartNo
})
return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
func (d *Teldrive) uploadSingleChunk(ctx context.Context, fileId string, task chunkTask, totalParts, maxRetried int) (*FilePart, error) {
filePart := &FilePart{}
retryCount := 0
defer task.ss.FreeSectionReader(task.reader)
for {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
if existingPart, err := d.checkFilePartExist(fileId, task.chunkIdx); err == nil && existingPart.Name != "" {
return &existingPart, nil
}
err := d.singleUploadRequest(fileId, func(req *resty.Request) {
uploadParams := map[string]string{
"partName": func() string {
digits := len(fmt.Sprintf("%d", totalParts))
return task.fileName + fmt.Sprintf(".%0*d", digits, task.chunkIdx)
}(),
"partNo": strconv.Itoa(task.chunkIdx),
"fileName": task.fileName,
}
req.SetQueryParams(uploadParams)
req.SetBody(driver.NewLimitedUploadStream(ctx, task.reader))
req.SetHeader("Content-Length", strconv.Itoa(int(task.chunkSize)))
}, filePart)
if err == nil {
return filePart, nil
}
if retryCount >= maxRetried {
return nil, fmt.Errorf("upload failed after %d retries: %w", maxRetried, err)
}
if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
continue
}
retryCount++
utils.Log.Errorf("[Teldrive] upload error: %v, retrying (attempt %d)", err, retryCount)
backoffDuration := time.Duration(retryCount*retryCount) * time.Second
if backoffDuration > 30*time.Second {
backoffDuration = 30 * time.Second
}
select {
case <-time.After(backoffDuration):
case <-ctx.Done():
return nil, ctx.Err()
}
}
}
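The retry backoff above grows quadratically with the attempt count and is capped at 30 seconds; a small sketch of the same schedule:

package teldrive

import (
	"fmt"
	"time"
)

// backoffSketch prints the delays used by uploadSingleChunk: retry^2 seconds, capped at 30s.
func backoffSketch() {
	for retry := 1; retry <= 6; retry++ {
		d := time.Duration(retry*retry) * time.Second
		if d > 30*time.Second {
			d = 30 * time.Second
		}
		fmt.Printf("retry %d -> %s\n", retry, d)
	}
}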

View File

@ -1,109 +0,0 @@
package teldrive
import (
"fmt"
"net/http"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/go-resty/resty/v2"
)
// do other operations that are not defined in the Driver interface
func (d *Teldrive) request(method string, pathname string, callback base.ReqCallback, resp interface{}) error {
url := d.Address + pathname
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e ErrResp
req.SetError(&e)
_req, err := req.Execute(method, url)
if err != nil {
return err
}
if _req.IsError() {
return &e
}
return nil
}
func (d *Teldrive) getFile(path, name string, isFolder bool) (model.Obj, error) {
resp := &ListResp{}
err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"path": path,
"name": name,
"type": func() string {
if isFolder {
return "folder"
}
return "file"
}(),
"operation": "find",
})
}, resp)
if err != nil {
return nil, err
}
if len(resp.Items) == 0 {
return nil, fmt.Errorf("file not found: %s/%s", path, name)
}
obj := resp.Items[0]
return &model.Object{
ID: obj.ID,
Name: obj.Name,
Size: obj.Size,
IsFolder: obj.Type == "folder",
}, err
}
func (err *ErrResp) Error() string {
if err == nil {
return ""
}
return fmt.Sprintf("[Teldrive] message:%s Error code:%d", err.Message, err.Code)
}
func (d *Teldrive) createShareFile(fileId string) error {
var errResp ErrResp
if err := d.request(http.MethodPost, "/api/files/{id}/share", func(req *resty.Request) {
req.SetPathParam("id", fileId)
req.SetBody(base.Json{
"expiresAt": getDateTime(),
})
}, &errResp); err != nil {
return err
}
if errResp.Message != "" {
return &errResp
}
return nil
}
func (d *Teldrive) getShareFileById(fileId string) (*ShareObj, error) {
var shareObj ShareObj
if err := d.request(http.MethodGet, "/api/files/{id}/share", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &shareObj); err != nil {
return nil, err
}
return &shareObj, nil
}
func getDateTime() string {
now := time.Now().UTC()
formattedWithMs := now.Add(time.Hour * 1).Format("2006-01-02T15:04:05.000Z")
return formattedWithMs
}

View File

@ -1,39 +0,0 @@
#!/bin/sh
umask ${UMASK}
if [ "$1" = "version" ]; then
./openlist version
else
# Check that the current user has write and execute permissions for the ./data directory
# 检查当前用户是否有当前目录的写和执行权限
if [ -d ./data ]; then
if ! [ -w ./data ] || ! [ -x ./data ]; then
cat <<EOF
Error: Current user does not have write and/or execute permissions for the ./data directory: $(pwd)/data
Please visit https://doc.oplist.org/guide/installation/docker#for-version-after-v4-1-0 for more information.
错误:当前用户没有 ./data 目录($(pwd)/data)的写和/或执行权限。
请访问 https://doc.oplist.org/guide/installation/docker#v4-1-0-%E4%BB%A5%E5%90%8E%E7%89%88%E6%9C%AC 获取更多信息。
Exiting...
EOF
exit 1
fi
fi
# Define the target directory path for aria2 service
ARIA2_DIR="/opt/service/start/aria2"
if [ "$RUN_ARIA2" = "true" ]; then
# If aria2 should run and target directory doesn't exist, copy it
if [ ! -d "$ARIA2_DIR" ]; then
mkdir -p "$ARIA2_DIR"
cp -r /opt/service/stop/aria2/* "$ARIA2_DIR" 2>/dev/null
fi
runsvdir /opt/service/start &
else
# If aria2 should NOT run and target directory exists, remove it
if [ -d "$ARIA2_DIR" ]; then
rm -rf "$ARIA2_DIR"
fi
fi
exec ./openlist server --no-prefix
fi

265
go.mod
View File

@ -1,271 +1,50 @@
module github.com/OpenListTeam/OpenList/v4
module github.com/OpenListTeam/OpenList/v5
go 1.23.4
go 1.24
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.2
github.com/OpenListTeam/go-cache v0.1.0
github.com/OpenListTeam/sftpd-openlist v1.0.1
github.com/OpenListTeam/tache v0.2.0
github.com/OpenListTeam/times v0.1.0
github.com/OpenListTeam/wopan-sdk-go v0.1.5
github.com/ProtonMail/go-crypto v1.3.0
github.com/SheltonZhu/115driver v1.1.1
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
github.com/avast/retry-go v3.0.0+incompatible
github.com/aws/aws-sdk-go v1.55.7
github.com/blevesearch/bleve/v2 v2.5.2
github.com/caarlos0/env/v9 v9.0.0
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.6
github.com/charmbracelet/lipgloss v1.1.0
github.com/city404/v6-public-rpc-proto/go v0.0.0-20240817070657-90f8e24b653e
github.com/coreos/go-oidc v2.3.0+incompatible
github.com/deckarep/golang-set/v2 v2.8.0
github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8
github.com/disintegration/imaging v1.6.2
github.com/dlclark/regexp2 v1.11.5
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564
github.com/fclairamb/ftpserverlib v0.26.1-0.20250709223522-4a925d79caf6
github.com/foxxorcat/mopan-sdk-go v0.1.6
github.com/foxxorcat/weiyun-sdk-go v0.1.3
github.com/gin-contrib/cors v1.7.6
github.com/gin-gonic/gin v1.10.1
github.com/go-resty/resty/v2 v2.16.5
github.com/go-webauthn/webauthn v0.13.4
github.com/golang-jwt/jwt/v4 v4.5.2
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/hekmon/transmissionrpc/v3 v3.0.0
github.com/hirochachacha/go-smb2 v1.1.0
github.com/ipfs/go-ipfs-api v0.7.0
github.com/itsHenry35/gofakes3 v0.0.8
github.com/jlaffaye/ftp v0.2.1-0.20240918233326-1b970516f5d3
github.com/hashicorp/go-plugin v1.7.0
github.com/json-iterator/go v1.1.12
github.com/kdomanski/iso9660 v0.4.0
github.com/maruel/natural v1.1.1
github.com/meilisearch/meilisearch-go v0.32.0
github.com/mholt/archives v0.1.3
github.com/natefinch/lumberjack v2.0.0+incompatible
github.com/ncw/swift/v2 v2.0.4
github.com/pkg/errors v0.9.1
github.com/pkg/sftp v1.13.9
github.com/pquerna/otp v1.5.0
github.com/rclone/rclone v1.70.3
github.com/saintfish/chardet v0.0.0-20230101081208-5e3ef4b5456d
github.com/shirou/gopsutil/v4 v4.25.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.14.0
github.com/spf13/cobra v1.9.1
github.com/stretchr/testify v1.10.0
github.com/t3rm1n4l/go-mega v0.0.0-20241213151442-a19cff0ec7b5
github.com/u2takey/ffmpeg-go v0.5.0
github.com/upyun/go-sdk/v3 v3.0.4
github.com/winfsp/cgofuse v1.6.0
github.com/yeka/zip v0.0.0-20231116150916-03d6312748a9
github.com/zzzhr1990/go-common-entity v0.0.0-20250202070650-1a200048f0d3
golang.org/x/crypto v0.40.0
golang.org/x/image v0.29.0
golang.org/x/net v0.42.0
golang.org/x/oauth2 v0.30.0
golang.org/x/time v0.12.0
google.golang.org/appengine v1.6.8
gopkg.in/ldap.v3 v3.1.0
gorm.io/driver/mysql v1.5.7
gorm.io/driver/postgres v1.5.9
gorm.io/driver/sqlite v1.5.6
gorm.io/gorm v1.25.11
golang.org/x/net v0.43.0
google.golang.org/grpc v1.74.2
google.golang.org/protobuf v1.36.7
)
require (
cloud.google.com/go/compute/metadata v0.7.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/RoaringBitmap/roaring/v2 v2.4.5 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/lanrat/extsort v1.0.2 // indirect
github.com/mikelolasagasti/xz v1.0.1 // indirect
github.com/minio/minlz v1.0.0 // indirect
github.com/minio/xxml v0.0.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/otiai10/mint v1.6.3 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
gopkg.in/go-jose/go-jose.v2 v2.6.3 // indirect
)
require (
github.com/OpenListTeam/115-sdk-go v0.2.2
github.com/STARRY-S/zip v0.2.1 // indirect
github.com/aymerick/douceur v0.2.0 // indirect
github.com/blevesearch/go-faiss v1.0.25 // indirect
github.com/blevesearch/zapx/v16 v16.2.4 // indirect
github.com/bodgit/plumbing v1.3.0 // indirect
github.com/bodgit/sevenzip v1.6.1
github.com/bodgit/windows v1.0.1 // indirect
github.com/bytedance/sonic/loader v0.2.4 // indirect
github.com/charmbracelet/x/ansi v0.9.3 // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/fclairamb/go-log v0.6.0 // indirect
github.com/gorilla/css v1.0.1 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hekmon/cunits/v2 v2.1.0 // indirect
github.com/ipfs/boxo v0.12.0 // indirect
github.com/jackc/puddle/v2 v2.2.1 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/matoous/go-nanoid/v2 v2.1.0 // indirect
github.com/microcosm-cc/bluemonday v1.0.27
github.com/nwaples/rardecode/v2 v2.1.1
github.com/sorairolake/lzip-go v0.3.5 // indirect
github.com/taruti/bytepool v0.0.0-20160310082835-5e3a9ea56543 // indirect
github.com/ulikunitz/xz v0.5.12 // indirect
github.com/yuin/goldmark v1.7.13
go4.org v0.0.0-20230225012048-214862532bf5
resty.dev/v3 v3.0.0-beta.2 // indirect
)
require (
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd // indirect
github.com/OpenListTeam/gsync v0.1.0 // indirect
github.com/abbot/go-http-auth v0.4.0 // indirect
github.com/aead/ecdh v0.2.0 // indirect
github.com/andreburgaud/crypt2go v1.8.0 // indirect
github.com/andybalholm/brotli v1.1.2-0.20250424173009-453214e765f3 // indirect
github.com/axgle/mahonia v0.0.0-20180208002826-3358181d7394
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/benbjohnson/clock v1.3.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bits-and-blooms/bitset v1.22.0 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/blevesearch/bleve_index_api v1.2.8 // indirect
github.com/blevesearch/geo v0.2.3 // indirect
github.com/blevesearch/go-porterstemmer v1.0.3 // indirect
github.com/blevesearch/gtreap v0.1.1 // indirect
github.com/blevesearch/mmap-go v1.0.4 // indirect
github.com/blevesearch/scorch_segment_api/v2 v2.3.10 // indirect
github.com/blevesearch/segment v0.9.1 // indirect
github.com/blevesearch/snowballstem v0.9.0 // indirect
github.com/blevesearch/upsidedown_store_api v1.0.2 // indirect
github.com/blevesearch/vellum v1.1.0 // indirect
github.com/blevesearch/zapx/v11 v11.4.2 // indirect
github.com/blevesearch/zapx/v12 v12.4.2 // indirect
github.com/blevesearch/zapx/v13 v13.4.2 // indirect
github.com/blevesearch/zapx/v14 v14.4.2 // indirect
github.com/blevesearch/zapx/v15 v15.4.2 // indirect
github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc // indirect
github.com/bytedance/sonic v1.13.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.1.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/bytedance/sonic v1.14.0 // indirect
github.com/bytedance/sonic/loader v0.3.0 // indirect
github.com/cloudwego/base64x v0.1.6 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/geoffgarside/ber v1.2.0 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-chi/chi/v5 v5.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.26.0 // indirect
github.com/go-sql-driver/mysql v1.7.0 // indirect
github.com/go-webauthn/x v0.1.23 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang-jwt/jwt/v5 v5.2.3 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/go-tpm v0.9.5 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-version v1.6.0 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/yamux v0.1.2 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/ipfs/go-cid v0.5.0
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
github.com/jackc/pgx/v5 v5.5.5 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p v0.27.8 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mattn/go-sqlite3 v1.14.22 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/mschoch/smat v0.2.0 // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.9.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.9.0 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-multistream v0.4.1 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/otiai10/copy v1.14.1
github.com/oklog/run v1.2.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.64.0 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/rfjakob/eme v1.1.2 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46 // indirect
github.com/shabbyrobe/gocovmerge v0.0.0-20230507112040-c3350d9342df // indirect
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/tklauser/go-sysconf v0.3.15 // indirect
github.com/tklauser/numcpus v0.10.0 // indirect
github.com/spf13/pflag v1.0.7 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/u2takey/go-utils v0.3.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.etcd.io/bbolt v1.4.0 // indirect
golang.org/x/arch v0.18.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/term v0.33.0 // indirect
golang.org/x/text v0.27.0
golang.org/x/tools v0.34.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/grpc v1.73.0
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
golang.org/x/arch v0.20.0 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/text v0.28.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250811230008-5f3141c8851a // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/blake3 v1.1.7 // indirect
)
// replace github.com/OpenListTeam/115-sdk-go => ../../OpenListTeam/115-sdk-go

947
go.sum

File diff suppressed because it is too large

View File

@ -6,162 +6,68 @@ import (
"path/filepath"
"strings"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/net"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/caarlos0/env/v9"
"github.com/shirou/gopsutil/v4/mem"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/OpenListTeam/OpenList/v5/internal/conf"
"github.com/OpenListTeam/OpenList/v5/pkg/utils"
log "github.com/sirupsen/logrus"
)
// Program working directory
func PWD() string {
if flags.ForceBinDir {
ex, err := os.Executable()
if err != nil {
log.Fatal(err)
}
pwd := filepath.Dir(ex)
return pwd
}
d, err := os.Getwd()
if err != nil {
d = "."
}
return d
}
func InitConfig() {
pwd := PWD()
dataDir := flags.DataDir
if !filepath.IsAbs(dataDir) {
flags.DataDir = filepath.Join(pwd, flags.DataDir)
if !filepath.IsAbs(flags.ConfigFile) {
flags.ConfigFile = filepath.Join(flags.PWD(), flags.ConfigFile)
}
configPath := filepath.Join(flags.DataDir, "config.json")
log.Infof("reading config file: %s", configPath)
if !utils.Exists(configPath) {
log.Infof("config file not exists, creating default config file")
_, err := utils.CreateNestedFile(configPath)
log.Infoln("reading config file", "@", flags.ConfigFile)
if !utils.Exists(flags.ConfigFile) {
log.Infoln("config file not exists, creating default config file")
_, err := utils.CreateNestedFile(flags.ConfigFile)
if err != nil {
log.Fatalf("failed to create config file: %+v", err)
log.Fatalln("create config file", ":", err)
}
conf.Conf = conf.DefaultConfig(dataDir)
LastLaunchedVersion = conf.Version
conf.Conf.LastLaunchedVersion = conf.Version
if !utils.WriteJsonToFile(configPath, conf.Conf) {
log.Fatalf("failed to create default config file")
conf.Conf = conf.DefaultConfig()
err = utils.WriteJsonToFile(flags.ConfigFile, conf.Conf)
if err != nil {
log.Fatalln("save default config file", ":", err)
}
} else {
configBytes, err := os.ReadFile(configPath)
configBytes, err := os.ReadFile(flags.ConfigFile)
if err != nil {
log.Fatalf("reading config file error: %+v", err)
log.Fatalln("reading config file", ":", err)
}
conf.Conf = conf.DefaultConfig(dataDir)
conf.Conf = conf.DefaultConfig()
err = utils.Json.Unmarshal(configBytes, conf.Conf)
if err != nil {
log.Fatalf("load config error: %+v", err)
log.Fatalln("unmarshal config", ":", err)
}
LastLaunchedVersion = conf.Conf.LastLaunchedVersion
if strings.HasPrefix(conf.Version, "v") || LastLaunchedVersion == "" {
conf.Conf.LastLaunchedVersion = conf.Version
}
// update config.json struct
confBody, err := utils.Json.MarshalIndent(conf.Conf, "", " ")
err = utils.WriteJsonToFile(flags.ConfigFile, conf.Conf)
if err != nil {
log.Fatalf("marshal config error: %+v", err)
log.Fatalln("update config file", ":", err)
}
err = os.WriteFile(configPath, confBody, 0o777)
if err != nil {
log.Fatalf("update config struct error: %+v", err)
}
}
if !conf.Conf.Force {
confFromEnv()
}
if conf.Conf.MaxConcurrency > 0 {
net.DefaultConcurrencyLimit = &net.ConcurrencyLimit{Limit: conf.Conf.MaxConcurrency}
}
if conf.Conf.MaxBufferLimit < 0 {
m, _ := mem.VirtualMemory()
if m != nil {
conf.MaxBufferLimit = max(int(float64(m.Total)*0.05), 4*utils.MB)
conf.MaxBufferLimit -= conf.MaxBufferLimit % utils.MB
} else {
conf.MaxBufferLimit = 16 * utils.MB
}
} else {
conf.MaxBufferLimit = conf.Conf.MaxBufferLimit * utils.MB
}
log.Infof("max buffer limit: %dMB", conf.MaxBufferLimit/utils.MB)
if conf.Conf.MmapThreshold > 0 {
conf.MmapThreshold = conf.Conf.MmapThreshold * utils.MB
} else {
conf.MmapThreshold = 0
}
log.Infof("mmap threshold: %dMB", conf.Conf.MmapThreshold)
if len(conf.Conf.Log.Filter.Filters) == 0 {
conf.Conf.Log.Filter.Enable = false
}
// convert abs path
configDir := filepath.Dir(flags.ConfigFile)
convertAbsPath := func(path *string) {
if *path != "" && !filepath.IsAbs(*path) {
*path = filepath.Join(pwd, *path)
*path = filepath.Join(configDir, *path)
}
}
convertAbsPath(&conf.Conf.Database.DBFile)
convertAbsPath(&conf.Conf.TempDir)
convertAbsPath(&conf.Conf.Scheme.CertFile)
convertAbsPath(&conf.Conf.Scheme.KeyFile)
convertAbsPath(&conf.Conf.Scheme.UnixFile)
convertAbsPath(&conf.Conf.Log.Name)
convertAbsPath(&conf.Conf.TempDir)
convertAbsPath(&conf.Conf.BleveDir)
convertAbsPath(&conf.Conf.DistDir)
err := os.MkdirAll(conf.Conf.TempDir, 0o777)
if err != nil {
log.Fatalf("create temp dir error: %+v", err)
}
log.Debugf("config: %+v", conf.Conf)
base.InitClient()
initURL()
initSitePath()
}
func confFromEnv() {
prefix := "OPENLIST_"
if flags.NoPrefix {
prefix = ""
}
log.Infof("load config from env with prefix: %s", prefix)
if err := env.ParseWithOptions(conf.Conf, env.Options{
Prefix: prefix,
}); err != nil {
log.Fatalf("load config from env error: %+v", err)
}
}
func initURL() {
func initSitePath() {
if !strings.Contains(conf.Conf.SiteURL, "://") {
conf.Conf.SiteURL = utils.FixAndCleanPath(conf.Conf.SiteURL)
}
u, err := url.Parse(conf.Conf.SiteURL)
if err != nil {
utils.Log.Fatalf("can't parse site_url: %+v", err)
}
conf.URL = u
}
func CleanTempDir() {
files, err := os.ReadDir(conf.Conf.TempDir)
if err != nil {
log.Errorln("failed list temp file: ", err)
}
for _, file := range files {
if err := os.RemoveAll(filepath.Join(conf.Conf.TempDir, file.Name())); err != nil {
log.Errorln("failed delete temp file: ", err)
}
log.Fatalln("parse site_url", ":", err)
}
conf.SitePath = u.Path
}

View File

@ -0,0 +1,13 @@
package bootstrap
import (
"github.com/OpenListTeam/OpenList/v5/internal/driver"
driverS "github.com/OpenListTeam/OpenList/v5/shared/driver"
"github.com/hashicorp/go-plugin"
)
func InitDriverPlugins() {
driver.PluginMap = map[string]plugin.Plugin{
"grpc": &driverS.Plugin{},
}
}
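A minimal sketch, under stated assumptions, of how a host process could consume this PluginMap with hashicorp/go-plugin. The handshake values, the plugin binary path, and the `launchDriverPlugin` helper are hypothetical and are not defined by this commit; only `driver.PluginMap` and the "grpc" key come from the code above.

```go
package bootstrap

import (
	"os/exec"

	"github.com/OpenListTeam/OpenList/v5/internal/driver"
	"github.com/hashicorp/go-plugin"
	log "github.com/sirupsen/logrus"
)

// launchDriverPlugin starts a driver plugin binary and dispenses the "grpc"
// plugin registered in InitDriverPlugins. Handshake values are placeholders.
func launchDriverPlugin(binPath string) (interface{}, error) {
	client := plugin.NewClient(&plugin.ClientConfig{
		HandshakeConfig: plugin.HandshakeConfig{
			ProtocolVersion:  1,
			MagicCookieKey:   "OPENLIST_PLUGIN", // assumption
			MagicCookieValue: "driver",          // assumption
		},
		Plugins:          driver.PluginMap,
		Cmd:              exec.Command(binPath),
		AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC},
	})
	rpcClient, err := client.Client()
	if err != nil {
		client.Kill()
		return nil, err
	}
	raw, err := rpcClient.Dispense("grpc")
	if err != nil {
		client.Kill()
		return nil, err
	}
	log.Infoln("driver plugin loaded", "@", binPath)
	return raw, nil // assert raw to the shared driver interface in real code
}
```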

View File

@ -1,34 +1,9 @@
package conf
import (
"path/filepath"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
)
type Database struct {
Type string `json:"type" env:"TYPE"`
Host string `json:"host" env:"HOST"`
Port int `json:"port" env:"PORT"`
User string `json:"user" env:"USER"`
Password string `json:"password" env:"PASS"`
Name string `json:"name" env:"NAME"`
DBFile string `json:"db_file" env:"FILE"`
TablePrefix string `json:"table_prefix" env:"TABLE_PREFIX"`
SSLMode string `json:"ssl_mode" env:"SSL_MODE"`
DSN string `json:"dsn" env:"DSN"`
}
type Meilisearch struct {
Host string `json:"host" env:"HOST"`
APIKey string `json:"api_key" env:"API_KEY"`
Index string `json:"index" env:"INDEX"`
}
type Scheme struct {
Address string `json:"address" env:"ADDR"`
HttpPort int `json:"http_port" env:"HTTP_PORT"`
HttpsPort int `json:"https_port" env:"HTTPS_PORT"`
HttpPort uint16 `json:"http_port" env:"HTTP_PORT"`
HttpsPort uint16 `json:"https_port" env:"HTTPS_PORT"`
ForceHttps bool `json:"force_https" env:"FORCE_HTTPS"`
CertFile string `json:"cert_file" env:"CERT_FILE"`
KeyFile string `json:"key_file" env:"KEY_FILE"`
@ -36,212 +11,30 @@ type Scheme struct {
UnixFilePerm string `json:"unix_file_perm" env:"UNIX_FILE_PERM"`
EnableH2c bool `json:"enable_h2c" env:"ENABLE_H2C"`
}
type LogConfig struct {
Enable bool `json:"enable" env:"ENABLE"`
Name string `json:"name" env:"NAME"`
MaxSize int `json:"max_size" env:"MAX_SIZE"`
MaxBackups int `json:"max_backups" env:"MAX_BACKUPS"`
MaxAge int `json:"max_age" env:"MAX_AGE"`
Compress bool `json:"compress" env:"COMPRESS"`
Filter LogFilterConfig `json:"filter" envPrefix:"FILTER_"`
}
type LogFilterConfig struct {
Enable bool `json:"enable" env:"ENABLE"`
Filters []Filter `json:"filters"`
}
type Filter struct {
CIDR string `json:"cidr"`
Path string `json:"path"`
Method string `json:"method"`
}
type TaskConfig struct {
Workers int `json:"workers" env:"WORKERS"`
MaxRetry int `json:"max_retry" env:"MAX_RETRY"`
TaskPersistant bool `json:"task_persistant" env:"TASK_PERSISTANT"`
}
type TasksConfig struct {
Download TaskConfig `json:"download" envPrefix:"DOWNLOAD_"`
Transfer TaskConfig `json:"transfer" envPrefix:"TRANSFER_"`
Upload TaskConfig `json:"upload" envPrefix:"UPLOAD_"`
Copy TaskConfig `json:"copy" envPrefix:"COPY_"`
Move TaskConfig `json:"move" envPrefix:"MOVE_"`
Decompress TaskConfig `json:"decompress" envPrefix:"DECOMPRESS_"`
DecompressUpload TaskConfig `json:"decompress_upload" envPrefix:"DECOMPRESS_UPLOAD_"`
AllowRetryCanceled bool `json:"allow_retry_canceled" env:"ALLOW_RETRY_CANCELED"`
}
type Cors struct {
AllowOrigins []string `json:"allow_origins" env:"ALLOW_ORIGINS"`
AllowMethods []string `json:"allow_methods" env:"ALLOW_METHODS"`
AllowHeaders []string `json:"allow_headers" env:"ALLOW_HEADERS"`
}
type S3 struct {
Enable bool `json:"enable" env:"ENABLE"`
Port int `json:"port" env:"PORT"`
SSL bool `json:"ssl" env:"SSL"`
}
type FTP struct {
Enable bool `json:"enable" env:"ENABLE"`
Listen string `json:"listen" env:"LISTEN"`
FindPasvPortAttempts int `json:"find_pasv_port_attempts" env:"FIND_PASV_PORT_ATTEMPTS"`
ActiveTransferPortNon20 bool `json:"active_transfer_port_non_20" env:"ACTIVE_TRANSFER_PORT_NON_20"`
IdleTimeout int `json:"idle_timeout" env:"IDLE_TIMEOUT"`
ConnectionTimeout int `json:"connection_timeout" env:"CONNECTION_TIMEOUT"`
DisableActiveMode bool `json:"disable_active_mode" env:"DISABLE_ACTIVE_MODE"`
DefaultTransferBinary bool `json:"default_transfer_binary" env:"DEFAULT_TRANSFER_BINARY"`
EnableActiveConnIPCheck bool `json:"enable_active_conn_ip_check" env:"ENABLE_ACTIVE_CONN_IP_CHECK"`
EnablePasvConnIPCheck bool `json:"enable_pasv_conn_ip_check" env:"ENABLE_PASV_CONN_IP_CHECK"`
}
type SFTP struct {
Enable bool `json:"enable" env:"ENABLE"`
Listen string `json:"listen" env:"LISTEN"`
}
type Config struct {
Force bool `json:"force" env:"FORCE"`
SiteURL string `json:"site_url" env:"SITE_URL"`
Cdn string `json:"cdn" env:"CDN"`
JwtSecret string `json:"jwt_secret" env:"JWT_SECRET"`
TokenExpiresIn int `json:"token_expires_in" env:"TOKEN_EXPIRES_IN"`
Database Database `json:"database" envPrefix:"DB_"`
Meilisearch Meilisearch `json:"meilisearch" envPrefix:"MEILISEARCH_"`
Scheme Scheme `json:"scheme"`
TempDir string `json:"temp_dir" env:"TEMP_DIR"`
BleveDir string `json:"bleve_dir" env:"BLEVE_DIR"`
DistDir string `json:"dist_dir"`
Log LogConfig `json:"log" envPrefix:"LOG_"`
DelayedStart int `json:"delayed_start" env:"DELAYED_START"`
MaxBufferLimit int `json:"max_buffer_limitMB" env:"MAX_BUFFER_LIMIT_MB"`
MmapThreshold int `json:"mmap_thresholdMB" env:"MMAP_THRESHOLD_MB"`
MaxConnections int `json:"max_connections" env:"MAX_CONNECTIONS"`
MaxConcurrency int `json:"max_concurrency" env:"MAX_CONCURRENCY"`
TlsInsecureSkipVerify bool `json:"tls_insecure_skip_verify" env:"TLS_INSECURE_SKIP_VERIFY"`
Tasks TasksConfig `json:"tasks" envPrefix:"TASKS_"`
Cors Cors `json:"cors" envPrefix:"CORS_"`
S3 S3 `json:"s3" envPrefix:"S3_"`
FTP FTP `json:"ftp" envPrefix:"FTP_"`
SFTP SFTP `json:"sftp" envPrefix:"SFTP_"`
LastLaunchedVersion string `json:"last_launched_version"`
TempDir string `json:"temp_dir" env:"TEMP_DIR"`
SiteURL string `json:"site_url" env:"SITE_URL"`
Scheme Scheme `json:"scheme"`
Cors Cors `json:"cors" envPrefix:"CORS_"`
}
func DefaultConfig(dataDir string) *Config {
tempDir := filepath.Join(dataDir, "temp")
indexDir := filepath.Join(dataDir, "bleve")
logPath := filepath.Join(dataDir, "log/log.log")
dbPath := filepath.Join(dataDir, "data.db")
func DefaultConfig() *Config {
return &Config{
TempDir: "temp",
Scheme: Scheme{
Address: "0.0.0.0",
UnixFile: "",
HttpPort: 5244,
HttpsPort: -1,
ForceHttps: false,
CertFile: "",
KeyFile: "",
},
JwtSecret: random.String(16),
TokenExpiresIn: 48,
TempDir: tempDir,
Database: Database{
Type: "sqlite3",
Port: 0,
TablePrefix: "x_",
DBFile: dbPath,
},
Meilisearch: Meilisearch{
Host: "http://localhost:7700",
Index: "openlist",
},
BleveDir: indexDir,
Log: LogConfig{
Enable: true,
Name: logPath,
MaxSize: 50,
MaxBackups: 30,
MaxAge: 28,
Filter: LogFilterConfig{
Enable: false,
Filters: []Filter{
{Path: "/ping"},
{Method: "HEAD"},
{Path: "/dav/", Method: "PROPFIND"},
},
},
},
MaxBufferLimit: -1,
MmapThreshold: 4,
MaxConnections: 0,
MaxConcurrency: 64,
TlsInsecureSkipVerify: true,
Tasks: TasksConfig{
Download: TaskConfig{
Workers: 5,
MaxRetry: 1,
// TaskPersistant: true,
},
Transfer: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
Upload: TaskConfig{
Workers: 5,
},
Copy: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
Move: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
Decompress: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
DecompressUpload: TaskConfig{
Workers: 5,
MaxRetry: 2,
},
AllowRetryCanceled: false,
Address: "0.0.0.0",
HttpPort: 5244,
},
Cors: Cors{
AllowOrigins: []string{"*"},
AllowMethods: []string{"*"},
AllowHeaders: []string{"*"},
},
S3: S3{
Enable: false,
Port: 5246,
SSL: false,
},
FTP: FTP{
Enable: false,
Listen: ":5221",
FindPasvPortAttempts: 50,
ActiveTransferPortNon20: false,
IdleTimeout: 900,
ConnectionTimeout: 30,
DisableActiveMode: false,
DefaultTransferBinary: false,
EnableActiveConnIPCheck: true,
EnablePasvConnIPCheck: true,
},
SFTP: SFTP{
Enable: false,
Listen: ":5222",
},
LastLaunchedVersion: "",
}
}

View File

@ -1,37 +1,10 @@
package conf
import (
"net/url"
"regexp"
)
import "regexp"
var (
BuiltAt string = "unknown"
GitAuthor string = "unknown"
GitCommit string = "unknown"
Version string = "dev"
WebVersion string = "rolling"
Conf *Config
SitePath string
)
var (
Conf *Config
URL *url.URL
)
var SlicesMap = make(map[string][]string)
var FilenameCharMap = make(map[string]string)
var PrivacyReg []*regexp.Regexp
var (
// StoragesLoaded indicates whether all storages have been loaded successfully
StoragesLoaded = false
// Maximum size of a single buffer
MaxBufferLimit = 16 * 1024 * 1024
// Buffers larger than this threshold are allocated with mmap, so the memory can be released proactively
MmapThreshold = 4 * 1024 * 1024
)
var (
RawIndexHtml string
ManageHtml string
IndexHtml string
)

View File

@ -1,62 +0,0 @@
package db
import (
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
"github.com/pkg/errors"
)
func GetSharingById(id string) (*model.SharingDB, error) {
s := model.SharingDB{ID: id}
if err := db.Where(s).First(&s).Error; err != nil {
return nil, errors.Wrapf(err, "failed get sharing")
}
return &s, nil
}
func GetSharings(pageIndex, pageSize int) (sharings []model.SharingDB, count int64, err error) {
sharingDB := db.Model(&model.SharingDB{})
if err := sharingDB.Count(&count).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get sharings count")
}
if err := sharingDB.Order(columnName("id")).Offset((pageIndex - 1) * pageSize).Limit(pageSize).Find(&sharings).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get find sharings")
}
return sharings, count, nil
}
func GetSharingsByCreatorId(creator uint, pageIndex, pageSize int) (sharings []model.SharingDB, count int64, err error) {
sharingDB := db.Model(&model.SharingDB{})
cond := model.SharingDB{CreatorId: creator}
if err := sharingDB.Where(cond).Count(&count).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get sharings count")
}
if err := sharingDB.Where(cond).Order(columnName("id")).Offset((pageIndex - 1) * pageSize).Limit(pageSize).Find(&sharings).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get find sharings")
}
return sharings, count, nil
}
func CreateSharing(s *model.SharingDB) (string, error) {
id := random.String(8)
for len(id) < 12 {
old := model.SharingDB{
ID: id,
}
if err := db.Where(old).First(&old).Error; err != nil {
s.ID = id
return id, errors.WithStack(db.Create(s).Error)
}
id += random.String(1)
}
return "", errors.New("failed find valid id")
}
func UpdateSharing(s *model.SharingDB) error {
return errors.WithStack(db.Save(s).Error)
}
func DeleteSharingById(id string) error {
s := model.SharingDB{ID: id}
return errors.WithStack(db.Where(s).Delete(&s).Error)
}

9
internal/driver/var.go Normal file
View File

@ -0,0 +1,9 @@
package driver
import (
"github.com/hashicorp/go-plugin"
)
var (
PluginMap map[string]plugin.Plugin
)

View File

@ -1,47 +0,0 @@
package model
import "time"
type SharingDB struct {
ID string `json:"id" gorm:"type:char(12);primaryKey"`
FilesRaw string `json:"-" gorm:"type:text"`
Expires *time.Time `json:"expires"`
Pwd string `json:"pwd"`
Accessed int `json:"accessed"`
MaxAccessed int `json:"max_accessed"`
CreatorId uint `json:"-"`
Disabled bool `json:"disabled"`
Remark string `json:"remark"`
Readme string `json:"readme" gorm:"type:text"`
Header string `json:"header" gorm:"type:text"`
Sort
}
type Sharing struct {
*SharingDB
Files []string `json:"files"`
Creator *User `json:"-"`
}
func (s *Sharing) Valid() bool {
if s.Disabled {
return false
}
if s.MaxAccessed > 0 && s.Accessed >= s.MaxAccessed {
return false
}
if len(s.Files) == 0 {
return false
}
if !s.Creator.CanShare() {
return false
}
if s.Expires != nil && !s.Expires.IsZero() && s.Expires.Before(time.Now()) {
return false
}
return true
}
func (s *Sharing) Verify(pwd string) bool {
return s.Pwd == "" || s.Pwd == pwd
}

View File

@ -1,139 +0,0 @@
package op
import (
"fmt"
stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/db"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/go-cache"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
func makeJoined(sdb []model.SharingDB) []model.Sharing {
creator := make(map[uint]*model.User)
return utils.MustSliceConvert(sdb, func(s model.SharingDB) model.Sharing {
var c *model.User
var ok bool
if c, ok = creator[s.CreatorId]; !ok {
var err error
if c, err = GetUserById(s.CreatorId); err != nil {
c = nil
} else {
creator[s.CreatorId] = c
}
}
var files []string
if err := utils.Json.UnmarshalFromString(s.FilesRaw, &files); err != nil {
files = make([]string, 0)
}
return model.Sharing{
SharingDB: &s,
Files: files,
Creator: c,
}
})
}
var sharingCache = cache.NewMemCache(cache.WithShards[*model.Sharing](8))
var sharingG singleflight.Group[*model.Sharing]
func GetSharingById(id string, refresh ...bool) (*model.Sharing, error) {
if !utils.IsBool(refresh...) {
if sharing, ok := sharingCache.Get(id); ok {
log.Debugf("use cache when get sharing %s", id)
return sharing, nil
}
}
sharing, err, _ := sharingG.Do(id, func() (*model.Sharing, error) {
s, err := db.GetSharingById(id)
if err != nil {
return nil, errors.WithMessagef(err, "failed get sharing [%s]", id)
}
creator, err := GetUserById(s.CreatorId)
if err != nil {
return nil, errors.WithMessagef(err, "failed get sharing creator [%s]", id)
}
var files []string
if err = utils.Json.UnmarshalFromString(s.FilesRaw, &files); err != nil {
files = make([]string, 0)
}
return &model.Sharing{
SharingDB: s,
Files: files,
Creator: creator,
}, nil
})
return sharing, err
}
func GetSharings(pageIndex, pageSize int) ([]model.Sharing, int64, error) {
s, cnt, err := db.GetSharings(pageIndex, pageSize)
if err != nil {
return nil, 0, errors.WithStack(err)
}
return makeJoined(s), cnt, nil
}
func GetSharingsByCreatorId(userId uint, pageIndex, pageSize int) ([]model.Sharing, int64, error) {
s, cnt, err := db.GetSharingsByCreatorId(userId, pageIndex, pageSize)
if err != nil {
return nil, 0, errors.WithStack(err)
}
return makeJoined(s), cnt, nil
}
func GetSharingUnwrapPath(sharing *model.Sharing, path string) (unwrapPath string, err error) {
if len(sharing.Files) == 0 {
return "", errors.New("cannot get actual path of an invalid sharing")
}
if len(sharing.Files) == 1 {
return stdpath.Join(sharing.Files[0], path), nil
}
path = utils.FixAndCleanPath(path)[1:]
if len(path) == 0 {
return "", errors.New("cannot get actual path of a sharing root path")
}
mapPath := ""
child, rest, _ := strings.Cut(path, "/")
for _, c := range sharing.Files {
if child == stdpath.Base(c) {
mapPath = c
break
}
}
if mapPath == "" {
return "", fmt.Errorf("failed find child [%s] of sharing [%s]", child, sharing.ID)
}
return stdpath.Join(mapPath, rest), nil
}
func CreateSharing(sharing *model.Sharing) (id string, err error) {
sharing.CreatorId = sharing.Creator.ID
sharing.FilesRaw, err = utils.Json.MarshalToString(utils.MustSliceConvert(sharing.Files, utils.FixAndCleanPath))
if err != nil {
return "", errors.WithStack(err)
}
return db.CreateSharing(sharing.SharingDB)
}
func UpdateSharing(sharing *model.Sharing, skipMarshal ...bool) (err error) {
if !utils.IsBool(skipMarshal...) {
sharing.CreatorId = sharing.Creator.ID
sharing.FilesRaw, err = utils.Json.MarshalToString(utils.MustSliceConvert(sharing.Files, utils.FixAndCleanPath))
if err != nil {
return errors.WithStack(err)
}
}
sharingCache.Del(sharing.ID)
return db.UpdateSharing(sharing.SharingDB)
}
func DeleteSharing(sid string) error {
sharingCache.Del(sid)
return db.DeleteSharingById(sid)
}

View File

@ -1,65 +0,0 @@
package sharing
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func archiveMeta(ctx context.Context, sid, path string, args model.SharingArchiveMetaArgs) (*model.Sharing, *model.ArchiveMetaProvider, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.GetArchiveMeta(ctx, storage, actualPath, args.ArchiveMetaArgs)
return sharing, obj, err
}
return nil, nil, errors.New("cannot get sharing root archive meta")
}
func archiveList(ctx context.Context, sid, path string, args model.SharingArchiveListArgs) (*model.Sharing, []model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.ListArchive(ctx, storage, actualPath, args.ArchiveListArgs)
return sharing, obj, err
}
return nil, nil, errors.New("cannot get sharing root archive list")
}

View File

@ -1,60 +0,0 @@
package sharing
import (
"context"
stdpath "path"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func get(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
if unwrapPath != "/" {
virtualFiles := op.GetStorageVirtualFilesByPath(stdpath.Dir(unwrapPath))
for _, f := range virtualFiles {
if f.GetName() == stdpath.Base(unwrapPath) {
return sharing, f, nil
}
}
} else {
return sharing, &model.Object{
Name: sid,
Size: 0,
Modified: time.Time{},
IsFolder: true,
}, nil
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.Get(ctx, storage, actualPath)
return sharing, obj, err
}
return sharing, &model.Object{
Name: sid,
Size: 0,
Modified: time.Time{},
IsFolder: true,
}, nil
}

View File

@ -1,46 +0,0 @@
package sharing
import (
"context"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/pkg/errors"
)
func link(ctx context.Context, sid, path string, args *LinkArgs) (*model.Sharing, *model.Link, model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.SharingListArgs.Refresh)
if err != nil {
return nil, nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing link")
}
l, obj, err := op.Link(ctx, storage, actualPath, args.LinkArgs)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing link")
}
if l.URL != "" && !strings.HasPrefix(l.URL, "http://") && !strings.HasPrefix(l.URL, "https://") {
l.URL = common.GetApiUrl(ctx) + l.URL
}
return sharing, l, obj, nil
}
return nil, nil, nil, errors.New("cannot get sharing root link")
}

View File

@ -1,83 +0,0 @@
package sharing
import (
"context"
stdpath "path"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func list(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, []model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
virtualFiles := op.GetStorageVirtualFilesByPath(unwrapPath)
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil && len(virtualFiles) == 0 {
return nil, nil, errors.WithMessage(err, "failed list sharing")
}
var objs []model.Obj
if storage != nil {
objs, err = op.List(ctx, storage, actualPath, model.ListArgs{
Refresh: args.Refresh,
ReqPath: stdpath.Join(sid, path),
})
if err != nil && len(virtualFiles) == 0 {
return nil, nil, errors.WithMessage(err, "failed list sharing")
}
}
om := model.NewObjMerge()
objs = om.Merge(objs, virtualFiles...)
model.SortFiles(objs, sharing.OrderBy, sharing.OrderDirection)
model.ExtractFolder(objs, sharing.ExtractFolder)
return sharing, objs, nil
}
objs := make([]model.Obj, 0, len(sharing.Files))
for _, f := range sharing.Files {
if f != "/" {
isVf := false
virtualFiles := op.GetStorageVirtualFilesByPath(stdpath.Dir(f))
for _, vf := range virtualFiles {
if vf.GetName() == stdpath.Base(f) {
objs = append(objs, vf)
isVf = true
break
}
}
if isVf {
continue
}
} else {
continue
}
storage, actualPath, err := op.GetStorageAndActualPath(f)
if err != nil {
continue
}
obj, err := op.Get(ctx, storage, actualPath)
if err != nil {
continue
}
objs = append(objs, obj)
}
model.SortFiles(objs, sharing.OrderBy, sharing.OrderDirection)
model.ExtractFolder(objs, sharing.ExtractFolder)
return sharing, objs, nil
}

View File

@ -1,58 +0,0 @@
package sharing
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/model"
log "github.com/sirupsen/logrus"
)
func List(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, []model.Obj, error) {
sharing, res, err := list(ctx, sid, path, args)
if err != nil {
log.Errorf("failed list sharing %s/%s: %+v", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func Get(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, model.Obj, error) {
sharing, res, err := get(ctx, sid, path, args)
if err != nil {
log.Warnf("failed get sharing %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func ArchiveMeta(ctx context.Context, sid, path string, args model.SharingArchiveMetaArgs) (*model.Sharing, *model.ArchiveMetaProvider, error) {
sharing, res, err := archiveMeta(ctx, sid, path, args)
if err != nil {
log.Warnf("failed get sharing archive meta %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func ArchiveList(ctx context.Context, sid, path string, args model.SharingArchiveListArgs) (*model.Sharing, []model.Obj, error) {
sharing, res, err := archiveList(ctx, sid, path, args)
if err != nil {
log.Warnf("failed list sharing archive %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
type LinkArgs struct {
model.SharingListArgs
model.LinkArgs
}
func Link(ctx context.Context, sid, path string, args *LinkArgs) (*model.Sharing, *model.Link, model.Obj, error) {
sharing, res, file, err := link(ctx, sid, path, args)
if err != nil {
log.Errorf("failed get sharing link %s/%s: %+v", sid, path, err)
return nil, nil, nil, err
}
return sharing, res, file, nil
}

27
layers/file/driver.go Normal file
View File

@ -0,0 +1,27 @@
package file
import "context"
// HostFileServer is the driver-side file interface #################################################################
type HostFileServer interface {
// CopyFile copies files =======================================================================
CopyFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// MoveFile moves files =======================================================================
MoveFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// NameFile renames files =======================================================================
NameFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// ListFile lists files =======================================================================
ListFile(ctx context.Context, path []string, opt *ListFileOption) ([]*HostFileObject, error)
// FindFile searches for files =======================================================================
FindFile(ctx context.Context, path []string, opt *FindFileOption) ([]*HostFileObject, error)
// Download fetches files =======================================================================
Download(ctx context.Context, path []string, opt *DownloadOption) ([]*LinkFileObject, error)
// Uploader uploads files =======================================================================
Uploader(ctx context.Context, path []string, opt *UploaderOption) ([]*BackFileAction, error)
// KillFile deletes files =======================================================================
KillFile(ctx context.Context, path []string, opt *KillFileOption) ([]*BackFileAction, error)
// MakeFile creates files =======================================================================
MakeFile(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
// MakePath creates directories =======================================================================
MakePath(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
}
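A minimal sketch, assuming the types in this package, of how a caller could drive any HostFileServer implementation; the `listRoot` helper is hypothetical and only ListDeal (defined in layers/file/manage.go) comes from this commit:

```go
package file

import "context"

// listRoot asks a driver for the objects under "/" and converts them for the
// user layer via ListDeal.
func listRoot(ctx context.Context, drv HostFileServer) ([]*UserFileObject, error) {
	objs, err := drv.ListFile(ctx, []string{"/"}, &ListFileOption{})
	if err != nil {
		return nil, err
	}
	return ListDeal(objs)
}
```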

71
layers/file/manage.go Normal file
View File

@ -0,0 +1,71 @@
package file
import (
"context"
)
// UserFileServer is the user-facing file service interface #################################################################
type UserFileServer interface {
// CopyFile copies files =======================================================================
CopyFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// MoveFile moves files =======================================================================
MoveFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// NameFile renames files =======================================================================
NameFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// ListFile lists files =======================================================================
ListFile(ctx context.Context, path []string, opt *ListFileOption) ([]*UserFileObject, error)
// FindFile searches for files =======================================================================
FindFile(ctx context.Context, path []string, opt *FindFileOption) ([]*UserFileObject, error)
// Download fetches files =======================================================================
Download(ctx context.Context, path []string, opt *DownloadOption) ([]*LinkFileObject, error)
// Uploader uploads files =======================================================================
Uploader(ctx context.Context, path []string, opt *UploaderOption) ([]*BackFileAction, error)
// KillFile deletes files =======================================================================
KillFile(ctx context.Context, path []string, opt *KillFileOption) ([]*BackFileAction, error)
// MakeFile creates files =======================================================================
MakeFile(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
// MakePath creates directories =======================================================================
MakePath(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
// PermFile sets permissions =======================================================================
PermFile(ctx context.Context, path []string, opt *PermissionFile) ([]*BackFileAction, error)
//// NewShare creates a share =======================================================================
//NewShare(ctx context.Context, path []string, opt *NewShareAction) ([]*BackFileAction, error)
//// GetShare fetches a share =======================================================================
//GetShare(ctx context.Context, path []string, opt *NewShareAction) ([]*UserFileObject, error)
//// DelShare deletes a share =======================================================================
//DelShare(ctx context.Context, path []string, opt *NewShareAction) ([]*BackFileAction, error)
}
type UserFileUpload interface {
fullPost(ctx context.Context, path []string)
pfCreate(ctx context.Context, path []string)
pfUpload(ctx context.Context, path []string)
pfUpdate(ctx context.Context, path []string)
}
func ListFile(ctx context.Context, path []string, opt *ListFileOption) ([]*UserFileObject, error) {
return ListDeal([]*HostFileObject{})
}
func FindFile(ctx context.Context, path []string, opt *ListFileOption) ([]*UserFileObject, error) {
return ListDeal([]*HostFileObject{})
}
func ListDeal(originList []*HostFileObject) ([]*UserFileObject, error) {
serverList := make([]*UserFileObject, 0)
for _, fileItem := range originList {
serverList = append(serverList, &UserFileObject{
HostFileObject: *fileItem,
// ... user-layer logic
})
}
return serverList, nil
}
func Download(ctx context.Context, path []string, opt *ListFileOption) ([]*LinkFileObject, error) {
return nil, nil // TODO: not yet implemented
}
func Uploader(ctx context.Context, path []string, opt *ListFileOption) ([]*BackFileAction, error) {
return nil, nil // TODO: not yet implemented
}

79
layers/file/object.go Normal file
View File

@ -0,0 +1,79 @@
package file
import "time"
// HostFileObject is the file information returned by the driver layer
type HostFileObject struct {
realName []string // real name
previews []string // file previews
fileSize int64 // file size
lastTime time.Time // modification time
makeTime time.Time // creation time
fileType bool // file type
fileHash string // file hash
hashType int16 // hash type
}
// UserFileObject is the file information after conversion by the user layer
type UserFileObject struct {
HostFileObject
showPath []string // display path
showName []string // display name
realPath []string // real path
checksum int32 // password checksum
fileMask int16 // file permissions
encrypts int16 // file state
// The fields below are used for displaying files on the frontend
enc_type string // encryption/decryption type
enc_from string // file password source
enc_pass string // encryption/decryption password
com_type string // compression type
sub_nums int16 // number of child files
// The notes below are for internal backend processing
// fileMask =================
// bits:    000000 0 000 000 000
// meaning: ABCDEF 1 421 421 421
// A-encrypted B-frontend decrypt C-self-decrypt
// D-is split volume E-is compressed F-is hidden
// encrypts =================
// bits:    0000000000 00 0000
// meaning: volume count | compression | encryption
}
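A minimal sketch, assuming the six ABCDEF flags occupy the high bits of `fileMask` in the order listed in the comment above (this commit does not pin the exact layout), of how such flags could be tested; the bit indexes and the `hasBit` helper are hypothetical:

```go
package file

// Hypothetical bit indexes for the ABCDEF flags, counted from the high end of
// the 16-bit fileMask as the comment's ordering suggests.
const (
	bitEncrypted    = 15 // A - encrypted
	bitFrontDecrypt = 14 // B - frontend decrypt
	bitSelfDecrypt  = 13 // C - self-decrypt
	bitSplitVolume  = 12 // D - is split volume
	bitCompressed   = 11 // E - is compressed
	bitHidden       = 10 // F - is hidden
)

// hasBit reports whether the bit at index idx is set in mask.
func hasBit(mask int16, idx uint) bool {
	return uint16(mask)>>idx&1 == 1
}
```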
type PermissionFile struct {
}
type LinkFileObject struct {
download []string // download links
usrAgent []string // user agent
}
type ListFileOption struct {
}
type FindFileOption struct {
}
type KillFileOption struct {
}
type MakeFileOption struct {
}
type DownloadOption struct {
downType int8 // download type
}
type UploaderOption struct {
}
type BackFileAction struct {
success bool // whether the action succeeded
message string // error message
}
type NewShareAction struct {
BackFileAction
shareID string // share code
pubUrls string // public link
passkey string // share password
expired time.Time // expiry time
}

16
layers/perm/fsmask.go Normal file
View File

@ -0,0 +1,16 @@
package perm
type FileMask struct {
uuid string // key UUID
user string // owning user
path string // match path
name string // friendly name
idKeyset string // keyset ID
encrypts string // encryption group ID
password string // standalone password
fileUser string // all users
filePart int64 // split-volume size
fileMask int16 // file permissions
compress int16 // whether compressed
isEnable bool // whether enabled
}

22
layers/perm/keyset.go Normal file
View File

@ -0,0 +1,22 @@
package perm
type UserKeys struct {
uuid string // key UUID
user string // owning user
main string // core key (SHA2 of the user key)
name string // friendly name
algo int8 // key algorithm
enabled bool // whether enabled
encFile bool // encrypt file contents
encName bool // encrypt file names
keyAuto bool // auto update
keyRand bool // random key
keyAuth UserAuth // key authentication
}
type UserAuth struct {
uuid string // key UUID
user string // owning user
plugin string // authentication plugin
config string // authentication configuration
}

10
layers/perm/shared.go Normal file
View File

@ -0,0 +1,10 @@
package perm
type ShareUrl struct {
uuid string // key UUID
user string // owning user
path string // shared path
pass string // share password
date string // expiry time
flag bool // whether valid
}

14
layers/user/object.go Normal file
View File

@ -0,0 +1,14 @@
package user
type UserInfo struct {
uuid string // user UUID
name string // user name
flag bool // whether valid
perm PermInfo // permission info
}
type PermInfo struct {
isAdmin bool // whether admin
davRead bool // whether read is allowed
// ...
}

View File

@ -1,6 +1,6 @@
package main
import "github.com/OpenListTeam/OpenList/v4/cmd"
import "github.com/OpenListTeam/OpenList/v5/cmd"
func main() {
cmd.Execute()

View File

@ -87,7 +87,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: "1.25.0"
go-version: "1.24.5"
- name: Setup web
run: bash build.sh dev web

View File

@ -33,7 +33,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: "1.25.0"
go-version: "1.24.5"
- name: Setup web
run: bash build.sh dev web

View File

@ -46,7 +46,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.25.0'
go-version: '1.24'
- name: Checkout
uses: actions/checkout@v4
@ -73,5 +73,4 @@ jobs:
with:
files: build/compress/*
prerelease: false
tag_name: ${{ github.event.release.tag_name }}

View File

@ -47,7 +47,7 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: '1.25.0'
go-version: 'stable'
- name: Cache Musl
id: cache-musl
@ -87,7 +87,7 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: '1.25.0'
go-version: 'stable'
- name: Cache Musl
id: cache-musl

View File

@ -36,7 +36,7 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: '1.25.0'
go-version: 'stable'
- name: Cache Musl
id: cache-musl

34
origin/.gitignore vendored Normal file
View File

@ -0,0 +1,34 @@
.idea/
.DS_Store
output/
/dist/
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
*.db
*.bin
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
/bin/*
*.json
/build
/data/
/tmp/
/log/
/lang/
/daemon/
/public/dist/*
!/public/dist/README.md
.VSCodeCounter

107
origin/CONTRIBUTING.md Normal file
View File

@ -0,0 +1,107 @@
# Contributing
## Setup your machine
`OpenList` is written in [Go](https://golang.org/) and [React](https://reactjs.org/).
Prerequisites:
- [git](https://git-scm.com)
- [Go 1.20+](https://golang.org/doc/install)
- [gcc](https://gcc.gnu.org/)
- [nodejs](https://nodejs.org/)
Clone `OpenList` and `OpenList-Frontend` anywhere:
```shell
$ git clone https://github.com/OpenListTeam/OpenList.git
$ git clone --recurse-submodules https://github.com/OpenListTeam/OpenList-Frontend.git
```
You should switch to the `main` branch for development.
## Preview your change
### backend
```shell
$ go run main.go
```
### frontend
```shell
$ pnpm dev
```
## Add a new driver
Copy the `drivers/template` folder, rename it, and follow the comments in it.
## Create a commit
Commit messages should be well formatted; to keep them standardized, follow the conventions below.
### Commit Message Format
Each commit message consists of a **header**, a **body** and a **footer**. The header has a special
format that includes a **type**, a **scope** and a **subject**:
```
<type>(<scope>): <subject>
<BLANK LINE>
<body>
<BLANK LINE>
<footer>
```
The **header** is mandatory and the **scope** of the header is optional.
No line of the commit message may be longer than 100 characters. This keeps messages easy to read on GitHub as well as in various git tools.
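For instance, a hypothetical commit message following this format might read:

```
feat(driver): add timeout for plugin handshake

Abort plugin startup when the gRPC handshake takes too long so that a
misbehaving driver binary cannot block server boot.

Closes #<issue-number>
```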
### Revert
If the commit reverts a previous commit, it should begin with `revert: `, followed by the header
of the reverted commit.
In the body it should say: `This reverts commit <hash>.`, where the hash is the SHA of the commit
being reverted.
### Type
Must be one of the following:
* **feat**: A new feature
* **fix**: A bug fix
* **docs**: Documentation only changes
* **style**: Changes that do not affect the meaning of the code (white-space, formatting, missing
semi-colons, etc)
* **refactor**: A code change that neither fixes a bug nor adds a feature
* **perf**: A code change that improves performance
* **test**: Adding missing or correcting existing tests
* **build**: Affects project builds or dependency modifications
* **revert**: Restore the previous commit
* **ci**: Continuous integration of related file modifications
* **chore**: Changes to the build process or auxiliary tools and libraries such as documentation
generation
* **release**: Release a new version
### Scope
The scope can be anything specifying the place of the commit change. For example `$location`,
`$browser`, `$compile`, `$rootScope`, `ngHref`, `ngClick`, `ngView`, etc...
You can use `*` when the change affects more than a single scope.
### Subject
The subject contains a succinct description of the change:
* use the imperative, present tense: "change" not "changed" nor "changes"
* don't capitalize the first letter
* no dot (.) at the end
### Body
Just as in the **subject**, use the imperative, present tense: "change" not "changed" nor "changes".
The body should include the motivation for the change and contrast this with previous behavior.
### Footer
The footer should contain any information about **Breaking Changes** and is also the place to
[reference GitHub issues that this commit closes](https://help.github.com/articles/closing-issues-via-commit-messages/).
**Breaking Changes** should start with the phrase `BREAKING CHANGE:` followed by a space or two newlines;
the rest of the commit message then describes the change.
## Submit a pull request
Push your branch to your `openlist` fork and open a pull request against the
`main` branch.

View File

@ -20,12 +20,11 @@ ARG GID=1001
WORKDIR /opt/openlist/
RUN addgroup -g ${GID} ${USER} && \
adduser -D -u ${UID} -G ${USER} ${USER} && \
mkdir -p /opt/openlist/data
COPY --from=builder --chmod=755 --chown=${UID}:${GID} /app/bin/openlist ./
COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
COPY --chmod=755 --from=builder /app/bin/openlist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
RUN adduser -u ${UID} -g ${GID} -h /opt/openlist/data -D -s /bin/sh ${USER} \
&& chown -R ${UID}:${GID} /opt \
&& chown -R ${UID}:${GID} /entrypoint.sh
USER ${USER}
RUN /entrypoint.sh version

View File

@ -10,12 +10,12 @@ ARG GID=1001
WORKDIR /opt/openlist/
RUN addgroup -g ${GID} ${USER} && \
adduser -D -u ${UID} -G ${USER} ${USER} && \
mkdir -p /opt/openlist/data
COPY --chmod=755 /build/${TARGETPLATFORM}/openlist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
COPY --chmod=755 --chown=${UID}:${GID} /build/${TARGETPLATFORM}/openlist ./
COPY --chmod=755 --chown=${UID}:${GID} entrypoint.sh /entrypoint.sh
RUN adduser -u ${UID} -g ${GID} -h /opt/openlist/data -D -s /bin/sh ${USER} \
&& chown -R ${UID}:${GID} /opt \
&& chown -R ${UID}:${GID} /entrypoint.sh
USER ${USER}
RUN /entrypoint.sh version

View File

@ -74,6 +74,7 @@ Thank you for your support and understanding of the OpenList project.
- [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com)
- [x] [ILanzou](https://www.ilanzou.com)
- [x] [Aliyundrive share](https://www.alipan.com)
- [x] [Google photo](https://photos.google.com)
- [x] [Mega.nz](https://mega.nz)
- [x] [Baidu photo](https://photo.baidu.com)
@ -84,16 +85,6 @@ Thank you for your support and understanding of the OpenList project.
- [x] [FeijiPan](https://www.feijipan.com)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] [Chaoxing](https://www.chaoxing.com)
- [x] [CNB](https://cnb.cool/)
- [x] [Degoo](https://degoo.com)
- [x] [Doubao](https://www.doubao.com)
- [x] [Febbox](https://www.febbox.com)
- [x] [GitHub](https://github.com)
- [x] [OpenList](https://github.com/OpenListTeam/OpenList)
- [x] [Teldrive](https://github.com/tgdrive/teldrive)
- [x] [Weiyun](https://www.weiyun.com)
- [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode

View File

@ -74,6 +74,7 @@ OpenList 是一个由 OpenList 团队独立维护的开源项目,遵循 AGPL-3
- [x] [迅雷网盘](https://pan.xunlei.com)
- [x] [蓝奏云](https://www.lanzou.com)
- [x] [蓝奏云优享版](https://www.ilanzou.com)
- [x] [阿里云盘分享](https://www.alipan.com)
- [x] [Google 相册](https://photos.google.com)
- [x] [Mega.nz](https://mega.nz)
- [x] [百度相册](https://photo.baidu.com)
@ -84,15 +85,6 @@ OpenList 是一个由 OpenList 团队独立维护的开源项目,遵循 AGPL-3
- [x] [飞机盘](https://www.feijipan.com)
- [x] [多吉云](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] [超星](https://www.chaoxing.com)
- [x] [CNB](https://cnb.cool/)
- [x] [Degoo](https://degoo.com)
- [x] [豆包](https://www.doubao.com)
- [x] [Febbox](https://www.febbox.com)
- [x] [GitHub](https://github.com)
- [x] [OpenList](https://github.com/OpenListTeam/OpenList)
- [x] [Teldrive](https://github.com/tgdrive/teldrive)
- [x] [微云](https://www.weiyun.com)
- [x] 部署方便,开箱即用
- [x] 文件预览PDF、markdown、代码、纯文本等
- [x] 画廊模式下的图片预览

View File

@ -74,6 +74,7 @@ OpenListプロジェクトへのご支援とご理解をありがとうござい
- [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com)
- [x] [ILanzou](https://www.ilanzou.com)
- [x] [Aliyundrive share](https://www.alipan.com)
- [x] [Google photo](https://photos.google.com)
- [x] [Mega.nz](https://mega.nz)
- [x] [Baidu photo](https://photo.baidu.com)
@ -84,15 +85,6 @@ OpenListプロジェクトへのご支援とご理解をありがとうござい
- [x] [FeijiPan](https://www.feijipan.com)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] [Chaoxing](https://www.chaoxing.com)
- [x] [CNB](https://cnb.cool/)
- [x] [Degoo](https://degoo.com)
- [x] [Doubao](https://www.doubao.com)
- [x] [Febbox](https://www.febbox.com)
- [x] [GitHub](https://github.com)
- [x] [OpenList](https://github.com/OpenListTeam/OpenList)
- [x] [Teldrive](https://github.com/tgdrive/teldrive)
- [x] [Weiyun](https://www.weiyun.com)
- [x] 簡単にデプロイでき、すぐに使える
- [x] ファイルプレビューPDF、markdown、コード、テキストなど
- [x] ギャラリーモードでの画像プレビュー

View File

@ -74,6 +74,7 @@ Dank u voor uw ondersteuning en begrip
- [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com)
- [x] [ILanzou](https://www.ilanzou.com)
- [x] [Aliyundrive share](https://www.alipan.com)
- [x] [Google photo](https://photos.google.com)
- [x] [Mega.nz](https://mega.nz)
- [x] [Baidu photo](https://photo.baidu.com)
@ -84,15 +85,6 @@ Dank u voor uw ondersteuning en begrip
- [x] [FeijiPan](https://www.feijipan.com)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] [Chaoxing](https://www.chaoxing.com)
- [x] [CNB](https://cnb.cool/)
- [x] [Degoo](https://degoo.com)
- [x] [Doubao](https://www.doubao.com)
- [x] [Febbox](https://www.febbox.com)
- [x] [GitHub](https://github.com)
- [x] [OpenList](https://github.com/OpenListTeam/OpenList)
- [x] [Teldrive](https://github.com/tgdrive/teldrive)
- [x] [Weiyun](https://www.weiyun.com)
- [x] Eenvoudig te implementeren en direct te gebruiken
- [x] Bestandsvoorbeeld (PDF, markdown, code, platte tekst, ...)
- [x] Afbeeldingsvoorbeeld in galerijweergave

View File

@ -236,7 +236,7 @@ BuildRelease() {
BuildLoongGLIBC() {
local target_abi="$2"
local output_file="$1"
local oldWorldGoVersion="1.25.0"
local oldWorldGoVersion="1.24.3"
if [ "$target_abi" = "abi1.0" ]; then
echo building for linux-loong64-abi1.0
@ -254,13 +254,13 @@ BuildLoongGLIBC() {
# Download and setup patched Go compiler for old-world
if ! curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250821/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
-o go-loong64-abi1.0.tar.gz; then
echo "Error: Failed to download patched Go compiler for old-world ABI1.0"
if [ -n "$GITHUB_TOKEN" ]; then
echo "Error output from curl:"
curl -fsSL --retry 3 -H "Authorization: Bearer $GITHUB_TOKEN" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250821/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
"https://github.com/loong64/loong64-abi1.0-toolchains/releases/download/20250722/go${oldWorldGoVersion}.linux-amd64.tar.gz" \
-o go-loong64-abi1.0.tar.gz || true
fi
return 1

51
origin/cmd/common.go Normal file
View File

@ -0,0 +1,51 @@
package cmd
import (
"os"
"path/filepath"
"strconv"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap/data"
"github.com/OpenListTeam/OpenList/v4/internal/db"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
log "github.com/sirupsen/logrus"
)
func Init() {
bootstrap.InitConfig()
bootstrap.Log()
bootstrap.InitDB()
data.InitData()
bootstrap.InitStreamLimit()
bootstrap.InitIndex()
bootstrap.InitUpgradePatch()
}
func Release() {
db.Close()
}
var pid = -1
var pidFile string
func initDaemon() {
ex, err := os.Executable()
if err != nil {
log.Fatal(err)
}
exPath := filepath.Dir(ex)
_ = os.MkdirAll(filepath.Join(exPath, "daemon"), 0700)
pidFile = filepath.Join(exPath, "daemon/pid")
if utils.Exists(pidFile) {
bytes, err := os.ReadFile(pidFile)
if err != nil {
log.Fatal("failed to read pid file", err)
}
id, err := strconv.Atoi(string(bytes))
if err != nil {
log.Fatal("failed to parse pid data", err)
}
pid = id
}
}

View File

@ -0,0 +1,10 @@
package flags
var (
DataDir string
Debug bool
NoPrefix bool
Dev bool
ForceBinDir bool
LogStd bool
)

36
origin/cmd/root.go Normal file
View File

@ -0,0 +1,36 @@
package cmd
import (
"fmt"
"os"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
_ "github.com/OpenListTeam/OpenList/v4/drivers"
_ "github.com/OpenListTeam/OpenList/v4/internal/archive"
_ "github.com/OpenListTeam/OpenList/v4/internal/offline_download"
"github.com/spf13/cobra"
)
var RootCmd = &cobra.Command{
Use: "openlist",
Short: "A file list program that supports multiple storage.",
Long: `A file list program that supports multiple storage,
built with love by OpenListTeam.
Complete documentation is available at https://doc.oplist.org/`,
}
func Execute() {
    if err := RootCmd.Execute(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}

func init() {
    RootCmd.PersistentFlags().StringVar(&flags.DataDir, "data", "data", "data folder")
    RootCmd.PersistentFlags().BoolVar(&flags.Debug, "debug", false, "start with debug mode")
    RootCmd.PersistentFlags().BoolVar(&flags.NoPrefix, "no-prefix", false, "disable env prefix")
    RootCmd.PersistentFlags().BoolVar(&flags.Dev, "dev", false, "start with dev mode")
    RootCmd.PersistentFlags().BoolVar(&flags.ForceBinDir, "force-bin-dir", false, "Force to use the directory where the binary file is located as data directory")
    RootCmd.PersistentFlags().BoolVar(&flags.LogStd, "log-std", false, "Force to log to std")
}

origin/cmd/server.go (new file, 261 lines)

@@ -0,0 +1,261 @@
package cmd

import (
    "context"
    "errors"
    "fmt"
    "net"
    "net/http"
    "os"
    "os/signal"
    "strconv"
    "sync"
    "syscall"
    "time"

    "github.com/OpenListTeam/OpenList/v4/cmd/flags"
    "github.com/OpenListTeam/OpenList/v4/internal/bootstrap"
    "github.com/OpenListTeam/OpenList/v4/internal/conf"
    "github.com/OpenListTeam/OpenList/v4/internal/fs"
    "github.com/OpenListTeam/OpenList/v4/pkg/utils"
    "github.com/OpenListTeam/OpenList/v4/server"
    "github.com/OpenListTeam/OpenList/v4/server/middlewares"
    "github.com/OpenListTeam/sftpd-openlist"
    ftpserver "github.com/fclairamb/ftpserverlib"
    "github.com/gin-gonic/gin"
    log "github.com/sirupsen/logrus"
    "github.com/spf13/cobra"
    "golang.org/x/net/http2"
    "golang.org/x/net/http2/h2c"
)
// ServerCmd represents the server command
var ServerCmd = &cobra.Command{
    Use:   "server",
    Short: "Start the server at the specified address",
    Long: `Start the server at the specified address.
The address is defined in the config file.`,
    Run: func(cmd *cobra.Command, args []string) {
        Init()
        if conf.Conf.DelayedStart != 0 {
            utils.Log.Infof("delayed start for %d seconds", conf.Conf.DelayedStart)
            time.Sleep(time.Duration(conf.Conf.DelayedStart) * time.Second)
        }
        bootstrap.InitOfflineDownloadTools()
        bootstrap.LoadStorages()
        bootstrap.InitTaskManager()
        if !flags.Debug && !flags.Dev {
            gin.SetMode(gin.ReleaseMode)
        }
        r := gin.New()
        // gin log
        if conf.Conf.Log.Filter.Enable {
            r.Use(middlewares.FilteredLogger())
        } else {
            r.Use(gin.LoggerWithWriter(log.StandardLogger().Out))
        }
        r.Use(gin.RecoveryWithWriter(log.StandardLogger().Out))
        server.Init(r)
        var httpHandler http.Handler = r
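        // Wrap the handler with h2c so HTTP/2 can also be served over cleartext connections when enabled.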
        if conf.Conf.Scheme.EnableH2c {
            httpHandler = h2c.NewHandler(r, &http2.Server{})
        }
        var httpSrv, httpsSrv, unixSrv *http.Server
        if conf.Conf.Scheme.HttpPort != -1 {
            httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
            fmt.Printf("start HTTP server @ %s\n", httpBase)
            utils.Log.Infof("start HTTP server @ %s", httpBase)
            httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
            go func() {
                err := httpSrv.ListenAndServe()
                if err != nil && !errors.Is(err, http.ErrServerClosed) {
                    utils.Log.Fatalf("failed to start http: %s", err.Error())
                }
            }()
        }
        if conf.Conf.Scheme.HttpsPort != -1 {
            httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort)
            fmt.Printf("start HTTPS server @ %s\n", httpsBase)
            utils.Log.Infof("start HTTPS server @ %s", httpsBase)
            httpsSrv = &http.Server{Addr: httpsBase, Handler: r}
            go func() {
                err := httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
                if err != nil && !errors.Is(err, http.ErrServerClosed) {
                    utils.Log.Fatalf("failed to start https: %s", err.Error())
                }
            }()
        }
        if conf.Conf.Scheme.UnixFile != "" {
            fmt.Printf("start unix server @ %s\n", conf.Conf.Scheme.UnixFile)
            utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
            unixSrv = &http.Server{Handler: httpHandler}
            go func() {
                listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
                if err != nil {
                    utils.Log.Fatalf("failed to listen unix: %+v", err)
                }
                // set socket file permission
                mode, err := strconv.ParseUint(conf.Conf.Scheme.UnixFilePerm, 8, 32)
                if err != nil {
                    utils.Log.Errorf("failed to parse socket file permission: %+v", err)
                } else {
                    err = os.Chmod(conf.Conf.Scheme.UnixFile, os.FileMode(mode))
                    if err != nil {
                        utils.Log.Errorf("failed to chmod socket file: %+v", err)
                    }
                }
                err = unixSrv.Serve(listener)
                if err != nil && !errors.Is(err, http.ErrServerClosed) {
                    utils.Log.Fatalf("failed to start unix: %s", err.Error())
                }
            }()
        }
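        // Optional S3-compatible gateway, served by its own gin engine on a separate port.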
        if conf.Conf.S3.Port != -1 && conf.Conf.S3.Enable {
            s3r := gin.New()
            s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
            server.InitS3(s3r)
            s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port)
            fmt.Printf("start S3 server @ %s\n", s3Base)
            utils.Log.Infof("start S3 server @ %s", s3Base)
            go func() {
                var err error
                if conf.Conf.S3.SSL {
                    httpsSrv = &http.Server{Addr: s3Base, Handler: s3r}
                    err = httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
                }
                if !conf.Conf.S3.SSL {
                    httpSrv = &http.Server{Addr: s3Base, Handler: s3r}
                    err = httpSrv.ListenAndServe()
                }
                if err != nil && !errors.Is(err, http.ErrServerClosed) {
                    utils.Log.Fatalf("failed to start s3 server: %s", err.Error())
                }
            }()
        }
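        // Optional FTP server.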
        var ftpDriver *server.FtpMainDriver
        var ftpServer *ftpserver.FtpServer
        if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable {
            var err error
            ftpDriver, err = server.NewMainDriver()
            if err != nil {
                utils.Log.Fatalf("failed to start ftp driver: %s", err.Error())
            } else {
                fmt.Printf("start ftp server on %s\n", conf.Conf.FTP.Listen)
                utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen)
                go func() {
                    ftpServer = ftpserver.NewFtpServer(ftpDriver)
                    err = ftpServer.ListenAndServe()
                    if err != nil {
                        utils.Log.Fatalf("problem ftp server listening: %s", err.Error())
                    }
                }()
            }
        }
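        // Optional SFTP server.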
        var sftpDriver *server.SftpDriver
        var sftpServer *sftpd.SftpServer
        if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable {
            var err error
            sftpDriver, err = server.NewSftpDriver()
            if err != nil {
                utils.Log.Fatalf("failed to start sftp driver: %s", err.Error())
            } else {
                fmt.Printf("start sftp server on %s\n", conf.Conf.SFTP.Listen)
                utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen)
                go func() {
                    sftpServer = sftpd.NewSftpServer(sftpDriver)
                    err = sftpServer.RunServer()
                    if err != nil {
                        utils.Log.Fatalf("problem sftp server listening: %s", err.Error())
                    }
                }()
            }
        }
        // Wait for an interrupt signal, then gracefully shut down the server
        // with a timeout of 1 second.
        quit := make(chan os.Signal, 1)
        // kill (no param) sends syscall.SIGTERM by default
        // kill -2 is syscall.SIGINT
        // kill -9 is syscall.SIGKILL, but it cannot be caught, so there is no need to add it
        signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
        <-quit
        utils.Log.Println("Shutdown server...")
        fs.ArchiveContentUploadTaskManager.RemoveAll()
        Release()
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        defer cancel()
        var wg sync.WaitGroup
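        // Shut down every enabled listener concurrently and wait for all of them to finish.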
        if conf.Conf.Scheme.HttpPort != -1 {
            wg.Add(1)
            go func() {
                defer wg.Done()
                if err := httpSrv.Shutdown(ctx); err != nil {
                    utils.Log.Fatal("HTTP server shutdown err: ", err)
                }
            }()
        }
        if conf.Conf.Scheme.HttpsPort != -1 {
            wg.Add(1)
            go func() {
                defer wg.Done()
                if err := httpsSrv.Shutdown(ctx); err != nil {
                    utils.Log.Fatal("HTTPS server shutdown err: ", err)
                }
            }()
        }
        if conf.Conf.Scheme.UnixFile != "" {
            wg.Add(1)
            go func() {
                defer wg.Done()
                if err := unixSrv.Shutdown(ctx); err != nil {
                    utils.Log.Fatal("Unix server shutdown err: ", err)
                }
            }()
        }
        if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable && ftpServer != nil && ftpDriver != nil {
            wg.Add(1)
            go func() {
                defer wg.Done()
                ftpDriver.Stop()
                if err := ftpServer.Stop(); err != nil {
                    utils.Log.Fatal("FTP server shutdown err: ", err)
                }
            }()
        }
        if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable && sftpServer != nil && sftpDriver != nil {
            wg.Add(1)
            go func() {
                defer wg.Done()
                if err := sftpServer.Close(); err != nil {
                    utils.Log.Fatal("SFTP server shutdown err: ", err)
                }
            }()
        }
        wg.Wait()
        utils.Log.Println("Server exit")
    },
}

func init() {
    RootCmd.AddCommand(ServerCmd)
    // Here you will define your flags and configuration settings.
    // Cobra supports Persistent Flags which will work for this command
    // and all subcommands, e.g.:
    // serverCmd.PersistentFlags().String("foo", "", "A help for foo")
    // Cobra supports local flags which will only run when this command
    // is called directly, e.g.:
    // serverCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

// OutOpenListInit exposes an entry point so that an external caller can start the server.
func OutOpenListInit() {
    var (
        cmd  *cobra.Command
        args []string
    )
    ServerCmd.Run(cmd, args)
}

Some files were not shown because too many files have changed in this diff.