Compare commits

..

3 Commits

Author SHA1 Message Date
d78c42b02f fix: filepath 2021-12-11 17:57:09 +01:00
8cb40b2d60 feat: replay 2021-12-11 15:53:02 +01:00
57fdd492ef Merge branch 'develop' into 'master' (fix: auto-updater operators registration to ensure blender 2.93 compatibility; see merge request slumber/multi-user!117) 2021-04-15 13:39:47 +00:00
90 changed files with 2506 additions and 3521 deletions

1
.gitignore vendored
View File

@ -14,4 +14,3 @@ _build
# ignore generated zip generated from blender_addon_tester
*.zip
libs

View File

@ -8,5 +8,3 @@ build:
name: multi_user
paths:
- multi_user
variables:
GIT_SUBMODULE_STRATEGY: recursive

View File

@ -5,7 +5,6 @@ deploy:
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: "/certs"
GIT_SUBMODULE_STRATEGY: recursive
services:
- docker:19.03.12-dind

View File

@ -3,5 +3,3 @@ test:
image: slumber/blender-addon-testing:latest
script:
- python3 scripts/test_addon.py
variables:
GIT_SUBMODULE_STRATEGY: recursive

3
.gitmodules vendored
View File

@ -1,3 +0,0 @@
[submodule "multi_user/libs/replication"]
path = multi_user/libs/replication
url = https://gitlab.com/slumber/replication.git

View File

@ -187,33 +187,3 @@ All notable changes to this project will be documented in this file.
- Sync missing armature bone Roll
- Sync missing driver data_path
- Constraint replication
## [0.4.0] - 2021-07-20
### Added
- Connection preset system (@Kysios)
- Display connected users' active mode (users panel and viewport) (@Kysios)
- Delta-based replication
- Sync timeline marker
- Sync images settings (@Kysios)
- Sync parent relation type (@Kysios)
- Sync uv project modifier
- Sync FCurves modifiers
### Changed
- User selection optimizations (draw and sync) (@Kysios)
- Improved shapekey syncing performance
- Improved gpencil syncing performance
- Integrate replication as a submodule
- The dependencies are now installed in a folder (the blender addon folder) that no longer requires administrative rights
- Presence overlay UI optimization (@Kysios)
### Fixed
- User selection bounding box glitches for non-mesh objects (@Kysios)
- Transforms replication for animated objects
- GPencil fill stroke
- Sculpt and GPencil brushes deleted when joining a session (@Kysios)
- Auto-updater doesn't work for master and develop builds

View File

@ -11,8 +11,9 @@ This tool aims to allow multiple users to work on the same scene over the networ
## Quick installation
1. Download [latest build](https://gitlab.com/slumber/multi-user/-/jobs/artifacts/develop/download?job=build) or [stable build](https://gitlab.com/slumber/multi-user/-/jobs/artifacts/master/download?job=build).
2. Install last_version.zip from your addon preferences.
1. Download latest release [multi_user.zip](https://gitlab.com/slumber/multi-user/-/jobs/artifacts/master/download?job=build).
2. Run blender as administrator (needed to install the dependencies).
3. Install last_version.zip from your addon preferences.
[Dependencies](#dependencies) will be automatically added to your blender python during installation.
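For reference, the automatic step is roughly equivalent to installing the `replication` package pinned in the add-on's `__init__.py` with Blender's bundled Python, which is why elevated rights can be needed when Blender sits in a protected directory. A minimal sketch (the interpreter path is illustrative; the add-on performs this itself during registration):

```bash
# Rough manual equivalent of the automatic dependency installation.
# The interpreter path is illustrative; adjust it to your Blender version and location.
/path/to/blender/2.93/python/bin/python3.9 -m ensurepip
/path/to/blender/2.93/python/bin/python3.9 -m pip install replication==0.1.26
```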
@ -29,34 +30,34 @@ See the [troubleshooting guide](https://slumber.gitlab.io/multi-user/getting_sta
Currently, not all data-blocks are supported for replication over the wire. The following list summarizes the status of each one.
| Name | Status | Comment |
| -------------- | :----: | :---------------------------------------------------------------------: |
| -------------- | :----: | :----------------------------------------------------------: |
| action | ✔️ | |
| armature | ❗ | Not stable |
| camera | ✔️ | |
| collection | ✔️ | |
| curve | ❗ | Nurbs surfaces not supported |
| gpencil | ✔️ | |
| image | ✔️ | |
| mesh | ✔️ | |
| material | ✔️ | |
| node_groups | ✔️ | Material & Geometry only |
| node_groups | | Material & Geometry only |
| geometry nodes | ✔️ | |
| metaball | ✔️ | |
| object | ✔️ | |
| textures | ❗ | Supported for modifiers/materials/geo nodes only |
| texts | ✔️ | |
| scene | ✔️ | |
| world | ✔️ | |
| volumes | ✔️ | |
| lightprobes | ✔️ | |
| physics | ✔️ | |
| textures | ✔️ | |
| curve | ❗ | Nurbs surfaces not supported |
| armature | ❗ | Only for Mesh. [Planned for GPencil](https://gitlab.com/slumber/multi-user/-/issues/161). Not stable yet |
| compositing | ❌ | [Planned](https://gitlab.com/slumber/multi-user/-/issues/46) |
| texts | ❌ | [Planned](https://gitlab.com/slumber/multi-user/-/issues/81) |
| nla | | |
| volumes | ✔️ | |
| particles | ❗ | The cache isn't syncing. |
| speakers | ❗ | [Partial](https://gitlab.com/slumber/multi-user/-/issues/65) |
| vse | ❗ | Mask and Clip not supported yet |
| libraries | ❌ | |
| nla | | |
| texts | ❌ | [Planned for v0.5.0](https://gitlab.com/slumber/multi-user/-/issues/81) |
| compositing | ❌ | [Planned for v0.5.0](https://gitlab.com/slumber/multi-user/-/issues/46) |
| physics | ❌ | [Planned](https://gitlab.com/slumber/multi-user/-/issues/45) |
| libraries | ❗ | Partial |

View File

@ -19,10 +19,10 @@ import sys
project = 'multi-user'
copyright = '2020, Swann Martinez'
author = 'Swann Martinez, Poochy, Fabian'
author = 'Swann Martinez, with contributions from Poochy'
# The full version, including alpha/beta/rc tags
release = '0.5.0-develop'
release = '0.2.0'
# -- General configuration ---------------------------------------------------

Binary file not shown. (Before: 15 KiB, After: 12 KiB)
Binary file not shown. (Before: 22 KiB, After: 15 KiB)
Binary file not shown. (Before: 18 KiB, After: 17 KiB)
Binary file not shown. (Before: 20 KiB, After: 14 KiB)
Binary file not shown. (Before: 365 KiB, After: 70 KiB)
Binary file not shown. (Before: 26 KiB, After: 18 KiB)
Binary file not shown. (Before: 320 KiB)
Binary file not shown. (Before: 7.3 KiB)
Binary file not shown. (Before: 4.2 KiB)
Binary file not shown. (Before: 9.0 KiB)
Binary file not shown. (Before: 3.2 KiB)

View File

@ -108,69 +108,36 @@ Before starting make sure that you have access to the session IP address and por
1. Fill in your user information
--------------------------------
Joining a server
=======================
Follow the user-info_ section for this step.
--------------
Network setup
--------------
----------------
2. Network setup
----------------
In the network panel, select **JOIN**.
The **join sub-panel** (see image below) allows you to configure your client to join a
collaborative session which is already hosted.
.. figure:: img/server_preset_image_normal_server.png
.. figure:: img/quickstart_join.png
:align: center
:width: 200px
:alt: Connect menu
Connection pannel
Connection panel
Fill in the fields with your information:
- **IP**: the host's IP address.
- **Port**: the host's port number.
- **Connect as admin**: connect yourself with **admin rights** (see :ref:`admin` ) to the session.
Once you've configured every field, hit the button **CONNECT** to join the session!
When the :ref:`session-status` is **ONLINE** you are online and ready to start co-creating.
.. note::
If you want to have **administrator rights** (see :ref:`admin` ) on the server, just enter the password created by the host in the **Connect as admin** section
.. figure:: img/server_preset_image_admin.png
:align: center
:width: 200px
Admin password
---------------
Server presets
---------------
You can save your server presets in a preset list below the 'JOIN' and 'HOST' buttons. This allows you to quickly access and manage your servers.
To add a server, first enter the IP address and the port (plus the password if needed), then click on the + icon to add a name to your preset. To remove a server from the list, select it and click on the - icon.
.. figure:: img/server_preset_exemple.gif
:align: center
:width: 200px
.. warning:: Be careful, if you don't rename your new preset, or if it has the same name as an existing preset, the old preset will be overwritten.
.. figure:: img/server_preset_image_report.png
:align: center
:width: 200px
.. note::
Two presets are already present when the addon is launched:
- The 'localhost' preset, to host and join a local session quickly
- The 'public session' preset, to join the public sessions of the multi-user server (official Discord to participate: https://discord.gg/aBPvGws)
.. Maybe something more explicit here
.. note::
Additional configuration settings can be found in the :ref:`advanced` section.
Once you've configured every field, hit the button **CONNECT** to join the session!
When the :ref:`session-status` is **ONLINE** you are online and ready to start co-creating.
.. note::
When starting a **dedicated server**, the session status screen will take you to the **LOBBY**, awaiting an admin to start the session.
@ -215,10 +182,8 @@ One of the most vital tools is the **Online user panel**. It lists all connected
users' information including your own:
* **Role** : if a user is an admin or a regular user.
* **Username** : Name of the user.
* **Mode** : User's active editing mode (edit_mesh, paint, etc.).
* **Frame**: When (on which frame) the user is working.
* **Location**: Where the user is actually working.
* **Frame**: When (on which frame) the user is working.
* **Ping**: user's connection delay in milliseconds
.. figure:: img/quickstart_users.png
@ -275,7 +240,6 @@ it draws users' related information in your viewport, such as:
* Username
* User point of view
* User active mode
* User selection
.. figure:: img/quickstart_presence.png
@ -410,6 +374,15 @@ Network
Advanced network settings
**IPC Port** is the port used for Inter Process Communication. This port is used
by the multi-user subprocesses to communicate with each other. If different instances
of multi-user are using the same IPC port, this will create conflicts!
.. note::
You only need to modify this setting if you need to launch multiple clients from the same
computer (or if you try to host and join from the same computer). To resolve this, you simply need to enter a different
**IPC port** for each blender instance.
**Timeout (in milliseconds)** is the maximum ping authorized before auto-disconnecting.
You should only increase it if you have a bad connection.
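As an illustration, both settings can also be adjusted from Blender's Python console. A minimal sketch, assuming the add-on is registered under ``multi_user`` and exposes ``ipc_port`` and ``connection_timeout`` preference properties (the exact identifiers may differ):

.. code-block:: python

    import bpy

    # Hypothetical property names; check the add-on preferences for the real ones.
    prefs = bpy.context.preferences.addons["multi_user"].preferences
    prefs.ipc_port = 5590            # use a distinct IPC port per Blender instance
    prefs.connection_timeout = 5000  # maximum ping (ms) before auto-disconnecting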

View File

@ -76,7 +76,7 @@ Hit 'Create a network' (see image below) and go to the network settings.
:align: center
:width: 450px
Admin password
Network page
Now that the network is created, let's configure it.
@ -212,14 +212,14 @@ You can run the dedicated server on any platform by following these steps:
.. code-block:: bash
replication.serve
replication.server
.. hint::
You can also specify a custom **port** (-p), **timeout** (-t), **admin password** (-pwd), **log level (ERROR, WARNING, INFO or DEBUG)** (-l) and **log file** (-lf) with the following optional arguments
.. code-block:: bash
replication.serve -p 5555 -pwd admin -t 5000 -l INFO -lf server.log
replication.server -p 5555 -pwd admin -t 5000 -l INFO -lf server.log
Here, for example, a server is instantiated on port 5555, with password 'admin', a 5 second timeout, and logging enabled.
@ -562,7 +562,7 @@ The default Docker image essentially runs the equivalent of:
.. code-block:: bash
replication.serve -pwd admin -p 5555 -t 5000 -l DEBUG -lf multiuser_server.log
replication.server -pwd admin -p 5555 -t 5000 -l DEBUG -lf multiuser_server.log
This means the server will be launched with 'admin' as the administrator password, run on ports 5555:5558, use a timeout of 5 seconds, verbose 'DEBUG' log level, and with log files written to 'multiuser_server.log'. See :ref:`cmd-line` for a description of optional parameters.
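One possible way to override those defaults is to pass your own command after the image name when starting the container. A sketch only, assuming the image keeps ``replication.server`` available as its default command; the image name is a placeholder:

.. code-block:: bash

    # <multi-user-server-image> is a placeholder; substitute the image you built or pulled.
    docker run -d \
        -p 5555-5558:5555-5558 \
        <multi-user-server-image> \
        replication.server -pwd supersecretpassword -p 5555 -t 5000 -l INFO -lf multiuser_server.log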
@ -572,7 +572,7 @@ For example, I would like to launch my server with a different administrator pas
.. code-block:: bash
replication.serve -pwd supersecretpassword -p 5555 -t 3000 -l DEBUG -lf logname.log
python3 -m replication.server -pwd supersecretpassword -p 5555 -t 3000 -l DEBUG -lf logname.log
Now, my configuration should look like this:
@ -691,7 +691,7 @@ We're finally ready to launch the server. Simply run:
.. code-block:: bash
replication.serve -p 5555 -pwd admin -t 5000 -l INFO -lf server.log
python3 -m replication.server -p 5555 -pwd admin -t 5000 -l INFO -lf server.log
See :ref:`cmd-line` for a description of optional parameters

View File

@ -19,7 +19,7 @@
bl_info = {
"name": "Multi-User",
"author": "Swann Martinez",
"version": (0, 4, 1),
"version": (0, 3, 0),
"description": "Enable real-time collaborative workflow inside blender",
"blender": (2, 82, 0),
"location": "3D View > Sidebar > Multi-User tab",
@ -43,10 +43,13 @@ from bpy.app.handlers import persistent
from . import environment
DEPENDENCIES = {
("replication", '0.1.26'),
}
module_error_msg = "Insufficient rights to install the multi-user \
dependencies, launch blender with administrator rights."
def register():
# Setup logging policy
logging.basicConfig(
@ -55,13 +58,16 @@ def register():
level=logging.INFO)
try:
environment.register()
if bpy.app.version[1] >= 91:
python_binary_path = sys.executable
else:
python_binary_path = bpy.app.binary_path_python
environment.setup(DEPENDENCIES, python_binary_path)
from . import presence
from . import operators
from . import handlers
from . import ui
from . import icons
from . import preferences
from . import addon_updater_ops
@ -69,9 +75,7 @@ def register():
addon_updater_ops.register(bl_info)
presence.register()
operators.register()
handlers.register()
ui.register()
icons.register()
except ModuleNotFoundError as e:
raise Exception(module_error_msg)
logging.error(module_error_msg)
@ -85,28 +89,21 @@ def register():
type=preferences.SessionUser
)
bpy.types.WindowManager.user_index = bpy.props.IntProperty()
bpy.types.WindowManager.server_index = bpy.props.IntProperty()
bpy.types.TOPBAR_MT_file_import.append(operators.menu_func_import)
bpy.types.TOPBAR_MT_file_export.append(operators.menu_func_export)
def unregister():
from . import presence
from . import operators
from . import handlers
from . import ui
from . import icons
from . import preferences
from . import addon_updater_ops
bpy.types.TOPBAR_MT_file_import.remove(operators.menu_func_import)
bpy.types.TOPBAR_MT_file_export.remove(operators.menu_func_export)
presence.unregister()
addon_updater_ops.unregister()
ui.unregister()
icons.unregister()
handlers.unregister()
operators.unregister()
preferences.unregister()
@ -114,6 +111,3 @@ def unregister():
del bpy.types.ID.uuid
del bpy.types.WindowManager.online_users
del bpy.types.WindowManager.user_index
del bpy.types.WindowManager.server_index
environment.unregister()

View File

@ -1688,7 +1688,10 @@ class GitlabEngine(object):
# Could clash with tag names and if it does, it will
# download TAG zip instead of branch zip to get
# direct path, would need.
return f"https://gitlab.com/slumber/multi-user/-/jobs/artifacts/{branch}/download?job=build"
return "{}{}{}".format(
self.form_repo_url(updater),
"/repository/archive.zip?sha=",
branch)
def get_zip_url(self, sha, updater):
return "{base}/repository/archive.zip?sha={sha}".format(

View File

@ -28,6 +28,7 @@ __all__ = [
'bl_light',
'bl_scene',
'bl_material',
'bl_library',
'bl_armature',
'bl_action',
'bl_world',
@ -38,27 +39,18 @@ __all__ = [
'bl_font',
'bl_sound',
'bl_file',
# 'bl_sequencer',
'bl_node_group',
'bl_texture',
"bl_particle",
] # Order here defines execution order
if bpy.app.version >= (2,91,0):
if bpy.app.version[1] >= 91:
__all__.append('bl_volume')
from . import *
from replication.data import ReplicatedDataFactory
def types_to_register():
return __all__
from replication.protocol import DataTranslationProtocol
def get_data_translation_protocol()-> DataTranslationProtocol:
""" Return a data translation protocol from implemented bpy types
"""
bpy_protocol = DataTranslationProtocol()
for module_name in __all__:
impl = globals().get(module_name)
if impl and hasattr(impl, "_type") and hasattr(impl, "_class"):
bpy_protocol.register_implementation(impl._type, impl._class)
return bpy_protocol
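For context, a minimal sketch of the convention this registration loop relies on: each module listed in `__all__` exposes a module-level `_type` (the Blender type it handles) and `_class` (its `ReplicatedDatablock` implementation). The module and type below are purely illustrative:

```python
# bl_example.py - illustrative translation module following the _type/_class convention.
import bpy
from replication.protocol import ReplicatedDatablock

class BlExample(ReplicatedDatablock):
    bl_id = "objects"              # bpy.data collection the implementation works on
    bl_class = bpy.types.Object    # Blender type handled by this implementation

    @staticmethod
    def construct(data: dict) -> object: ...

    @staticmethod
    def load(data: dict, datablock: object): ...

    @staticmethod
    def dump(datablock: object) -> dict: ...

# Module-level hooks picked up by get_data_translation_protocol():
_type = bpy.types.Object
_class = BlExample
```

Adding the module's name to `__all__` is what makes the loop register it.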

View File

@ -25,8 +25,8 @@ from enum import Enum
from .. import utils
from .dump_anything import (
Dumper, Loader, np_dump_collection, np_load_collection, remove_items_from_dict)
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_datablock import BlDatablock
KEYFRAME = [
'amplitude',
@ -41,66 +41,6 @@ KEYFRAME = [
'interpolation',
]
def has_action(datablock):
""" Check if the datablock datablock has actions
"""
return (hasattr(datablock, 'animation_data')
and datablock.animation_data
and datablock.animation_data.action)
def has_driver(datablock):
""" Check if the datablock datablock is driven
"""
return (hasattr(datablock, 'animation_data')
and datablock.animation_data
and datablock.animation_data.drivers)
def dump_driver(driver):
dumper = Dumper()
dumper.depth = 6
data = dumper.dump(driver)
return data
def load_driver(target_datablock, src_driver):
loader = Loader()
drivers = target_datablock.animation_data.drivers
src_driver_data = src_driver['driver']
new_driver = drivers.new(src_driver['data_path'], index=src_driver['array_index'])
# Settings
new_driver.driver.type = src_driver_data['type']
new_driver.driver.expression = src_driver_data['expression']
loader.load(new_driver, src_driver)
# Variables
for src_variable in src_driver_data['variables']:
src_var_data = src_driver_data['variables'][src_variable]
new_var = new_driver.driver.variables.new()
new_var.name = src_var_data['name']
new_var.type = src_var_data['type']
for src_target in src_var_data['targets']:
src_target_data = src_var_data['targets'][src_target]
src_id = src_target_data.get('id')
if src_id:
new_var.targets[src_target].id = utils.resolve_from_id(src_target_data['id'], src_target_data['id_type'])
loader.load(new_var.targets[src_target], src_target_data)
# Fcurve
new_fcurve = new_driver.keyframe_points
for p in reversed(new_fcurve):
new_fcurve.remove(p, fast=True)
new_fcurve.add(len(src_driver['keyframe_points']))
for index, src_point in enumerate(src_driver['keyframe_points']):
new_point = new_fcurve[index]
loader.load(new_point, src_driver['keyframe_points'][src_point])
def dump_fcurve(fcurve: bpy.types.FCurve, use_numpy: bool = True) -> dict:
""" Dump a sigle curve to a dict
@ -121,6 +61,7 @@ def dump_fcurve(fcurve: bpy.types.FCurve, use_numpy: bool = True) -> dict:
points = fcurve.keyframe_points
fcurve_data['keyframes_count'] = len(fcurve.keyframe_points)
fcurve_data['keyframe_points'] = np_dump_collection(points, KEYFRAME)
else: # Legacy method
dumper = Dumper()
fcurve_data["keyframe_points"] = []
@ -130,18 +71,6 @@ def dump_fcurve(fcurve: bpy.types.FCurve, use_numpy: bool = True) -> dict:
dumper.dump(k)
)
if fcurve.modifiers:
dumper = Dumper()
dumper.exclude_filter = [
'is_valid',
'active'
]
dumped_modifiers = []
for modfifier in fcurve.modifiers:
dumped_modifiers.append(dumper.dump(modfifier))
fcurve_data['modifiers'] = dumped_modifiers
return fcurve_data
@ -154,7 +83,7 @@ def load_fcurve(fcurve_data, fcurve):
:type fcurve: bpy.types.FCurve
"""
use_numpy = fcurve_data.get('use_numpy')
loader = Loader()
keyframe_points = fcurve.keyframe_points
# Remove all keyframe points
@ -199,91 +128,27 @@ def load_fcurve(fcurve_data, fcurve):
fcurve.update()
dumped_fcurve_modifiers = fcurve_data.get('modifiers', None)
if dumped_fcurve_modifiers:
# clear modifiers
for fmod in fcurve.modifiers:
fcurve.modifiers.remove(fmod)
# Load each modifiers in order
for modifier_data in dumped_fcurve_modifiers:
modifier = fcurve.modifiers.new(modifier_data['type'])
loader.load(modifier, modifier_data)
elif fcurve.modifiers:
for fmod in fcurve.modifiers:
fcurve.modifiers.remove(fmod)
def dump_animation_data(datablock):
animation_data = {}
if has_action(datablock):
animation_data['action'] = datablock.animation_data.action.uuid
if has_driver(datablock):
animation_data['drivers'] = []
for driver in datablock.animation_data.drivers:
animation_data['drivers'].append(dump_driver(driver))
return animation_data
def load_animation_data(animation_data, datablock):
# Load animation data
if animation_data:
if datablock.animation_data is None:
datablock.animation_data_create()
for d in datablock.animation_data.drivers:
datablock.animation_data.drivers.remove(d)
if 'drivers' in animation_data:
for driver in animation_data['drivers']:
load_driver(datablock, driver)
action = animation_data.get('action')
if action:
action = resolve_datablock_from_uuid(action, bpy.data.actions)
datablock.animation_data.action = action
elif datablock.animation_data.action:
datablock.animation_data.action = None
# Remove existing animation data if there is not more to load
elif hasattr(datablock, 'animation_data') and datablock.animation_data:
datablock.animation_data_clear()
def resolve_animation_dependencies(datablock):
if has_action(datablock):
return [datablock.animation_data.action]
else:
return []
class BlAction(ReplicatedDatablock):
use_delta = True
class BlAction(BlDatablock):
bl_id = "actions"
bl_class = bpy.types.Action
bl_check_common = False
bl_icon = 'ACTION_TWEAK'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.actions.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
for dumped_fcurve in data["fcurves"]:
dumped_data_path = dumped_fcurve["data_path"]
dumped_array_index = dumped_fcurve["dumped_array_index"]
# create fcurve if needed
fcurve = datablock.fcurves.find(
fcurve = target.fcurves.find(
dumped_data_path, index=dumped_array_index)
if fcurve is None:
fcurve = datablock.fcurves.new(
fcurve = target.fcurves.new(
dumped_data_path, index=dumped_array_index)
load_fcurve(dumped_fcurve, fcurve)
@ -291,10 +156,9 @@ class BlAction(ReplicatedDatablock):
id_root = data.get('id_root')
if id_root:
datablock.id_root = id_root
target.id_root = id_root
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
dumper = Dumper()
dumper.exclude_filter = [
'name_full',
@ -309,23 +173,11 @@ class BlAction(ReplicatedDatablock):
'users'
]
dumper.depth = 1
data = dumper.dump(datablock)
data = dumper.dump(instance)
data["fcurves"] = []
for fcurve in datablock.fcurves:
for fcurve in instance.fcurves:
data["fcurves"].append(dump_fcurve(fcurve, use_numpy=True))
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.actions)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
return []
_type = bpy.types.Action
_class = BlAction

View File

@ -22,9 +22,8 @@ import mathutils
from .dump_anything import Loader, Dumper
from .. import presence, operators, utils
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock
def get_roll(bone: bpy.types.Bone) -> float:
""" Compute the actuall roll of a pose bone
@ -36,21 +35,17 @@ def get_roll(bone: bpy.types.Bone) -> float:
return bone.AxisRollFromMatrix(bone.matrix_local.to_3x3())[1]
class BlArmature(ReplicatedDatablock):
use_delta = True
class BlArmature(BlDatablock):
bl_id = "armatures"
bl_class = bpy.types.Armature
bl_check_common = False
bl_icon = 'ARMATURE_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.armatures.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
# Load parent object
parent_object = utils.find_from_attr(
'uuid',
@ -60,7 +55,7 @@ class BlArmature(ReplicatedDatablock):
if parent_object is None:
parent_object = bpy.data.objects.new(
data['user_name'], datablock)
data['user_name'], target)
parent_object.uuid = data['user']
is_object_in_master = (
@ -95,10 +90,10 @@ class BlArmature(ReplicatedDatablock):
bpy.ops.object.mode_set(mode='EDIT')
for bone in data['bones']:
if bone not in datablock.edit_bones:
new_bone = datablock.edit_bones.new(bone)
if bone not in target.edit_bones:
new_bone = target.edit_bones.new(bone)
else:
new_bone = datablock.edit_bones[bone]
new_bone = target.edit_bones[bone]
bone_data = data['bones'].get(bone)
@ -109,7 +104,7 @@ class BlArmature(ReplicatedDatablock):
new_bone.roll = bone_data['roll']
if 'parent' in bone_data:
new_bone.parent = datablock.edit_bones[data['bones']
new_bone.parent = target.edit_bones[data['bones']
[bone]['parent']]
new_bone.use_connect = bone_data['use_connect']
@ -124,10 +119,9 @@ class BlArmature(ReplicatedDatablock):
if 'EDIT' in current_mode:
bpy.ops.object.mode_set(mode='EDIT')
load_animation_data(data.get('animation_data'), datablock)
def _dump_implementation(self, data, instance=None):
assert(instance)
@staticmethod
def dump(datablock: object) -> dict:
dumper = Dumper()
dumper.depth = 4
dumper.include_filter = [
@ -141,14 +135,14 @@ class BlArmature(ReplicatedDatablock):
'name',
'layers',
]
data = dumper.dump(datablock)
data = dumper.dump(instance)
for bone in datablock.bones:
for bone in instance.bones:
if bone.parent:
data['bones'][bone.name]['parent'] = bone.parent.name
# get the parent Object
# TODO: Use id_data instead
object_users = utils.get_datablock_users(datablock)[0]
object_users = utils.get_datablock_users(instance)[0]
data['user'] = object_users.uuid
data['user_name'] = object_users.name
@ -159,25 +153,7 @@ class BlArmature(ReplicatedDatablock):
data['user_scene'] = [
item.name for item in container_users if isinstance(item, bpy.types.Scene)]
for bone in datablock.bones:
for bone in instance.bones:
data['bones'][bone.name]['roll'] = get_roll(bone)
data['animation_data'] = dump_animation_data(datablock)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
name = data.get('name')
datablock = resolve_datablock_from_uuid(uuid, bpy.data.armatures)
if datablock is None:
datablock = bpy.data.armatures.get(name)
return datablock
@staticmethod
def resolve_deps(datablock: object) -> [object]:
return resolve_animation_dependencies(datablock)
_type = bpy.types.Armature
_class = BlArmature

View File

@ -20,58 +20,47 @@ import bpy
import mathutils
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock
class BlCamera(ReplicatedDatablock):
use_delta = True
class BlCamera(BlDatablock):
bl_id = "cameras"
bl_class = bpy.types.Camera
bl_check_common = False
bl_icon = 'CAMERA_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.cameras.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
dof_settings = data.get('dof')
load_animation_data(data.get('animation_data'), datablock)
# DOF settings
if dof_settings:
loader.load(datablock.dof, dof_settings)
loader.load(target.dof, dof_settings)
background_images = data.get('background_images')
datablock.background_images.clear()
# TODO: Use image uuid
target.background_images.clear()
if background_images:
for img_name, img_data in background_images.items():
img_id = img_data.get('image')
if img_id:
target_img = datablock.background_images.new()
target_img = target.background_images.new()
target_img.image = bpy.data.images[img_id]
loader.load(target_img, img_data)
img_user = img_data.get('image_user')
if img_user:
loader.load(target_img.image_user, img_user)
def _dump_implementation(self, data, instance=None):
assert(instance)
# TODO: background image support
@staticmethod
def dump(datablock: object) -> dict:
dumper = Dumper()
dumper.depth = 3
dumper.include_filter = [
@ -112,37 +101,14 @@ class BlCamera(ReplicatedDatablock):
'scale',
'use_flip_x',
'use_flip_y',
'image_user',
'image',
'frame_duration',
'frame_start',
'frame_offset',
'use_cyclic',
'use_auto_refresh'
'image'
]
data = dumper.dump(datablock)
data['animation_data'] = dump_animation_data(datablock)
return dumper.dump(instance)
for index, image in enumerate(datablock.background_images):
if image.image_user:
data['background_images'][index]['image_user'] = dumper.dump(image.image_user)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.cameras)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
for background in datablock.background_images:
for background in self.instance.background_images:
if background.image:
deps.append(background.image)
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = bpy.types.Camera
_class = BlCamera

View File

@ -19,12 +19,10 @@
import bpy
import mathutils
from deepdiff import DeepDiff, Delta
from .. import utils
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from .dump_anything import Loader, Dumper
from .bl_datablock import resolve_datablock_from_uuid
def dump_collection_children(collection):
collection_children = []
@ -83,82 +81,58 @@ def resolve_collection_dependencies(collection):
return deps
class BlCollection(ReplicatedDatablock):
class BlCollection(BlDatablock):
bl_id = "collections"
bl_icon = 'FILE_FOLDER'
bl_class = bpy.types.Collection
bl_check_common = True
bl_reload_parent = False
use_delta = True
def _construct(self, data):
if self.is_library:
with bpy.data.libraries.load(filepath=bpy.data.libraries[self.data['library']].filepath, link=True) as (sourceData, targetData):
targetData.collections = [
name for name in sourceData.collections if name == self.data['name']]
instance = bpy.data.collections[self.data['name']]
return instance
@staticmethod
def construct(data: dict) -> object:
instance = bpy.data.collections.new(data["name"])
return instance
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
# Objects
load_collection_objects(data['objects'], datablock)
load_collection_objects(data['objects'], target)
# Link childrens
load_collection_childrens(data['children'], datablock)
load_collection_childrens(data['children'], target)
# FIXME: Find a better way after the replication big refactoring
# Keep other user from deleting collection object by flushing their history
utils.flush_history()
def _dump_implementation(self, data, instance=None):
assert(instance)
@staticmethod
def dump(datablock: object) -> dict:
dumper = Dumper()
dumper.depth = 1
dumper.include_filter = [
"name",
"instance_offset"
]
data = dumper.dump(datablock)
data = dumper.dump(instance)
# dump objects
data['objects'] = dump_collection_objects(datablock)
data['objects'] = dump_collection_objects(instance)
# dump children collections
data['children'] = dump_collection_children(datablock)
data['children'] = dump_collection_children(instance)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.collections)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
return resolve_collection_dependencies(datablock)
@staticmethod
def compute_delta(last_data: dict, current_data: dict) -> Delta:
diff_params = {
'ignore_order': True,
'report_repetition': True
}
delta_params = {
# 'mutate': True
}
return Delta(
DeepDiff(last_data,
current_data,
cache_size=5000,
**diff_params),
**delta_params)
_type = bpy.types.Collection
_class = BlCollection
def _resolve_deps_implementation(self):
return resolve_collection_dependencies(self.instance)

View File

@ -21,15 +21,13 @@ import bpy.types as T
import mathutils
import logging
from ..utils import get_preferences
from replication.protocol import ReplicatedDatablock
from .. import utils
from .bl_datablock import BlDatablock
from .dump_anything import (Dumper, Loader,
np_load_collection,
np_dump_collection)
from .bl_datablock import get_datablock_from_uuid
from .bl_material import dump_materials_slots, load_materials_slots
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
SPLINE_BEZIER_POINT = [
# "handle_left_type",
@ -136,31 +134,25 @@ SPLINE_METADATA = [
]
class BlCurve(ReplicatedDatablock):
use_delta = True
class BlCurve(BlDatablock):
bl_id = "curves"
bl_class = bpy.types.Curve
bl_check_common = False
bl_icon = 'CURVE_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.curves.new(data["name"], data["type"])
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
datablock.splines.clear()
target.splines.clear()
# load splines
for spline in data['splines'].values():
new_spline = datablock.splines.new(spline['type'])
new_spline = target.splines.new(spline['type'])
# Load curve geometry data
if new_spline.type == 'BEZIER':
@ -181,14 +173,15 @@ class BlCurve(ReplicatedDatablock):
# MATERIAL SLOTS
src_materials = data.get('materials', None)
if src_materials:
load_materials_slots(src_materials, datablock.materials)
load_materials_slots(src_materials, target.materials)
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
# Conflicting attributes
# TODO: remove them with the NURBS support
dumper.include_filter = CURVE_METADATA
dumper.exclude_filter = [
'users',
'order_u',
@ -197,16 +190,14 @@ class BlCurve(ReplicatedDatablock):
'point_count_u',
'active_textbox'
]
if datablock.use_auto_texspace:
if instance.use_auto_texspace:
dumper.exclude_filter.extend([
'texspace_location',
'texspace_size'])
data = dumper.dump(datablock)
data['animation_data'] = dump_animation_data(datablock)
data = dumper.dump(instance)
data['splines'] = {}
for index, spline in enumerate(datablock.splines):
for index, spline in enumerate(instance.splines):
dumper.depth = 2
dumper.include_filter = SPLINE_METADATA
spline_data = dumper.dump(spline)
@ -220,27 +211,21 @@ class BlCurve(ReplicatedDatablock):
spline.bezier_points, SPLINE_BEZIER_POINT)
data['splines'][index] = spline_data
if isinstance(datablock, T.SurfaceCurve):
if isinstance(instance, T.SurfaceCurve):
data['type'] = 'SURFACE'
elif isinstance(datablock, T.TextCurve):
elif isinstance(instance, T.TextCurve):
data['type'] = 'FONT'
elif isinstance(datablock, T.Curve):
elif isinstance(instance, T.Curve):
data['type'] = 'CURVE'
data['materials'] = dump_materials_slots(datablock.materials)
data['materials'] = dump_materials_slots(instance.materials)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.curves)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
# TODO: resolve material
deps = []
curve = datablock
curve = self.instance
if isinstance(curve, T.TextCurve):
deps.extend([
@ -249,19 +234,15 @@ class BlCurve(ReplicatedDatablock):
curve.font_bold_italic,
curve.font_italic])
for material in datablock.materials:
for material in self.instance.materials:
if material:
deps.append(material)
deps.extend(resolve_animation_dependencies(datablock))
return deps
@staticmethod
def needs_update(datablock: object, data: dict) -> bool:
return 'EDIT' not in bpy.context.mode \
or get_preferences().sync_flags.sync_during_editmode
_type = [bpy.types.Curve, bpy.types.TextCurve]
_class = BlCurve
def diff(self):
if 'EDIT' in bpy.context.mode \
and not self.preferences.sync_flags.sync_during_editmode:
return False
else:
return super().diff()

View File

@ -22,11 +22,73 @@ from collections.abc import Iterable
import bpy
import mathutils
from replication.constants import DIFF_BINARY, DIFF_JSON, UP
from replication.protocol import ReplicatedDatablock
from replication.data import ReplicatedDatablock
from .. import utils
from .dump_anything import Dumper, Loader
def has_action(target):
""" Check if the target datablock has actions
"""
return (hasattr(target, 'animation_data')
and target.animation_data
and target.animation_data.action)
def has_driver(target):
""" Check if the target datablock is driven
"""
return (hasattr(target, 'animation_data')
and target.animation_data
and target.animation_data.drivers)
def dump_driver(driver):
dumper = Dumper()
dumper.depth = 6
data = dumper.dump(driver)
return data
def load_driver(target_datablock, src_driver):
loader = Loader()
drivers = target_datablock.animation_data.drivers
src_driver_data = src_driver['driver']
new_driver = drivers.new(src_driver['data_path'], index=src_driver['array_index'])
# Settings
new_driver.driver.type = src_driver_data['type']
new_driver.driver.expression = src_driver_data['expression']
loader.load(new_driver, src_driver)
# Variables
for src_variable in src_driver_data['variables']:
src_var_data = src_driver_data['variables'][src_variable]
new_var = new_driver.driver.variables.new()
new_var.name = src_var_data['name']
new_var.type = src_var_data['type']
for src_target in src_var_data['targets']:
src_target_data = src_var_data['targets'][src_target]
new_var.targets[src_target].id = utils.resolve_from_id(
src_target_data['id'], src_target_data['id_type'])
loader.load(
new_var.targets[src_target], src_target_data)
# Fcurve
new_fcurve = new_driver.keyframe_points
for p in reversed(new_fcurve):
new_fcurve.remove(p, fast=True)
new_fcurve.add(len(src_driver['keyframe_points']))
for index, src_point in enumerate(src_driver['keyframe_points']):
new_point = new_fcurve[index]
loader.load(new_point, src_driver['keyframe_points'][src_point])
def get_datablock_from_uuid(uuid, default, ignore=[]):
if not uuid:
return default
@ -38,8 +100,132 @@ def get_datablock_from_uuid(uuid, default, ignore=[]):
return item
return default
def resolve_datablock_from_uuid(uuid, bpy_collection):
for item in bpy_collection:
if getattr(item, 'uuid', None) == uuid:
return item
return None
class BlDatablock(ReplicatedDatablock):
"""BlDatablock
bl_id : blender internal storage identifier
bl_class : blender internal type
bl_icon : type icon (blender icon name)
bl_check_common: enable check even in common rights
bl_reload_parent: reload parent
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
instance = kwargs.get('instance', None)
self.preferences = utils.get_preferences()
# TODO: use is_library_indirect
self.is_library = (instance and hasattr(instance, 'library') and
instance.library) or \
(hasattr(self,'data') and self.data and 'library' in self.data)
if instance and hasattr(instance, 'uuid'):
instance.uuid = self.uuid
def resolve(self, construct = True):
datablock_root = getattr(bpy.data, self.bl_id)
datablock_ref = utils.find_from_attr('uuid', self.uuid, datablock_root)
if not datablock_ref:
try:
datablock_ref = datablock_root[self.data['name']]
except Exception:
pass
if construct and not datablock_ref:
name = self.data.get('name')
logging.debug(f"Constructing {name}")
datablock_ref = self._construct(data=self.data)
if datablock_ref is not None:
setattr(datablock_ref, 'uuid', self.uuid)
self.instance = datablock_ref
return True
else:
return False
def remove_instance(self):
"""
Remove instance from blender data
"""
assert(self.instance)
datablock_root = getattr(bpy.data, self.bl_id)
datablock_root.remove(self.instance)
def _dump(self, instance=None):
dumper = Dumper()
data = {}
# Dump animation data
if has_action(instance):
dumper = Dumper()
dumper.include_filter = ['action']
data['animation_data'] = dumper.dump(instance.animation_data)
if has_driver(instance):
dumped_drivers = {'animation_data': {'drivers': []}}
for driver in instance.animation_data.drivers:
dumped_drivers['animation_data']['drivers'].append(
dump_driver(driver))
data.update(dumped_drivers)
if self.is_library:
data.update(dumper.dump(instance))
else:
data.update(self._dump_implementation(data, instance=instance))
return data
def _dump_implementation(self, data, target):
raise NotImplementedError
def _load(self, data, target):
# Load animation data
if 'animation_data' in data.keys():
if target.animation_data is None:
target.animation_data_create()
for d in target.animation_data.drivers:
target.animation_data.drivers.remove(d)
if 'drivers' in data['animation_data']:
for driver in data['animation_data']['drivers']:
load_driver(target, driver)
if 'action' in data['animation_data']:
target.animation_data.action = bpy.data.actions[data['animation_data']['action']]
# Remove existing animation data if there is not more to load
elif hasattr(target, 'animation_data') and target.animation_data:
target.animation_data_clear()
if self.is_library:
return
else:
self._load_implementation(data, target)
def _load_implementation(self, data, target):
raise NotImplementedError
def resolve_deps(self):
dependencies = []
if has_action(self.instance):
dependencies.append(self.instance.animation_data.action)
if not self.is_library:
dependencies.extend(self._resolve_deps_implementation())
logging.debug(f"{self.instance} dependencies: {dependencies}")
return dependencies
def _resolve_deps_implementation(self):
return []
def is_valid(self):
return getattr(bpy.data, self.bl_id).get(self.data['name'])

View File

@ -19,15 +19,14 @@
import logging
import os
import sys
from pathlib import Path, WindowsPath, PosixPath
from pathlib import Path
import bpy
import mathutils
from replication.constants import DIFF_BINARY, UP
from replication.protocol import ReplicatedDatablock
from replication.data import ReplicatedDatablock
from .. import utils
from ..utils import get_preferences
from .dump_anything import Dumper, Loader
@ -59,16 +58,33 @@ class BlFile(ReplicatedDatablock):
bl_icon = 'FILE'
bl_reload_parent = True
@staticmethod
def construct(data: dict) -> object:
return Path(get_filepath(data['name']))
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.instance = kwargs.get('instance', None)
@staticmethod
def resolve(data: dict) -> object:
return Path(get_filepath(data['name']))
if self.instance and not self.instance.exists():
raise FileNotFoundError(str(self.instance))
@staticmethod
def dump(datablock: object) -> dict:
self.preferences = utils.get_preferences()
def resolve(self, construct = True):
self.instance = Path(get_filepath(self.data['name']))
file_exists = self.instance.exists()
if not file_exists:
logging.debug("File don't exist, loading it.")
self._load(self.data, self.instance)
return file_exists
def push(self, socket, identity=None, check_data=False):
super().push(socket, identity=None, check_data=False)
if self.preferences.clear_memory_filecache:
del self.data['file']
def _dump(self, instance=None):
"""
Read the file and return a dict as:
{
@ -80,62 +96,44 @@ class BlFile(ReplicatedDatablock):
logging.info(f"Extracting file metadata")
data = {
'name': datablock.name,
'name': self.instance.name,
}
logging.info(f"Reading {datablock.name} content: {datablock.stat().st_size} bytes")
logging.info(
f"Reading {self.instance.name} content: {self.instance.stat().st_size} bytes")
try:
file = open(datablock, "rb")
file = open(self.instance, "rb")
data['file'] = file.read()
file.close()
except IOError:
logging.warning(f"{datablock} doesn't exist, skipping")
logging.warning(f"{self.instance} doesn't exist, skipping")
else:
file.close()
return data
@staticmethod
def load(data: dict, datablock: object):
def _load(self, data, target):
"""
Writing the file
"""
try:
file = open(datablock, "wb")
file = open(target, "wb")
file.write(data['file'])
if get_preferences().clear_memory_filecache:
del data['file']
if self.preferences.clear_memory_filecache:
del self.data['file']
except IOError:
logging.warning(f"{datablock} doesn't exist, skipping")
logging.warning(f"{target} doesn't exist, skipping")
else:
file.close()
@staticmethod
def resolve_deps(datablock: object) -> [object]:
return []
@staticmethod
def needs_update(datablock: object, data:dict)-> bool:
if get_preferences().clear_memory_filecache:
def diff(self):
if self.preferences.clear_memory_filecache:
return False
else:
if not datablock:
return None
if not data:
return True
memory_size = sys.getsizeof(data['file'])-33
disk_size = datablock.stat().st_size
if memory_size != disk_size:
return True
else:
return False
_type = [WindowsPath, PosixPath]
_class = BlFile
memory_size = sys.getsizeof(self.data['file'])-33
disk_size = self.instance.stat().st_size
return memory_size != disk_size

View File

@ -22,20 +22,19 @@ from pathlib import Path
import bpy
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from .bl_file import get_filepath, ensure_unpacked
from .dump_anything import Dumper, Loader
from .bl_datablock import resolve_datablock_from_uuid
class BlFont(ReplicatedDatablock):
class BlFont(BlDatablock):
bl_id = "fonts"
bl_class = bpy.types.VectorFont
bl_check_common = False
bl_icon = 'FILE_FONT'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
filename = data.get('filename')
if filename == '<builtin>':
@ -43,43 +42,31 @@ class BlFont(ReplicatedDatablock):
else:
return bpy.data.fonts.load(get_filepath(filename))
@staticmethod
def load(data: dict, datablock: object):
def _load(self, data, target):
pass
@staticmethod
def dump(datablock: object) -> dict:
if datablock.filepath == '<builtin>':
def _dump(self, instance=None):
if instance.filepath == '<builtin>':
filename = '<builtin>'
else:
filename = Path(datablock.filepath).name
filename = Path(instance.filepath).name
if not filename:
raise FileExistsError(datablock.filepath)
raise FileExistsError(instance.filepath)
return {
'filename': filename,
'name': datablock.name
'name': instance.name
}
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.fonts)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
deps = []
if datablock.filepath and datablock.filepath != '<builtin>':
ensure_unpacked(datablock)
deps.append(Path(bpy.path.abspath(datablock.filepath)))
return deps
@staticmethod
def needs_update(datablock: object, data:dict)-> bool:
def diff(self):
return False
_type = bpy.types.VectorFont
_class = BlFont
def _resolve_deps_implementation(self):
deps = []
if self.instance.filepath and self.instance.filepath != '<builtin>':
ensure_unpacked(self.instance)
deps.append(Path(bpy.path.abspath(self.instance.filepath)))
return deps

View File

@ -24,12 +24,10 @@ from .dump_anything import (Dumper,
Loader,
np_dump_collection,
np_load_collection)
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from ..utils import get_preferences
from ..timers import is_annotating
from .bl_material import load_materials_slots, dump_materials_slots
from .bl_datablock import BlDatablock
# GPencil data api is structured as it follow:
# GP-Object --> GP-Layers --> GP-Frames --> GP-Strokes --> GP-Stroke-Points
STROKE_POINT = [
'co',
@ -53,12 +51,12 @@ STROKE = [
"uv_translation",
"vertex_color_fill",
]
if bpy.app.version >= (2,91,0):
if bpy.app.version[1] >= 91:
STROKE.append('use_cyclic')
else:
STROKE.append('draw_cyclic')
if bpy.app.version >= (2,83,0):
if bpy.app.version[1] >= 83:
STROKE_POINT.append('vertex_color')
def dump_stroke(stroke):
@ -66,9 +64,36 @@ def dump_stroke(stroke):
:param stroke: target grease pencil stroke
:type stroke: bpy.types.GPencilStroke
:return: (p_count, p_data)
:return: dict
"""
return (len(stroke.points), np_dump_collection(stroke.points, STROKE_POINT))
assert(stroke)
dumper = Dumper()
dumper.include_filter = [
"aspect",
"display_mode",
"draw_cyclic",
"end_cap_mode",
"hardeness",
"line_width",
"material_index",
"start_cap_mode",
"uv_rotation",
"uv_scale",
"uv_translation",
"vertex_color_fill",
]
dumped_stroke = dumper.dump(stroke)
# Stoke points
p_count = len(stroke.points)
dumped_stroke['p_count'] = p_count
dumped_stroke['points'] = np_dump_collection(stroke.points, STROKE_POINT)
# TODO: uv_factor, uv_rotation
return dumped_stroke
def load_stroke(stroke_data, stroke):
@ -81,13 +106,12 @@ def load_stroke(stroke_data, stroke):
"""
assert(stroke and stroke_data)
stroke.points.add(stroke_data[0])
np_load_collection(stroke_data[1], stroke.points, STROKE_POINT)
stroke.points.add(stroke_data["p_count"])
np_load_collection(stroke_data['points'], stroke.points, STROKE_POINT)
# HACK: Temporary fix to trigger a BKE_gpencil_stroke_geometry_update to
# fix fill issues
stroke.uv_scale = 1.0
stroke.uv_scale = stroke_data["uv_scale"]
def dump_frame(frame):
""" Dump a grease pencil frame to a dict
@ -121,15 +145,12 @@ def load_frame(frame_data, frame):
assert(frame and frame_data)
# Load stroke points
for stroke_data in frame_data['strokes_points']:
target_stroke = frame.strokes.new()
load_stroke(stroke_data, target_stroke)
# Load stroke metadata
np_load_collection(frame_data['strokes'], frame.strokes, STROKE)
def dump_layer(layer):
""" Dump a grease pencil layer
@ -146,6 +167,7 @@ def dump_layer(layer):
'opacity',
'channel_color',
'color',
# 'thickness', #TODO: enabling only for annotation
'tint_color',
'tint_factor',
'vertex_paint_opacity',
@ -162,7 +184,7 @@ def dump_layer(layer):
'hide',
'annotation_hide',
'lock',
'lock_frame',
# 'lock_frame',
# 'lock_material',
# 'use_mask_layer',
'use_lights',
@ -170,13 +192,12 @@ def dump_layer(layer):
'select',
'show_points',
'show_in_front',
# 'thickness'
# 'parent',
# 'parent_type',
# 'parent_bone',
# 'matrix_inverse',
]
if layer.thickness != 0:
if layer.id_data.is_annotation:
dumper.include_filter.append('thickness')
dumped_layer = dumper.dump(layer)
@ -207,99 +228,87 @@ def load_layer(layer_data, layer):
load_frame(frame_data, target_frame)
def layer_changed(datablock: object, data: dict) -> bool:
if datablock.layers.active and \
datablock.layers.active.info != data["active_layers"]:
return True
else:
return False
def frame_changed(data: dict) -> bool:
return bpy.context.scene.frame_current != data["eval_frame"]
class BlGpencil(ReplicatedDatablock):
class BlGpencil(BlDatablock):
bl_id = "grease_pencils"
bl_class = bpy.types.GreasePencil
bl_check_common = False
bl_icon = 'GREASEPENCIL'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.grease_pencils.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
# MATERIAL SLOTS
src_materials = data.get('materials', None)
if src_materials:
load_materials_slots(src_materials, datablock.materials)
def _load_implementation(self, data, target):
target.materials.clear()
if "materials" in data.keys():
for mat in data['materials']:
target.materials.append(bpy.data.materials[mat])
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
# TODO: reuse existing layer
for layer in datablock.layers:
datablock.layers.remove(layer)
for layer in target.layers:
target.layers.remove(layer)
if "layers" in data.keys():
for layer in data["layers"]:
layer_data = data["layers"].get(layer)
# if layer not in datablock.layers.keys():
target_layer = datablock.layers.new(data["layers"][layer]["info"])
# if layer not in target.layers.keys():
target_layer = target.layers.new(data["layers"][layer]["info"])
# else:
# target_layer = target.layers[layer]
# target_layer.clear()
load_layer(layer_data, target_layer)
datablock.layers.update()
target.layers.update()
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
dumper.depth = 2
dumper.include_filter = [
'materials',
'name',
'zdepth_offset',
'stroke_thickness_space',
'pixel_factor',
'stroke_depth_order'
]
data = dumper.dump(datablock)
data['materials'] = dump_materials_slots(datablock.materials)
data = dumper.dump(instance)
data['layers'] = {}
for layer in datablock.layers:
for layer in instance.layers:
data['layers'][layer.info] = dump_layer(layer)
data["active_layers"] = datablock.layers.active.info if datablock.layers.active else "None"
data["active_layers"] = instance.layers.active.info
data["eval_frame"] = bpy.context.scene.frame_current
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.grease_pencils)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
for material in datablock.materials:
for material in self.instance.materials:
deps.append(material)
return deps
@staticmethod
def needs_update(datablock: object, data: dict) -> bool:
return bpy.context.mode == 'OBJECT' \
or layer_changed(datablock, data) \
or frame_changed(data) \
or get_preferences().sync_flags.sync_during_editmode \
or is_annotating(bpy.context)
def layer_changed(self):
return self.instance.layers.active.info != self.data["active_layers"]
_type = bpy.types.GreasePencil
_class = BlGpencil
def frame_changed(self):
return bpy.context.scene.frame_current != self.data["eval_frame"]
def diff(self):
if self.layer_changed() \
or self.frame_changed() \
or bpy.context.mode == 'OBJECT' \
or self.preferences.sync_flags.sync_during_editmode:
return super().diff()
else:
return False

View File

@ -24,12 +24,9 @@ import bpy
import mathutils
from .. import utils
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from .dump_anything import Dumper, Loader
from .bl_file import get_filepath, ensure_unpacked
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
format_to_ext = {
'BMP': 'bmp',
@ -51,37 +48,32 @@ format_to_ext = {
}
class BlImage(ReplicatedDatablock):
class BlImage(BlDatablock):
bl_id = "images"
bl_class = bpy.types.Image
bl_check_common = False
bl_icon = 'IMAGE_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.images.new(
name=data['name'],
width=data['size'][0],
height=data['size'][1]
)
@staticmethod
def load(data: dict, datablock: object):
def _load(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(data, target)
# datablock.name = data.get('name')
datablock.source = 'FILE'
datablock.filepath_raw = get_filepath(data['filename'])
color_space_name = data.get("colorspace")
target.source = 'FILE'
target.filepath_raw = get_filepath(data['filename'])
target.colorspace_settings.name = data["colorspace_settings"]["name"]
if color_space_name:
datablock.colorspace_settings.name = color_space_name
def _dump(self, instance=None):
assert(instance)
@staticmethod
def dump(datablock: object) -> dict:
filename = Path(datablock.filepath).name
filename = Path(instance.filepath).name
data = {
"filename": filename
@ -91,47 +83,41 @@ class BlImage(ReplicatedDatablock):
dumper.depth = 2
dumper.include_filter = [
"name",
# 'source',
'size',
'alpha_mode']
data.update(dumper.dump(datablock))
data['colorspace'] = datablock.colorspace_settings.name
'height',
'alpha',
'float_buffer',
'alpha_mode',
'colorspace_settings']
data.update(dumper.dump(instance))
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.images)
def diff(self):
if self.instance.is_dirty:
self.instance.save()
@staticmethod
def resolve_deps(datablock: object) -> [object]:
if self.instance and (self.instance.name != self.data['name']):
return True
else:
return False
def _resolve_deps_implementation(self):
deps = []
if datablock.packed_file:
filename = Path(bpy.path.abspath(datablock.filepath)).name
datablock.filepath_raw = get_filepath(filename)
datablock.save()
if self.instance.packed_file:
filename = Path(bpy.path.abspath(self.instance.filepath)).name
self.instance.filepath_raw = get_filepath(filename)
self.instance.save()
# An image can't be unpacked to the modified path
# TODO: make a bug report
datablock.unpack(method="REMOVE")
self.instance.unpack(method="REMOVE")
elif datablock.source == "GENERATED":
filename = f"{datablock.name}.png"
datablock.filepath = get_filepath(filename)
datablock.save()
elif self.instance.source == "GENERATED":
filename = f"{self.instance.name}.png"
self.instance.filepath = get_filepath(filename)
self.instance.save()
if datablock.filepath:
deps.append(Path(bpy.path.abspath(datablock.filepath)))
if self.instance.filepath:
deps.append(Path(bpy.path.abspath(self.instance.filepath)))
return deps
@staticmethod
def needs_update(datablock: object, data:dict)-> bool:
if datablock.is_dirty:
datablock.save()
return True
_type = bpy.types.Image
_class = BlImage
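All of the converted classes in this compare follow the same stateless shape. A minimal sketch (not part of the diff) of that replication.protocol.ReplicatedDatablock pattern as it appears above, using a hypothetical bpy.types.Text handler purely for illustration:

import bpy
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid

class BlExample(ReplicatedDatablock):
    # Hypothetical datablock kind, chosen only to keep the sketch short.
    bl_id = "texts"
    bl_class = bpy.types.Text
    bl_check_common = False
    bl_icon = 'TEXT'
    bl_reload_parent = False

    @staticmethod
    def construct(data: dict) -> object:
        return bpy.data.texts.new(data["name"])

    @staticmethod
    def load(data: dict, datablock: object):
        datablock.clear()
        datablock.write(data.get("body", ""))

    @staticmethod
    def dump(datablock: object) -> dict:
        return {"name": datablock.name, "body": datablock.as_string()}

    @staticmethod
    def resolve(data: dict) -> object:
        return resolve_datablock_from_uuid(data.get('uuid'), bpy.data.texts)

    @staticmethod
    def resolve_deps(datablock: object) -> [object]:
        return []

_type = bpy.types.Text
_class = BlExample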
View File
@ -20,41 +20,33 @@ import bpy
import mathutils
from .dump_anything import Dumper, Loader, np_dump_collection, np_load_collection
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from replication.exception import ContextError
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
POINT = ['co', 'weight_softbody', 'co_deform']
class BlLattice(ReplicatedDatablock):
use_delta = True
class BlLattice(BlDatablock):
bl_id = "lattices"
bl_class = bpy.types.Lattice
bl_check_common = False
bl_icon = 'LATTICE_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.lattices.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
if datablock.is_editmode:
def _load_implementation(self, data, target):
if target.is_editmode:
raise ContextError("lattice is in edit mode")
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
np_load_collection(data['points'], datablock.points, POINT)
np_load_collection(data['points'], target.points, POINT)
@staticmethod
def dump(datablock: object) -> dict:
if datablock.is_editmode:
def _dump_implementation(self, data, instance=None):
if instance.is_editmode:
raise ContextError("lattice is in edit mode")
dumper = Dumper()
@ -70,20 +62,9 @@ class BlLattice(ReplicatedDatablock):
'interpolation_type_w',
'use_outside'
]
data = dumper.dump(datablock)
data = dumper.dump(instance)
data['points'] = np_dump_collection(instance.points, POINT)
data['points'] = np_dump_collection(datablock.points, POINT)
data['animation_data'] = dump_animation_data(datablock)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.lattices)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
return resolve_animation_dependencies(datablock)
_type = bpy.types.Lattice
_class = BlLattice
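The lattice points above travel as flat numpy buffers. A small usage sketch (not part of the diff) of the two dump_anything helpers this class relies on, with the same POINT attribute list:

from .dump_anything import np_dump_collection, np_load_collection

POINT = ['co', 'weight_softbody', 'co_deform']

def copy_lattice_points(src_lattice, dst_lattice):
    # Serialize the listed point attributes into plain arrays...
    dumped = np_dump_collection(src_lattice.points, POINT)
    # ...then write them back onto a lattice with the same resolution.
    np_load_collection(dumped, dst_lattice.points, POINT)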
View File
@ -15,31 +15,31 @@
#
# ##### END GPL LICENSE BLOCK #####
import bpy
import os
import mathutils
from pathlib import Path
import bpy.utils.previews
def register():
global icons_col
pcoll = bpy.utils.previews.new()
icons_dir = os.path.join(os.path.dirname(__file__), ".")
for png in Path(icons_dir).rglob("*.png"):
pcoll.load(png.stem, str(png), "IMAGE")
icons_col = pcoll
from .dump_anything import Loader, Dumper
from .bl_datablock import BlDatablock
def unregister():
class BlLibrary(BlDatablock):
bl_id = "libraries"
bl_class = bpy.types.Library
bl_check_common = False
bl_icon = 'LIBRARY_DATA_DIRECT'
bl_reload_parent = False
global icons_col
try:
bpy.utils.previews.remove(icons_col)
except Exception:
def _construct(self, data):
with bpy.data.libraries.load(filepath=data["filepath"], link=True) as (sourceData, targetData):
targetData = sourceData
return sourceData
def _load(self, data, target):
pass
icons_col = None
def _dump(self, instance=None):
assert(instance)
dumper = Dumper()
return dumper.dump(instance)
View File
@ -20,34 +20,25 @@ import bpy
import mathutils
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock
class BlLight(ReplicatedDatablock):
use_delta = True
class BlLight(BlDatablock):
bl_id = "lights"
bl_class = bpy.types.Light
bl_check_common = False
bl_icon = 'LIGHT_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
instance = bpy.data.lights.new(data["name"], data["type"])
instance.uuid = data.get("uuid")
return instance
def _construct(self, data):
return bpy.data.lights.new(data["name"], data["type"])
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
load_animation_data(data.get('animation_data'), datablock)
loader.load(target, data)
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
dumper.depth = 3
dumper.include_filter = [
@ -76,23 +67,9 @@ class BlLight(ReplicatedDatablock):
'spot_size',
'spot_blend'
]
data = dumper.dump(datablock)
data['animation_data'] = dump_animation_data(datablock)
data = dumper.dump(instance)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.lights)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
deps = []
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = [bpy.types.SpotLight, bpy.types.PointLight, bpy.types.AreaLight, bpy.types.SunLight]
_class = BlLight
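Several of the rewritten classes, BlLight included, now set use_delta = True. A sketch (not part of the diff, and only an assumption about how such a delta can be built with deepdiff) of what delta-based replication means in practice:

from deepdiff import DeepDiff, Delta

def compute_delta_sketch(last_data: dict, current_data: dict) -> Delta:
    # Only the difference between the previous and the current dump is sent,
    # instead of re-sending the full datablock dump every time.
    return Delta(DeepDiff(last_data, current_data))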
View File
@ -21,35 +21,31 @@ import mathutils
import logging
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_datablock import BlDatablock
class BlLightprobe(ReplicatedDatablock):
use_delta = True
class BlLightprobe(BlDatablock):
bl_id = "lightprobes"
bl_class = bpy.types.LightProbe
bl_check_common = False
bl_icon = 'LIGHTPROBE_GRID'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
type = 'CUBE' if data['type'] == 'CUBEMAP' else data['type']
# See https://developer.blender.org/D6396
if bpy.app.version >= (2,83,0):
if bpy.app.version[1] >= 83:
return bpy.data.lightprobes.new(data["name"], type)
else:
logging.warning("Lightprobe replication only supported since 2.83. See https://developer.blender.org/D6396")
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
@staticmethod
def dump(datablock: object) -> dict:
if bpy.app.version < (2,83,0):
def _dump_implementation(self, data, instance=None):
assert(instance)
if bpy.app.version[1] < 83:
logging.warning("Lightprobe replication only supported since 2.83. See https://developer.blender.org/D6396")
dumper = Dumper()
@ -75,16 +71,7 @@ class BlLightprobe(ReplicatedDatablock):
'visibility_blur'
]
return dumper.dump(datablock)
return dumper.dump(instance)
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.lightprobes)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
return []
_type = bpy.types.LightProbe
_class = BlLightprobe
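The version guard changes from indexing the minor version to comparing the whole bpy.app.version tuple, which keeps working once the major version moves past 2. A short worked example (not part of the diff):

version = (3, 0, 0)            # hypothetical Blender 3.0 build
print(version[1] >= 83)        # False: the old minor-version check rejects 3.0
print(version >= (2, 83, 0))   # True: lexicographic tuple comparison accepts it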
View File
@ -24,10 +24,7 @@ import re
from uuid import uuid4
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import get_datablock_from_uuid, resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock, get_datablock_from_uuid
NODE_SOCKET_INDEX = re.compile(r'\[(\d*)\]')
IGNORED_SOCKETS = ['GEOMETRY', 'SHADER', 'CUSTOM']
@ -48,11 +45,7 @@ def load_node(node_data: dict, node_tree: bpy.types.ShaderNodeTree):
node_tree_uuid = node_data.get('node_tree_uuid', None)
if image_uuid and not target_node.image:
image = resolve_datablock_from_uuid(image_uuid, bpy.data.images)
if image is None:
logging.error(f"Fail to find material image from uuid {image_uuid}")
else:
target_node.image = image
target_node.image = get_datablock_from_uuid(image_uuid, None)
if node_tree_uuid:
target_node.node_tree = get_datablock_from_uuid(node_tree_uuid, None)
@ -124,7 +117,8 @@ def dump_node(node: bpy.types.ShaderNode) -> dict:
"show_preview",
"show_texture",
"outputs",
"width_hidden"
"width_hidden",
"image"
]
dumped_node = node_dumper.dump(node)
@ -387,50 +381,44 @@ def load_materials_slots(src_materials: list, dst_materials: bpy.types.bpy_prop_
for mat_uuid, mat_name in src_materials:
mat_ref = None
if mat_uuid:
if mat_uuid is not None:
mat_ref = get_datablock_from_uuid(mat_uuid, None)
else:
mat_ref = bpy.data.materials[mat_name]
dst_materials.append(mat_ref)
class BlMaterial(ReplicatedDatablock):
use_delta = True
class BlMaterial(BlDatablock):
bl_id = "materials"
bl_class = bpy.types.Material
bl_check_common = False
bl_icon = 'MATERIAL_DATA'
bl_reload_parent = False
bl_reload_child = True
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.materials.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
is_grease_pencil = data.get('is_grease_pencil')
use_nodes = data.get('use_nodes')
loader.load(datablock, data)
loader.load(target, data)
if is_grease_pencil:
if not datablock.is_grease_pencil:
bpy.data.materials.create_gpencil_data(datablock)
loader.load(datablock.grease_pencil, data['grease_pencil'])
if not target.is_grease_pencil:
bpy.data.materials.create_gpencil_data(target)
loader.load(target.grease_pencil, data['grease_pencil'])
elif use_nodes:
if datablock.node_tree is None:
datablock.use_nodes = True
if target.node_tree is None:
target.use_nodes = True
load_node_tree(data['node_tree'], datablock.node_tree)
load_animation_data(data.get('nodes_animation_data'), datablock.node_tree)
load_animation_data(data.get('animation_data'), datablock)
load_node_tree(data['node_tree'], target.node_tree)
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
mat_dumper = Dumper()
mat_dumper.depth = 2
mat_dumper.include_filter = [
@ -456,9 +444,9 @@ class BlMaterial(ReplicatedDatablock):
'line_priority',
'is_grease_pencil'
]
data = mat_dumper.dump(datablock)
data = mat_dumper.dump(instance)
if datablock.is_grease_pencil:
if instance.is_grease_pencil:
gp_mat_dumper = Dumper()
gp_mat_dumper.depth = 3
@ -492,30 +480,19 @@ class BlMaterial(ReplicatedDatablock):
'use_overlap_strokes',
'use_fill_holdout',
]
data['grease_pencil'] = gp_mat_dumper.dump(datablock.grease_pencil)
elif datablock.use_nodes:
data['node_tree'] = dump_node_tree(datablock.node_tree)
data['nodes_animation_data'] = dump_animation_data(datablock.node_tree)
data['animation_data'] = dump_animation_data(datablock)
data['grease_pencil'] = gp_mat_dumper.dump(instance.grease_pencil)
elif instance.use_nodes:
data['node_tree'] = dump_node_tree(instance.node_tree)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.materials)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
# TODO: resolve node group deps
deps = []
if datablock.use_nodes:
deps.extend(get_node_tree_dependencies(datablock.node_tree))
deps.extend(resolve_animation_dependencies(datablock.node_tree))
deps.extend(resolve_animation_dependencies(datablock))
if self.instance.use_nodes:
deps.extend(get_node_tree_dependencies(self.instance.node_tree))
if self.is_library:
deps.append(self.instance.library)
return deps
_type = bpy.types.Material
_class = BlMaterial
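Material slots are exchanged as (uuid, name) pairs, as the load_materials_slots hunk above shows. A sketch of the matching dump side (not part of the diff; it assumes replicated materials expose a uuid custom property, consistent with that loader):

def dump_materials_slots_sketch(materials):
    # One (uuid, name) pair per filled slot; empty slots are skipped.
    return [(getattr(mat, 'uuid', None), mat.name) for mat in materials if mat]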
View File
@ -25,13 +25,8 @@ import numpy as np
from .dump_anything import Dumper, Loader, np_load_collection_primitives, np_dump_collection_primitive, np_load_collection, np_dump_collection
from replication.constants import DIFF_BINARY
from replication.exception import ContextError
from replication.protocol import ReplicatedDatablock
from .bl_datablock import get_datablock_from_uuid
from .bl_datablock import BlDatablock, get_datablock_from_uuid
from .bl_material import dump_materials_slots, load_materials_slots
from ..utils import get_preferences
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
VERTICE = ['co']
@ -54,79 +49,76 @@ POLYGON = [
'material_index',
]
class BlMesh(ReplicatedDatablock):
use_delta = True
class BlMesh(BlDatablock):
bl_id = "meshes"
bl_class = bpy.types.Mesh
bl_check_common = False
bl_icon = 'MESH_DATA'
bl_reload_parent = True
@staticmethod
def construct(data: dict) -> object:
return bpy.data.meshes.new(data.get("name"))
def _construct(self, data):
instance = bpy.data.meshes.new(data["name"])
instance.uuid = self.uuid
return instance
@staticmethod
def load(data: dict, datablock: object):
if not datablock or datablock.is_editmode:
def _load_implementation(self, data, target):
if not target or target.is_editmode:
raise ContextError
else:
load_animation_data(data.get('animation_data'), datablock)
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
# MATERIAL SLOTS
src_materials = data.get('materials', None)
if src_materials:
load_materials_slots(src_materials, datablock.materials)
load_materials_slots(src_materials, target.materials)
# CLEAR GEOMETRY
if datablock.vertices:
datablock.clear_geometry()
if target.vertices:
target.clear_geometry()
datablock.vertices.add(data["vertex_count"])
datablock.edges.add(data["egdes_count"])
datablock.loops.add(data["loop_count"])
datablock.polygons.add(data["poly_count"])
target.vertices.add(data["vertex_count"])
target.edges.add(data["egdes_count"])
target.loops.add(data["loop_count"])
target.polygons.add(data["poly_count"])
# LOADING
np_load_collection(data['vertices'], datablock.vertices, VERTICE)
np_load_collection(data['edges'], datablock.edges, EDGE)
np_load_collection(data['loops'], datablock.loops, LOOP)
np_load_collection(data["polygons"],datablock.polygons, POLYGON)
np_load_collection(data['vertices'], target.vertices, VERTICE)
np_load_collection(data['edges'], target.edges, EDGE)
np_load_collection(data['loops'], target.loops, LOOP)
np_load_collection(data["polygons"],target.polygons, POLYGON)
# UV Layers
if 'uv_layers' in data.keys():
for layer in data['uv_layers']:
if layer not in datablock.uv_layers:
datablock.uv_layers.new(name=layer)
if layer not in target.uv_layers:
target.uv_layers.new(name=layer)
np_load_collection_primitives(
datablock.uv_layers[layer].data,
target.uv_layers[layer].data,
'uv',
data["uv_layers"][layer]['data'])
# Vertex color
if 'vertex_colors' in data.keys():
for color_layer in data['vertex_colors']:
if color_layer not in datablock.vertex_colors:
datablock.vertex_colors.new(name=color_layer)
if color_layer not in target.vertex_colors:
target.vertex_colors.new(name=color_layer)
np_load_collection_primitives(
datablock.vertex_colors[color_layer].data,
target.vertex_colors[color_layer].data,
'color',
data["vertex_colors"][color_layer]['data'])
datablock.validate()
datablock.update()
target.validate()
target.update()
@staticmethod
def dump(datablock: object) -> dict:
if (datablock.is_editmode or bpy.context.mode == "SCULPT") and not get_preferences().sync_flags.sync_during_editmode:
def _dump_implementation(self, data, instance=None):
assert(instance)
if (instance.is_editmode or bpy.context.mode == "SCULPT") and not self.preferences.sync_flags.sync_during_editmode:
raise ContextError("Mesh is in edit mode")
mesh = datablock
mesh = instance
dumper = Dumper()
dumper.depth = 1
@ -140,8 +132,6 @@ class BlMesh(ReplicatedDatablock):
data = dumper.dump(mesh)
data['animation_data'] = dump_animation_data(datablock)
# VERTICES
data["vertex_count"] = len(mesh.vertices)
data["vertices"] = np_dump_collection(mesh.vertices, VERTICE)
@ -173,30 +163,21 @@ class BlMesh(ReplicatedDatablock):
data['vertex_colors'][color_map.name]['data'] = np_dump_collection_primitive(color_map.data, 'color')
# Materials
data['materials'] = dump_materials_slots(datablock.materials)
data['materials'] = dump_materials_slots(instance.materials)
return data
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
for material in datablock.materials:
for material in self.instance.materials:
if material:
deps.append(material)
deps.extend(resolve_animation_dependencies(datablock))
return deps
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.meshes)
@staticmethod
def needs_update(datablock: object, data: dict) -> bool:
return ('EDIT' not in bpy.context.mode and bpy.context.mode != 'SCULPT') \
or get_preferences().sync_flags.sync_during_editmode
_type = bpy.types.Mesh
_class = BlMesh
def diff(self):
if 'EDIT' in bpy.context.mode \
and not self.preferences.sync_flags.sync_during_editmode:
return False
else:
return super().diff()
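Both the old diff() and the new needs_update() gate geometry syncing on the current mode. The same rule, restated as a sketch outside the diff:

import bpy

def should_sync_mesh(sync_during_editmode: bool) -> bool:
    mode = bpy.context.mode
    # Geometry is only (re)sent outside edit/sculpt mode, unless the
    # sync_during_editmode preference explicitly allows it.
    return ('EDIT' not in mode and mode != 'SCULPT') or sync_during_editmode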
View File
@ -23,9 +23,7 @@ from .dump_anything import (
Dumper, Loader, np_dump_collection_primitive, np_load_collection_primitives,
np_dump_collection, np_load_collection)
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock
ELEMENT = [
@ -64,35 +62,29 @@ def load_metaball_elements(elements_data, elements):
np_load_collection(elements_data, elements, ELEMENT)
class BlMetaball(ReplicatedDatablock):
use_delta = True
class BlMetaball(BlDatablock):
bl_id = "metaballs"
bl_class = bpy.types.MetaBall
bl_check_common = False
bl_icon = 'META_BALL'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.metaballs.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
datablock.elements.clear()
target.elements.clear()
for mtype in data["elements"]['type']:
new_element = datablock.elements.new()
new_element = target.elements.new()
load_metaball_elements(data['elements'], datablock.elements)
load_metaball_elements(data['elements'], target.elements)
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
dumper.depth = 1
dumper.include_filter = [
@ -106,24 +98,7 @@ class BlMetaball(ReplicatedDatablock):
'texspace_size'
]
data = dumper.dump(datablock)
data['animation_data'] = dump_animation_data(datablock)
data['elements'] = dump_metaball_elements(datablock.elements)
data = dumper.dump(instance)
data['elements'] = dump_metaball_elements(instance.elements)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.metaballs)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
deps = []
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = bpy.types.MetaBall
_class = BlMetaball
View File
@ -20,45 +20,26 @@ import bpy
import mathutils
from .dump_anything import Dumper, Loader, np_dump_collection, np_load_collection
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from .bl_material import (dump_node_tree,
load_node_tree,
get_node_tree_dependencies)
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
class BlNodeGroup(ReplicatedDatablock):
use_delta = True
class BlNodeGroup(BlDatablock):
bl_id = "node_groups"
bl_class = bpy.types.NodeTree
bl_check_common = False
bl_icon = 'NODETREE'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.node_groups.new(data["name"], data["type"])
@staticmethod
def load(data: dict, datablock: object):
load_node_tree(data, datablock)
def _load_implementation(self, data, target):
load_node_tree(data, target)
@staticmethod
def dump(datablock: object) -> dict:
return dump_node_tree(datablock)
def _dump_implementation(self, data, instance=None):
return dump_node_tree(instance)
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.node_groups)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
deps = []
deps.extend(get_node_tree_dependencies(datablock))
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = [bpy.types.ShaderNodeTree, bpy.types.GeometryNodeTree]
_class = BlNodeGroup
def _resolve_deps_implementation(self):
return get_node_tree_dependencies(self.instance)
View File
@ -22,11 +22,8 @@ import bpy
import mathutils
from replication.exception import ContextError
from replication.protocol import ReplicatedDatablock
from .bl_datablock import get_datablock_from_uuid, resolve_datablock_from_uuid
from .bl_datablock import BlDatablock, get_datablock_from_uuid
from .bl_material import IGNORED_SOCKETS
from ..utils import get_preferences
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .dump_anything import (
Dumper,
Loader,
@ -40,40 +37,21 @@ SKIN_DATA = [
'use_root'
]
SHAPEKEY_BLOCK_ATTR = [
'mute',
'value',
'slider_min',
'slider_max',
]
if bpy.app.version >= (2,93,0):
if bpy.app.version[1] >= 93:
SUPPORTED_GEOMETRY_NODE_PARAMETERS = (int, str, float)
else:
SUPPORTED_GEOMETRY_NODE_PARAMETERS = (int, str)
logging.warning("Geometry node Float parameter not supported in \
blender 2.92.")
def get_node_group_properties_identifiers(node_group):
props_ids = []
# Inputs
def get_node_group_inputs(node_group):
inputs = []
for inpt in node_group.inputs:
if inpt.type in IGNORED_SOCKETS:
continue
else:
props_ids.append((inpt.identifier, inpt.type))
if inpt.type in ['INT', 'VALUE', 'BOOLEAN', 'RGBA', 'VECTOR']:
props_ids.append((f"{inpt.identifier}_attribute_name",'STR'))
props_ids.append((f"{inpt.identifier}_use_attribute", 'BOOL'))
for outpt in node_group.outputs:
if outpt.type not in IGNORED_SOCKETS and outpt.type in ['INT', 'VALUE', 'BOOLEAN', 'RGBA', 'VECTOR']:
props_ids.append((f"{outpt.identifier}_attribute_name", 'STR'))
return props_ids
inputs.append(inpt)
return inputs
# return [inpt.identifier for inpt in node_group.inputs if inpt.type not in IGNORED_SOCKETS]
@ -104,7 +82,6 @@ def dump_physics(target: bpy.types.Object)->dict:
return physics_data
def load_physics(dumped_settings: dict, target: bpy.types.Object):
""" Load all physics settings from a given object excluding modifier
related physics settings (such as softbody, cloth, dynapaint and fluid)
@ -131,36 +108,29 @@ def load_physics(dumped_settings: dict, target: bpy.types.Object):
elif target.rigid_body_constraint:
bpy.ops.rigidbody.constraint_remove({"object": target})
def dump_modifier_geometry_node_props(modifier: bpy.types.Modifier) -> list:
def dump_modifier_geometry_node_inputs(modifier: bpy.types.Modifier) -> list:
""" Dump geometry node modifier input properties
:arg modifier: geometry node modifier to dump
:type modifier: bpy.types.Modifier
"""
dumped_props = []
dumped_inputs = []
for inpt in get_node_group_inputs(modifier.node_group):
input_value = modifier[inpt.identifier]
for prop_value, prop_type in get_node_group_properties_identifiers(modifier.node_group):
try:
prop_value = modifier[prop_value]
except KeyError as e:
logging.error(f"fail to dump geomety node modifier property : {prop_value} ({e})")
else:
dump = None
if isinstance(prop_value, bpy.types.ID):
dump = prop_value.uuid
elif isinstance(prop_value, SUPPORTED_GEOMETRY_NODE_PARAMETERS):
dump = prop_value
elif hasattr(prop_value, 'to_list'):
dump = prop_value.to_list()
dumped_input = None
if isinstance(input_value, bpy.types.ID):
dumped_input = input_value.uuid
elif isinstance(input_value, SUPPORTED_GEOMETRY_NODE_PARAMETERS):
dumped_input = input_value
elif hasattr(input_value, 'to_list'):
dumped_input = input_value.to_list()
dumped_inputs.append(dumped_input)
dumped_props.append((dump, prop_type))
# logging.info(prop_value)
return dumped_props
return dumped_inputs
def load_modifier_geometry_node_props(dumped_modifier: dict, target_modifier: bpy.types.Modifier):
def load_modifier_geometry_node_inputs(dumped_modifier: dict, target_modifier: bpy.types.Modifier):
""" Load geometry node modifier inputs
:arg dumped_modifier: source dumped modifier to load
@ -169,17 +139,17 @@ def load_modifier_geometry_node_props(dumped_modifier: dict, target_modifier: bp
:type target_modifier: bpy.types.Modifier
"""
for input_index, inpt in enumerate(get_node_group_properties_identifiers(target_modifier.node_group)):
dumped_value, dumped_type = dumped_modifier['props'][input_index]
input_value = target_modifier[inpt[0]]
if dumped_type in ['INT', 'VALUE', 'STR']:
logging.info(f"{inpt[0]}/{dumped_value}")
target_modifier[inpt[0]] = dumped_value
elif dumped_type in ['RGBA', 'VECTOR']:
for input_index, inpt in enumerate(get_node_group_inputs(target_modifier.node_group)):
dumped_value = dumped_modifier['inputs'][input_index]
input_value = target_modifier[inpt.identifier]
if isinstance(input_value, SUPPORTED_GEOMETRY_NODE_PARAMETERS):
target_modifier[inpt.identifier] = dumped_value
elif hasattr(input_value, 'to_list'):
for index in range(len(input_value)):
input_value[index] = dumped_value[index]
elif dumped_type in ['COLLECTION', 'OBJECT', 'IMAGE', 'TEXTURE', 'MATERIAL']:
target_modifier[inpt[0]] = get_datablock_from_uuid(dumped_value, None)
elif inpt.type in ['COLLECTION', 'OBJECT']:
target_modifier[inpt.identifier] = get_datablock_from_uuid(
dumped_value, None)
def load_pose(target_bone, data):
@ -214,12 +184,12 @@ def find_data_from_name(name=None):
instance = bpy.data.speakers[name]
elif name in bpy.data.lightprobes.keys():
# Only supported since 2.83
if bpy.app.version >= (2,83,0):
if bpy.app.version[1] >= 83:
instance = bpy.data.lightprobes[name]
else:
logging.warning(
"Lightprobe replication only supported since 2.83. See https://developer.blender.org/D6396")
elif bpy.app.version >= (2,91,0) and name in bpy.data.volumes.keys():
elif bpy.app.version[1] >= 91 and name in bpy.data.volumes.keys():
# Only supported since 2.91
instance = bpy.data.volumes[name]
return instance
@ -266,11 +236,10 @@ def find_geometry_nodes_dependencies(modifiers: bpy.types.bpy_prop_collection) -
for mod in modifiers:
if mod.type == 'NODES' and mod.node_group:
dependencies.append(mod.node_group)
for inpt, inpt_type in get_node_group_properties_identifiers(mod.node_group):
inpt_value = mod.get(inpt)
# Avoid to handle 'COLLECTION', 'OBJECT' to avoid circular dependencies
if inpt_type in ['IMAGE', 'TEXTURE', 'MATERIAL'] and inpt_value:
dependencies.append(inpt_value)
# for inpt in get_node_group_inputs(mod.node_group):
# parameter = mod.get(inpt.identifier)
# if parameter and isinstance(parameter, bpy.types.ID):
# dependencies.append(parameter)
return dependencies
@ -320,279 +289,115 @@ def load_vertex_groups(dumped_vertex_groups: dict, target_object: bpy.types.Obje
vertex_group.add([index], weight, 'REPLACE')
def dump_shape_keys(target_key: bpy.types.Key)->dict:
""" Dump the target shape_keys datablock to a dict using numpy
:param target_key: target key datablock
:type target_key: bpy.types.Key
:return: dict
"""
dumped_key_blocks = []
dumper = Dumper()
dumper.include_filter = [
'name',
'mute',
'value',
'slider_min',
'slider_max',
]
for key in target_key.key_blocks:
dumped_key_block = dumper.dump(key)
dumped_key_block['data'] = np_dump_collection(key.data, ['co'])
dumped_key_block['relative_key'] = key.relative_key.name
dumped_key_blocks.append(dumped_key_block)
return {
'reference_key': target_key.reference_key.name,
'use_relative': target_key.use_relative,
'key_blocks': dumped_key_blocks,
'animation_data': dump_animation_data(target_key)
}
def load_shape_keys(dumped_shape_keys: dict, target_object: bpy.types.Object):
""" Load the target shape_keys datablock to a dict using numpy
:param dumped_key: src key data
:type dumped_key: bpy.types.Key
:param target_object: object used to load the shapekeys data onto
:type target_object: bpy.types.Object
"""
loader = Loader()
# Remove existing ones
target_object.shape_key_clear()
# Create keys and load vertices coords
dumped_key_blocks = dumped_shape_keys.get('key_blocks')
for dumped_key_block in dumped_key_blocks:
key_block = target_object.shape_key_add(name=dumped_key_block['name'])
loader.load(key_block, dumped_key_block)
np_load_collection(dumped_key_block['data'], key_block.data, ['co'])
# Load relative key after all
for dumped_key_block in dumped_key_blocks:
relative_key_name = dumped_key_block.get('relative_key')
key_name = dumped_key_block.get('name')
target_keyblock = target_object.data.shape_keys.key_blocks[key_name]
relative_key = target_object.data.shape_keys.key_blocks[relative_key_name]
target_keyblock.relative_key = relative_key
# Shape keys animation data
anim_data = dumped_shape_keys.get('animation_data')
if anim_data:
load_animation_data(anim_data, target_object.data.shape_keys)
def dump_modifiers(modifiers: bpy.types.bpy_prop_collection)->list:
""" Dump all modifiers of a modifier collection into a list
:param modifiers: modifiers
:type modifiers: bpy.types.bpy_prop_collection
:return: list
"""
dumped_modifiers = []
dumper = Dumper()
dumper.depth = 1
dumper.exclude_filter = ['is_active']
for modifier in modifiers:
dumped_modifier = dumper.dump(modifier)
# hack to dump geometry nodes inputs
if modifier.type == 'NODES':
dumped_modifier['props'] = dump_modifier_geometry_node_props(modifier)
elif modifier.type == 'PARTICLE_SYSTEM':
dumper.exclude_filter = [
"is_edited",
"is_editable",
"is_global_hair"
]
dumped_modifier['particle_system'] = dumper.dump(modifier.particle_system)
dumped_modifier['particle_system']['settings_uuid'] = modifier.particle_system.settings.uuid
elif modifier.type in ['SOFT_BODY', 'CLOTH']:
dumped_modifier['settings'] = dumper.dump(modifier.settings)
elif modifier.type == 'UV_PROJECT':
dumped_modifier['projectors'] =[p.object.name for p in modifier.projectors if p and p.object]
dumped_modifiers.append(dumped_modifier)
return dumped_modifiers
def dump_constraints(constraints: bpy.types.bpy_prop_collection)->list:
"""Dump all constraints to a list
:param constraints: constraints
:type constraints: bpy.types.bpy_prop_collection
:return: list
"""
dumper = Dumper()
dumper.depth = 2
dumper.include_filter = None
dumped_constraints = []
for constraint in constraints:
dumped_constraints.append(dumper.dump(constraint))
return dumped_constraints
def load_constraints(dumped_constraints: list, constraints: bpy.types.bpy_prop_collection):
""" Load dumped constraints
:param dumped_constraints: list of constraints to load
:type dumped_constraints: list
:param constraints: constraints
:type constraints: bpy.types.bpy_prop_collection
"""
loader = Loader()
constraints.clear()
for dumped_constraint in dumped_constraints:
constraint_type = dumped_constraint.get('type')
new_constraint = constraints.new(constraint_type)
loader.load(new_constraint, dumped_constraint)
def load_modifiers(dumped_modifiers: list, modifiers: bpy.types.bpy_prop_collection):
""" Dump all modifiers of a modifier collection into a dict
:param dumped_modifiers: list of modifiers to load
:type dumped_modifiers: list
:param modifiers: modifiers
:type modifiers: bpy.types.bpy_prop_collection
"""
loader = Loader()
modifiers.clear()
for dumped_modifier in dumped_modifiers:
name = dumped_modifier.get('name')
mtype = dumped_modifier.get('type')
loaded_modifier = modifiers.new(name, mtype)
loader.load(loaded_modifier, dumped_modifier)
if loaded_modifier.type == 'NODES':
load_modifier_geometry_node_props(dumped_modifier, loaded_modifier)
elif loaded_modifier.type == 'PARTICLE_SYSTEM':
default = loaded_modifier.particle_system.settings
dumped_particles = dumped_modifier['particle_system']
loader.load(loaded_modifier.particle_system, dumped_particles)
settings = get_datablock_from_uuid(dumped_particles['settings_uuid'], None)
if settings:
loaded_modifier.particle_system.settings = settings
# Hack to remove the default generated particle settings
if not default.uuid:
bpy.data.particles.remove(default)
elif loaded_modifier.type in ['SOFT_BODY', 'CLOTH']:
loader.load(loaded_modifier.settings, dumped_modifier['settings'])
elif loaded_modifier.type == 'UV_PROJECT':
for projector_index, projector_object in enumerate(dumped_modifier['projectors']):
target_object = bpy.data.objects.get(projector_object)
if target_object:
loaded_modifier.projectors[projector_index].object = target_object
else:
logging.error("Could't load projector target object {projector_object}")
def load_modifiers_custom_data(dumped_modifiers: dict, modifiers: bpy.types.bpy_prop_collection):
""" Load modifiers custom data not managed by the dump_anything loader
:param dumped_modifiers: modifiers to load
:type dumped_modifiers: dict
:param modifiers: target modifiers collection
:type modifiers: bpy.types.bpy_prop_collection
"""
loader = Loader()
for modifier in modifiers:
dumped_modifier = dumped_modifiers.get(modifier.name)
class BlObject(ReplicatedDatablock):
use_delta = True
class BlObject(BlDatablock):
bl_id = "objects"
bl_class = bpy.types.Object
bl_check_common = False
bl_icon = 'OBJECT_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
instance = None
if self.is_library:
with bpy.data.libraries.load(filepath=bpy.data.libraries[self.data['library']].filepath, link=True) as (sourceData, targetData):
targetData.objects = [
name for name in sourceData.objects if name == self.data['name']]
instance = bpy.data.objects[self.data['name']]
instance.uuid = self.uuid
return instance
# TODO: refactoring
object_name = data.get("name")
data_uuid = data.get("data_uuid")
data_id = data.get("data")
data_type = data.get("type")
object_data = get_datablock_from_uuid(
data_uuid,
find_data_from_name(data_id),
ignore=['images']) # TODO: use resolve_from_id
if data_type != 'EMPTY' and object_data is None:
raise Exception(f"Fail to load object {data['name']})")
if object_data is None and data_uuid:
raise Exception(f"Fail to load object {data['name']}({self.uuid})")
return bpy.data.objects.new(object_name, object_data)
instance = bpy.data.objects.new(object_name, object_data)
instance.uuid = self.uuid
@staticmethod
def load(data: dict, datablock: object):
return instance
def _load_implementation(self, data, target):
loader = Loader()
load_animation_data(data.get('animation_data'), datablock)
data_uuid = data.get("data_uuid")
data_id = data.get("data")
if datablock.data and (datablock.data.name != data_id):
datablock.data = get_datablock_from_uuid(
if target.data and (target.data.name != data_id):
target.data = get_datablock_from_uuid(
data_uuid, find_data_from_name(data_id), ignore=['images'])
# vertex groups
vertex_groups = data.get('vertex_groups', None)
if vertex_groups:
load_vertex_groups(vertex_groups, datablock)
load_vertex_groups(vertex_groups, target)
object_data = datablock.data
object_data = target.data
# SHAPE KEYS
shape_keys = data.get('shape_keys')
if shape_keys:
load_shape_keys(shape_keys, datablock)
if 'shape_keys' in data:
target.shape_key_clear()
# Create keys and load vertices coords
for key_block in data['shape_keys']['key_blocks']:
key_data = data['shape_keys']['key_blocks'][key_block]
target.shape_key_add(name=key_block)
loader.load(
target.data.shape_keys.key_blocks[key_block], key_data)
for vert in key_data['data']:
target.data.shape_keys.key_blocks[key_block].data[vert].co = key_data['data'][vert]['co']
# Load relative key after all
for key_block in data['shape_keys']['key_blocks']:
reference = data['shape_keys']['key_blocks'][key_block]['relative_key']
target.data.shape_keys.key_blocks[key_block].relative_key = target.data.shape_keys.key_blocks[reference]
# Load transformation data
loader.load(datablock, data)
loader.load(target, data)
# Object display fields
if 'display' in data:
loader.load(datablock.display, data['display'])
loader.load(target.display, data['display'])
# Parenting
parent_id = data.get('parent_uid')
if parent_id:
parent = get_datablock_from_uuid(parent_id[0], bpy.data.objects[parent_id[1]])
# Avoid reloading
if datablock.parent != parent and parent is not None:
datablock.parent = parent
elif datablock.parent:
datablock.parent = None
if target.parent != parent and parent is not None:
target.parent = parent
elif target.parent:
target.parent = None
# Pose
if 'pose' in data:
if not datablock.pose:
if not target.pose:
raise Exception('No pose data yet (Fixed in a near future)')
# Bone groups
for bg_name in data['pose']['bone_groups']:
bg_data = data['pose']['bone_groups'].get(bg_name)
bg_target = datablock.pose.bone_groups.get(bg_name)
bg_target = target.pose.bone_groups.get(bg_name)
if not bg_target:
bg_target = datablock.pose.bone_groups.new(name=bg_name)
bg_target = target.pose.bone_groups.new(name=bg_name)
loader.load(bg_target, bg_data)
# datablock.pose.bone_groups.get
# target.pose.bone_groups.get
# Bones
for bone in data['pose']['bones']:
target_bone = datablock.pose.bones.get(bone)
target_bone = target.pose.bones.get(bone)
bone_data = data['pose']['bones'].get(bone)
if 'constraints' in bone_data.keys():
@ -601,13 +406,13 @@ class BlObject(ReplicatedDatablock):
load_pose(target_bone, bone_data)
if 'bone_index' in bone_data.keys():
target_bone.bone_group = datablock.pose.bone_group[bone_data['bone_group_index']]
target_bone.bone_group = target.pose.bone_group[bone_data['bone_group_index']]
# TODO: find another way...
if datablock.empty_display_type == "IMAGE":
if target.empty_display_type == "IMAGE":
img_uuid = data.get('data_uuid')
if datablock.data is None and img_uuid:
datablock.data = get_datablock_from_uuid(img_uuid, None)
if target.data is None and img_uuid:
target.data = get_datablock_from_uuid(img_uuid, None)
if hasattr(object_data, 'skin_vertices') \
and object_data.skin_vertices\
@ -618,31 +423,56 @@ class BlObject(ReplicatedDatablock):
skin_data.data,
SKIN_DATA)
if hasattr(datablock, 'cycles_visibility') \
if hasattr(target, 'cycles_visibility') \
and 'cycles_visibility' in data:
loader.load(datablock.cycles_visibility, data['cycles_visibility'])
loader.load(target.cycles_visibility, data['cycles_visibility'])
if hasattr(datablock, 'modifiers'):
load_modifiers(data['modifiers'], datablock.modifiers)
# TODO: handle geometry nodes input from dump_anything
if hasattr(target, 'modifiers'):
nodes_modifiers = [
mod for mod in target.modifiers if mod.type == 'NODES']
for modifier in nodes_modifiers:
load_modifier_geometry_node_inputs(
data['modifiers'][modifier.name], modifier)
constraints = data.get('constraints')
if constraints:
load_constraints(constraints, datablock.constraints)
particles_modifiers = [
mod for mod in target.modifiers if mod.type == 'PARTICLE_SYSTEM']
for mod in particles_modifiers:
default = mod.particle_system.settings
dumped_particles = data['modifiers'][mod.name]['particle_system']
loader.load(mod.particle_system, dumped_particles)
settings = get_datablock_from_uuid(dumped_particles['settings_uuid'], None)
if settings:
mod.particle_system.settings = settings
# Hack to remove the default generated particle settings
if not default.uuid:
bpy.data.particles.remove(default)
phys_modifiers = [
mod for mod in target.modifiers if mod.type in ['SOFT_BODY', 'CLOTH']]
for mod in phys_modifiers:
loader.load(mod.settings, data['modifiers'][mod.name]['settings'])
# PHYSICS
load_physics(data, datablock)
load_physics(data, target)
transform = data.get('transforms', None)
if transform:
datablock.matrix_parent_inverse = mathutils.Matrix(transform['matrix_parent_inverse'])
datablock.matrix_basis = mathutils.Matrix(transform['matrix_basis'])
target.matrix_parent_inverse = mathutils.Matrix(
transform['matrix_parent_inverse'])
target.matrix_basis = mathutils.Matrix(transform['matrix_basis'])
target.matrix_local = mathutils.Matrix(transform['matrix_local'])
@staticmethod
def dump(datablock: object) -> dict:
if _is_editmode(datablock):
if get_preferences().sync_flags.sync_during_editmode:
datablock.update_from_editmode()
def _dump_implementation(self, data, instance=None):
assert(instance)
if _is_editmode(instance):
if self.preferences.sync_flags.sync_during_editmode:
instance.update_from_editmode()
else:
raise ContextError("Object is in edit-mode.")
@ -678,37 +508,60 @@ class BlObject(ReplicatedDatablock):
'show_all_edges',
'show_texture_space',
'show_in_front',
'type',
'parent_type',
'parent_bone',
'track_axis',
'up_axis',
'type'
]
data = dumper.dump(datablock)
data['animation_data'] = dump_animation_data(datablock)
data = dumper.dump(instance)
dumper.include_filter = [
'matrix_parent_inverse',
'matrix_local',
'matrix_basis']
data['transforms'] = dumper.dump(datablock)
data['transforms'] = dumper.dump(instance)
dumper.include_filter = [
'show_shadows',
]
data['display'] = dumper.dump(datablock.display)
data['display'] = dumper.dump(instance.display)
data['data_uuid'] = getattr(datablock.data, 'uuid', None)
data['data_uuid'] = getattr(instance.data, 'uuid', None)
if self.is_library:
return data
# PARENTING
if datablock.parent:
data['parent_uid'] = (datablock.parent.uuid, datablock.parent.name)
if instance.parent:
data['parent_uid'] = (instance.parent.uuid, instance.parent.name)
# MODIFIERS
modifiers = getattr(datablock, 'modifiers', None)
if hasattr(datablock, 'modifiers'):
data['modifiers'] = dump_modifiers(modifiers)
if hasattr(instance, 'modifiers'):
data["modifiers"] = {}
modifiers = getattr(instance, 'modifiers', None)
if modifiers:
dumper.include_filter = None
dumper.depth = 1
dumper.exclude_filter = ['is_active']
for index, modifier in enumerate(modifiers):
dumped_modifier = dumper.dump(modifier)
# hack to dump geometry nodes inputs
if modifier.type == 'NODES':
dumped_inputs = dump_modifier_geometry_node_inputs(
modifier)
dumped_modifier['inputs'] = dumped_inputs
gp_modifiers = getattr(datablock, 'grease_pencil_modifiers', None)
elif modifier.type == 'PARTICLE_SYSTEM':
dumper.exclude_filter = [
"is_edited",
"is_editable",
"is_global_hair"
]
dumped_modifier['particle_system'] = dumper.dump(modifier.particle_system)
dumped_modifier['particle_system']['settings_uuid'] = modifier.particle_system.settings.uuid
elif modifier.type in ['SOFT_BODY', 'CLOTH']:
dumped_modifier['settings'] = dumper.dump(modifier.settings)
data["modifiers"][modifier.name] = dumped_modifier
gp_modifiers = getattr(instance, 'grease_pencil_modifiers', None)
if gp_modifiers:
dumper.include_filter = None
@ -731,14 +584,16 @@ class BlObject(ReplicatedDatablock):
# CONSTRAINTS
if hasattr(datablock, 'constraints'):
data["constraints"] = dump_constraints(datablock.constraints)
if hasattr(instance, 'constraints'):
dumper.include_filter = None
dumper.depth = 3
data["constraints"] = dumper.dump(instance.constraints)
# POSE
if hasattr(datablock, 'pose') and datablock.pose:
if hasattr(instance, 'pose') and instance.pose:
# BONES
bones = {}
for bone in datablock.pose.bones:
for bone in instance.pose.bones:
bones[bone.name] = {}
dumper.depth = 1
rotation = 'rotation_quaternion' if bone.rotation_mode == 'QUATERNION' else 'rotation_euler'
@ -763,7 +618,7 @@ class BlObject(ReplicatedDatablock):
# GROUPS
bone_groups = {}
for group in datablock.pose.bone_groups:
for group in instance.pose.bone_groups:
dumper.depth = 3
dumper.include_filter = [
'name',
@ -773,13 +628,36 @@ class BlObject(ReplicatedDatablock):
data['pose']['bone_groups'] = bone_groups
# VERTEX GROUPS
if len(datablock.vertex_groups) > 0:
data['vertex_groups'] = dump_vertex_groups(datablock)
if len(instance.vertex_groups) > 0:
data['vertex_groups'] = dump_vertex_groups(instance)
# SHAPE KEYS
object_data = datablock.data
object_data = instance.data
if hasattr(object_data, 'shape_keys') and object_data.shape_keys:
data['shape_keys'] = dump_shape_keys(object_data.shape_keys)
dumper = Dumper()
dumper.depth = 2
dumper.include_filter = [
'reference_key',
'use_relative'
]
data['shape_keys'] = dumper.dump(object_data.shape_keys)
data['shape_keys']['reference_key'] = object_data.shape_keys.reference_key.name
key_blocks = {}
for key in object_data.shape_keys.key_blocks:
dumper.depth = 3
dumper.include_filter = [
'name',
'data',
'mute',
'value',
'slider_min',
'slider_max',
'data',
'co'
]
key_blocks[key.name] = dumper.dump(key)
key_blocks[key.name]['relative_key'] = key.relative_key.name
data['shape_keys']['key_blocks'] = key_blocks
# SKIN VERTICES
if hasattr(object_data, 'skin_vertices') and object_data.skin_vertices:
@ -790,7 +668,7 @@ class BlObject(ReplicatedDatablock):
data['skin_vertices'] = skin_vertices
# CYCLE SETTINGS
if hasattr(datablock, 'cycles_visibility'):
if hasattr(instance, 'cycles_visibility'):
dumper.include_filter = [
'camera',
'diffuse',
@ -799,48 +677,36 @@ class BlObject(ReplicatedDatablock):
'scatter',
'shadow',
]
data['cycles_visibility'] = dumper.dump(datablock.cycles_visibility)
data['cycles_visibility'] = dumper.dump(instance.cycles_visibility)
# PHYSICS
data.update(dump_physics(datablock))
data.update(dump_physics(instance))
return data
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
# Avoid Empty case
if datablock.data:
deps.append(datablock.data)
if self.instance.data:
deps.append(self.instance.data)
# Particle systems
for particle_slot in datablock.particle_systems:
for particle_slot in self.instance.particle_systems:
deps.append(particle_slot.settings)
if datablock.parent:
deps.append(datablock.parent)
if self.is_library:
deps.append(self.instance.library)
if datablock.instance_type == 'COLLECTION':
if self.instance.parent:
deps.append(self.instance.parent)
if self.instance.instance_type == 'COLLECTION':
# TODO: uuid based
deps.append(datablock.instance_collection)
deps.append(self.instance.instance_collection)
if datablock.modifiers:
deps.extend(find_textures_dependencies(datablock.modifiers))
deps.extend(find_geometry_nodes_dependencies(datablock.modifiers))
if hasattr(datablock.data, 'shape_keys') and datablock.data.shape_keys:
deps.extend(resolve_animation_dependencies(datablock.data.shape_keys))
deps.extend(resolve_animation_dependencies(datablock))
if self.instance.modifiers:
deps.extend(find_textures_dependencies(self.instance.modifiers))
deps.extend(find_geometry_nodes_dependencies(self.instance.modifiers))
return deps
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.objects)
_type = bpy.types.Object
_class = BlObject
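Geometry-nodes modifier values are flattened before replication, as dump_modifier_geometry_node_props above does. A condensed sketch (not part of the diff; uuid is the custom property the addon adds to replicated IDs):

import bpy

def flatten_geometry_node_value(value):
    if isinstance(value, bpy.types.ID):
        return value.uuid              # datablock reference -> its uuid
    if isinstance(value, (int, float, str)):
        return value                   # plain parameters pass through as-is
    if hasattr(value, 'to_list'):
        return value.to_list()         # colors / vectors become plain lists
    return None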
View File
@ -2,10 +2,7 @@ import bpy
import mathutils
from . import dump_anything
from replication.protocol import ReplicatedDatablock
from .bl_datablock import get_datablock_from_uuid
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock, get_datablock_from_uuid
def dump_textures_slots(texture_slots: bpy.types.bpy_prop_collection) -> list:
@ -40,67 +37,54 @@ IGNORED_ATTR = [
"users"
]
class BlParticle(ReplicatedDatablock):
use_delta = True
class BlParticle(BlDatablock):
bl_id = "particles"
bl_class = bpy.types.ParticleSettings
bl_icon = "PARTICLES"
bl_check_common = False
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
return bpy.data.particles.new(data["name"])
def _construct(self, data):
instance = bpy.data.particles.new(data["name"])
instance.uuid = self.uuid
return instance
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
dump_anything.load(datablock, data)
def _load_implementation(self, data, target):
dump_anything.load(target, data)
dump_anything.load(datablock.effector_weights, data["effector_weights"])
dump_anything.load(target.effector_weights, data["effector_weights"])
# Force field
force_field_1 = data.get("force_field_1", None)
if force_field_1:
dump_anything.load(datablock.force_field_1, force_field_1)
dump_anything.load(target.force_field_1, force_field_1)
force_field_2 = data.get("force_field_2", None)
if force_field_2:
dump_anything.load(datablock.force_field_2, force_field_2)
dump_anything.load(target.force_field_2, force_field_2)
# Texture slots
load_texture_slots(data["texture_slots"], datablock.texture_slots)
load_texture_slots(data["texture_slots"], target.texture_slots)
def _dump_implementation(self, data, instance=None):
assert instance
@staticmethod
def dump(datablock: object) -> dict:
dumper = dump_anything.Dumper()
dumper.depth = 1
dumper.exclude_filter = IGNORED_ATTR
data = dumper.dump(datablock)
data = dumper.dump(instance)
# Particle effectors
data["effector_weights"] = dumper.dump(datablock.effector_weights)
if datablock.force_field_1:
data["force_field_1"] = dumper.dump(datablock.force_field_1)
if datablock.force_field_2:
data["force_field_2"] = dumper.dump(datablock.force_field_2)
data["effector_weights"] = dumper.dump(instance.effector_weights)
if instance.force_field_1:
data["force_field_1"] = dumper.dump(instance.force_field_1)
if instance.force_field_2:
data["force_field_2"] = dumper.dump(instance.force_field_2)
# Texture slots
data["texture_slots"] = dump_textures_slots(datablock.texture_slots)
data['animation_data'] = dump_animation_data(datablock)
data["texture_slots"] = dump_textures_slots(instance.texture_slots)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.particles)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
deps = [t.texture for t in datablock.texture_slots if t and t.texture]
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = bpy.types.ParticleSettings
_class = BlParticle
def _resolve_deps_implementation(self):
return [t.texture for t in self.instance.texture_slots if t and t.texture]
View File
@ -18,21 +18,17 @@
import logging
from pathlib import Path
from uuid import uuid4
import bpy
import mathutils
from deepdiff import DeepDiff, Delta
from deepdiff import DeepDiff
from replication.constants import DIFF_JSON, MODIFIED
from replication.protocol import ReplicatedDatablock
from ..utils import flush_history, get_preferences
from .bl_action import (dump_animation_data, load_animation_data,
resolve_animation_dependencies)
from ..utils import flush_history
from .bl_collection import (dump_collection_children, dump_collection_objects,
load_collection_childrens, load_collection_objects,
resolve_collection_dependencies)
from .bl_datablock import resolve_datablock_from_uuid
from .bl_datablock import BlDatablock
from .bl_file import get_filepath
from .dump_anything import Dumper, Loader
@ -290,10 +286,12 @@ def dump_sequence(sequence: bpy.types.Sequence) -> dict:
dumper.depth = 1
data = dumper.dump(sequence)
# TODO: Support multiple images
if sequence.type == 'IMAGE':
data['filenames'] = [e.filename for e in sequence.elements]
# Effect strip inputs
input_count = getattr(sequence, 'input_count', None)
if input_count:
@ -304,8 +302,7 @@ def dump_sequence(sequence: bpy.types.Sequence) -> dict:
return data
def load_sequence(sequence_data: dict,
sequence_editor: bpy.types.SequenceEditor):
def load_sequence(sequence_data: dict, sequence_editor: bpy.types.SequenceEditor):
""" Load sequence from dumped data
:arg sequence_data: dumped sequence data to load
@ -347,15 +344,14 @@ def load_sequence(sequence_data: dict,
strip_channel,
strip_frame_start)
# load other images
if len(images_name) > 1:
for img_idx in range(1, len(images_name)):
if len(images_name)>1:
for img_idx in range(1,len(images_name)):
sequence.elements.append((images_name[img_idx]))
else:
seq = {}
for i in range(sequence_data['input_count']):
seq[f"seq{i+1}"] = sequence_editor.sequences_all.get(
sequence_data.get(f"input_{i+1}", None))
seq[f"seq{i+1}"] = sequence_editor.sequences_all.get(sequence_data.get(f"input_{i+1}", None))
sequence = sequence_editor.sequences.new_effect(name=strip_name,
type=strip_type,
@ -365,104 +361,89 @@ def load_sequence(sequence_data: dict,
**seq)
loader = Loader()
loader.exclure_filter = ['filepath', 'sound', 'filenames', 'fps']
# TODO: Support filepath updates
loader.exclure_filter = ['filepath', 'sound', 'filenames','fps']
loader.load(sequence, sequence_data)
sequence.select = False
class BlScene(ReplicatedDatablock):
is_root = True
use_delta = True
class BlScene(BlDatablock):
bl_id = "scenes"
bl_class = bpy.types.Scene
bl_check_common = True
bl_icon = 'SCENE_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
return bpy.data.scenes.new(data["name"])
def _construct(self, data):
instance = bpy.data.scenes.new(data["name"])
instance.uuid = self.uuid
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
return instance
def _load_implementation(self, data, target):
# Load other meshes metadata
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
# Load master collection
load_collection_objects(
data['collection']['objects'], datablock.collection)
data['collection']['objects'], target.collection)
load_collection_childrens(
data['collection']['children'], datablock.collection)
data['collection']['children'], target.collection)
if 'world' in data.keys():
datablock.world = bpy.data.worlds[data['world']]
target.world = bpy.data.worlds[data['world']]
# Annotation
gpencil_uid = data.get('grease_pencil')
if gpencil_uid:
datablock.grease_pencil = resolve_datablock_from_uuid(gpencil_uid, bpy.data.grease_pencils)
if 'grease_pencil' in data.keys():
target.grease_pencil = bpy.data.grease_pencils[data['grease_pencil']]
if get_preferences().sync_flags.sync_render_settings:
if self.preferences.sync_flags.sync_render_settings:
if 'eevee' in data.keys():
loader.load(datablock.eevee, data['eevee'])
loader.load(target.eevee, data['eevee'])
if 'cycles' in data.keys():
loader.load(datablock.cycles, data['cycles'])
loader.load(target.cycles, data['cycles'])
if 'render' in data.keys():
loader.load(datablock.render, data['render'])
loader.load(target.render, data['render'])
view_settings = data.get('view_settings')
if view_settings:
loader.load(datablock.view_settings, view_settings)
if datablock.view_settings.use_curve_mapping and \
'curve_mapping' in view_settings:
if 'view_settings' in data.keys():
loader.load(target.view_settings, data['view_settings'])
if target.view_settings.use_curve_mapping and \
'curve_mapping' in data['view_settings']:
# TODO: change this ugly fix
datablock.view_settings.curve_mapping.white_level = view_settings['curve_mapping']['white_level']
datablock.view_settings.curve_mapping.black_level = view_settings['curve_mapping']['black_level']
datablock.view_settings.curve_mapping.update()
target.view_settings.curve_mapping.white_level = data[
'view_settings']['curve_mapping']['white_level']
target.view_settings.curve_mapping.black_level = data[
'view_settings']['curve_mapping']['black_level']
target.view_settings.curve_mapping.update()
# Sequencer
sequences = data.get('sequences')
if sequences:
# Create sequencer data
datablock.sequence_editor_create()
vse = datablock.sequence_editor
target.sequence_editor_create()
vse = target.sequence_editor
# Clear removed sequences
for seq in vse.sequences_all:
if seq.name not in sequences:
vse.sequences.remove(seq)
# Load existing sequences
for seq_data in sequences.values():
for seq_name, seq_data in sequences.items():
load_sequence(seq_data, vse)
# If the sequence is no longer used, clear it
elif datablock.sequence_editor and not sequences:
datablock.sequence_editor_clear()
elif target.sequence_editor and not sequences:
target.sequence_editor_clear()
# Timeline markers
markers = data.get('timeline_markers')
if markers:
datablock.timeline_markers.clear()
for name, frame, camera in markers:
marker = datablock.timeline_markers.new(name, frame=frame)
if camera:
marker.camera = resolve_datablock_from_uuid(camera, bpy.data.objects)
marker.select = False
# FIXME: Find a better way after the big replication refactoring
# Keep other users from deleting collection objects by flushing their history
flush_history()
@staticmethod
def dump(datablock: object) -> dict:
data = {}
data['animation_data'] = dump_animation_data(datablock)
def _dump_implementation(self, data, instance=None):
assert(instance)
# Metadata
scene_dumper = Dumper()
@ -471,44 +452,45 @@ class BlScene(ReplicatedDatablock):
'name',
'world',
'id',
'grease_pencil',
'frame_start',
'frame_end',
'frame_step',
]
if get_preferences().sync_flags.sync_active_camera:
if self.preferences.sync_flags.sync_active_camera:
scene_dumper.include_filter.append('camera')
data.update(scene_dumper.dump(datablock))
data.update(scene_dumper.dump(instance))
# Master collection
data['collection'] = {}
data['collection']['children'] = dump_collection_children(
datablock.collection)
instance.collection)
data['collection']['objects'] = dump_collection_objects(
datablock.collection)
instance.collection)
scene_dumper.depth = 1
scene_dumper.include_filter = None
# Render settings
if get_preferences().sync_flags.sync_render_settings:
if self.preferences.sync_flags.sync_render_settings:
scene_dumper.include_filter = RENDER_SETTINGS
data['render'] = scene_dumper.dump(datablock.render)
data['render'] = scene_dumper.dump(instance.render)
if datablock.render.engine == 'BLENDER_EEVEE':
if instance.render.engine == 'BLENDER_EEVEE':
scene_dumper.include_filter = EVEE_SETTINGS
data['eevee'] = scene_dumper.dump(datablock.eevee)
elif datablock.render.engine == 'CYCLES':
data['eevee'] = scene_dumper.dump(instance.eevee)
elif instance.render.engine == 'CYCLES':
scene_dumper.include_filter = CYCLES_SETTINGS
data['cycles'] = scene_dumper.dump(datablock.cycles)
data['cycles'] = scene_dumper.dump(instance.cycles)
scene_dumper.include_filter = VIEW_SETTINGS
data['view_settings'] = scene_dumper.dump(datablock.view_settings)
data['view_settings'] = scene_dumper.dump(instance.view_settings)
if datablock.view_settings.use_curve_mapping:
if instance.view_settings.use_curve_mapping:
data['view_settings']['curve_mapping'] = scene_dumper.dump(
datablock.view_settings.curve_mapping)
instance.view_settings.curve_mapping)
scene_dumper.depth = 5
scene_dumper.include_filter = [
'curves',
@ -516,44 +498,35 @@ class BlScene(ReplicatedDatablock):
'location',
]
data['view_settings']['curve_mapping']['curves'] = scene_dumper.dump(
datablock.view_settings.curve_mapping.curves)
instance.view_settings.curve_mapping.curves)
# Sequence
vse = datablock.sequence_editor
vse = instance.sequence_editor
if vse:
dumped_sequences = {}
for seq in vse.sequences_all:
dumped_sequences[seq.name] = dump_sequence(seq)
data['sequences'] = dumped_sequences
# Timeline markers
if datablock.timeline_markers:
data['timeline_markers'] = [(m.name, m.frame, getattr(m.camera, 'uuid', None)) for m in datablock.timeline_markers]
if datablock.grease_pencil:
data['grease_pencil'] = datablock.grease_pencil.uuid
return data
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
# Master Collection
deps.extend(resolve_collection_dependencies(datablock.collection))
deps.extend(resolve_collection_dependencies(self.instance.collection))
# world
if datablock.world:
deps.append(datablock.world)
if self.instance.world:
deps.append(self.instance.world)
# annotations
if datablock.grease_pencil:
deps.append(datablock.grease_pencil)
deps.extend(resolve_animation_dependencies(datablock))
if self.instance.grease_pencil:
deps.append(self.instance.grease_pencil)
# Sequences
vse = datablock.sequence_editor
vse = self.instance.sequence_editor
if vse:
for sequence in vse.sequences_all:
if sequence.type == 'MOVIE' and sequence.filepath:
@ -568,45 +541,16 @@ class BlScene(ReplicatedDatablock):
return deps
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
name = data.get('name')
datablock = resolve_datablock_from_uuid(uuid, bpy.data.scenes)
if datablock is None:
datablock = bpy.data.scenes.get(name)
return datablock
@staticmethod
def compute_delta(last_data: dict, current_data: dict) -> Delta:
def diff(self):
exclude_path = []
if not get_preferences().sync_flags.sync_render_settings:
if not self.preferences.sync_flags.sync_render_settings:
exclude_path.append("root['eevee']")
exclude_path.append("root['cycles']")
exclude_path.append("root['view_settings']")
exclude_path.append("root['render']")
if not get_preferences().sync_flags.sync_active_camera:
if not self.preferences.sync_flags.sync_active_camera:
exclude_path.append("root['camera']")
diff_params = {
'exclude_paths': exclude_path,
'ignore_order': True,
'report_repetition': True
}
delta_params = {
# 'mutate': True
}
return Delta(
DeepDiff(last_data,
current_data,
cache_size=5000,
**diff_params),
**delta_params)
_type = bpy.types.Scene
_class = BlScene
return DeepDiff(self.data, self._dump(instance=self.instance), exclude_paths=exclude_path)
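
Both variants of the scene diff above rely on DeepDiff's `exclude_paths` to keep the optional render settings out of the comparison. Reduced to a standalone sketch (assuming only the third-party `deepdiff` package; the dictionaries are made-up example values, not add-on data):

```python
# Minimal sketch of the exclude_paths filtering pattern used by BlScene above.
from deepdiff import DeepDiff

last_data = {'name': 'Scene', 'frame_end': 250, 'eevee': {'taa_render_samples': 64}}
current_data = {'name': 'Scene', 'frame_end': 300, 'eevee': {'taa_render_samples': 128}}

exclude_path = []
sync_render_settings = False  # e.g. value of sync_flags.sync_render_settings
if not sync_render_settings:
    exclude_path.append("root['eevee']")  # drop the whole eevee subtree from the diff

diff = DeepDiff(last_data, current_data, exclude_paths=exclude_path, ignore_order=True)
print(diff)  # only frame_end is reported; the eevee change is filtered out
```

Filtering those keys out is what prevents unsynchronised render settings from producing spurious deltas on every evaluation.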

View File

@ -23,59 +23,45 @@ from pathlib import Path
import bpy
from .bl_file import get_filepath, ensure_unpacked
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from .dump_anything import Dumper, Loader
from .bl_datablock import resolve_datablock_from_uuid
class BlSound(ReplicatedDatablock):
class BlSound(BlDatablock):
bl_id = "sounds"
bl_class = bpy.types.Sound
bl_check_common = False
bl_icon = 'SOUND'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
filename = data.get('filename')
return bpy.data.sounds.load(get_filepath(filename))
@staticmethod
def load(data: dict, datablock: object):
def _load(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
@staticmethod
def dump(datablock: object) -> dict:
filename = Path(datablock.filepath).name
def diff(self):
return False
def _dump(self, instance=None):
filename = Path(instance.filepath).name
if not filename:
raise FileExistsError(datablock.filepath)
raise FileExistsError(instance.filepath)
return {
'filename': filename,
'name': datablock.name
'name': instance.name
}
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
if datablock.filepath and datablock.filepath != '<builtin>':
ensure_unpacked(datablock)
if self.instance.filepath and self.instance.filepath != '<builtin>':
ensure_unpacked(self.instance)
deps.append(Path(bpy.path.abspath(datablock.filepath)))
deps.append(Path(bpy.path.abspath(self.instance.filepath)))
return deps
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.sounds)
@staticmethod
def needs_update(datablock: object, data:dict)-> bool:
return False
_type = bpy.types.Sound
_class = BlSound
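
Both shapes of the datablock API appear in these hunks: the instance-method `BlDatablock` subclass (`_construct`, `_load`, `_dump`, …) and the stateless `@staticmethod`-based `ReplicatedDatablock` protocol. A minimal sketch of the latter for a hypothetical `bpy.types.Text` datablock follows; `BlTextExample` is not part of the add-on, and whether the replication package accepts this exact minimal class as-is is an assumption:

```python
import bpy
from replication.protocol import ReplicatedDatablock  # as imported in the hunks above


class BlTextExample(ReplicatedDatablock):
    """Hypothetical minimal implementation for bpy.types.Text (illustration only)."""
    bl_id = "texts"                  # name of the bpy.data collection
    bl_class = bpy.types.Text
    bl_check_common = False
    bl_icon = 'TEXT'
    bl_reload_parent = False

    @staticmethod
    def construct(data: dict) -> object:
        # Recreate the datablock from its dumped state
        return bpy.data.texts.new(data["name"])

    @staticmethod
    def load(data: dict, datablock: object):
        # Replace the whole text body with the dumped one
        datablock.clear()
        datablock.write(data.get("body", ""))

    @staticmethod
    def dump(datablock: object) -> dict:
        return {"name": datablock.name, "body": datablock.as_string()}

    @staticmethod
    def resolve(data: dict) -> object:
        # Look the datablock up by name (the add-on normally matches by uuid)
        return bpy.data.texts.get(data.get("name"))

    @staticmethod
    def resolve_deps(datablock: object) -> [object]:
        return []                    # a text block has no external dependencies

    @staticmethod
    def needs_update(datablock: object, data: dict) -> bool:
        return datablock.as_string() != data.get("body")


_type = bpy.types.Text
_class = BlTextExample
```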

View File

@ -20,31 +20,26 @@ import bpy
import mathutils
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
from .bl_datablock import BlDatablock
class BlSpeaker(ReplicatedDatablock):
use_delta = True
class BlSpeaker(BlDatablock):
bl_id = "speakers"
bl_class = bpy.types.Speaker
bl_check_common = False
bl_icon = 'SPEAKER'
bl_reload_parent = False
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
load_animation_data(data.get('animation_data'), datablock)
loader.load(target, data)
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.speakers.new(data["name"])
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
dumper.depth = 1
dumper.include_filter = [
@ -63,27 +58,17 @@ class BlSpeaker(ReplicatedDatablock):
'cone_volume_outer'
]
data = dumper.dump(datablock)
data['animation_data'] = dump_animation_data(datablock)
return data
return dumper.dump(instance)
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.speakers)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
# TODO: resolve material
deps = []
sound = datablock.sound
sound = self.instance.sound
if sound:
deps.append(sound)
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = bpy.types.Speaker
_class = BlSpeaker

View File

@ -20,32 +20,25 @@ import bpy
import mathutils
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
import bpy.types as T
from .bl_datablock import BlDatablock
class BlTexture(ReplicatedDatablock):
use_delta = True
class BlTexture(BlDatablock):
bl_id = "textures"
bl_class = bpy.types.Texture
bl_check_common = False
bl_icon = 'TEXTURE'
bl_reload_parent = False
@staticmethod
def load(data: dict, datablock: object):
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
load_animation_data(data.get('animation_data'), datablock)
loader.load(target, data)
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.textures.new(data["name"], data["type"])
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
dumper.depth = 1
@ -59,39 +52,24 @@ class BlTexture(ReplicatedDatablock):
'name_full'
]
data = dumper.dump(datablock)
color_ramp = getattr(datablock, 'color_ramp', None)
data = dumper.dump(instance)
color_ramp = getattr(instance, 'color_ramp', None)
if color_ramp:
dumper.depth = 4
data['color_ramp'] = dumper.dump(color_ramp)
data['animation_data'] = dump_animation_data(datablock)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.textures)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
# TODO: resolve material
deps = []
image = getattr(datablock,"image", None)
image = getattr(self.instance,"image", None)
if image:
deps.append(image)
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = [T.WoodTexture, T.VoronoiTexture,
T.StucciTexture, T.NoiseTexture,
T.MusgraveTexture, T.MarbleTexture,
T.MagicTexture, T.ImageTexture,
T.DistortedNoiseTexture, T.CloudsTexture,
T.BlendTexture]
_class = BlTexture

View File

@ -21,26 +21,32 @@ import mathutils
from pathlib import Path
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import get_datablock_from_uuid, resolve_datablock_from_uuid
from .bl_datablock import BlDatablock, get_datablock_from_uuid
from .bl_material import dump_materials_slots, load_materials_slots
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
class BlVolume(ReplicatedDatablock):
use_delta = True
class BlVolume(BlDatablock):
bl_id = "volumes"
bl_class = bpy.types.Volume
bl_check_common = False
bl_icon = 'VOLUME_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _load_implementation(self, data, target):
loader = Loader()
loader.load(target, data)
loader.load(target.display, data['display'])
# MATERIAL SLOTS
src_materials = data.get('materials', None)
if src_materials:
load_materials_slots(src_materials, target.materials)
def _construct(self, data):
return bpy.data.volumes.new(data["name"])
@staticmethod
def dump(datablock: object) -> dict:
def _dump_implementation(self, data, instance=None):
assert(instance)
dumper = Dumper()
dumper.depth = 1
dumper.exclude_filter = [
@ -54,48 +60,27 @@ class BlVolume(ReplicatedDatablock):
'use_fake_user'
]
data = dumper.dump(datablock)
data = dumper.dump(instance)
data['display'] = dumper.dump(datablock.display)
data['display'] = dumper.dump(instance.display)
# Fix material index
data['materials'] = dump_materials_slots(datablock.materials)
data['animation_data'] = dump_animation_data(datablock)
data['materials'] = dump_materials_slots(instance.materials)
return data
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
loader = Loader()
loader.load(datablock, data)
loader.load(datablock.display, data['display'])
# MATERIAL SLOTS
src_materials = data.get('materials', None)
if src_materials:
load_materials_slots(src_materials, datablock.materials)
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.volumes)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
# TODO: resolve material
deps = []
external_vdb = Path(bpy.path.abspath(datablock.filepath))
external_vdb = Path(bpy.path.abspath(self.instance.filepath))
if external_vdb.exists() and not external_vdb.is_dir():
deps.append(external_vdb)
for material in datablock.materials:
for material in self.instance.materials:
if material:
deps.append(material)
deps.extend(resolve_animation_dependencies(datablock))
return deps
_type = bpy.types.Volume
_class = BlVolume

View File

@ -20,42 +20,35 @@ import bpy
import mathutils
from .dump_anything import Loader, Dumper
from replication.protocol import ReplicatedDatablock
from .bl_datablock import BlDatablock
from .bl_material import (load_node_tree,
dump_node_tree,
get_node_tree_dependencies)
from .bl_datablock import resolve_datablock_from_uuid
from .bl_action import dump_animation_data, load_animation_data, resolve_animation_dependencies
class BlWorld(ReplicatedDatablock):
use_delta = True
class BlWorld(BlDatablock):
bl_id = "worlds"
bl_class = bpy.types.World
bl_check_common = True
bl_icon = 'WORLD_DATA'
bl_reload_parent = False
@staticmethod
def construct(data: dict) -> object:
def _construct(self, data):
return bpy.data.worlds.new(data["name"])
@staticmethod
def load(data: dict, datablock: object):
load_animation_data(data.get('animation_data'), datablock)
def _load_implementation(self, data, target):
loader = Loader()
loader.load(datablock, data)
loader.load(target, data)
if data["use_nodes"]:
if datablock.node_tree is None:
datablock.use_nodes = True
if target.node_tree is None:
target.use_nodes = True
load_node_tree(data['node_tree'], datablock.node_tree)
load_node_tree(data['node_tree'], target.node_tree)
def _dump_implementation(self, data, instance=None):
assert(instance)
@staticmethod
def dump(datablock: object) -> dict:
world_dumper = Dumper()
world_dumper.depth = 1
world_dumper.include_filter = [
@ -63,27 +56,17 @@ class BlWorld(ReplicatedDatablock):
"name",
"color"
]
data = world_dumper.dump(datablock)
if datablock.use_nodes:
data['node_tree'] = dump_node_tree(datablock.node_tree)
data = world_dumper.dump(instance)
if instance.use_nodes:
data['node_tree'] = dump_node_tree(instance.node_tree)
data['animation_data'] = dump_animation_data(datablock)
return data
@staticmethod
def resolve(data: dict) -> object:
uuid = data.get('uuid')
return resolve_datablock_from_uuid(uuid, bpy.data.worlds)
@staticmethod
def resolve_deps(datablock: object) -> [object]:
def _resolve_deps_implementation(self):
deps = []
if datablock.use_nodes:
deps.extend(get_node_tree_dependencies(datablock.node_tree))
deps.extend(resolve_animation_dependencies(datablock))
if self.instance.use_nodes:
deps.extend(get_node_tree_dependencies(self.instance.node_tree))
if self.is_library:
deps.append(self.instance.library)
return deps
_type = bpy.types.World
_class = BlWorld

View File

@ -507,12 +507,16 @@ class Loader:
_constructors = {
T.ColorRampElement: (CONSTRUCTOR_NEW, ["position"]),
T.ParticleSettingsTextureSlot: (CONSTRUCTOR_ADD, []),
T.Modifier: (CONSTRUCTOR_NEW, ["name", "type"]),
T.GpencilModifier: (CONSTRUCTOR_NEW, ["name", "type"]),
T.Constraint: (CONSTRUCTOR_NEW, ["type"]),
}
destructors = {
T.ColorRampElement: DESTRUCTOR_REMOVE,
T.Modifier: DESTRUCTOR_CLEAR,
T.GpencilModifier: DESTRUCTOR_CLEAR,
T.Constraint: DESTRUCTOR_REMOVE,
}
element_type = element.bl_rna_property.fixed_type
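
The `_constructors` / `destructors` tables above map an RNA type to the call that recreates or removes an element of a bpy collection. A hypothetical, simplified dispatch (not the actual `dump_anything.Loader` internals) could look like this:

```python
# Sketch of the constructor-mapping idea: per RNA type, which collection call
# rebuilds an element and which positional arguments it needs.
import bpy.types as T

CONSTRUCTOR_NEW = "new"   # element is rebuilt with collection.new(...)
CONSTRUCTOR_ADD = "add"   # element is rebuilt with collection.add()

_constructors = {
    T.ColorRampElement: (CONSTRUCTOR_NEW, ["position"]),
    T.Modifier: (CONSTRUCTOR_NEW, ["name", "type"]),
    T.Constraint: (CONSTRUCTOR_NEW, ["type"]),
}


def construct_element(collection, element_type, dumped: dict):
    """Rebuild one collection element, e.g. obj.modifiers.new(name, type)."""
    how, arg_names = _constructors[element_type]
    args = [dumped[name] for name in arg_names]
    return getattr(collection, how)(*args)
```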

View File

@ -24,25 +24,20 @@ import sys
from pathlib import Path
import socket
import re
import bpy
VERSION_EXPR = re.compile('\d+.\d+.\d+')
THIRD_PARTY = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libs")
DEFAULT_CACHE_DIR = os.path.join(
os.path.dirname(os.path.abspath(__file__)), "cache")
REPLICATION_DEPENDENCIES = {
"zmq",
"deepdiff"
}
LIBS = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libs")
REPLICATION = os.path.join(LIBS,"replication")
PYTHON_PATH = None
SUBPROCESS_DIR = None
rtypes = []
def module_can_be_imported(name: str) -> bool:
def module_can_be_imported(name):
try:
__import__(name)
return True
@ -55,7 +50,7 @@ def install_pip():
subprocess.run([str(PYTHON_PATH), "-m", "ensurepip"])
def install_package(name: str, install_dir: str):
def install_package(name, version):
logging.info(f"installing {name} version...")
env = os.environ
if "PIP_REQUIRE_VIRTUALENV" in env:
@ -65,13 +60,12 @@ def install_package(name: str, install_dir: str):
# env var for the subprocess.
env = os.environ.copy()
del env["PIP_REQUIRE_VIRTUALENV"]
subprocess.run([str(PYTHON_PATH), "-m", "pip", "install", f"{name}", "-t", install_dir], env=env)
subprocess.run([str(PYTHON_PATH), "-m", "pip", "install", f"{name}=={version}"], env=env)
if name in sys.modules:
del sys.modules[name]
def check_package_version(name: str, required_version: str):
def check_package_version(name, required_version):
logging.info(f"Checking {name} version...")
out = subprocess.run([str(PYTHON_PATH), "-m", "pip", "show", name], capture_output=True)
@ -83,7 +77,6 @@ def check_package_version(name: str, required_version: str):
logging.info(f"{name} need an update")
return False
def get_ip():
"""
Retrieve the main network interface IP.
@ -101,25 +94,7 @@ def check_dir(dir):
os.makedirs(dir)
def setup_paths(paths: list):
""" Add missing path to sys.path
"""
for path in paths:
if path not in sys.path:
logging.debug(f"Adding {path} dir to the path.")
sys.path.insert(0, path)
def remove_paths(paths: list):
""" Remove list of path from sys.path
"""
for path in paths:
if path in sys.path:
logging.debug(f"Removing {path} dir from the path.")
sys.path.remove(path)
def install_modules(dependencies: list, python_path: str, install_dir: str):
def setup(dependencies, python_path):
global PYTHON_PATH, SUBPROCESS_DIR
PYTHON_PATH = Path(python_path)
@ -128,23 +103,9 @@ def install_modules(dependencies: list, python_path: str, install_dir: str):
if not module_can_be_imported("pip"):
install_pip()
for package_name in dependencies:
for package_name, package_version in dependencies:
if not module_can_be_imported(package_name):
install_package(package_name, install_dir=install_dir)
install_package(package_name, package_version)
module_can_be_imported(package_name)
def register():
if bpy.app.version >= (2,91,0):
python_binary_path = sys.executable
else:
python_binary_path = bpy.app.binary_path_python
for module_name in list(sys.modules.keys()):
if 'replication' in module_name:
del sys.modules[module_name]
setup_paths([LIBS, REPLICATION])
install_modules(REPLICATION_DEPENDENCIES, python_binary_path, install_dir=LIBS)
def unregister():
remove_paths([REPLICATION, LIBS])
elif not check_package_version(package_name, package_version):
install_package(package_name, package_version)
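
The two sides of this hunk install dependencies differently: one pins versions straight into Blender's Python (`pip install name==version`), the other vendors packages into a local `libs` folder with `pip install -t` and a `sys.path` insert. A standalone sketch of the vendored-folder variant, with example paths and package names taken from the diff:

```python
# Vendored-dependency pattern: install into an add-on-local "libs" folder so
# Blender's own site-packages (and administrator rights) are not needed.
import os
import subprocess
import sys

LIBS = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libs")
DEPENDENCIES = ["zmq", "deepdiff"]


def ensure_dependencies(python_path: str):
    os.makedirs(LIBS, exist_ok=True)
    if LIBS not in sys.path:
        sys.path.insert(0, LIBS)          # make the vendored folder importable
    for name in DEPENDENCIES:
        try:
            __import__(name)              # already importable, nothing to do
        except ImportError:
            subprocess.run([python_path, "-m", "pip", "install", name, "-t", LIBS])
```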

View File

@ -1,155 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# ##### END GPL LICENSE BLOCK #####
import logging
import bpy
from bpy.app.handlers import persistent
from replication import porcelain
from replication.constants import RP_COMMON, STATE_ACTIVE, STATE_SYNCING, UP
from replication.exception import ContextError, NonAuthorizedOperationError
from replication.interface import session
from . import shared_data, utils
def sanitize_deps_graph(remove_nodes: bool = False):
""" Cleanup the replication graph
"""
if session and session.state == STATE_ACTIVE:
start = utils.current_milli_time()
rm_cpt = 0
for node in session.repository.graph.values():
node.instance = session.repository.rdp.resolve(node.data)
if node is None \
or (node.state == UP and not node.instance):
if remove_nodes:
try:
porcelain.rm(session.repository,
node.uuid,
remove_dependencies=False)
logging.info(f"Removing {node.uuid}")
rm_cpt += 1
except NonAuthorizedOperationError:
continue
logging.info(f"Sanitize took { utils.current_milli_time()-start} ms, removed {rm_cpt} nodes")
def update_external_dependencies():
"""Force external dependencies(files such as images) evaluation
"""
external_types = ['WindowsPath', 'PosixPath', 'Image']
nodes_ids = [n.uuid for n in session.repository.graph.values() if n.data['type_id'] in external_types]
for node_id in nodes_ids:
node = session.repository.graph.get(node_id)
if node and node.owner in [session.repository.username, RP_COMMON]:
porcelain.commit(session.repository, node_id)
porcelain.push(session.repository, 'origin', node_id)
@persistent
def on_scene_update(scene):
"""Forward blender depsgraph update to replication
"""
if session and session.state == STATE_ACTIVE:
context = bpy.context
blender_depsgraph = bpy.context.view_layer.depsgraph
dependency_updates = [u for u in blender_depsgraph.updates]
settings = utils.get_preferences()
incoming_updates = shared_data.session.applied_updates
distant_update = [getattr(u.id, 'uuid', None) for u in dependency_updates if getattr(u.id, 'uuid', None) in incoming_updates]
if distant_update:
for u in distant_update:
shared_data.session.applied_updates.remove(u)
logging.debug(f"Ignoring distant update of {dependency_updates[0].id.name}")
return
# NOTE: maybe we don't need to check each update but only the first
for update in reversed(dependency_updates):
update_uuid = getattr(update.id, 'uuid', None)
if update_uuid:
node = session.repository.graph.get(update.id.uuid)
check_common = session.repository.rdp.get_implementation(update.id).bl_check_common
if node and (node.owner == session.repository.username or check_common):
logging.debug(f"Evaluate {update.id.name}")
if node.state == UP:
try:
porcelain.commit(session.repository, node.uuid)
porcelain.push(session.repository,
'origin', node.uuid)
except ReferenceError:
logging.debug(f"Reference error {node.uuid}")
except ContextError as e:
logging.debug(e)
except Exception as e:
logging.error(e)
else:
continue
elif isinstance(update.id, bpy.types.Scene):
scene = bpy.data.scenes.get(update.id.name)
scn_uuid = porcelain.add(session.repository, scene)
porcelain.commit(session.repository, scn_uuid)
porcelain.push(session.repository, 'origin', scn_uuid)
scene_graph_changed = [u for u in reversed(dependency_updates) if getattr(u.id, 'uuid', None) and isinstance(u.id,(bpy.types.Scene,bpy.types.Collection))]
if scene_graph_changed:
porcelain.purge_orphan_nodes(session.repository)
update_external_dependencies()
@persistent
def resolve_deps_graph(dummy):
"""Resolve deps graph
Temporary solution to resolve each node pointers after a Undo.
A future solution should be to avoid storing datablock references...
"""
if session and session.state == STATE_ACTIVE:
sanitize_deps_graph(remove_nodes=True)
@persistent
def load_pre_handler(dummy):
if session and session.state in [STATE_ACTIVE, STATE_SYNCING]:
bpy.ops.session.stop()
@persistent
def update_client_frame(scene):
if session and session.state == STATE_ACTIVE:
porcelain.update_user_metadata(session.repository, {
'frame_current': scene.frame_current
})
def register():
bpy.app.handlers.undo_post.append(resolve_deps_graph)
bpy.app.handlers.redo_post.append(resolve_deps_graph)
bpy.app.handlers.load_pre.append(load_pre_handler)
bpy.app.handlers.frame_change_pre.append(update_client_frame)
def unregister():
bpy.app.handlers.undo_post.remove(resolve_deps_graph)
bpy.app.handlers.redo_post.remove(resolve_deps_graph)
bpy.app.handlers.load_pre.remove(load_pre_handler)
bpy.app.handlers.frame_change_pre.remove(update_client_frame)
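
The `on_scene_update` hook in this file follows a common Blender pattern: a persistent handler walks the depsgraph updates and forwards the datablocks that carry the add-on's `uuid` attribute. A generic sketch of that pattern only — `forward_update` is a hypothetical placeholder for the `porcelain.commit` / `porcelain.push` calls used above, and how the real add-on registers the handler may differ:

```python
import bpy
from bpy.app.handlers import persistent


def forward_update(datablock):
    # Placeholder: the add-on would commit and push this node through replication.
    print(f"would commit/push {datablock.name}")


@persistent
def on_scene_update(scene):
    depsgraph = bpy.context.view_layer.depsgraph
    for update in depsgraph.updates:
        if getattr(update.id, 'uuid', None):   # only datablocks tracked by the session
            forward_update(update.id)


def register():
    bpy.app.handlers.depsgraph_update_post.append(on_scene_update)


def unregister():
    bpy.app.handlers.depsgraph_update_post.remove(on_scene_update)
```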

[Seven binary image files not shown — previous sizes: 6.5 KiB, 4.2 KiB, 6.7 KiB, 9.5 KiB, 9.4 KiB, 13 KiB, 11 KiB]

File diff suppressed because it is too large.

View File

@ -17,7 +17,6 @@
import random
import logging
from uuid import uuid4
import bpy
import string
import re
@ -26,33 +25,14 @@ import os
from pathlib import Path
from . import bl_types, environment, addon_updater_ops, presence, ui
from .utils import get_preferences, get_expanded_icon, get_folder_size
from .utils import get_preferences, get_expanded_icon
from replication.constants import RP_COMMON
from replication.interface import session
from numpy import interp
# From https://stackoverflow.com/a/106223
IP_REGEX = re.compile("^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$")
HOSTNAME_REGEX = re.compile("^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$")
#SERVER PRESETS AT LAUNCH
DEFAULT_PRESETS = {
"localhost" : {
"server_name": "localhost",
"ip": "localhost",
"port": 5555,
"use_admin_password": True,
"admin_password": "admin",
"server_password": ""
},
"public session" : {
"server_name": "public session",
"ip": "51.75.71.183",
"port": 5555,
"admin_password": "",
"server_password": ""
},
}
def randomColor():
"""Generate a random color """
r = random.random()
@ -86,6 +66,16 @@ def update_ip(self, context):
self['ip'] = "127.0.0.1"
def update_port(self, context):
max_port = self.port + 3
if self.ipc_port < max_port and \
self['ipc_port'] >= self.port:
logging.error(
"IPC Port in conflict with the port, assigning a random value")
self['ipc_port'] = random.randrange(self.port+4, 10000)
def update_directory(self, context):
new_dir = Path(self.cache_directory)
if new_dir.exists() and any(Path(self.cache_directory).iterdir()):
@ -103,6 +93,88 @@ def set_log_level(self, value):
def get_log_level(self):
return logging.getLogger().level
def set_active_replay(self, value):
files_count = len(bpy.context.window_manager.session.replay_files)
if files_count == 0:
return
max_index = files_count-1
if value > max_index:
value = max_index
if hasattr(self, 'active_replay_file'):
self["active_replay_file"] = value
else:
self.active_replay_file = value
if bpy.context.window_manager.session.replay_mode == 'MANUAL':
bpy.ops.session.load(
filepath=bpy.context.window_manager.session.replay_files[value].name,
draw_users=True,
replay=True)
def get_active_replay(self):
return self.get('active_replay_file', 0)
def set_replay_persistent_collection(self, value):
if hasattr(self, 'replay_persistent_collection'):
self["replay_persistent_collection"] = value
else:
self.replay_persistent_collection = value
collection = bpy.data.collections.get("multiuser_timelapse", None)
if collection is None and value:
collection = bpy.data.collections.new('multiuser_timelapse')
cam = bpy.data.cameras.get('multiuser_timelapse_cam', bpy.data.cameras.new('multiuser_timelapse_cam'))
cam_obj = bpy.data.objects.get('multiuser_timelapse_cam_obj', bpy.data.objects.new('multiuser_timelapse_cam_obj', cam))
curve = bpy.data.curves.get('multiuser_timelapse_path', bpy.data.curves.new('multiuser_timelapse_path', 'CURVE'))
curve_obj = bpy.data.objects.get('multiuser_timelapse_path_obj', bpy.data.objects.new('multiuser_timelapse_path_obj', curve))
if cam_obj.name not in collection.objects:
collection.objects.link(cam_obj)
if curve_obj.name not in collection.objects:
collection.objects.link(curve_obj)
bpy.context.scene.collection.children.link(collection)
elif collection and not value:
for o in collection.objects:
bpy.data.objects.remove(o)
bpy.data.collections.remove(collection)
def get_replay_persistent_collection(self):
return self.get('replay_persistent_collection', False)
def set_replay_duration(self, value):
if hasattr(self, 'replay_duration'):
self["replay_duration"] = value
else:
self.replay_duration = value
# Update the animation fcurve
replay_action = bpy.data.actions.get('replay_action')
replay_fcurve = None
for fcurve in replay_action.fcurves:
if fcurve.data_path == 'active_replay_file':
replay_fcurve = fcurve
if replay_fcurve:
for p in reversed(replay_fcurve.keyframe_points):
replay_fcurve.keyframe_points.remove(p, fast=True)
bpy.context.scene.frame_end = value
files_count = len(bpy.context.window_manager.session.replay_files)-1
for index in range(0, files_count):
frame = interp(index,[0, files_count],[bpy.context.scene.frame_start, value])
replay_fcurve.keyframe_points.insert(frame, index)
def get_replay_duration(self):
return self.get('replay_duration', 10)
class ReplicatedDatablock(bpy.types.PropertyGroup):
type_name: bpy.props.StringProperty()
@ -111,16 +183,6 @@ class ReplicatedDatablock(bpy.types.PropertyGroup):
auto_push: bpy.props.BoolProperty(default=True)
icon: bpy.props.StringProperty()
class ServerPreset(bpy.types.PropertyGroup):
server_name: bpy.props.StringProperty(default="")
ip: bpy.props.StringProperty(default="127.0.0.1", update=update_ip)
port: bpy.props.IntProperty(default=5555)
use_server_password: bpy.props.BoolProperty(default=False)
server_password: bpy.props.StringProperty(default="", subtype = "PASSWORD")
use_admin_password: bpy.props.BoolProperty(default=False)
admin_password: bpy.props.StringProperty(default="", subtype = "PASSWORD")
is_online: bpy.props.BoolProperty(default=False)
is_private: bpy.props.BoolProperty(default=False)
def set_sync_render_settings(self, value):
self['sync_render_settings'] = value
@ -170,66 +232,36 @@ class ReplicationFlags(bpy.types.PropertyGroup):
class SessionPrefs(bpy.types.AddonPreferences):
bl_idname = __package__
# User settings
ip: bpy.props.StringProperty(
name="ip",
description='Distant host ip',
default="127.0.0.1",
update=update_ip)
username: bpy.props.StringProperty(
name="Username",
default=f"user_{random_string_digits()}"
)
client_color: bpy.props.FloatVectorProperty(
name="client_instance_color",
description='User color',
subtype='COLOR',
default=randomColor()
)
# Current server settings
server_name: bpy.props.StringProperty(
name="server_name",
description="Custom name of the server",
default='localhost',
)
server_index: bpy.props.IntProperty(
name="server_index",
description="index of the server",
)
# User host session settings
host_port: bpy.props.IntProperty(
name="host_port",
default=randomColor())
port: bpy.props.IntProperty(
name="port",
description='Distant host port',
default=5555
)
host_use_server_password: bpy.props.BoolProperty(
name="use_server_password",
description='Use session password',
default=False
)
host_server_password: bpy.props.StringProperty(
name="server_password",
description='Session password',
subtype='PASSWORD'
)
host_use_admin_password: bpy.props.BoolProperty(
name="use_admin_password",
description='Use admin password',
default=True
)
host_admin_password: bpy.props.StringProperty(
name="admin_password",
description='Admin password',
subtype='PASSWORD',
default='admin'
)
# Other
is_first_launch: bpy.props.BoolProperty(
name="is_fnirst_launch",
description="First time lauching the addon",
default=True
)
sync_flags: bpy.props.PointerProperty(
type=ReplicationFlags
)
supported_datablocks: bpy.props.CollectionProperty(
type=ReplicatedDatablock,
)
ipc_port: bpy.props.IntProperty(
name="ipc_port",
description='internal ttl port(only useful for multiple local instances)',
default=random.randrange(5570, 70000),
update=update_port,
)
init_method: bpy.props.EnumProperty(
name='init_method',
description='Init repo',
@ -247,11 +279,6 @@ class SessionPrefs(bpy.types.AddonPreferences):
description='connection timeout before disconnection',
default=5000
)
ping_timeout: bpy.props.IntProperty(
name='ping timeout',
description='check if servers are online',
default=500
)
# Replication update settings
depsgraph_update_rate: bpy.props.FloatProperty(
name='depsgraph update rate (s)',
@ -263,12 +290,11 @@ class SessionPrefs(bpy.types.AddonPreferences):
description="Remove filecache from memory",
default=False
)
# For UI
# for UI
category: bpy.props.EnumProperty(
name="Category",
description="Preferences Category",
items=[
('PREF', "Preferences", "Preferences of this add-on"),
('CONFIG', "Configuration", "Configuration of this add-on"),
('UPDATE', "Update", "Update this add-on"),
],
@ -312,58 +338,31 @@ class SessionPrefs(bpy.types.AddonPreferences):
step=1,
subtype='PERCENTAGE',
)
presence_text_distance: bpy.props.FloatProperty(
name="Distance text visibilty",
description="Adjust the distance visibilty of user's mode/name",
min=0.1,
max=10000,
default=100,
)
conf_session_identity_expanded: bpy.props.BoolProperty(
name="Identity",
description="Identity",
default=False
default=True
)
conf_session_net_expanded: bpy.props.BoolProperty(
name="Net",
description="net",
default=False
default=True
)
conf_session_hosting_expanded: bpy.props.BoolProperty(
name="Rights",
description="Rights",
default=False
)
conf_session_rep_expanded: bpy.props.BoolProperty(
name="Replication",
description="Replication",
default=False
)
conf_session_cache_expanded: bpy.props.BoolProperty(
name="Cache",
description="cache",
default=False
)
conf_session_log_expanded: bpy.props.BoolProperty(
name="conf_session_log_expanded",
description="conf_session_log_expanded",
default=False
)
conf_session_ui_expanded: bpy.props.BoolProperty(
name="Interface",
description="Interface",
default=False
)
sidebar_repository_shown: bpy.props.BoolProperty(
name="sidebar_repository_shown",
description="sidebar_repository_shown",
default=False
)
sidebar_advanced_shown: bpy.props.BoolProperty(
name="sidebar_advanced_shown",
description="sidebar_advanced_shown",
default=False
)
sidebar_advanced_rep_expanded: bpy.props.BoolProperty(
name="sidebar_advanced_rep_expanded",
description="sidebar_advanced_rep_expanded",
@ -374,11 +373,6 @@ class SessionPrefs(bpy.types.AddonPreferences):
description="sidebar_advanced_log_expanded",
default=False
)
sidebar_advanced_uinfo_expanded: bpy.props.BoolProperty(
name="sidebar_advanced_uinfo_expanded",
description="sidebar_advanced_uinfo_expanded",
default=False
)
sidebar_advanced_net_expanded: bpy.props.BoolProperty(
name="sidebar_advanced_net_expanded",
description="sidebar_advanced_net_expanded",
@ -423,19 +417,6 @@ class SessionPrefs(bpy.types.AddonPreferences):
max=59
)
# Server preset
def server_list_callback(scene, context):
settings = get_preferences()
enum = []
for i in settings.server_preset:
enum.append((i.name, i.name, ""))
return enum
server_preset: bpy.props.CollectionProperty(
name="server preset",
type=ServerPreset,
)
# Custom panel
panel_category: bpy.props.StringProperty(
description="Choose a name for the category of the panel",
@ -444,28 +425,38 @@ class SessionPrefs(bpy.types.AddonPreferences):
def draw(self, context):
layout = self.layout
layout.row().prop(self, "category", expand=True)
if self.category == 'PREF':
grid = layout.column()
box = grid.box()
row = box.row()
# USER SETTINGS
split = row.split(factor=0.7, align=True)
split.prop(self, "username", text="User")
split.prop(self, "client_color", text="")
row = box.row()
row.label(text="Hide settings:")
row = box.row()
row.prop(self, "sidebar_advanced_shown", text="Hide “Advanced” settings in side pannel (Not in session)")
row = box.row()
row.prop(self, "sidebar_repository_shown", text="Hide “Repository” settings in side pannel (In session)")
if self.category == 'CONFIG':
grid = layout.column()
# USER INFORMATIONS
box = grid.box()
box.prop(
self, "conf_session_identity_expanded", text="User information",
icon=get_expanded_icon(self.conf_session_identity_expanded),
emboss=False)
if self.conf_session_identity_expanded:
box.row().prop(self, "username", text="name")
box.row().prop(self, "client_color", text="color")
# NETWORK SETTINGS
box = grid.box()
box.prop(
self, "conf_session_net_expanded", text="Networking",
icon=get_expanded_icon(self.conf_session_net_expanded),
emboss=False)
if self.conf_session_net_expanded:
box.row().prop(self, "ip", text="Address")
row = box.row()
row.label(text="Port:")
row.prop(self, "port", text="")
row = box.row()
row.label(text="Init the session from:")
row.prop(self, "init_method", text="")
# HOST SETTINGS
box = grid.box()
box.prop(
@ -473,57 +464,9 @@ class SessionPrefs(bpy.types.AddonPreferences):
icon=get_expanded_icon(self.conf_session_hosting_expanded),
emboss=False)
if self.conf_session_hosting_expanded:
row = box.row()
row.prop(self, "host_port", text="Port: ")
row = box.row()
row.label(text="Init the session from:")
row.prop(self, "init_method", text="")
row = box.row()
col = row.column()
col.prop(self, "host_use_server_password", text="Server password:")
col = row.column()
col.enabled = True if self.host_use_server_password else False
col.prop(self, "host_server_password", text="")
row = box.row()
col = row.column()
col.prop(self, "host_use_admin_password", text="Admin password:")
col = row.column()
col.enabled = True if self.host_use_admin_password else False
col.prop(self, "host_admin_password", text="")
# NETWORKING
box = grid.box()
box.prop(
self, "conf_session_net_expanded", text="Network",
icon=get_expanded_icon(self.conf_session_net_expanded),
emboss=False)
if self.conf_session_net_expanded:
row = box.row()
row.label(text="Timeout (ms):")
row.prop(self, "connection_timeout", text="")
row = box.row()
row.label(text="Server ping (ms):")
row.prop(self, "ping_timeout", text="")
# REPLICATION
box = grid.box()
box.prop(
self, "conf_session_rep_expanded", text="Replication",
icon=get_expanded_icon(self.conf_session_rep_expanded),
emboss=False)
if self.conf_session_rep_expanded:
row = box.row()
row.prop(self.sync_flags, "sync_render_settings")
row = box.row()
row.prop(self.sync_flags, "sync_active_camera")
row = box.row()
row.prop(self.sync_flags, "sync_during_editmode")
row = box.row()
if self.sync_flags.sync_during_editmode:
warning = row.box()
warning.label(text="Don't use this with heavy meshes !", icon='ERROR')
row = box.row()
row.prop(self, "depsgraph_update_rate", text="Apply delay")
# CACHE SETTINGS
box = grid.box()
@ -534,18 +477,24 @@ class SessionPrefs(bpy.types.AddonPreferences):
if self.conf_session_cache_expanded:
box.row().prop(self, "cache_directory", text="Cache directory")
box.row().prop(self, "clear_memory_filecache", text="Clear memory filecache")
box.row().operator('session.clear_cache', text=f"Clear cache ({get_folder_size(self.cache_directory)})")
# LOGGING
# INTERFACE SETTINGS
box = grid.box()
box.prop(
self, "conf_session_log_expanded", text="Logging",
icon=get_expanded_icon(self.conf_session_log_expanded),
self, "conf_session_ui_expanded", text="Interface",
icon=get_expanded_icon(self.conf_session_ui_expanded),
emboss=False)
if self.conf_session_log_expanded:
if self.conf_session_ui_expanded:
box.row().prop(self, "panel_category", text="Panel category", expand=True)
row = box.row()
row.label(text="Log level:")
row.prop(self, 'logging_level', text="")
row.label(text="Session widget:")
col = box.column(align=True)
col.prop(self, "presence_hud_scale", expand=True)
col.prop(self, "presence_hud_hpos", expand=True)
col.prop(self, "presence_hud_vpos", expand=True)
if self.category == 'UPDATE':
from . import addon_updater_ops
@ -554,43 +503,18 @@ class SessionPrefs(bpy.types.AddonPreferences):
def generate_supported_types(self):
self.supported_datablocks.clear()
bpy_protocol = bl_types.get_data_translation_protocol()
# init the factory with supported types
for dcc_type_id, impl in bpy_protocol.implementations.items():
for type in bl_types.types_to_register():
new_db = self.supported_datablocks.add()
new_db.name = dcc_type_id
new_db.type_name = dcc_type_id
type_module = getattr(bl_types, type)
name = [e.capitalize() for e in type.split('_')[1:]]
type_impl_name = 'Bl'+''.join(name)
type_module_class = getattr(type_module, type_impl_name)
new_db.name = type_impl_name
new_db.type_name = type_impl_name
new_db.use_as_filter = True
new_db.icon = impl.bl_icon
new_db.bl_name = impl.bl_id
# Get a server preset through its name
def get_server_preset(self, name):
existing_preset = None
for server_preset in self.server_preset :
if server_preset.server_name == name :
existing_preset = server_preset
return existing_preset
# Custom at launch server preset
def generate_default_presets(self):
for preset_name, preset_data in DEFAULT_PRESETS.items():
existing_preset = self.get_server_preset(preset_name)
if existing_preset :
continue
new_server = self.server_preset.add()
new_server.name = str(uuid4())
new_server.server_name = preset_data.get('server_name')
new_server.ip = preset_data.get('ip')
new_server.port = preset_data.get('port')
new_server.use_server_password = preset_data.get('use_server_password',False)
new_server.server_password = preset_data.get('server_password',None)
new_server.use_admin_password = preset_data.get('use_admin_password',False)
new_server.admin_password = preset_data.get('admin_password',None)
new_db.icon = type_module_class.bl_icon
new_db.bl_name = type_module_class.bl_id
def client_list_callback(scene, context):
@ -619,11 +543,6 @@ class SessionUser(bpy.types.PropertyGroup):
"""
username: bpy.props.StringProperty(name="username")
current_frame: bpy.props.IntProperty(name="current_frame")
color: bpy.props.FloatVectorProperty(name="color", subtype="COLOR",
min=0.0,
max=1.0,
size=4,
default=(1.0, 1.0, 1.0, 1.0))
class SessionProps(bpy.types.PropertyGroup):
@ -653,11 +572,6 @@ class SessionProps(bpy.types.PropertyGroup):
description='Enable user overlay ',
default=True,
)
presence_show_mode: bpy.props.BoolProperty(
name="Show users current mode",
description='Enable user mode overlay ',
default=False,
)
presence_show_far_user: bpy.props.BoolProperty(
name="Show users on different scenes",
description="Show user on different scenes",
@ -673,16 +587,22 @@ class SessionProps(bpy.types.PropertyGroup):
description='Show only owned datablocks',
default=True
)
filter_name: bpy.props.StringProperty(
name="filter_name",
default="",
description='Node name filter',
)
admin: bpy.props.BoolProperty(
name="admin",
description='Connect as admin',
default=False
)
password: bpy.props.StringProperty(
name="password",
default=random_string_digits(),
description='Session password',
subtype='PASSWORD'
)
internet_ip: bpy.props.StringProperty(
name="internet ip",
default="no found",
description='Internet interface ip',
)
user_snap_running: bpy.props.BoolProperty(
default=False
)
@ -692,6 +612,37 @@ class SessionProps(bpy.types.PropertyGroup):
is_host: bpy.props.BoolProperty(
default=False
)
replay_files: bpy.props.CollectionProperty(
name='File paths',
type=bpy.types.OperatorFileListElement
)
replay_persistent_collection: bpy.props.BoolProperty(
name="replay_persistent_collection",
description='Enable a collection that persists across frame loading',
get=get_replay_persistent_collection,
set=set_replay_persistent_collection,
)
replay_mode: bpy.props.EnumProperty(
name='replay method',
description='Replay in keyframe (timeline) or manually',
items={
('TIMELINE', 'TIMELINE', 'Replay from the timeline.'),
('MANUAL', 'MANUAL', 'Replay manually, from the replay frame widget.')},
default='TIMELINE')
replay_duration: bpy.props.IntProperty(
name='replay interval',
default=250,
min=10,
set=set_replay_duration,
get=get_replay_duration,
)
replay_frame_current: bpy.props.IntProperty(
name='replay_frame_current',
)
replay_camera: bpy.props.PointerProperty(
name='Replay camera',
type=bpy.types.Object
)
classes = (
@ -699,7 +650,6 @@ classes = (
SessionProps,
ReplicationFlags,
ReplicatedDatablock,
ServerPreset,
SessionPrefs,
)
@ -715,13 +665,20 @@ def register():
logging.debug('Generating bl_types preferences')
prefs.generate_supported_types()
# at launch server presets
prefs.generate_default_presets()
bpy.types.Scene.active_replay_file = bpy.props.IntProperty(
name="active_replay_file",
default=0,
min=0,
description='Active snapshot',
set=set_active_replay,
get=get_active_replay,
options={'ANIMATABLE'}
)
def unregister():
from bpy.utils import unregister_class
for cls in reversed(classes):
unregister_class(cls)
del bpy.types.Scene.active_replay_file
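
`set_replay_duration` above spreads the replay snapshots evenly over the timeline with `numpy.interp`: snapshot index `i` out of `N` is mapped linearly onto `[frame_start, duration]`, and each mapped frame becomes a keyframe on the `active_replay_file` f-curve. A worked miniature with plain numbers standing in for the Blender f-curve calls:

```python
# How the replay keyframes are spaced: index -> frame via linear interpolation.
from numpy import interp

frame_start = 1
duration = 250          # value passed to set_replay_duration
files_count = 10        # len(replay_files) - 1

keyframes = [(interp(i, [0, files_count], [frame_start, duration]), i)
             for i in range(files_count)]
print(keyframes[:3])    # e.g. [(1.0, 0), (25.9, 1), (50.8, 2)]
```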

View File

@ -30,7 +30,7 @@ import mathutils
from bpy_extras import view3d_utils
from gpu_extras.batch import batch_for_shader
from replication.constants import (STATE_ACTIVE, STATE_AUTH, STATE_CONFIG,
STATE_INITIAL, CONNECTING,
STATE_INITIAL, STATE_LAUNCHING_SERVICES,
STATE_LOBBY, STATE_QUITTING, STATE_SRV_SYNC,
STATE_SYNCING, STATE_WAITING)
from replication.interface import session
@ -94,41 +94,15 @@ def project_to_viewport(region: bpy.types.Region, rv3d: bpy.types.RegionView3D,
return [target.x, target.y, target.z]
def bbox_from_obj(obj: bpy.types.Object, index: int = 1) -> list:
def bbox_from_obj(obj: bpy.types.Object, radius: float) -> list:
""" Generate a bounding box for a given object by using its world matrix
:param obj: target object
:type obj: bpy.types.Object
:param index: indice offset
:type index: int
:return: list of 8 points [(x,y,z),...], list of 12 links between these points [(1,2),...]
:param radius: bounding box radius
:type radius: float
:return: list of 8 points [(x,y,z),...]
"""
radius = 1.0 # Radius of the bounding box
index = 8*index
vertex_indices = (
(0+index, 1+index), (0+index, 2+index), (1+index, 3+index), (2+index, 3+index),
(4+index, 5+index), (4+index, 6+index), (5+index, 7+index), (6+index, 7+index),
(0+index, 4+index), (1+index, 5+index), (2+index, 6+index), (3+index, 7+index))
if obj.type == 'EMPTY':
radius = obj.empty_display_size
elif obj.type == 'LIGHT':
radius = obj.data.shadow_soft_size
elif obj.type == 'LIGHT_PROBE':
radius = obj.data.influence_distance
elif obj.type == 'CAMERA':
radius = obj.data.display_size
elif hasattr(obj, 'bound_box'):
vertex_indices = (
(0+index, 1+index), (1+index, 2+index),
(2+index, 3+index), (0+index, 3+index),
(4+index, 5+index), (5+index, 6+index),
(6+index, 7+index), (4+index, 7+index),
(0+index, 4+index), (1+index, 5+index),
(2+index, 6+index), (3+index, 7+index))
vertex_pos = get_bb_coords_from_obj(obj)
return vertex_pos, vertex_indices
coords = [
(-radius, -radius, -radius), (+radius, -radius, -radius),
(-radius, +radius, -radius), (+radius, +radius, -radius),
@ -138,32 +112,9 @@ def bbox_from_obj(obj: bpy.types.Object, index: int = 1) -> list:
base = obj.matrix_world
bbox_corners = [base @ mathutils.Vector(corner) for corner in coords]
vertex_pos = [(point.x, point.y, point.z) for point in bbox_corners]
return [(point.x, point.y, point.z)
for point in bbox_corners]
return vertex_pos, vertex_indices
def bbox_from_instance_collection(ic: bpy.types.Object, index: int = 0) -> list:
""" Generate a bounding box for a given instance collection by using its objects
:param ic: target instance collection
:type ic: bpy.types.Object
:param index: indice offset
:type index: int
:return: list of 8*objs points [(x,y,z),...], tuple of 12*objs links between these points [(1,2),...]
"""
vertex_pos = []
vertex_indices = ()
for obj_index, obj in enumerate(ic.instance_collection.objects):
vertex_pos_temp, vertex_indices_temp = bbox_from_obj(obj, index=index+obj_index)
vertex_pos += vertex_pos_temp
vertex_indices += vertex_indices_temp
bbox_corners = [ic.matrix_world @ mathutils.Vector(vertex) for vertex in vertex_pos]
vertex_pos = [(point.x, point.y, point.z) for point in bbox_corners]
return vertex_pos, vertex_indices
def generate_user_camera() -> list:
""" Generate a basic camera represention of the user point of view
@ -252,13 +203,6 @@ class Widget(object):
"""
return True
def configure_bgl(self):
bgl.glLineWidth(2.)
bgl.glEnable(bgl.GL_DEPTH_TEST)
bgl.glEnable(bgl.GL_BLEND)
bgl.glEnable(bgl.GL_LINE_SMOOTH)
def draw(self):
"""How to draw the widget
"""
@ -312,6 +256,11 @@ class UserFrustumWidget(Widget):
{"pos": positions},
indices=self.indices)
bgl.glLineWidth(2.)
bgl.glEnable(bgl.GL_DEPTH_TEST)
bgl.glEnable(bgl.GL_BLEND)
bgl.glEnable(bgl.GL_LINE_SMOOTH)
shader.bind()
shader.uniform_float("color", self.data.get('color'))
batch.draw(shader)
@ -323,8 +272,6 @@ class UserSelectionWidget(Widget):
username):
self.username = username
self.settings = bpy.context.window_manager.session
self.current_selection_ids = []
self.current_selected_objects = []
@property
def data(self):
@ -334,15 +281,6 @@ class UserSelectionWidget(Widget):
else:
return None
@property
def selected_objects(self):
user_selection = self.data.get('selected_objects')
if self.current_selection_ids != user_selection:
self.current_selected_objects = [find_from_attr("uuid", uid, bpy.data.objects) for uid in user_selection]
self.current_selection_ids = user_selection
return self.current_selected_objects
def poll(self):
if self.data is None:
return False
@ -357,32 +295,49 @@ class UserSelectionWidget(Widget):
self.settings.enable_presence
def draw(self):
vertex_pos = []
vertex_ind = []
collection_offset = 0
for obj_index, obj in enumerate(self.selected_objects):
if obj is None:
continue
obj_index+=collection_offset
if hasattr(obj, 'instance_collection') and obj.instance_collection:
bbox_pos, bbox_ind = bbox_from_instance_collection(obj, index=obj_index)
collection_offset+=len(obj.instance_collection.objects)-1
else :
bbox_pos, bbox_ind = bbox_from_obj(obj, index=obj_index)
vertex_pos += bbox_pos
vertex_ind += bbox_ind
user_selection = self.data.get('selected_objects')
for select_ob in user_selection:
ob = find_from_attr("uuid", select_ob, bpy.data.objects)
if not ob:
return
vertex_pos = bbox_from_obj(ob, 1.0)
vertex_indices = ((0, 1), (0, 2), (1, 3), (2, 3),
(4, 5), (4, 6), (5, 7), (6, 7),
(0, 4), (1, 5), (2, 6), (3, 7))
if ob.instance_collection:
for obj in ob.instance_collection.objects:
if obj.type == 'MESH' and hasattr(obj, 'bound_box'):
vertex_pos = get_bb_coords_from_obj(obj, instance=ob)
break
elif ob.type == 'EMPTY':
vertex_pos = bbox_from_obj(ob, ob.empty_display_size)
elif ob.type == 'LIGHT':
vertex_pos = bbox_from_obj(ob, ob.data.shadow_soft_size)
elif ob.type == 'LIGHT_PROBE':
vertex_pos = bbox_from_obj(ob, ob.data.influence_distance)
elif ob.type == 'CAMERA':
vertex_pos = bbox_from_obj(ob, ob.data.display_size)
elif hasattr(ob, 'bound_box'):
vertex_indices = (
(0, 1), (1, 2), (2, 3), (0, 3),
(4, 5), (5, 6), (6, 7), (4, 7),
(0, 4), (1, 5), (2, 6), (3, 7))
vertex_pos = get_bb_coords_from_obj(ob)
shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
batch = batch_for_shader(
shader,
'LINES',
{"pos": vertex_pos},
indices=vertex_ind)
indices=vertex_indices)
shader.bind()
shader.uniform_float("color", self.data.get('color'))
batch.draw(shader)
class UserNameWidget(Widget):
draw_type = 'POST_PIXEL'
@ -425,62 +380,6 @@ class UserNameWidget(Widget):
blf.color(0, color[0], color[1], color[2], color[3])
blf.draw(0, self.username)
class UserModeWidget(Widget):
draw_type = 'POST_PIXEL'
def __init__(
self,
username):
self.username = username
self.settings = bpy.context.window_manager.session
self.preferences = get_preferences()
@property
def data(self):
user = session.online_users.get(self.username)
if user:
return user.get('metadata')
else:
return None
def poll(self):
if self.data is None:
return False
scene_current = self.data.get('scene_current')
mode_current = self.data.get('mode_current')
user_selection = self.data.get('selected_objects')
return (scene_current == bpy.context.scene.name or
mode_current == bpy.context.mode or
self.settings.presence_show_far_user) and \
user_selection and \
self.settings.presence_show_mode and \
self.settings.enable_presence
def draw(self):
user_selection = self.data.get('selected_objects')
area, region, rv3d = view3d_find()
viewport_coord = project_to_viewport(region, rv3d, (0, 0))
obj = find_from_attr("uuid", user_selection[0], bpy.data.objects)
if not obj:
return
mode_current = self.data.get('mode_current')
color = self.data.get('color')
origin_coord = project_to_screen(obj.location)
distance_viewport_object = math.sqrt((viewport_coord[0]-obj.location[0])**2+(viewport_coord[1]-obj.location[1])**2+(viewport_coord[2]-obj.location[2])**2)
if distance_viewport_object > self.preferences.presence_mode_distance :
return
if origin_coord :
blf.position(0, origin_coord[0]+8, origin_coord[1]-15, 0)
blf.size(0, 16, 72)
blf.color(0, color[0], color[1], color[2], color[3])
blf.draw(0, mode_current)
class SessionStatusWidget(Widget):
draw_type = 'POST_PIXEL'
@ -500,7 +399,7 @@ class SessionStatusWidget(Widget):
text_scale = self.preferences.presence_hud_scale
ui_scale = bpy.context.preferences.view.ui_scale
color = [1, 1, 0, 1]
state = session.state
state = session.state.get('STATE')
state_str = f"{get_state_str(state)}"
if state == STATE_ACTIVE:
@ -563,7 +462,6 @@ class DrawFactory(object):
try:
for widget in self.widgets.values():
if widget.draw_type == 'POST_VIEW' and widget.poll():
widget.configure_bgl()
widget.draw()
except Exception as e:
logging.error(
@ -573,7 +471,6 @@ class DrawFactory(object):
try:
for widget in self.widgets.values():
if widget.draw_type == 'POST_PIXEL' and widget.poll():
widget.configure_bgl()
widget.draw()
except Exception as e:
logging.error(
@ -587,7 +484,6 @@ this.renderer = DrawFactory()
def register():
this.renderer.register_handlers()
this.renderer.add_widget("session_status", SessionStatusWidget())
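
The selection widget above outlines objects by transforming eight bounding-box corners with the object's `matrix_world` and drawing twelve edges as a `'LINES'` batch. A self-contained sketch of that gpu/batch pattern; colour and radius are example values, and in the add-on this runs inside a `SpaceView3D` draw handler (`bpy.types.SpaceView3D.draw_handler_add(..., 'WINDOW', 'POST_VIEW')`):

```python
import bpy
import gpu
import mathutils
from gpu_extras.batch import batch_for_shader


def draw_bbox(obj: bpy.types.Object, color=(1.0, 0.5, 0.0, 1.0), radius=1.0):
    # Eight cube corners around the object's origin, in local space
    corners = [
        (-radius, -radius, -radius), (+radius, -radius, -radius),
        (-radius, +radius, -radius), (+radius, +radius, -radius),
        (-radius, -radius, +radius), (+radius, -radius, +radius),
        (-radius, +radius, +radius), (+radius, +radius, +radius)]
    # Bring them into world space with the object's matrix
    positions = [tuple(obj.matrix_world @ mathutils.Vector(c)) for c in corners]
    # Twelve edges linking the corners
    indices = ((0, 1), (0, 2), (1, 3), (2, 3),
               (4, 5), (4, 6), (5, 7), (6, 7),
               (0, 4), (1, 5), (2, 6), (3, 7))
    shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
    batch = batch_for_shader(shader, 'LINES', {"pos": positions}, indices=indices)
    shader.bind()
    shader.uniform_float("color", color)
    batch.draw(shader)
```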

View File

@ -1,48 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# ##### END GPL LICENSE BLOCK #####
from replication.constants import STATE_INITIAL
class SessionData():
""" A structure to share easily the current session data across the addon
modules.
This object will completely replace the Singleton lying in replication
interface module.
"""
def __init__(self):
self.repository = None # The current repository
self.remote = None # The active remote
self.server = None
self.applied_updates = []
@property
def state(self):
if self.remote is None:
return STATE_INITIAL
else:
return self.remote.connection_status
def clear(self):
self.remote = None
self.repository = None
self.server = None
self.applied_updates = []
session = SessionData()

View File

@ -24,15 +24,12 @@ from replication.constants import (FETCHED, RP_COMMON, STATE_ACTIVE,
STATE_SRV_SYNC, STATE_SYNCING, UP)
from replication.exception import NonAuthorizedOperationError, ContextError
from replication.interface import session
from replication import porcelain
from . import operators, utils
from .presence import (UserFrustumWidget, UserNameWidget, UserModeWidget, UserSelectionWidget,
from .presence import (UserFrustumWidget, UserNameWidget, UserSelectionWidget,
generate_user_camera, get_view_matrix, refresh_3d_view,
refresh_sidebar_view, renderer)
from . import shared_data
this = sys.modules[__name__]
# Registered timers
@ -41,8 +38,7 @@ this.registry = dict()
def is_annotating(context: bpy.types.Context):
""" Check if the annotate mode is enabled
"""
active_tool = bpy.context.workspace.tools.from_space_view3d_mode('OBJECT', create=False)
return (active_tool and active_tool.idname == 'builtin.annotate')
return bpy.context.workspace.tools.from_space_view3d_mode('OBJECT', create=False).idname == 'builtin.annotate'
class Timer(object):
@ -75,8 +71,7 @@ class Timer(object):
except Exception as e:
logging.error(e)
self.unregister()
traceback.print_exc()
session.disconnect(reason=f"Error during timer {self.id} execution")
session.disconnect()
else:
if self.is_running:
return self._timeout
@ -103,49 +98,45 @@ class SessionBackupTimer(Timer):
def execute(self):
session.repository.dumps(self._filepath)
class SessionListenTimer(Timer):
def execute(self):
session.listen()
session.save(self._filepath)
class ApplyTimer(Timer):
def execute(self):
if session and session.state == STATE_ACTIVE:
for node in session.repository.graph.keys():
node_ref = session.repository.graph.get(node)
if session and session.state['STATE'] == STATE_ACTIVE:
nodes = session.list()
for node in nodes:
node_ref = session.get(uuid=node)
if node_ref.state == FETCHED:
try:
shared_data.session.applied_updates.append(node)
porcelain.apply(session.repository, node)
session.apply(node)
except Exception as e:
logging.error(f"Fail to apply {node_ref.uuid}")
traceback.print_exc()
else:
impl = session.repository.rdp.get_implementation(node_ref.instance)
if impl.bl_reload_parent:
for parent in session.repository.graph.get_parents(node):
if node_ref.bl_reload_parent:
for parent in session._graph.find_parents(node):
logging.debug("Refresh parent {node}")
porcelain.apply(session.repository,
parent.uuid,
force=True)
if hasattr(impl, 'bl_reload_child') and impl.bl_reload_child:
for dep in node_ref.dependencies:
porcelain.apply(session.repository,
dep,
force=True)
session.apply(parent, force=True)
class AnnotationUpdates(Timer):
def __init__(self, timeout=1):
self._annotating = False
self._settings = utils.get_preferences()
class DynamicRightSelectTimer(Timer):
def __init__(self, timeout=.1):
super().__init__(timeout)
self._last_selection = []
self._user = None
self._annotating = False
def execute(self):
if session and session.state == STATE_ACTIVE:
settings = utils.get_preferences()
if session and session.state['STATE'] == STATE_ACTIVE:
# Find user
if self._user is None:
self._user = session.online_users.get(settings.username)
if self._user:
ctx = bpy.context
annotation_gp = ctx.scene.grease_pencil
@ -154,85 +145,80 @@ class AnnotationUpdates(Timer):
# if an annotation exists and is tracked
if annotation_gp and annotation_gp.uuid:
registered_gp = session.repository.graph.get(annotation_gp.uuid)
registered_gp = session.get(uuid=annotation_gp.uuid)
if is_annotating(bpy.context):
# try to get the right on it
if registered_gp.owner == RP_COMMON:
self._annotating = True
logging.debug(
"Getting the right on the annotation GP")
porcelain.lock(session.repository,
[registered_gp.uuid],
session.change_owner(
registered_gp.uuid,
settings.username,
ignore_warnings=True,
affect_dependencies=False)
if registered_gp.owner == self._settings.username:
porcelain.commit(session.repository, annotation_gp.uuid)
porcelain.push(session.repository, 'origin', annotation_gp.uuid)
if registered_gp.owner == settings.username:
gp_node = session.get(uuid=annotation_gp.uuid)
if gp_node.has_changed():
session.commit(gp_node.uuid)
session.push(gp_node.uuid, check_data=False)
elif self._annotating:
porcelain.unlock(session.repository,
[registered_gp.uuid],
session.change_owner(
registered_gp.uuid,
RP_COMMON,
ignore_warnings=True,
affect_dependencies=False)
self._annotating = False
class DynamicRightSelectTimer(Timer):
def __init__(self, timeout=.1):
super().__init__(timeout)
self._last_selection = set()
self._user = None
def execute(self):
settings = utils.get_preferences()
if session and session.state == STATE_ACTIVE:
# Find user
if self._user is None:
self._user = session.online_users.get(settings.username)
if self._user:
current_selection = set(utils.get_selected_objects(
current_selection = utils.get_selected_objects(
bpy.context.scene,
bpy.data.window_managers['WinMan'].windows[0].view_layer
))
)
if current_selection != self._last_selection:
to_lock = list(current_selection.difference(self._last_selection))
to_release = list(self._last_selection.difference(current_selection))
instances_to_lock = list()
obj_common = [
o for o in self._last_selection if o not in current_selection]
obj_ours = [
o for o in current_selection if o not in self._last_selection]
for node_id in list(to_lock):  # iterate over a copy: items may be removed below
node = session.repository.graph.get(node_id)
if node and hasattr(node,'data'):
instance_mode = node.data.get('instance_type')
if instance_mode and instance_mode == 'COLLECTION':
to_lock.remove(node_id)
instances_to_lock.append(node_id)
if instances_to_lock:
try:
porcelain.lock(session.repository,
instances_to_lock,
ignore_warnings=True,
affect_dependencies=False)
except NonAuthorizedOperationError as e:
logging.warning(e)
# change old selection right to common
for obj in obj_common:
node = session.get(uuid=obj)
if to_release:
if node and (node.owner == settings.username or node.owner == RP_COMMON):
recursive = True
if node.data and 'instance_type' in node.data.keys():
recursive = node.data['instance_type'] != 'COLLECTION'
try:
porcelain.unlock(session.repository,
to_release,
session.change_owner(
node.uuid,
RP_COMMON,
ignore_warnings=True,
affect_dependencies=True)
except NonAuthorizedOperationError as e:
logging.warning(e)
if to_lock:
affect_dependencies=recursive)
except NonAuthorizedOperationError:
logging.warning(
f"Not authorized to change {node} owner")
# change new selection to our
for obj in obj_ours:
node = session.get(uuid=obj)
if node and node.owner == RP_COMMON:
recursive = True
if node.data and 'instance_type' in node.data.keys():
recursive = node.data['instance_type'] != 'COLLECTION'
try:
porcelain.lock(session.repository,
to_lock,
session.change_owner(
node.uuid,
settings.username,
ignore_warnings=True,
affect_dependencies=True)
except NonAuthorizedOperationError as e:
logging.warning(e)
affect_dependencies=recursive)
except NonAuthorizedOperationError:
logging.warning(
f"Not authorized to change {node} owner")
else:
return
self._last_selection = current_selection
@ -240,29 +226,31 @@ class DynamicRightSelectTimer(Timer):
'selected_objects': current_selection
}
porcelain.update_user_metadata(session.repository, user_metadata)
session.update_user_metadata(user_metadata)
logging.debug("Update selection")
# Fix deselection until the rights management refactoring (with Roles concepts)
if len(current_selection) == 0:
owned_keys = [k for k, v in session.repository.graph.items() if v.owner==settings.username]
if owned_keys:
owned_keys = session.list(
filter_owner=settings.username)
for key in owned_keys:
node = session.get(uuid=key)
try:
porcelain.unlock(session.repository,
owned_keys,
session.change_owner(
key,
RP_COMMON,
ignore_warnings=True,
affect_dependencies=True)
except NonAuthorizedOperationError as e:
logging.warning(e)
affect_dependencies=recursive)
except NonAuthorizedOperationError:
logging.warning(
f"Not authorized to change {key} owner")
# Objects selectability
for obj in bpy.data.objects:
object_uuid = getattr(obj, 'uuid', None)
if object_uuid:
is_selectable = not session.repository.is_node_readonly(object_uuid)
is_selectable = not session.is_readonly(object_uuid)
if obj.hide_select != is_selectable:
obj.hide_select = is_selectable
shared_data.session.applied_updates.append(object_uuid)
class ClientUpdate(Timer):
@ -275,7 +263,7 @@ class ClientUpdate(Timer):
settings = utils.get_preferences()
if session and renderer:
if session.state in [STATE_ACTIVE, STATE_LOBBY]:
if session.state['STATE'] in [STATE_ACTIVE, STATE_LOBBY]:
local_user = session.online_users.get(
settings.username)
@ -312,24 +300,20 @@ class ClientUpdate(Timer):
settings.client_color.b,
1),
'frame_current': bpy.context.scene.frame_current,
'scene_current': scene_current,
'mode_current': bpy.context.mode
'scene_current': scene_current
}
porcelain.update_user_metadata(session.repository, metadata)
session.update_user_metadata(metadata)
# Update client representation
# Update client current scene
elif scene_current != local_user_metadata['scene_current']:
local_user_metadata['scene_current'] = scene_current
porcelain.update_user_metadata(session.repository, local_user_metadata)
session.update_user_metadata(local_user_metadata)
elif 'view_corners' in local_user_metadata and current_view_corners != local_user_metadata['view_corners']:
local_user_metadata['view_corners'] = current_view_corners
local_user_metadata['view_matrix'] = get_view_matrix(
)
porcelain.update_user_metadata(session.repository, local_user_metadata)
elif bpy.context.mode != local_user_metadata['mode_current']:
local_user_metadata['mode_current'] = bpy.context.mode
porcelain.update_user_metadata(session.repository, local_user_metadata)
session.update_user_metadata(local_user_metadata)
class SessionStatusUpdate(Timer):
@ -357,7 +341,6 @@ class SessionUserSync(Timer):
renderer.remove_widget(f"{user.username}_cam")
renderer.remove_widget(f"{user.username}_select")
renderer.remove_widget(f"{user.username}_name")
renderer.remove_widget(f"{user.username}_mode")
ui_users.remove(index)
break
@ -373,8 +356,6 @@ class SessionUserSync(Timer):
f"{user}_select", UserSelectionWidget(user))
renderer.add_widget(
f"{user}_name", UserNameWidget(user))
renderer.add_widget(
f"{user}_mode", UserModeWidget(user))
class MainThreadExecutor(Timer):
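
The diff truncates the `Timer` base class, but every subclass above (`SessionBackupTimer`, `ApplyTimer`, `DynamicRightSelectTimer`, `ClientUpdate`, ...) only overrides `execute()`. A minimal sketch of the underlying scheduling, assuming Blender's `bpy.app.timers` API (the helper name is illustrative, not from the add-on):

```python
import bpy

def register_repeating(execute, timeout=1.0):
    """Re-run `execute` every `timeout` seconds until it raises."""
    def _tick():
        try:
            execute()
        except Exception:
            return None      # returning None unregisters the timer (cf. Timer.unregister)
        return timeout       # returning a delay re-schedules the callback
    bpy.app.timers.register(_tick, first_interval=timeout)
```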

View File

@ -16,9 +16,7 @@
# ##### END GPL LICENSE BLOCK #####
from logging import log
import bpy
import bpy.utils.previews
from .utils import get_preferences, get_expanded_icon, get_folder_size, get_state_str
from replication.constants import (ADDED, ERROR, FETCHED,
@ -28,7 +26,7 @@ from replication.constants import (ADDED, ERROR, FETCHED,
STATE_INITIAL, STATE_SRV_SYNC,
STATE_WAITING, STATE_QUITTING,
STATE_LOBBY,
CONNECTING)
STATE_LAUNCHING_SERVICES)
from replication import __version__
from replication.interface import session
from .timers import registry
@ -62,41 +60,7 @@ def printProgressBar(iteration, total, prefix='', suffix='', decimals=1, length=
bar = fill * filledLength + fill_empty * (length - filledLength)
return f"{prefix} |{bar}| {iteration}/{total}{suffix}"
def get_mode_icon(mode_name: str) -> str:
""" given a mode name retrieve a built-in icon
"""
mode_icon = "NONE"
if mode_name == "OBJECT" :
mode_icon = "OBJECT_DATAMODE"
elif mode_name == "EDIT_MESH" :
mode_icon = "EDITMODE_HLT"
elif mode_name == 'EDIT_CURVE':
mode_icon = "CURVE_DATA"
elif mode_name == 'EDIT_SURFACE':
mode_icon = "SURFACE_DATA"
elif mode_name == 'EDIT_TEXT':
mode_icon = "FILE_FONT"
elif mode_name == 'EDIT_ARMATURE':
mode_icon = "ARMATURE_DATA"
elif mode_name == 'EDIT_METABALL':
mode_icon = "META_BALL"
elif mode_name == 'EDIT_LATTICE':
mode_icon = "LATTICE_DATA"
elif mode_name == 'POSE':
mode_icon = "POSE_HLT"
elif mode_name == 'SCULPT':
mode_icon = "SCULPTMODE_HLT"
elif mode_name == 'PAINT_WEIGHT':
mode_icon = "WPAINT_HLT"
elif mode_name == 'PAINT_VERTEX':
mode_icon = "VPAINT_HLT"
elif mode_name == 'PAINT_TEXTURE':
mode_icon = "TPAINT_HLT"
elif mode_name == 'PARTICLE':
mode_icon = "PARTICLES"
elif mode_name in ('PAINT_GPENCIL', 'EDIT_GPENCIL', 'SCULPT_GPENCIL', 'WEIGHT_GPENCIL', 'VERTEX_GPENCIL'):
mode_icon = "GREASEPENCIL"
return mode_icon
class SESSION_PT_settings(bpy.types.Panel):
"""Settings panel"""
bl_idname = "MULTIUSER_SETTINGS_PT_panel"
@ -107,180 +71,157 @@ class SESSION_PT_settings(bpy.types.Panel):
def draw_header(self, context):
layout = self.layout
settings = get_preferences()
from multi_user import icons
offline_icon = icons.icons_col["session_status_offline"]
waiting_icon = icons.icons_col["session_status_waiting"]
online_icon = icons.icons_col["session_status_online"]
if session and session.state != STATE_INITIAL:
if session and session.state['STATE'] != STATE_INITIAL:
cli_state = session.state
state = session.state
connection_icon = offline_icon
state = session.state.get('STATE')
connection_icon = "KEYTYPE_MOVING_HOLD_VEC"
if state == STATE_ACTIVE:
connection_icon = online_icon
connection_icon = 'PROP_ON'
else:
connection_icon = waiting_icon
connection_icon = 'PROP_CON'
layout.label(text=f"{str(settings.server_name)} - {get_state_str(cli_state)}", icon_value=connection_icon.icon_id)
layout.label(text=f"Session - {get_state_str(cli_state['STATE'])}", icon=connection_icon)
else:
layout.label(text=f"Multi-user - v{__version__}", icon="ANTIALIASED")
layout.label(text=f"Session - v{__version__}",icon="PROP_OFF")
def draw(self, context):
layout = self.layout
row = layout.row()
runtime_settings = context.window_manager.session
settings = get_preferences()
if settings.is_first_launch:
# USER SETTINGS
row = layout.row()
row.label(text="1. Enter your username and color:")
row = layout.row()
split = row.split(factor=0.7, align=True)
split.prop(settings, "username", text="")
split.prop(settings, "client_color", text="")
# DOC
row = layout.row()
row.label(text="2. New here ? See the doc:")
row = layout.row()
row.operator("doc.get", text="Documentation", icon="HELP")
# START
row = layout.row()
row.label(text="3: Start the Multi-user:")
row = layout.row()
row.scale_y = 2
row.operator("firstlaunch.verify", text="Continue")
if not settings.is_first_launch:
if hasattr(context.window_manager, 'session'):
# STATE INITIAL
if not session \
or (session and session.state == STATE_INITIAL):
layout = self.layout
settings = get_preferences()
server_preset = settings.server_preset
selected_server = context.window_manager.server_index if context.window_manager.server_index <= len(server_preset) - 1 else 0
active_server_name = server_preset[selected_server].name if len(server_preset) >= 1 else ""
is_server_selected = bool(active_server_name)
# SERVER LIST
row = layout.row()
box = row.box()
box.scale_y = 0.7
split = box.split(factor=0.7)
split.label(text="Server")
split.label(text="Online")
col = row.column(align=True)
col.operator("session.get_info", icon="FILE_REFRESH", text="")
row = layout.row()
col = row.column(align=True)
col.template_list("SESSION_UL_network", "", settings, "server_preset", context.window_manager, "server_index")
col.separator()
connectOp = col.row()
connectOp.enabled =is_server_selected
connectOp.operator("session.connect", text="Connect")
col = row.column(align=True)
col.operator("session.preset_server_add", icon="ADD", text="") # TODO : add conditions (need a name, etc..)
row_visible = col.row(align=True)
col_visible = row_visible.column(align=True)
col_visible.enabled = is_server_selected
col_visible.operator("session.preset_server_remove", icon="REMOVE", text="").target_server_name = active_server_name
col_visible.separator()
col_visible.operator("session.preset_server_edit", icon="GREASEPENCIL", text="").target_server_name = active_server_name
or (session and session.state['STATE'] == STATE_INITIAL):
pass
else:
exitbutton = layout.row()
exitbutton.scale_y = 1.5
exitbutton.operator("session.stop", icon='QUIT', text="Disconnect")
cli_state = session.state
row = layout.row()
progress = session.state_progress
current_state = session.state
current_state = cli_state['STATE']
info_msg = None
if current_state == STATE_LOBBY:
usr = session.online_users.get(settings.username)
if current_state in [STATE_ACTIVE]:
row = row.grid_flow(row_major=True, columns=0, even_columns=True, even_rows=False, align=True)
row.prop(settings.sync_flags, "sync_render_settings",text="",icon_only=True, icon='SCENE')
row.prop(settings.sync_flags, "sync_during_editmode", text="",icon_only=True, icon='EDITMODE_HLT')
row.prop(settings.sync_flags, "sync_active_camera", text="",icon_only=True, icon='OBJECT_DATAMODE')
row= layout.row()
if current_state in [STATE_ACTIVE] and runtime_settings.is_host:
info_msg = f"LAN: {runtime_settings.internet_ip}"
if current_state == STATE_LOBBY:
info_msg = "Waiting for the session to start."
if usr and usr['admin']:
info_msg = "Init the session to start."
info_box = layout.row()
info_box.label(text=info_msg,icon='INFO')
init_row = layout.row()
init_row.operator("session.init", icon='TOOL_SETTINGS', text="Init")
else:
info_box = layout.row()
if info_msg:
info_box = row.box()
info_box.row().label(text=info_msg,icon='INFO')
# PROGRESS BAR
# Progress bar
if current_state in [STATE_SYNCING, STATE_SRV_SYNC, STATE_WAITING]:
row= layout.row()
row.label(text=f"Status: {get_state_str(current_state)}")
row= layout.row()
info_box = row.box()
info_box.label(text=printProgressBar(
progress['current'],
progress['total'],
info_box.row().label(text=printProgressBar(
cli_state['CURRENT'],
cli_state['TOTAL'],
length=16
))
class SESSION_PT_host_settings(bpy.types.Panel):
bl_idname = "MULTIUSER_SETTINGS_HOST_PT_panel"
bl_label = "Hosting"
layout.row().operator("session.stop", icon='QUIT', text="Exit")
class SESSION_PT_settings_network(bpy.types.Panel):
bl_idname = "MULTIUSER_SETTINGS_NETWORK_PT_panel"
bl_label = "Network"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
bl_options = {'DEFAULT_CLOSED'}
@classmethod
def poll(cls, context):
settings = get_preferences()
return not session \
or (session and session.state == 0) \
and not settings.sidebar_advanced_shown \
and not settings.is_first_launch
or (session and session.state['STATE'] == 0)
def draw_header(self, context):
self.layout.label(text="", icon='NETWORK_DRIVE')
self.layout.label(text="", icon='URL')
def draw(self, context):
layout = self.layout
runtime_settings = context.window_manager.session
settings = get_preferences()
#HOST
host_selection = layout.row().box()
host_selection_row = host_selection.row()
host_selection_row.label(text="Init the session from:")
host_selection_row.prop(settings, "init_method", text="")
host_selection_row = host_selection.row()
host_selection_row.label(text="Port:")
host_selection_row.prop(settings, "host_port", text="")
host_selection_row = host_selection.row()
host_selection_col = host_selection_row.column()
host_selection_col.prop(settings, "host_use_server_password", text="Server password:")
host_selection_col = host_selection_row.column()
host_selection_col.enabled = bool(settings.host_use_server_password)
host_selection_col.prop(settings, "host_server_password", text="")
host_selection_row = host_selection.row()
host_selection_col = host_selection_row.column()
host_selection_col.prop(settings, "host_use_admin_password", text="Admin password:")
host_selection_col = host_selection_row.column()
host_selection_col.enabled = bool(settings.host_use_admin_password)
host_selection_col.prop(settings, "host_admin_password", text="")
# USER SETTINGS
row = layout.row()
row.prop(runtime_settings, "session_mode", expand=True)
row = layout.row()
host_selection = layout.column()
host_selection.operator("session.host", text="Host")
box = row.box()
if runtime_settings.session_mode == 'HOST':
row = box.row()
row.label(text="Port:")
row.prop(settings, "port", text="")
row = box.row()
row.label(text="Start from:")
row.prop(settings, "init_method", text="")
row = box.row()
row.label(text="Admin password:")
row.prop(runtime_settings, "password", text="")
row = box.row()
row.operator("session.start", text="HOST").host = True
else:
row = box.row()
row.prop(settings, "ip", text="IP")
row = box.row()
row.label(text="Port:")
row.prop(settings, "port", text="")
row = box.row()
row.prop(runtime_settings, "admin", text='Connect as admin', icon='DISCLOSURE_TRI_DOWN' if runtime_settings.admin
else 'DISCLOSURE_TRI_RIGHT')
if runtime_settings.admin:
row = box.row()
row.label(text="Password:")
row.prop(runtime_settings, "password", text="")
row = box.row()
row.operator("session.start", text="CONNECT").host = False
class SESSION_PT_settings_user(bpy.types.Panel):
bl_idname = "MULTIUSER_SETTINGS_USER_PT_panel"
bl_label = "User info"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
@classmethod
def poll(cls, context):
return not session \
or (session and session.state['STATE'] == 0)
def draw_header(self, context):
self.layout.label(text="", icon='USER')
def draw(self, context):
layout = self.layout
runtime_settings = context.window_manager.session
settings = get_preferences()
row = layout.row()
# USER SETTINGS
row.prop(settings, "username", text="name")
row = layout.row()
row.prop(settings, "client_color", text="color")
row = layout.row()
class SESSION_PT_advanced_settings(bpy.types.Panel):
bl_idname = "MULTIUSER_SETTINGS_REPLICATION_PT_panel"
bl_label = "General Settings"
bl_label = "Advanced"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
@ -288,34 +229,19 @@ class SESSION_PT_advanced_settings(bpy.types.Panel):
@classmethod
def poll(cls, context):
settings = get_preferences()
return not session \
or (session and session.state == 0) \
and not settings.sidebar_advanced_shown \
and not settings.is_first_launch
or (session and session.state['STATE'] == 0)
def draw_header(self, context):
self.layout.label(text="", icon='PREFERENCES')
def draw(self, context):
layout = self.layout
runtime_settings = context.window_manager.session
settings = get_preferences()
#ADVANCED USER INFO
uinfo_section = layout.row().box()
uinfo_section.prop(
settings,
"sidebar_advanced_uinfo_expanded",
text="User Info",
icon=get_expanded_icon(settings.sidebar_advanced_uinfo_expanded),
emboss=False)
if settings.sidebar_advanced_uinfo_expanded:
uinfo_section_row = uinfo_section.row()
uinfo_section_split = uinfo_section_row.split(factor=0.7, align=True)
uinfo_section_split.prop(settings, "username", text="")
uinfo_section_split.prop(settings, "client_color", text="")
#ADVANCED NET
net_section = layout.row().box()
net_section.prop(
settings,
@ -323,15 +249,15 @@ class SESSION_PT_advanced_settings(bpy.types.Panel):
text="Network",
icon=get_expanded_icon(settings.sidebar_advanced_net_expanded),
emboss=False)
if settings.sidebar_advanced_net_expanded:
net_section_row = net_section.row()
net_section_row.label(text="IPC Port:")
net_section_row.prop(settings, "ipc_port", text="")
net_section_row = net_section.row()
net_section_row.label(text="Timeout (ms):")
net_section_row.prop(settings, "connection_timeout", text="")
net_section_row = net_section.row()
net_section_row.label(text="Server ping (ms):")
net_section_row.prop(settings, "ping_timeout", text="")
#ADVANCED REPLICATION
replication_section = layout.row().box()
replication_section.prop(
settings,
@ -339,12 +265,16 @@ class SESSION_PT_advanced_settings(bpy.types.Panel):
text="Replication",
icon=get_expanded_icon(settings.sidebar_advanced_rep_expanded),
emboss=False)
if settings.sidebar_advanced_rep_expanded:
replication_section_row = replication_section.row()
replication_section_row = replication_section.row()
replication_section_row.prop(settings.sync_flags, "sync_render_settings")
replication_section_row = replication_section.row()
replication_section_row.prop(settings.sync_flags, "sync_active_camera")
replication_section_row = replication_section.row()
replication_section_row.prop(settings.sync_flags, "sync_during_editmode")
replication_section_row = replication_section.row()
if settings.sync_flags.sync_during_editmode:
@ -353,7 +283,7 @@ class SESSION_PT_advanced_settings(bpy.types.Panel):
replication_section_row = replication_section.row()
replication_section_row.prop(settings, "depsgraph_update_rate", text="Apply delay")
#ADVANCED CACHE
cache_section = layout.row().box()
cache_section.prop(
settings,
@ -371,8 +301,6 @@ class SESSION_PT_advanced_settings(bpy.types.Panel):
cache_section_row.prop(settings, "clear_memory_filecache", text="")
cache_section_row = cache_section.row()
cache_section_row.operator('session.clear_cache', text=f"Clear cache ({get_folder_size(settings.cache_directory)})")
#ADVANCED LOG
log_section = layout.row().box()
log_section.prop(
settings,
@ -380,11 +308,11 @@ class SESSION_PT_advanced_settings(bpy.types.Panel):
text="Logging",
icon=get_expanded_icon(settings.sidebar_advanced_log_expanded),
emboss=False)
if settings.sidebar_advanced_log_expanded:
log_section_row = log_section.row()
log_section_row.label(text="Log level:")
log_section_row.prop(settings, 'logging_level', text="")
class SESSION_PT_user(bpy.types.Panel):
bl_idname = "MULTIUSER_USER_PT_panel"
bl_label = "Online users"
@ -394,8 +322,7 @@ class SESSION_PT_user(bpy.types.Panel):
@classmethod
def poll(cls, context):
return session \
and session.state in [STATE_ACTIVE, STATE_LOBBY]
return session and session.state['STATE'] in [STATE_ACTIVE, STATE_LOBBY]
def draw_header(self, context):
self.layout.label(text="", icon='USER')
@ -407,40 +334,26 @@ class SESSION_PT_user(bpy.types.Panel):
settings = get_preferences()
active_user = online_users[selected_user] if len(
online_users)-1 >= selected_user else 0
runtime_settings = context.window_manager.session
#USER LIST
col = layout.column(align=True)
row = col.row(align=True)
row = row.split(factor=0.35, align=True)
# Create a simple row.
row = layout.row()
box = row.box()
split = box.split(factor=0.35)
split.label(text="user")
split = split.split(factor=0.5)
split.label(text="location")
split.label(text="frame")
split.label(text="ping")
box = row.box()
brow = box.row(align=True)
brow.label(text="user")
row = row.split(factor=0.25, align=True)
box = row.box()
brow = box.row(align=True)
brow.label(text="mode")
box = row.box()
brow = box.row(align=True)
brow.label(text="frame")
box = row.box()
brow = box.row(align=True)
brow.label(text="scene")
box = row.box()
brow = box.row(align=True)
brow.label(text="ping")
row = col.row(align=True)
row.template_list("SESSION_UL_users", "", context.window_manager,
row = layout.row()
layout.template_list("SESSION_UL_users", "", context.window_manager,
"online_users", context.window_manager, "user_index")
#OPERATOR ON USER
if active_user != 0 and active_user.username != settings.username:
row = layout.row()
user_operations = row.split()
if session.state == STATE_ACTIVE:
if session.state['STATE'] == STATE_ACTIVE:
user_operations.alert = context.window_manager.session.time_snap_running
user_operations.operator(
@ -468,8 +381,6 @@ class SESSION_UL_users(bpy.types.UIList):
ping = '-'
frame_current = '-'
scene_current = '-'
mode_current = '-'
mode_icon = 'BLANK1'
status_icon = 'BLANK1'
if session:
user = session.online_users.get(item.username)
@ -479,38 +390,59 @@ class SESSION_UL_users(bpy.types.UIList):
if metadata and 'frame_current' in metadata:
frame_current = str(metadata.get('frame_current','-'))
scene_current = metadata.get('scene_current','-')
mode_current = metadata.get('mode_current','-')
mode_icon = get_mode_icon(mode_current)
user_color = metadata.get('color',[1.0,1.0,1.0,1.0])
item.color = user_color
if user['admin']:
status_icon = 'FAKE_USER_ON'
row = layout.split(factor=0.35, align=True)
entry = row.row(align=True)
entry.scale_x = 0.05
entry.enabled = False
entry.prop(item, 'color', text="", event=False, full_event=False)
entry.enabled = True
entry.scale_x = 1.0
entry.label(icon=status_icon, text="")
entry.label(text=item.username)
split = layout.split(factor=0.35)
split.label(text=item.username, icon=status_icon)
split = split.split(factor=0.5)
split.label(text=scene_current)
split.label(text=frame_current)
split.label(text=ping)
row = row.split(factor=0.25, align=True)
entry = row.row()
entry.label(icon=mode_icon)
entry = row.row()
entry.label(text=frame_current)
entry = row.row()
entry.label(text=scene_current)
entry = row.row()
entry.label(text=ping)
class SESSION_PT_presence(bpy.types.Panel):
bl_idname = "MULTIUSER_MODULE_PT_panel"
bl_label = "Presence overlay"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
bl_options = {'DEFAULT_CLOSED'}
@classmethod
def poll(cls, context):
return not session \
or (session and session.state['STATE'] in [STATE_INITIAL, STATE_ACTIVE])
def draw_header(self, context):
self.layout.prop(context.window_manager.session,
"enable_presence", text="",icon='OVERLAY')
def draw(self, context):
layout = self.layout
settings = context.window_manager.session
pref = get_preferences()
layout.active = settings.enable_presence
col = layout.column()
col.prop(settings, "presence_show_session_status")
row = col.column()
row.active = settings.presence_show_session_status
row.prop(pref, "presence_hud_scale", expand=True)
row = col.column(align=True)
row.active = settings.presence_show_session_status
row.prop(pref, "presence_hud_hpos", expand=True)
row.prop(pref, "presence_hud_vpos", expand=True)
col.prop(settings, "presence_show_selected")
col.prop(settings, "presence_show_user")
row = layout.column()
row.active = settings.presence_show_user
row.prop(settings, "presence_show_far_user")
def draw_property(context, parent, property_uuid, level=0):
settings = get_preferences()
item = session.repository.graph.get(property_uuid)
type_id = item.data.get('type_id')
runtime_settings = context.window_manager.session
item = session.get(uuid=property_uuid)
area_msg = parent.row(align=True)
if item.state == ERROR:
@ -521,25 +453,23 @@ def draw_property(context, parent, property_uuid, level=0):
line = area_msg.box()
name = item.data['name'] if item.data else item.uuid
icon = settings.supported_datablocks[type_id].icon if type_id else 'ERROR'
detail_item_box = line.row(align=True)
detail_item_box.label(text="", icon=icon)
detail_item_box.label(text="",
icon=settings.supported_datablocks[item.str_type].icon)
detail_item_box.label(text=f"{name}")
# Operations
have_right_to_modify = (item.owner == settings.username or \
item.owner == RP_COMMON) and item.state != ERROR
from multi_user import icons
sync_status = icons.icons_col["repository_push"] #TODO: Link all icons to the right sync (push/merge/issue). For issue use "UNLINKED" for icon
# sync_status = icons.icons_col["repository_merge"]
if have_right_to_modify:
detail_item_box.operator(
"session.commit",
text="",
icon_value=sync_status.icon_id).target = item.uuid
icon='TRIA_UP').target = item.uuid
detail_item_box.separator()
if item.state in [FETCHED, UP]:
@ -571,40 +501,12 @@ def draw_property(context, parent, property_uuid, level=0):
else:
detail_item_box.label(text="", icon="DECORATE_LOCKED")
class SESSION_PT_sync(bpy.types.Panel):
bl_idname = "MULTIUSER_SYNC_PT_panel"
bl_label = "Synchronize"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
bl_options = {'DEFAULT_CLOSED'}
@classmethod
def poll(cls, context):
return session \
and session.state in [STATE_ACTIVE]
def draw_header(self, context):
self.layout.label(text="", icon='UV_SYNC_SELECT')
def draw(self, context):
layout = self.layout
settings = get_preferences()
row= layout.row()
row = row.grid_flow(row_major=True, columns=0, even_columns=True, even_rows=False, align=True)
row.prop(settings.sync_flags, "sync_render_settings",text="",icon_only=True, icon='SCENE')
row.prop(settings.sync_flags, "sync_during_editmode", text="",icon_only=True, icon='EDITMODE_HLT')
row.prop(settings.sync_flags, "sync_active_camera", text="",icon_only=True, icon='VIEW_CAMERA')
class SESSION_PT_repository(bpy.types.Panel):
bl_idname = "MULTIUSER_PROPERTIES_PT_panel"
bl_label = "Repository"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
bl_options = {'DEFAULT_CLOSED'}
@classmethod
def poll(cls, context):
@ -617,8 +519,8 @@ class SESSION_PT_repository(bpy.types.Panel):
admin = usr['admin']
return hasattr(context.window_manager, 'session') and \
session and \
session.state == STATE_ACTIVE and \
not settings.sidebar_repository_shown
(session.state['STATE'] == STATE_ACTIVE or \
session.state['STATE'] == STATE_LOBBY and admin)
def draw_header(self, context):
self.layout.label(text="", icon='OUTLINER_OB_GROUP_INSTANCE')
@ -632,37 +534,55 @@ class SESSION_PT_repository(bpy.types.Panel):
usr = session.online_users.get(settings.username)
if session.state == STATE_ACTIVE:
if 'SessionBackupTimer' in registry:
row = layout.row()
if session.state['STATE'] == STATE_ACTIVE:
if 'SessionBackupTimer' in registry:
row.alert = True
row.operator('session.cancel_autosave', icon="CANCEL")
row.alert = False
# else:
# row.operator('session.save', icon="FILE_TICK")
else:
row.operator('session.save', icon="FILE_TICK")
box = layout.box()
row = box.row()
row.prop(runtime_settings, "filter_owned", text="Only show owned data blocks", icon_only=True, icon="DECORATE_UNLOCKED")
row = box.row()
row.prop(runtime_settings, "filter_name", text="Filter")
row = box.row()
flow = layout.grid_flow(
row_major=True,
columns=0,
even_columns=True,
even_rows=False,
align=True)
for item in settings.supported_datablocks:
col = flow.column(align=True)
col.prop(item, "use_as_filter", text="", icon=item.icon)
row = layout.row(align=True)
row.prop(runtime_settings, "filter_owned", text="Show only owned")
row = layout.row(align=True)
# Properties
owned_nodes = [k for k, v in session.repository.graph.items() if v.owner==settings.username]
types_filter = [t.type_name for t in settings.supported_datablocks
if t.use_as_filter]
filtered_node = owned_nodes if runtime_settings.filter_owned else list(session.repository.graph.keys())
key_to_filter = session.list(
filter_owner=settings.username) if runtime_settings.filter_owned else session.list()
if runtime_settings.filter_name:
filtered_node = [n for n in filtered_node if runtime_settings.filter_name.lower() in session.repository.graph.get(n).data.get('name').lower()]
client_keys = [key for key in key_to_filter
if session.get(uuid=key).str_type
in types_filter]
if filtered_node:
if client_keys:
col = layout.column(align=True)
for key in filtered_node:
for key in client_keys:
draw_property(context, col, key)
else:
layout.row().label(text="Empty")
else:
row.label(text="Empty")
elif session.state['STATE'] == STATE_LOBBY and usr and usr['admin']:
row.operator("session.init", icon='TOOL_SETTINGS', text="Init")
else:
row.label(text="Waiting to start")
class VIEW3D_PT_overlay_session(bpy.types.Panel):
bl_space_type = 'VIEW_3D'
@ -677,74 +597,71 @@ class VIEW3D_PT_overlay_session(bpy.types.Panel):
def draw(self, context):
layout = self.layout
settings = context.window_manager.session
pref = get_preferences()
layout.active = settings.enable_presence
row = layout.row()
row.prop(settings, "enable_presence",text="Presence Overlay")
row = layout.row()
row.prop(settings, "presence_show_selected",text="Selected Objects")
row = layout.row(align=True)
row.prop(settings, "presence_show_user", text="Users camera")
row.prop(settings, "presence_show_mode", text="Users mode")
view = context.space_data
overlay = view.overlay
display_all = overlay.show_overlays
col = layout.column()
if settings.presence_show_mode or settings.presence_show_user:
row = col.column()
row.prop(pref, "presence_text_distance", expand=True)
row = col.column()
row.prop(settings, "presence_show_far_user", text="Users on different scenes")
row = col.row(align=True)
settings = context.window_manager.session
layout.active = settings.enable_presence
col = layout.column()
col.prop(settings, "presence_show_session_status")
if settings.presence_show_session_status :
split = layout.split()
text_pos = split.column(align=True)
text_pos.active = settings.presence_show_session_status
text_pos.prop(pref, "presence_hud_hpos", expand=True)
text_pos.prop(pref, "presence_hud_vpos", expand=True)
text_scale = split.column()
text_scale.active = settings.presence_show_session_status
text_scale.prop(pref, "presence_hud_scale", expand=True)
col.prop(settings, "presence_show_selected")
col.prop(settings, "presence_show_user")
row = layout.column()
row.active = settings.presence_show_user
row.prop(settings, "presence_show_far_user")
class SESSION_UL_network(bpy.types.UIList):
def draw_item(self, context, layout, data, item, icon, active_data, active_propname, index, flt_flag):
settings = get_preferences()
server_name = '-'
server_status = 'BLANK1'
server_private = 'BLANK1'
class SESSION_PT_replay(bpy.types.Panel):
bl_idname = "MULTIUSER_REPLAY_PT_panel"
bl_label = "Replay"
bl_space_type = 'VIEW_3D'
bl_region_type = 'UI'
bl_parent_id = 'MULTIUSER_SETTINGS_PT_panel'
bl_options = {'DEFAULT_CLOSED'}
server_name = item.server_name
@classmethod
def poll(cls, context):
return context.window_manager.session.replay_files
split = layout.split(factor=0.7)
if item.is_private:
server_private = 'LOCKED'
split.label(text=server_name, icon=server_private)
def draw_header(self, context):
self.layout.label(text="", icon='RECOVER_LAST')
def draw(self, context):
layout = self.layout
settings = context.window_manager.session
row= layout.row()
row.prop(settings,'replay_mode', toggle=True, expand=True)
row= layout.row()
if settings.replay_mode == 'MANUAL':
row.prop(bpy.context.scene, 'active_replay_file', text="Snapshot index")
else:
split.label(text=server_name)
row.prop(settings, 'replay_duration', text="Replay Duration")
row= layout.row()
row.prop(settings, 'replay_persistent_collection', text="persistent collection", toggle=True, icon='OUTLINER_COLLECTION')
if settings.replay_persistent_collection:
row= layout.row()
row.prop(settings, 'replay_camera', text="", icon='VIEW_CAMERA')
from multi_user import icons
server_status = icons.icons_col["server_offline"]
if item.is_online:
server_status = icons.icons_col["server_online"]
split.label(icon_value=server_status.icon_id)
classes = (
SESSION_UL_users,
SESSION_UL_network,
SESSION_PT_settings,
SESSION_PT_host_settings,
SESSION_PT_settings_user,
SESSION_PT_settings_network,
SESSION_PT_presence,
SESSION_PT_advanced_settings,
SESSION_PT_user,
SESSION_PT_sync,
SESSION_PT_repository,
SESSION_PT_replay,
VIEW3D_PT_overlay_session,
)
register, unregister = bpy.utils.register_classes_factory(classes)
if __name__ == "__main__":

View File

@ -36,16 +36,8 @@ from replication.constants import (STATE_ACTIVE, STATE_AUTH,
STATE_INITIAL, STATE_SRV_SYNC,
STATE_WAITING, STATE_QUITTING,
STATE_LOBBY,
CONNECTING)
STATE_LAUNCHING_SERVICES)
CLEARED_DATABLOCKS = ['actions', 'armatures', 'cache_files', 'cameras',
'collections', 'curves', 'filepath', 'fonts',
'grease_pencils', 'images', 'lattices', 'libraries',
'lightprobes', 'lights', 'linestyles', 'masks',
'materials', 'meshes', 'metaballs', 'movieclips',
'node_groups', 'objects', 'paint_curves', 'particles',
'scenes', 'shape_keys', 'sounds', 'speakers', 'texts',
'textures', 'volumes', 'worlds']
def find_from_attr(attr_name, attr_value, list):
for item in list:
@ -100,7 +92,7 @@ def get_state_str(state):
state_str = 'OFFLINE'
elif state == STATE_QUITTING:
state_str = 'QUITTING'
elif state == CONNECTING:
elif state == STATE_LAUNCHING_SERVICES:
state_str = 'LAUNCHING SERVICES'
elif state == STATE_LOBBY:
state_str = 'LOBBY'
@ -108,26 +100,6 @@ def get_state_str(state):
return state_str
def clean_scene():
for type_name in CLEARED_DATABLOCKS:
sub_collection_to_avoid = [
bpy.data.linestyles.get('LineStyle'),
bpy.data.materials.get('Dots Stroke')
]
type_collection = getattr(bpy.data, type_name)
items_to_remove = [i for i in type_collection if i not in sub_collection_to_avoid]
for item in items_to_remove:
try:
logging.info(item.name)
type_collection.remove(item)
except Exception:
continue
# Clear sequencer
bpy.context.scene.sequence_editor_clear()
def get_selected_objects(scene, active_view_layer):
return [obj.uuid for obj in scene.objects if obj.select_get(view_layer=active_view_layer)]

View File

@ -1,7 +1,7 @@
# Download base image debian jessie
FROM python:slim
ARG replication_version=0.9.1
ARG replication_version=0.1.13
ARG version=0.1.1
# Infos
@ -22,4 +22,4 @@ RUN pip install replication==$replication_version
# Run the server with parameters
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["replication.serve -apwd ${password} -spwd '' -p ${port} -t ${timeout} -l ${log_level} -lf ${log_file}"]
CMD ["python3 -m replication.server -pwd ${password} -p ${port} -t ${timeout} -l ${log_level} -lf ${log_file}"]

View File

@ -1,4 +1,4 @@
import re
init_py = open("multi_user/libs/replication/replication/__init__.py").read()
init_py = open("multi_user/__init__.py").read()
print(re.search("\d+\.\d+\.\d+\w\d+|\d+\.\d+\.\d+", init_py).group(0))

View File

@ -13,7 +13,7 @@ def main():
if len(sys.argv) > 2:
blender_rev = sys.argv[2]
else:
blender_rev = "2.93.0"
blender_rev = "2.92.0"
try:
exit_val = BAT.test_blender_addon(addon_path=addon, blender_revision=blender_rev)
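The same entry point can be driven locally; a sketch assuming the `blender-addon-tester` package (imported as `BAT` in this script) is installed in the current Python environment:

```python
import blender_addon_tester as BAT

# Fetches/uses the requested Blender build and runs the pytest suite against the addon.
exit_val = BAT.test_blender_addon(addon_path="multi_user", blender_revision="2.92.0")
print(f"tests exited with {exit_val}")
```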

View File

@ -8,7 +8,6 @@ import random
from multi_user.bl_types.bl_action import BlAction
INTERPOLATION = ['CONSTANT', 'LINEAR', 'BEZIER', 'SINE', 'QUAD', 'CUBIC', 'QUART', 'QUINT', 'EXPO', 'CIRC', 'BACK', 'BOUNCE', 'ELASTIC']
FMODIFIERS = ['GENERATOR', 'FNGENERATOR', 'ENVELOPE', 'CYCLES', 'NOISE', 'LIMITS', 'STEPPED']
# @pytest.mark.parametrize('blendname', ['test_action.blend'])
def test_action(clear_blend):
@ -23,20 +22,17 @@ def test_action(clear_blend):
point.co[1] = random.randint(-10,10)
point.interpolation = INTERPOLATION[random.randint(0, len(INTERPOLATION)-1)]
for mod_type in FMODIFIERS:
fcurve_sample.modifiers.new(mod_type)
bpy.ops.mesh.primitive_plane_add()
bpy.data.objects[0].animation_data_create()
bpy.data.objects[0].animation_data.action = datablock
# Test
implementation = BlAction()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.actions.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)
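
Every `bl_types` test that follows repeats the same dump → construct → load → dump round trip; a generic helper (hypothetical, not part of the test suite) captures the pattern:

```python
from deepdiff import DeepDiff

def assert_roundtrip(implementation, datablock, remove_fn):
    expected = implementation._dump(datablock)
    remove_fn(datablock)                        # e.g. bpy.data.actions.remove
    rebuilt = implementation._construct(expected)
    implementation._load(expected, rebuilt)
    assert not DeepDiff(expected, implementation._dump(rebuilt))
```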

View File

@ -12,11 +12,11 @@ def test_armature(clear_blend):
datablock = bpy.data.armatures[0]
implementation = BlArmature()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.armatures.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -15,11 +15,11 @@ def test_camera(clear_blend, camera_type):
datablock.type = camera_type
camera_dumper = BlCamera()
expected = camera_dumper.dump(datablock)
expected = camera_dumper._dump(datablock)
bpy.data.cameras.remove(datablock)
test = camera_dumper.construct(expected)
camera_dumper.load(expected, test)
result = camera_dumper.dump(test)
test = camera_dumper._construct(expected)
camera_dumper._load(expected, test)
result = camera_dumper._dump(test)
assert not DeepDiff(expected, result)

View File

@ -23,11 +23,11 @@ def test_collection(clear_blend):
# Test
implementation = BlCollection()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.collections.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -19,11 +19,11 @@ def test_curve(clear_blend, curve_type):
datablock = bpy.data.curves[0]
implementation = BlCurve()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.curves.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -13,11 +13,11 @@ def test_gpencil(clear_blend):
datablock = bpy.data.grease_pencils[0]
implementation = BlGpencil()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.grease_pencils.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -13,11 +13,11 @@ def test_lattice(clear_blend):
datablock = bpy.data.lattices[0]
implementation = BlLattice()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.lattices.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -7,18 +7,18 @@ import bpy
from multi_user.bl_types.bl_lightprobe import BlLightprobe
@pytest.mark.skipif(bpy.app.version < (2,83,0), reason="requires blender 2.83 or higher")
@pytest.mark.skipif(bpy.app.version[1] < 83, reason="requires blender 2.83 or higher")
@pytest.mark.parametrize('lightprobe_type', ['PLANAR','GRID','CUBEMAP'])
def test_lightprobes(clear_blend, lightprobe_type):
bpy.ops.object.lightprobe_add(type=lightprobe_type)
blender_light = bpy.data.lightprobes[0]
lightprobe_dumper = BlLightprobe()
expected = lightprobe_dumper.dump(blender_light)
expected = lightprobe_dumper._dump(blender_light)
bpy.data.lightprobes.remove(blender_light)
test = lightprobe_dumper.construct(expected)
lightprobe_dumper.load(expected, test)
result = lightprobe_dumper.dump(test)
test = lightprobe_dumper._construct(expected)
lightprobe_dumper._load(expected, test)
result = lightprobe_dumper._dump(test)
assert not DeepDiff(expected, result)

View File

@ -13,11 +13,11 @@ def test_light(clear_blend, light_type):
blender_light = bpy.data.lights[0]
light_dumper = BlLight()
expected = light_dumper.dump(blender_light)
expected = light_dumper._dump(blender_light)
bpy.data.lights.remove(blender_light)
test = light_dumper.construct(expected)
light_dumper.load(expected, test)
result = light_dumper.dump(test)
test = light_dumper._construct(expected)
light_dumper._load(expected, test)
result = light_dumper._dump(test)
assert not DeepDiff(expected, result)

View File

@ -17,12 +17,12 @@ def test_material_nodes(clear_blend):
datablock.node_tree.nodes.new(ntype)
implementation = BlMaterial()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.materials.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)
@ -32,11 +32,11 @@ def test_material_gpencil(clear_blend):
bpy.data.materials.create_gpencil_data(datablock)
implementation = BlMaterial()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.materials.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -18,11 +18,11 @@ def test_mesh(clear_blend, mesh_type):
# Test
implementation = BlMesh()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.meshes.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -13,11 +13,11 @@ def test_metaball(clear_blend, metaballs_type):
datablock = bpy.data.metaballs[0]
dumper = BlMetaball()
expected = dumper.dump(datablock)
expected = dumper._dump(datablock)
bpy.data.metaballs.remove(datablock)
test = dumper.construct(expected)
dumper.load(expected, test)
result = dumper.dump(test)
test = dumper._construct(expected)
dumper._load(expected, test)
result = dumper._dump(test)
assert not DeepDiff(expected, result)

View File

@ -65,11 +65,11 @@ def test_object(clear_blend):
datablock.shape_key_add(name='shape2')
implementation = BlObject()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.objects.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
print(DeepDiff(expected, result))
assert not DeepDiff(expected, result)

View File

@ -12,16 +12,14 @@ def test_scene(clear_blend):
get_preferences().sync_flags.sync_render_settings = True
datablock = bpy.data.scenes.new("toto")
datablock.timeline_markers.new('toto', frame=10)
datablock.timeline_markers.new('tata', frame=1)
datablock.view_settings.use_curve_mapping = True
# Test
implementation = BlScene()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.scenes.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -12,11 +12,11 @@ def test_speaker(clear_blend):
datablock = bpy.data.speakers[0]
implementation = BlSpeaker()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.speakers.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -14,11 +14,11 @@ def test_texture(clear_blend, texture_type):
datablock = bpy.data.textures.new('test', texture_type)
implementation = BlTexture()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.textures.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -11,11 +11,11 @@ def test_volume(clear_blend):
datablock = bpy.data.volumes.new("Test")
implementation = BlVolume()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.volumes.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

View File

@ -12,11 +12,11 @@ def test_world(clear_blend):
datablock.use_nodes = True
implementation = BlWorld()
expected = implementation.dump(datablock)
expected = implementation._dump(datablock)
bpy.data.worlds.remove(datablock)
test = implementation.construct(expected)
implementation.load(expected, test)
result = implementation.dump(test)
test = implementation._construct(expected)
implementation._load(expected, test)
result = implementation._dump(test)
assert not DeepDiff(expected, result)

20
tests/test_operators.py Normal file
View File

@ -0,0 +1,20 @@
import os
import pytest
from deepdiff import DeepDiff
import bpy
import random
def test_start_session():
result = bpy.ops.session.start()
assert 'FINISHED' in result
def test_stop_session():
result = bpy.ops.session.stop()
assert 'FINISHED' in result
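
For reference, Blender operators report their outcome as a set of status strings (e.g. `{'FINISHED'}`, `{'CANCELLED'}`), which is why these tests assert membership; an illustrative helper:

```python
def ran_ok(op_result: set) -> bool:
    # Hypothetical convenience wrapper around the membership check used above.
    return 'FINISHED' in op_result
```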