Support for Volume Shadow Copy Service (VSS) on windows #2274
MichaelEischer merged 1 commit into restic:master from
Conversation
Codecov Report

@@           Coverage Diff            @@
##           master     #2274   +/-  ##
=========================================
  Coverage        ?    46.78%
=========================================
  Files           ?       180
  Lines           ?     14617
  Branches        ?         0
=========================================
  Hits            ?      6839
  Misses          ?      6756
  Partials        ?      1022
=========================================

Continue to review full report at Codecov.
This pull request has now reached a state where I'd like to get some feedback. I'm interested in test results, but also in feedback on the code so that this pull request can be finished.
Two consecutive runs give me an error with Thunderbird open: ` C:\User\restic>restic -r c:\temp\r backup C:\Users\User\AppData\Roaming\Thunderbird\Profiles\9ceckvqy.default --use-windows-vss
@fbkopp Can you try to back up using a relative path to your Thunderbird folder? There seems to be some strange behavior when using an absolute path: there is a problem when accessing the volume root folder (e.g. c:\) inside the VSS snapshot, which restic only does when an absolute path is given.
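For context, the volume-root handling matters because the VSS wrapper rewrites an absolute path on a snapshotted volume into a path below the snapshot's device object. A minimal sketch of that mapping (illustrative only; restic's actual snapshotPath differs in names and details):

```go
package main

import (
	"fmt"
	"strings"
)

// snapshottedPath rewrites an absolute path on a snapshotted volume into the
// corresponding path inside the VSS snapshot's device object. This is only a
// sketch of the idea; restic's real snapshotPath also handles snapshot
// creation, failure tracking, and mountpoints.
func snapshottedPath(deviceObject, volume, path string) string {
	// strip the volume prefix (e.g. `c:\`) case-insensitively
	if len(path) >= len(volume) && strings.EqualFold(path[:len(volume)], volume) {
		path = path[len(volume):]
	}
	return deviceObject + `\` + path
}

func main() {
	dev := `\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1`
	fmt.Println(snapshottedPath(dev, `c:\`, `C:\Users\User\AppData`))
}
```

With an absolute path like `C:\` itself, the mapped result is the bare device object plus a trailing separator, which is exactly the volume-root edge case being discussed.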
@fbkopp I've updated the pull request, which hopefully fixes your problem. Please give it a try.
Okay, it worked now, but the repository stays locked after the backup.
@fbkopp I reproduced the issue. The problem is related to runBackup() in cmd/restic/cmd_backup.go: lock, err := lockRepo(repo). For testing I've added a debug message to unlockRepo(). When using VSS snapshots the defer statement seems to be skipped from time to time, but it works when not using VSS snapshots. I'm not sure about the actual reason for this behavior.
@fgma, I created a new repository; it now works with the last change.
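One general Go property worth keeping in mind while debugging this (stated as background, not as the confirmed cause): deferred calls run only when the surrounding function returns; they are skipped entirely if the process exits via os.Exit or is killed. The lock/defer pattern can be sketched as (names illustrative, not restic's actual code):

```go
package main

import "fmt"

// runBackupSketch mirrors the lock/defer pattern from runBackup: unlock is
// deferred right after acquiring the lock, so it runs on every return path
// of this function -- but it would NOT run if the process terminated via
// os.Exit or a crash, which is one way a repository can be left locked.
func runBackupSketch(lock func() (unlock func(), err error)) error {
	unlock, err := lock()
	if err != nil {
		return err
	}
	defer unlock()
	// ... perform the backup ...
	return nil
}

func main() {
	unlocked := false
	_ = runBackupSketch(func() (func(), error) {
		return func() { unlocked = true }, nil
	})
	fmt.Println("unlocked:", unlocked) // unlocked: true
}
```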
@fbkopp Do you have any additional feedback?
I will just add that I am super excited for this and hopeful that we can see it merged soon! Thank you for your hard work!
I downloaded your fork and am testing it too with a new restic repo; I'll let you know how it goes.
Tried to use it with an external drive (which I know might not make too much sense) and got this error:
Putting a path after the D:\ fixed the problem. So:
Same issue here.
I'd love to see this merged! +1
Greatly desired / awaited / appreciated feature here.
@nixzahlen Thanks for your feedback. Excluding files might be a good addition once the basic functionality is merged. Can anyone give me a hint on how to move forward with this pull request?
Would really appreciate this getting merged!
@fgma Thanks a lot for giving this a Go! There seem to be some things not finished in this PR, for example tests, logging, and cleaning up the code (as per the original post in this PR). Can this be fixed? Also, the PR needs to be rebased on the latest master; there are currently conflicts.
@rawtaz Of course I'd like to finish this PR. I hope I'll have some free time during the weekend to work on it. Meanwhile it would be nice to get some specific feedback on what I need to change for a merge. Concerning tests, I really don't know what kind of tests are expected: to ensure it works as expected, one would need a Windows system with locked files and then back them up successfully.
It seems to work for some time now, see #2274 (comment). Because of this feedback, I've also merged this (and some other pull requests) ~10 days ago and so far have never had problems; you can find more info in this restic fork. @fgma The only thing I kind of miss: if this option is given AND the user is not elevated/privileged, I'd like restic to print a friendly reminder and just exit ;) Besides this, it works great for me, and I have now stopped using my former PowerShell wrapper ;)
@AntonOks Thanks for the suggestion. I've added it to the code. I hope CI will pass this time.
Fixed in c4792c6 |
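The friendly-reminder behavior requested above boils down to an early precondition check. A sketch of the idea (the function name and wiring are illustrative; on Windows, restic exposes fs.HasSufficientPrivilegesForVSS for the actual privilege probe):

```go
package main

import (
	"errors"
	"fmt"
)

// checkVSSPrecondition models the requested behavior: if --use-windows-vss
// is set but the process lacks the privileges VSS needs, fail early with a
// clear message instead of a cryptic COM error later. Illustrative only.
func checkVSSPrecondition(useVSS, hasPrivileges bool) error {
	if useVSS && !hasPrivileges {
		return errors.New("--use-windows-vss requires administrator privileges, please run restic elevated")
	}
	return nil
}

func main() {
	fmt.Println(checkVSSPrecondition(true, false))
	fmt.Println(checkVSSPrecondition(true, true))
}
```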
This looks like the expected behavior to me. If you back up mountpoints, each mountpoint will create its own snapshot (as part of the snapshot set) on its own storage. If you are out of disk space for a mountpoint, that snapshot will fail, so the whole snapshot set will fail.
But I don't back up files under this mountpoint; I back up an unrelated folder. The current behavior makes this PR very fragile. Until we can't use
This should be a separate PR. Since this PR is only going into the development branch, we will have plenty of time to add this before the next release.
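The exclusion check being discussed reduces to normalizing volume names and a membership test. A self-contained sketch (function names here are illustrative; the posted patch uses parseMountPoints/isMountPointExcluded and normalizes via GetVolumeNameForVolumeMountPoint, which is Windows-only, so plain lower-casing stands in for it here):

```go
package main

import (
	"fmt"
	"strings"
)

// parseExcludeList splits a semicolon-separated exclude-volume value (as in
// the patch's vss.exludevolumes option) into normalized entries, skipping
// empty ones. Real normalization would resolve each entry to a volume GUID
// path via the Windows API.
func parseExcludeList(list string) []string {
	var volumes []string
	for _, s := range strings.Split(list, ";") {
		if s = strings.TrimSpace(s); s != "" {
			volumes = append(volumes, strings.ToLower(s))
		}
	}
	return volumes
}

// isExcluded reports whether volume is in the normalized exclude list.
func isExcluded(volume string, excluded []string) bool {
	volume = strings.ToLower(volume)
	for _, v := range excluded {
		if v == volume {
			return true
		}
	}
	return false
}

func main() {
	ex := parseExcludeList(`d:\;e:\mnt`)
	fmt.Println(isExcluded(`D:\`, ex), isExcluded(`c:\`, ex)) // true false
}
```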
You are right, this PR hasn't been merged for a long time already. If someone needs this, I have implemented three options:

diff --git a/cmd/restic/cmd_backup.go b/cmd/restic/cmd_backup.go
index 07ef0a4b..e0cf61e8 100644
--- a/cmd/restic/cmd_backup.go
+++ b/cmd/restic/cmd_backup.go
@@ -421,7 +421,17 @@ func findParentSnapshot(ctx context.Context, repo restic.Repository, opts Backup
}
func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
- err := opts.Check(gopts, args)
+ var cfg interface{}
+ var err error
+
+ switch runtime.GOOS {
+ case "windows":
+ if cfg, err = fs.ParseVSSConfig(gopts.extended); err != nil {
+ return err
+ }
+ }
+
+ err = opts.Check(gopts, args)
if err != nil {
return err
}
@@ -570,7 +580,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}
}
- localVss := fs.NewLocalVss(errorHandler, messageHandler)
+ localVss := fs.NewLocalVss(errorHandler, messageHandler, cfg.(fs.VSSConfig))
defer localVss.DeleteSnapshots()
targetFS = localVss
}
diff --git a/internal/fs/fs_local_vss.go b/internal/fs/fs_local_vss.go
index 60e13462..bbeeb9fe 100644
--- a/internal/fs/fs_local_vss.go
+++ b/internal/fs/fs_local_vss.go
@@ -3,27 +3,59 @@ package fs
import (
"os"
"path/filepath"
+ "runtime"
"strings"
"sync"
+ "time"
"github.com/restic/restic/internal/errors"
+ "github.com/restic/restic/internal/options"
)
+// VSSConfig holds extended options of windows volume shadow copy service.
+type VSSConfig struct {
+ ExludeAllMountPoints bool `option:"exludeallmountpoints" help:"exclude mountpoints from snapshoting on all volumes"`
+ ExludeVolumes string `option:"exludevolumes" help:"semicolon separated list of volumes to exclude from snapshoting (c:\\;e:\\mnt;\\\\?\\Volume{...})"`
+ Timeout time.Duration `option:"timeout" help:"time that the VSS can spend creating snapshots before timing out"`
+}
+
+func init() {
+ if runtime.GOOS == "windows" {
+ options.Register("vss", VSSConfig{})
+ }
+}
+
+// ParseVSSConfig parses a VSS extended options to VSSConfig struct.
+func ParseVSSConfig(o options.Options) (interface{}, error) {
+ cfg := VSSConfig{Timeout: time.Second * 120}
+ o = o.Extract("vss")
+ if err := o.Apply("vss", &cfg); err != nil {
+ return nil, err
+ }
+ return cfg, nil
+}
+
// ErrorHandler is used to report errors via callback
type ErrorHandler func(item string, err error) error
// MessageHandler is used to report errors/messages via callbacks.
type MessageHandler func(msg string, args ...interface{})
+// VolumeFilter is used to filter volumes by it's mount point or GUID path.
+type VolumeFilter func(volume string) bool
+
// LocalVss is a wrapper around the local file system which uses windows volume
// shadow copy service (VSS) in a transparent way.
type LocalVss struct {
FS
- snapshots map[string]VssSnapshot
- failedSnapshots map[string]struct{}
- mutex *sync.RWMutex
- msgError ErrorHandler
- msgMessage MessageHandler
+ snapshots map[string]VssSnapshot
+ failedSnapshots map[string]struct{}
+ mutex *sync.RWMutex
+ msgError ErrorHandler
+ msgMessage MessageHandler
+ exludeAllMountPoints bool
+ volumesToExclude []string
+ timeout time.Duration
}
// statically ensure that LocalVss implements FS.
@@ -31,15 +63,19 @@ var _ FS = &LocalVss{}
// NewLocalVss creates a new wrapper around the windows filesystem using volume
// shadow copy service to access locked files.
-func NewLocalVss(msgError ErrorHandler, msgMessage MessageHandler) *LocalVss {
- return &LocalVss{
- FS: Local{},
- snapshots: make(map[string]VssSnapshot),
- failedSnapshots: make(map[string]struct{}),
- mutex: &sync.RWMutex{},
- msgError: msgError,
- msgMessage: msgMessage,
+func NewLocalVss(msgError ErrorHandler, msgMessage MessageHandler, cfg VSSConfig) *LocalVss {
+ result := &LocalVss{
+ FS: Local{},
+ snapshots: make(map[string]VssSnapshot),
+ failedSnapshots: make(map[string]struct{}),
+ mutex: &sync.RWMutex{},
+ msgError: msgError,
+ msgMessage: msgMessage,
+ exludeAllMountPoints: cfg.ExludeAllMountPoints,
+ timeout: cfg.Timeout,
}
+ result.volumesToExclude = result.parseMountPoints(cfg.ExludeVolumes)
+ return result
}
// DeleteSnapshots deletes all snapshots that were created automatically.
@@ -79,6 +115,44 @@ func (fs *LocalVss) Lstat(name string) (os.FileInfo, error) {
return os.Lstat(fs.snapshotPath(name))
}
+// parseMountPoints try to convert semicolon separated list of mount points
+// to array of lowercased volume GUID pathes. Mountpoints already in volume
+// GUID path format will be validated and converted to itself.
+func (fs *LocalVss) parseMountPoints(list string) (volumes []string) {
+ for _, s := range strings.Split(list, ";") {
+ if v, err := GetVolumeNameForVolumeMountPoint(s); err != nil {
+ fs.msgError(s, errors.Errorf("failed to parse vss.exludevolumes [%s]: %s\n", s, err))
+ } else {
+ volumes = append(volumes, strings.ToLower(v))
+ }
+ }
+
+ return volumes
+}
+
+// isMountPointExcluded is true if given mountpoint excluded by user.
+func (fs *LocalVss) isMountPointExcluded(mountPoint string) bool {
+ if fs.volumesToExclude == nil {
+ return false
+ }
+
+ volume, err := GetVolumeNameForVolumeMountPoint(mountPoint)
+ if err != nil {
+ // maybe panic instead?
+ fs.msgError(mountPoint, errors.Errorf("failed to convert mount point [%s]: %s\n", mountPoint, err))
+ return false
+ }
+
+ volume = strings.ToLower(volume)
+ for _, v := range fs.volumesToExclude {
+ if v == volume {
+ return true
+ }
+ }
+
+ return false
+}
+
// snapshotPath returns the path inside a VSS snapshots if it already exists.
// If the path is not yet available as a snapshot, a snapshot is created.
// If creation of a snapshot fails the file's original path is returned as
@@ -115,23 +189,35 @@ func (fs *LocalVss) snapshotPath(path string) string {
if !snapshotExists && !snapshotFailed {
vssVolume := volumeNameLower + string(filepath.Separator)
- fs.msgMessage("creating VSS snapshot for [%s]\n", vssVolume)
- if snapshot, err := NewVssSnapshot(vssVolume, 120, fs.msgError); err != nil {
- fs.msgError(vssVolume, errors.Errorf("failed to create snapshot for [%s]: %s\n",
- vssVolume, err))
+ if fs.isMountPointExcluded(vssVolume) {
+ fs.msgMessage("snapshots for [%s] excluded by user\n", vssVolume)
fs.failedSnapshots[volumeNameLower] = struct{}{}
} else {
- fs.snapshots[volumeNameLower] = snapshot
- fs.msgMessage("successfully created snapshot for [%s]\n", vssVolume)
- if len(snapshot.mountPointInfo) > 0 {
- fs.msgMessage("mountpoints in snapshot volume [%s]:\n", vssVolume)
- for mp, mpInfo := range snapshot.mountPointInfo {
- info := ""
- if !mpInfo.IsSnapshotted() {
- info = " (not snapshotted)"
+ fs.msgMessage("creating VSS snapshot for [%s]\n", vssVolume)
+
+ var filter VolumeFilter
+ if !fs.exludeAllMountPoints {
+ filter = func(volume string) bool {
+ return !fs.isMountPointExcluded(volume)
+ }
+ }
+ if snapshot, err := NewVssSnapshot(vssVolume, fs.timeout, filter, fs.msgError); err != nil {
+ fs.msgError(vssVolume, errors.Errorf("failed to create snapshot for [%s]: %s\n",
+ vssVolume, err))
+ fs.failedSnapshots[volumeNameLower] = struct{}{}
+ } else {
+ fs.snapshots[volumeNameLower] = snapshot
+ fs.msgMessage("successfully created snapshot for [%s]\n", vssVolume)
+ if len(snapshot.mountPointInfo) > 0 {
+ fs.msgMessage("mountpoints in snapshot volume [%s]:\n", vssVolume)
+ for mp, mpInfo := range snapshot.mountPointInfo {
+ info := ""
+ if !mpInfo.IsSnapshotted() {
+ info = " (not snapshotted)"
+ }
+ fs.msgMessage(" - %s%s\n", mp, info)
}
- fs.msgMessage(" - %s%s\n", mp, info)
}
}
}
diff --git a/internal/fs/vss.go b/internal/fs/vss.go
index a515d75b..bdf8c294 100644
--- a/internal/fs/vss.go
+++ b/internal/fs/vss.go
@@ -3,6 +3,8 @@
package fs
import (
+ "time"
+
"github.com/restic/restic/internal/errors"
)
@@ -30,10 +32,16 @@ func HasSufficientPrivilegesForVSS() bool {
return false
}
+// GetVolumeNameForVolumeMountPoint clear input parameter
+// and calls the equivalent windows api.
+func GetVolumeNameForVolumeMountPoint(mountPoint string) (string, error) {
+ return mountPoint, nil
+}
+
// NewVssSnapshot creates a new vss snapshot. If creating the snapshots doesn't
// finish within the timeout an error is returned.
func NewVssSnapshot(
- volume string, timeoutInSeconds uint, msgError ErrorHandler) (VssSnapshot, error) {
+ volume string, timeout time.Duration, filter VolumeFilter, msgError ErrorHandler) (VssSnapshot, error) {
return VssSnapshot{}, errors.New("VSS snapshots are only supported on windows")
}
diff --git a/internal/fs/vss_windows.go b/internal/fs/vss_windows.go
index b63ad4cd..244f04d8 100644
--- a/internal/fs/vss_windows.go
+++ b/internal/fs/vss_windows.go
@@ -8,6 +8,7 @@ import (
"runtime"
"strings"
"syscall"
+ "time"
"unsafe"
ole "github.com/go-ole/go-ole"
@@ -616,8 +617,13 @@ func (vssAsync *IVSSAsync) QueryStatus() (HRESULT, uint32) {
// WaitUntilAsyncFinished waits until either the async call is finshed or
// the given timeout is reached.
-func (vssAsync *IVSSAsync) WaitUntilAsyncFinished(millis uint32) error {
- hresult := vssAsync.Wait(millis)
+func (vssAsync *IVSSAsync) WaitUntilAsyncFinished(timeout time.Duration) error {
+ const maxTimeout = 2147483647 * time.Millisecond
+ if timeout > maxTimeout {
+ timeout = maxTimeout
+ }
+
+ hresult := vssAsync.Wait(uint32(timeout.Milliseconds()))
err := newVssErrorIfResultNotOK("Wait() failed", hresult)
if err != nil {
vssAsync.Cancel()
@@ -676,7 +682,7 @@ type VssSnapshot struct {
snapshotProperties VssSnapshotProperties
snapshotDeviceObject string
mountPointInfo map[string]MountPoint
- timeoutInMillis uint32
+ timeout time.Duration
}
// GetSnapshotDeviceObject returns root path to access the snapshot files
@@ -715,10 +721,33 @@ func HasSufficientPrivilegesForVSS() bool {
return !(HRESULT(result) == E_ACCESSDENIED)
}
+// GetVolumeNameForVolumeMountPoint clear input parameter
+// and calls the equivalent windows api.
+func GetVolumeNameForVolumeMountPoint(mountPoint string) (string, error) {
+ if mountPoint != "" && mountPoint[len(mountPoint)-1] != filepath.Separator {
+ mountPoint = mountPoint + string(filepath.Separator)
+ }
+
+ mountPointPointer, err := syscall.UTF16PtrFromString(mountPoint)
+ if err != nil {
+ return mountPoint, err
+ }
+
+ // A reasonable size for the buffer to accommodate the largest possible
+ // volume GUID path is 50 characters.
+ volumeNameBuffer := make([]uint16, 50)
+ if err := windows.GetVolumeNameForVolumeMountPoint(
+ mountPointPointer, &volumeNameBuffer[0], 50); err != nil {
+ return mountPoint, err
+ }
+
+ return syscall.UTF16ToString(volumeNameBuffer), nil
+}
+
// NewVssSnapshot creates a new vss snapshot. If creating the snapshots doesn't
// finish within the timeout an error is returned.
func NewVssSnapshot(
- volume string, timeoutInSeconds uint, msgError ErrorHandler) (VssSnapshot, error) {
+ volume string, timeout time.Duration, filter VolumeFilter, msgError ErrorHandler) (VssSnapshot, error) {
is64Bit, err := isRunningOn64BitWindows()
if err != nil {
@@ -732,7 +761,7 @@ func NewVssSnapshot(
runtime.GOARCH))
}
- timeoutInMillis := uint32(timeoutInSeconds * 1000)
+ deadline := time.Now().Add(timeout)
oleIUnknown, result, err := initializeVssCOMInterface()
if err != nil {
@@ -796,7 +825,7 @@ func NewVssSnapshot(
}
err = callAsyncFunctionAndWait(iVssBackupComponents.GatherWriterMetadata,
- "GatherWriterMetadata", timeoutInMillis)
+ "GatherWriterMetadata", deadline)
if err != nil {
iVssBackupComponents.Release()
return VssSnapshot{}, err
@@ -822,39 +851,44 @@ func NewVssSnapshot(
return VssSnapshot{}, err
}
- mountPoints, err := enumerateMountedFolders(volume)
- if err != nil {
- iVssBackupComponents.Release()
- return VssSnapshot{}, newVssTextError(fmt.Sprintf(
- "failed to enumerate mount points for volume %s: %s", volume, err))
- }
-
mountPointInfo := make(map[string]MountPoint)
- for _, mountPoint := range mountPoints {
- // ensure every mountpoint is available even without a valid
- // snapshot because we need to consider this when backing up files
- mountPointInfo[mountPoint] = MountPoint{isSnapshotted: false}
-
- if isSupported, err := iVssBackupComponents.IsVolumeSupported(mountPoint); err != nil {
- continue
- } else if !isSupported {
- continue
- }
-
- var mountPointSnapshotSetID ole.GUID
- err := iVssBackupComponents.AddToSnapshotSet(mountPoint, &mountPointSnapshotSetID)
+ //if filter==nil just don't process mount points for this volume at all
+ if filter != nil {
+ mountPoints, err := enumerateMountedFolders(volume)
if err != nil {
iVssBackupComponents.Release()
- return VssSnapshot{}, err
+ return VssSnapshot{}, newVssTextError(fmt.Sprintf(
+ "failed to enumerate mount points for volume %s: %s", volume, err))
}
- mountPointInfo[mountPoint] = MountPoint{isSnapshotted: true,
- snapshotSetID: mountPointSnapshotSetID}
+ for _, mountPoint := range mountPoints {
+ // ensure every mountpoint is available even without a valid
+ // snapshot because we need to consider this when backing up files
+ mountPointInfo[mountPoint] = MountPoint{isSnapshotted: false}
+
+ if !filter(mountPoint) {
+ continue
+ } else if isSupported, err := iVssBackupComponents.IsVolumeSupported(mountPoint); err != nil {
+ continue
+ } else if !isSupported {
+ continue
+ }
+
+ var mountPointSnapshotSetID ole.GUID
+ err := iVssBackupComponents.AddToSnapshotSet(mountPoint, &mountPointSnapshotSetID)
+ if err != nil {
+ iVssBackupComponents.Release()
+ return VssSnapshot{}, err
+ }
+
+ mountPointInfo[mountPoint] = MountPoint{isSnapshotted: true,
+ snapshotSetID: mountPointSnapshotSetID}
+ }
}
err = callAsyncFunctionAndWait(iVssBackupComponents.PrepareForBackup, "PrepareForBackup",
- timeoutInMillis)
+ deadline)
if err != nil {
// After calling PrepareForBackup one needs to call AbortBackup() before releasing the VSS
// instance for proper cleanup.
@@ -865,7 +899,7 @@ func NewVssSnapshot(
}
err = callAsyncFunctionAndWait(iVssBackupComponents.DoSnapshotSet, "DoSnapshotSet",
- timeoutInMillis)
+ deadline)
if err != nil {
iVssBackupComponents.AbortBackup()
iVssBackupComponents.Release()
@@ -901,7 +934,7 @@ func NewVssSnapshot(
}
return VssSnapshot{iVssBackupComponents, snapshotSetID, snapshotProperties,
- snapshotProperties.GetSnapshotDeviceObject(), mountPointInfo, timeoutInMillis}, nil
+ snapshotProperties.GetSnapshotDeviceObject(), mountPointInfo, time.Until(deadline)}, nil
}
// Delete deletes the created snapshot.
@@ -922,8 +955,10 @@ func (p *VssSnapshot) Delete() error {
if p.iVssBackupComponents != nil {
defer p.iVssBackupComponents.Release()
+ deadline := time.Now().Add(p.timeout)
+
err = callAsyncFunctionAndWait(p.iVssBackupComponents.BackupComplete, "BackupComplete",
- p.timeoutInMillis)
+ deadline)
if err != nil {
return err
}
@@ -945,7 +980,7 @@ type asyncCallFunc func() (*IVSSAsync, error)
// callAsyncFunctionAndWait calls an async functions and waits for it to either
// finish or timeout.
-func callAsyncFunctionAndWait(function asyncCallFunc, name string, timeoutInMillis uint32) error {
+func callAsyncFunctionAndWait(function asyncCallFunc, name string, deadline time.Time) error {
iVssAsync, err := function()
if err != nil {
return err
@@ -955,7 +990,12 @@ func callAsyncFunctionAndWait(function asyncCallFunc, name string, timeoutInMill
return newVssTextError(fmt.Sprintf("%s() returned nil", name))
}
- err = iVssAsync.WaitUntilAsyncFinished(timeoutInMillis)
+ timeout := time.Until(deadline)
+ if timeout <= 0 {
+ return newVssTextError(fmt.Sprintf("%s() deadline exceded", name))
+ }
+
+ err = iVssAsync.WaitUntilAsyncFinished(timeout)
iVssAsync.Release()
return err
}
diff --git a/internal/options/options.go b/internal/options/options.go
index f03eb609..90fb97ec 100644
--- a/internal/options/options.go
+++ b/internal/options/options.go
@@ -183,6 +183,13 @@ func (o Options) Apply(ns string, dst interface{}) error {
case "string":
v.Field(i).SetString(value)
+ case "bool":
+ b, err := strconv.ParseBool(value)
+ if err != nil {
+ return err
+ }
+ v.Field(i).SetBool(b)
+
case "int":
vi, err := strconv.ParseInt(value, 0, 32)
if err != nil {
Apparently I'm done here: I can't find anything else, either in the code or through thorough testing.
I've just tried to use the beta build on a system with a standard user account (not Admin!), which has its backup disk mounted as a network volume with a drive letter. I do understand that VSS on Windows needs elevated privileges. So I've created the repo (say What is going on here? I suspect this is another Windows specialty which has nothing to do with restic, but on the other hand it would be nice if restic could support such situations as well.
@bjoe2k4 It's probably the fact that when you elevate, you run under the privileged admin account, which doesn't have the drive letter mounted. Can you try using a UNC path instead (e.g. \\server\share)?
@rawtaz Just tried it, and it also fails with a fatal error: username and password are wrong. I guess the local administrator's credentials are used, which do not exist in the AD domain where the backup storage resides.
I think executing
@bjoe2k4 Yeah, that's what I'd expect too. Either logging in as the higher-privileged account and connecting to the share, so you can provide and then also save the credentials, or doing what @DRON-666 suggests, should make it work I think. I'd probably do the former, as it seems rather unnecessary to create a drive letter for it.
And you can view and edit all credentials by accessing the Windows Credential Manager.
The VSS support works for 32- and 64-bit Windows; this includes a check that the restic binary matches the OS architecture, as required by VSS. The backup operation will fail if the user does not have sufficient permissions to use VSS. Snapshotting volumes also covers mountpoints, but skips UNC paths.
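The architecture check mentioned in the commit message can be expressed as a simple predicate (a sketch; restic's actual check detects WOW64 at runtime via isRunningOn64BitWindows rather than taking the bitness as a parameter):

```go
package main

import "fmt"

// archMatchesOS reports whether a binary built for goarch may use VSS on a
// Windows installation of the given bitness: a 32-bit restic running under
// WOW64 on 64-bit Windows must be rejected, since VSS requires the process
// architecture to match the OS architecture.
func archMatchesOS(goarch string, osIs64Bit bool) bool {
	binaryIs64Bit := goarch == "amd64" || goarch == "arm64"
	return binaryIs64Bit == osIs64Bit
}

func main() {
	fmt.Println(archMatchesOS("386", true))   // 32-bit binary on 64-bit OS: false
	fmt.Println(archMatchesOS("amd64", true)) // true
}
```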
04c1491 to 5695f9e
MichaelEischer left a comment:
LGTM. I've merged the commits into a single one. So now we just have to wait for the CI to complete, and then this PR is ready for merging :-)
Thanks a lot to everyone helping with testing and improving this PR. And of course a big thanks to @fgma for writing the PR in the first place and then tirelessly addressing hundreds of comments.
rawtaz left a comment:
One of the best PRs ever :)
Thanks to everyone for all the effort you put into reviewing and testing!
Hi @fgma, thanks for implementing VSS support for restic. I was looking around to find an implementation example that uses syscalls to create a shadow copy, and I have played a little bit with your implementation. I encountered a nil pointer dereference when trying to create a VssSnapshot with insufficient privileges. From what I can see, it's due to not checking for a nil value for
Again, thanks for this huge amount of work.
@tigerwill90 You mean the deferred call in vss_windows.go:744? All other calls check for nil.
@fgma yes exactly.
@tigerwill90 I've created a PR to fix this: #3045 |
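The crash class discussed here is easy to reproduce: deferring a method on a possibly-nil pointer panics once the method dereferences its receiver. A guarded-defer sketch (component is a stand-in type, not restic's actual IVSSAsync/IVssBackupComponents):

```go
package main

import "fmt"

type component struct{ name string }

// Release dereferences its receiver, so calling it on a nil *component
// would panic -- the same shape as a deferred Release() on a COM object
// that was never successfully created.
func (c *component) Release() { fmt.Println("released", c.name) }

// cleanup guards the deferred call: Release is only deferred when the
// pointer is non-nil, which is the fix pattern for this kind of bug.
func cleanup(c *component) {
	if c != nil {
		defer c.Release()
	}
	// ... use c if non-nil ...
}

func main() {
	cleanup(nil)                     // safe: nothing deferred
	cleanup(&component{name: "vss"}) // prints "released vss"
}
```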
What is the purpose of this change? What does it change?
Add transparent support for Volume Shadow Copy Service (VSS) on windows.
Use of VSS needs to be activated via the new flag `--use-windows-vss` for the backup command, e.g. `restic backup --verbose --use-windows-vss data`.

To easily test the new feature I prepared some bat/powershell files that will run restic with the new flag and also get an exclusive lock on a file via the included powershell script, to test that VSS allows reading the file:
vss_test.zip
Right now it is not a finished pull request. It is missing many details:
But basic VSS functionality should work!
Right now I've tested it only on Windows 10 Professional 64 bit.
Was the change discussed in an issue or in the forum before?
closes #340
Checklist
- a changelog entry in changelog/unreleased/ that describes the changes for our users (template here)
- gofmt run on the code in all commits