Troubleshooting
Common errors during install, mount, and import, and how to clear them.
The orchestrator never sees your S3 credentials, so we can't help debug from the server side. Reproduce locally first:
```shell
aws s3 ls s3://<your-bucket>/ --endpoint-url <your-endpoint>
```

If that fails, fix the bucket / key first; if it succeeds, re-export the env vars in the same shell that's running artifacts and re-run.
```shell
env | grep ARTIFACTS_S3
```

A common mistake: setting the variables without export, or exporting them in a parent shell that doesn't pass through to a sudo'd or container-shelled child.
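The distinction matters because only exported variables reach child processes. A minimal demonstration (the variable name `ARTIFACTS_S3_ENDPOINT` and the value are illustrative stand-ins, not confirmed names from the CLI):

```shell
# Plain assignment: visible in this shell, but NOT inherited by children.
ARTIFACTS_S3_ENDPOINT="https://example.invalid"
sh -c 'echo "child sees: ${ARTIFACTS_S3_ENDPOINT:-<unset>}"'   # child sees: <unset>

# Exported assignment: inherited by children, including the artifacts process.
export ARTIFACTS_S3_ENDPOINT="https://example.invalid"
sh -c 'echo "child sees: ${ARTIFACTS_S3_ENDPOINT:-<unset>}"'   # child sees: https://example.invalid
```

Note that plain `sudo` strips most of the environment by default, so even exported variables may not survive unless you use `sudo -E` or configure `env_keep`.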
Tonbo's Redis admin connection blipped. Re-run the failing command
(workspace create, mount, etc.). If it persists past 60 s, ping
us; we have an alert on our side too.
Self-explanatory. The CLI reads /proc/self/mountinfo to detect this
before issuing a new mount session, so you'll see this clean error
instead of a confusing EBUSY deep in juicefs.
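The same check is easy to reproduce by hand. A minimal sketch (field 5 of each `/proc/self/mountinfo` record is the mount point; `/mnt/work` stands in for your mount path):

```shell
# Detect whether anything is already mounted at a given path by scanning
# /proc/self/mountinfo (the 5th field of each record is the mount point).
TARGET=/mnt/work
if awk -v t="$TARGET" '$5 == t { found = 1 } END { exit !found }' /proc/self/mountinfo; then
  echo "something is already mounted at $TARGET"
else
  echo "nothing mounted at $TARGET"
fi
```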
Don't kill -9 the existing mount. SIGKILL bypasses the unmount
handler and leaves a stale FUSE mount that needs umount -f to
clear.
The FUSE supervisor died ungracefully (SIGKILL, OOM, container kill) and left a stale mount. Force-unmount as root:
```shell
sudo umount -f /mnt/work   # try force first
sudo umount -l /mnt/work   # lazy fallback if force fails
```

Then `artifacts mount cases /mnt/work` to issue a fresh session.
If a workspace's mount gets repeatedly into this state under normal operation, let us know; we want to investigate.
A previous workspace create died mid-format. Rerunning the same
command resumes from the last successful phase: the orchestrator
returns the same hash prefix, the format token is re-issued, and
the namespace bootstrap is re-pointed at the existing keys.
To start over cleanly:
```shell
artifacts workspace delete cases --yes
artifacts workspace create cases --bucket ... --endpoint ...
```

Bucket chunks stay; only the per-workspace metadata in Tonbo's Redis is cleared.
The mount-session ACL was revoked between the orchestrator issuing it and your client connecting. This almost always means an operator restarted Redis or rotated credentials.
Run the following to issue a fresh session:

```shell
artifacts unmount /mnt/work
artifacts mount cases /mnt/work
```

If you see this without an operator notice, ping us; it shouldn't happen mid-flight.
Cosmetic. Your kernel sent a FUSE_POLL opcode (some app called poll() / select() / epoll() on a file under the mount); juicefs returned ENOSYS, so the kernel won't ask again. Logged once per mount, never repeated. No action needed.
v0 ships a single Linux x86_64 statically-linked binary. On other platforms:
- macOS: use the Mac as a staging source only. See Import data → from a Mac folder. Mount happens on a Linux host that pulls from your bucket.
- Linux arm64: contact us. We can build for arm64 if you need it; we just don't ship it by default in v0.
- Windows: WSL2 should work (it's Linux x86_64 under the hood). Run `install.sh` inside the WSL distro.
The installer tries apt-get, dnf, yum, apk in order. On a
host without any of those (extremely minimal containers, custom
distros), install fuse3 manually:
```shell
# Ubuntu / Debian
apt-get install -y fuse3
# Fedora / RHEL / Amazon Linux
dnf install -y fuse3
# Alpine
apk add fuse3
```

Then re-run `install.sh`.
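The probe order described above can be sketched as a short loop (illustrative only, not the installer's actual code):

```shell
# Probe for a known package manager in the installer's documented order:
# apt-get, then dnf, then yum, then apk. First hit wins.
for pm in apt-get dnf yum apk; do
  if command -v "$pm" >/dev/null 2>&1; then
    echo "found package manager: $pm"
    break
  fi
done
```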
The daemon parent reported ready but the FUSE child died right after. Check the per-mount log:
```shell
ls ~/.cache/artifacts/
# mount-<hash>/
tail -50 ~/.cache/artifacts/mount-<hash>/artifacts.log
```

The most common causes are bad bucket credentials (the juicefs format phase failed at chunk-write time) and a workspace stuck in pending_format from a previous half-run. Re-run `artifacts workspace create` to resume; if the bucket creds are wrong, re-run `artifacts storage set` with the new ones.
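If you have several workspaces mounted, a quick way to pull the newest log (a sketch assuming the `~/.cache/artifacts/mount-<hash>/` layout above):

```shell
# Tail the most recently modified per-mount log under ~/.cache/artifacts.
# ls -td sorts directories newest-first; each entry keeps its trailing slash.
latest=$(ls -td "$HOME"/.cache/artifacts/mount-*/ 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  tail -50 "${latest}artifacts.log"
else
  echo "no mount logs found under ~/.cache/artifacts"
fi
```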
Still stuck?
Open a Slack ticket with:

- `artifacts --version`
- The command you ran and its full output
- `tail -50 ~/.cache/artifacts/mount-*/artifacts.log` (for mount issues)
- The workspace identifier `artifacts://<handle>/<name>` (for orchestrator issues, so we can correlate on our side)