
On upgrade to v0.76.5: fatal error: pipe failed

I’m getting the following error message on upgrading from v0.75.1 to v0.76.5:

    Watching for config changes in /config.toml
    fatal error: pipe failed

    goroutine 1 [running]:
    runtime.throw(0x2b42b07, 0xb)
            /usr/local/go/src/runtime/panic.go:1116 +0x72 fp=0xc03989b860 sp=0xc03989b830 pc=0x1036712
    runtime.sigNoteSetup(0x3c1c160)
            /usr/local/go/src/runtime/os_darwin.go:98 +0xc5 fp=0xc03989b888 sp=0xc03989b860 pc=0x1033465
    os/signal.signal_enable(0xc6366d4000000002)
            /usr/local/go/src/runtime/sigqueue.go:198 +0xa5 fp=0xc03989b8a8 sp=0xc03989b888 pc=0x106aea5
    os/signal.enableSignal(...)
            /usr/local/go/src/os/signal/signal_unix.go:49
    os/signal.Notify.func1(0x2)
            /usr/local/go/src/os/signal/signal.go:144 +0x88 fp=0xc03989b8c8 sp=0xc03989b8a8 pc=0x25ee7c8
    os/signal.Notify(0xc03f72faa0, 0xc03989bae0, 0x2, 0x2)
            /usr/local/go/src/os/signal/signal.go:164 +0x162 fp=0xc03989b940 sp=0xc03989b8c8 pc=0x25ee1e2
    github.com/gohugoio/hugo/commands.(*commandeer).serve(0xc0004e8750, 0xc0004ed9c0, 0x160, 0x200)
            /root/project/hugo/commands/server.go:498 +0x625 fp=0xc03989bb60 sp=0xc03989b940 pc=0x2626ea5
    github.com/gohugoio/hugo/commands.(*serverCmd).server(0xc0004ed9c0, 0xc00028e580, 0x3c1bb78, 0x0, 0x0, 0x0, 0x0)
            /root/project/hugo/commands/server.go:274 +0x2b6 fp=0xc03989bca8 sp=0xc03989bb60 pc=0x2625676
    github.com/gohugoio/hugo/commands.(*serverCmd).server-fm(0xc00028e580, 0x3c1bb78, 0x0, 0x0, 0x0, 0x0)
            /root/project/hugo/commands/server.go:131 +0x52 fp=0xc03989bcf0 sp=0xc03989bca8 pc=0x2635ad2
    github.com/spf13/cobra.(*Command).execute(0xc00028e580, 0x3c1bb78, 0x0, 0x0, 0xc00028e580, 0x3c1bb78)
            /go/pkg/mod/github.com/spf13/cobra@v0.0.7/command.go:838 +0x47c fp=0xc03989bdc0 sp=0xc03989bcf0 pc=0x11efc7c
    github.com/spf13/cobra.(*Command).ExecuteC(0xc000337340, 0xc000316010, 0x8, 0xc000517600)
            /go/pkg/mod/github.com/spf13/cobra@v0.0.7/command.go:943 +0x336 fp=0xc03989be98 sp=0xc03989bdc0 pc=0x11f07b6
    github.com/gohugoio/hugo/commands.Execute(0xc00000c090, 0x1, 0x1, 0x1005fe5, 0xc000100058, 0x0, 0x0)
            /root/project/hugo/commands/hugo.go:90 +0xb9 fp=0xc03989bf28 sp=0xc03989be98 pc=0x2612519
    main.main()
            /root/project/hugo/main.go:23 +0x76 fp=0xc03989bf88 sp=0xc03989bf28 pc=0x2637216
    runtime.main()
            /usr/local/go/src/runtime/proc.go:204 +0x209 fp=0xc03989bfe0 sp=0xc03989bf88 pc=0x1038ee9
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1374 +0x1 fp=0xc03989bfe8 sp=0xc03989bfe0 pc=0x106eaa1

    goroutine 18 [select]:
    go.opencensus.io/stats/view.(*worker).start(0xc00058c050)
            /go/pkg/mod/go.opencensus.io@v0.22.0/stats/view/worker.go:154 +0x105
    created by go.opencensus.io/stats/view.init.0
            /go/pkg/mod/go.opencensus.io@v0.22.0/stats/view/worker.go:32 +0x57

    goroutine 6342 [select]:
    github.com/gohugoio/hugo/watcher.(*Batcher).run(0xc0402f6800)
            /root/project/hugo/watcher/batcher.go:53 +0x174
    created by github.com/gohugoio/hugo/watcher.New
            /root/project/hugo/watcher/batcher.go:42 +0x125

    goroutine 6341 [syscall]:
    syscall.syscall6(0x122fa40, 0x8, 0x0, 0x0, 0xc000506688, 0xa, 0x3c1be20, 0x0, 0x0, 0x0)
            /usr/local/go/src/runtime/sys_darwin.go:85 +0x2e
    golang.org/x/sys/unix.kevent(0x8, 0x0, 0x0, 0xc000506688, 0xa, 0x3c1be20, 0x0, 0x0, 0x0)
            /go/pkg/mod/golang.org/x/sys@v0.0.0-20200501145240-bc7a7d42d5c3/unix/zsyscall_darwin_amd64.go:292 +0xa6
    golang.org/x/sys/unix.Kevent(0x8, 0x0, 0x0, 0x0, 0xc000506688, 0xa, 0xa, 0x3c1be20, 0x0, 0x0, ...)
            /go/pkg/mod/golang.org/x/sys@v0.0.0-20200501145240-bc7a7d42d5c3/unix/syscall_bsd.go:413 +0x71
    github.com/fsnotify/fsnotify.read(0x8, 0xc000506688, 0xa, 0xa, 0x3c1be20, 0xc000506688, 0x0, 0xa, 0x0, 0x0)
            /go/pkg/mod/github.com/fsnotify/fsnotify@v1.4.9/kqueue.go:511 +0x6e
    github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc040255ec0)
            /go/pkg/mod/github.com/fsnotify/fsnotify@v1.4.9/kqueue.go:274 +0x831
    created by github.com/fsnotify/fsnotify.NewWatcher
            /go/pkg/mod/github.com/fsnotify/fsnotify@v1.4.9/kqueue.go:62 +0x199

    goroutine 6368 [select]:
    github.com/gohugoio/hugo/livereload.(*hub).run(0x3bde940)
            /root/project/hugo/livereload/hub.go:39 +0x1e9
    created by github.com/gohugoio/hugo/livereload.Initialize
            /root/project/hugo/livereload/livereload.go:98 +0x45

    goroutine 6328 [select]:
    github.com/gohugoio/hugo/commands.(*commandeer).newWatcher.func1(0xc0402f6800, 0xc0004e8750, 0xc030f9de40, 0xc041882360)
            /root/project/hugo/commands/hugo.go:873 +0xe5
    created by github.com/gohugoio/hugo/commands.(*commandeer).newWatcher
            /root/project/hugo/commands/hugo.go:871 +0x2ac
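For anyone reading the trace: the fatal error is thrown from runtime.sigNoteSetup, which the Go runtime runs the first time signals are enabled via os/signal.Notify (per the trace, Hugo calls Notify in commands/server.go:498 when starting the server). On darwin that setup creates an internal wakeup pipe, and the runtime aborts with “pipe failed” if pipe() returns an error. A minimal, hedged sketch of that call, not Hugo’s actual code and with the specific signals assumed:

    // signal_sketch.go - a hedged sketch of the call shown failing in the trace.
    // Not Hugo's code; the choice of Interrupt/SIGTERM is an assumption.
    package main

    import (
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        stop := make(chan os.Signal, 2)
        // On darwin, the first Notify call makes the Go runtime create an internal
        // wakeup pipe (runtime.sigNoteSetup). If the process cannot get another
        // file descriptor (e.g. too many are already in use), pipe() fails and the
        // runtime aborts with "fatal error: pipe failed" - the message above.
        signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
        <-stop // block until a signal arrives
    }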

Update

I found this thread which references a similar issue.

I can now get Hugo to run by using hugo server --watch=false instead of hugo server.
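For context on why disabling the watch helps: the goroutines in the trace show Hugo’s watcher built on fsnotify’s kqueue backend, which on macOS keeps an open file descriptor for each watched path. A minimal fsnotify sketch (not Hugo’s actual code; the content directory name is assumed) of the kind of watcher that --watch=false skips:

    // watcher_sketch.go - a rough sketch of a kqueue-backed fsnotify watcher,
    // similar in spirit to what hugo server sets up when watching is enabled.
    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher() // kqueue-backed on darwin
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // Each watched file/directory costs a descriptor on macOS; with
        // --watch=false no watcher like this is created at all.
        if err := w.Add("content"); err != nil {
            log.Fatal(err)
        }

        for {
            select {
            case ev := <-w.Events:
                log.Println("event:", ev.Name, ev.Op)
            case err := <-w.Errors:
                log.Println("error:", err)
            }
        }
    }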

Could you check which macOS version you’re running?

macOS Mojave, v10.14.6

Any updates on what may be causing this issue?

I have the same issue with v0.78.2.
macOS version: 10.15.7 Catalina
Go version: 1.15.2

We ran into this “fatal error: pipe failed” problem on macOS 10.14.6 running Hugo v0.79.0-1415EFDC/extended after adding a modest batch of content (422 files and 250 folders) to the Hugo content tree.

Similar to the OP, hugo server --watch=false worked as a workaround, but it hampered development. Notably, the same repo built successfully on Windows 10 on a machine with similar specs, so it felt like it might be a macOS-specific issue. On a whim we ran the following, and hugo server no longer gave the pipe failed fatal error:

    sudo launchctl limit maxfiles 65535 200000
    ulimit -n 65535
    sudo sysctl -w kern.maxfiles=100000
    sudo sysctl -w kern.maxfilesperproc=65535

Presumably, between the number of files being watched and the complexity of our templates, we simply ran up against the per-process open-file limit once we added more files than the default macOS limits allow.
It’s not clear that this is a general solution, but we’re sharing what worked for us.
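In case it helps others, here is a minimal Go sketch (not part of Hugo) for checking the per-process open-file limit that the ulimit/sysctl commands above raise; running it from the same shell before and after the change should show the difference.

    // limits_sketch.go - prints the soft and hard RLIMIT_NOFILE values for the
    // current process. Useful to confirm the new limits actually apply to the
    // shell that launches hugo.
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var rl syscall.Rlimit
        if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
            panic(err)
        }
        // On macOS the soft limit often defaults to 256, which a large watched
        // content tree can exhaust; the commands above raise it to 65535.
        fmt.Printf("open files: soft=%d hard=%d\n", rl.Cur, rl.Max)
    }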