mar 09 22:10:54 tux systemd-logind[929]: The system will suspend now!
mar 09 22:10:54 tux NetworkManager[924]: [1741554654.2390] manager: sleep: sleep requested (sleeping: no enabled: yes)
mar 09 22:10:54 tux warp-svc[14722]: 2025-03-09T21:10:54.239Z DEBUG actor_connectivity::connectivity: New power state Suspend
mar 09 22:10:54 tux warp-svc[14722]: 2025-03-09T21:10:54.239Z INFO handle_update{update=PowerStatusChanged}: actor_device_state: close time.busy=8.60µs time.idle=35.0µs
mar 09 22:10:54 tux warp-svc[14722]: 2025-03-09T21:10:54.239Z INFO handle_update{update=PowerStatusChanged}: actor_alternate_network: close time.busy=7.52µs time.idle=77.3µs
mar 09 22:10:54 tux NetworkManager[924]: [1741554654.2393] device (wlan0): state change: unavailable -> unmanaged (reason 'unmanaged-sleeping', managed-type: 'full')
mar 09 22:10:54 tux NetworkManager[924]: [1741554654.2398] device (wlan0): set-hw-addr: reset MAC address to 60:45:2E:19:C3:EB (unmanage)
mar 09 22:10:54 tux NetworkManager[924]: [1741554654.2404] manager: NetworkManager state is now ASLEEP
mar 09 22:10:54 tux warp-svc[14722]: 2025-03-09T21:10:54.244Z DEBUG main_loop: warp::warp_service: Entering main loop arm arm="main_loop_compat_rx"
mar 09 22:10:54 tux warp-svc[14722]: 2025-03-09T21:10:54.244Z DEBUG main_loop:handle_update{update=PowerStatusChanged}: warp::warp_service::actor_compat: Handling power status change new_power_status=Suspend
mar 09 22:10:54 tux warp-svc[14722]: 2025-03-09T21:10:54.244Z DEBUG main_loop:handle_update{update=PowerStatusChanged}: warp::warp_service::actor_compat: close time.busy=43.1µs time.idle=12.4µs
mar 09 22:10:54 tux systemd[1]: Reached target Sleep.
mar 09 22:10:54 tux systemd[1]: Starting NVIDIA system suspend actions...
mar 09 22:10:54 tux suspend[31799]: nvidia-suspend.service
mar 09 22:10:54 tux logger[31799]: <13>Mar 9 22:10:54 suspend: nvidia-suspend.service
mar 09 22:10:54 tux kernel: rfkill: input handler enabled
mar 09 22:10:54 tux gsd-media-keys[1813]: Unable to get default sink
mar 09 22:10:54 tux gsd-media-keys[1813]: Unable to get default source
mar 09 22:10:55 tux systemd[1]: nvidia-suspend.service: Deactivated successfully.
mar 09 22:10:55 tux systemd[1]: Finished NVIDIA system suspend actions.
mar 09 22:10:55 tux systemd[1]: nvidia-suspend.service: Consumed 777ms CPU time, 197.3M memory peak.
mar 09 22:10:55 tux systemd[1]: Starting System Suspend...
mar 09 22:10:55 tux systemd-sleep[31820]: User sessions remain unfrozen on explicit request ($SYSTEMD_SLEEP_FREEZE_USER_SESSIONS=0).
mar 09 22:10:55 tux systemd-sleep[31820]: This is not recommended, and might result in unexpected behavior, particularly
mar 09 22:10:55 tux systemd-sleep[31820]: in suspend-then-hibernate operations or setups with encrypted home directories.
mar 09 22:10:55 tux systemd-sleep[31820]: Performing sleep operation 'suspend'...
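
The systemd-sleep warning above is emitted when the SYSTEMD_SLEEP_FREEZE_USER_SESSIONS environment variable is set to 0 for the sleep units. As a hedged sketch only (the drop-in path and whether it was installed deliberately are assumptions; on some distributions the NVIDIA packages ship such an override themselves), the setting is normally applied through a systemd override along these lines:

    # systemctl edit systemd-suspend.service   -> creates an override drop-in
    [Service]
    # Ask systemd-sleep not to freeze user sessions before suspending
    Environment="SYSTEMD_SLEEP_FREEZE_USER_SESSIONS=0"

The same override is typically mirrored onto systemd-hibernate.service and systemd-suspend-then-hibernate.service so every sleep path behaves consistently.
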
mar 09 22:10:55 tux kernel: PM: suspend entry (deep)
mar 09 22:10:55 tux kernel: Filesystems sync: 0.050 seconds
mar 09 22:11:15 tux kernel: Freezing user space processes
mar 09 22:11:15 tux kernel: Freezing user space processes failed after 20.008 seconds (1 tasks refusing to freeze, wq_busy=0):
mar 09 22:11:15 tux kernel: task:gnome-shell state:R running task stack:0 pid:1731 tgid:1731 ppid:1558 flags:0x0000000e
mar 09 22:11:15 tux kernel: Call Trace:
mar 09 22:11:15 tux kernel:  <TASK>
mar 09 22:11:15 tux kernel: ? nv_drm_mmap+0xda/0x160 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:15 tux kernel: ? __mmap_region+0x745/0xb10
mar 09 22:11:15 tux kernel: ? mmap_region+0x78/0xa0
mar 09 22:11:15 tux kernel: ? do_mmap+0x499/0x690
mar 09 22:11:15 tux kernel: ? vm_mmap_pgoff+0xec/0x1c0
mar 09 22:11:15 tux kernel: ? ksys_mmap_pgoff+0x144/0x1e0
mar 09 22:11:15 tux kernel: ? do_syscall_64+0x82/0x190
mar 09 22:11:15 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:15 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:15 tux kernel: ? vma_node_allow+0xb9/0xf0
mar 09 22:11:15 tux kernel: ? drm_gem_handle_create_tail+0xb3/0x180
mar 09 22:11:15 tux kernel: ? nv_drm_gem_alloc_nvkms_memory_ioctl+0x12c/0x1a0 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:15 tux kernel: ? __pfx_nv_drm_gem_alloc_nvkms_memory_ioctl+0x10/0x10 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:15 tux kernel: ? drm_ioctl_kernel+0xad/0x100
mar 09 22:11:15 tux kernel: ? __check_object_size+0x50/0x210
mar 09 22:11:15 tux kernel: ? drm_ioctl+0x2a1/0x4d0
mar 09 22:11:15 tux kernel: ? __pfx_nv_drm_gem_alloc_nvkms_memory_ioctl+0x10/0x10 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:15 tux kernel: ? sched_mm_cid_remote_clear+0x4f/0xe0
mar 09 22:11:15 tux kernel: ? __rseq_handle_notify_resume+0xa2/0x4d0
mar 09 22:11:15 tux kernel: ? switch_fpu_return+0x4e/0xd0
mar 09 22:11:15 tux kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x79/0x90
mar 09 22:11:15 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:15 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:15 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:15 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:15 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:15 tux kernel: ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
mar 09 22:11:15 tux kernel:  </TASK>
mar 09 22:11:15 tux kernel: OOM killer enabled.
mar 09 22:11:15 tux kernel: Restarting tasks ... done.
mar 09 22:11:15 tux kernel: random: crng reseeded on system resumption
mar 09 22:11:15 tux kernel: PM: suspend exit
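
The attempt above used the "deep" suspend state and was aborted because gnome-shell (pid 1731) would not freeze while inside an nvidia_drm mmap path; systemd-sleep then appears to move on to the next state it is configured to try, which is why a second "PM: suspend entry (s2idle)" follows immediately. As a hedged aside (these are standard sysfs and sleep.conf locations, but whether they matter on this machine is an assumption), the available states can be checked with:

    # Sleep states the kernel advertises, e.g. "freeze mem disk"
    cat /sys/power/state
    # Which variant "mem" maps to; the bracketed entry is active, e.g. "s2idle [deep]"
    cat /sys/power/mem_sleep
    # The ordered list of states systemd-sleep will try
    grep SuspendState /etc/systemd/sleep.conf
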
mar 09 22:11:15 tux kernel: PM: suspend entry (s2idle)
mar 09 22:11:15 tux rtkit-daemon[1166]: The canary thread is apparently starving. Taking action.
mar 09 22:11:15 tux warp-svc[14722]: 2025-03-09T21:11:15.360Z WARN watchdog: warp::watchdog: Watchdog reports hung daemon watchdog_name="main loop" hang_count=1 hang_tick=1
mar 09 22:11:15 tux rtkit-daemon[1166]: Demoting known real-time threads.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 2079 of process 2051.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 2051 of process 2051.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 2083 of process 2049.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 2049 of process 2049.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 2073 of process 2048.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 2048 of process 2048.
mar 09 22:11:15 tux rtkit-daemon[1166]: Successfully demoted thread 1746 of process 1731.
mar 09 22:11:15 tux rtkit-daemon[1166]: Demoted 7 threads.
mar 09 22:11:35 tux kernel: Filesystems sync: 0.011 seconds
mar 09 22:11:35 tux kernel: Freezing user space processes
mar 09 22:11:35 tux kernel: Freezing user space processes failed after 20.005 seconds (2 tasks refusing to freeze, wq_busy=0):
mar 09 22:11:35 tux kernel: task:gnome-shell state:R running task stack:0 pid:1731 tgid:1731 ppid:1558 flags:0x0000400e
mar 09 22:11:35 tux kernel: Call Trace:
mar 09 22:11:35 tux kernel:  <TASK>
mar 09 22:11:35 tux kernel: ? jiffies_to_timespec64+0x24/0x40
mar 09 22:11:35 tux kernel: ? os_get_current_tick+0x3b/0xa0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv011586rm+0x119/0x280 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv038389rm+0x8a/0xd0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv038592rm+0x107/0x360 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv032741rm+0xed/0x1f0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv032741rm+0xbd/0x1f0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv032709rm+0x6ed/0x1240 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv024647rm+0x139d/0x1e80 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv013134rm+0x161/0x290 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv037336rm+0x1e5/0x4a0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv037336rm+0x18f/0x4a0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv041111rm+0xb64/0xf00 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv053233rm+0x28a/0x3a0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv051240rm+0xfd/0x160 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv051238rm+0x5c/0x90 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv051238rm+0x32/0x90 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv013395rm+0x64/0xa0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? _nv013395rm+0x28/0xa0 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? rm_kernel_rmapi_op+0x92/0x273 [nvidia b16fc6fcdba4f4243ffe75c81b4914b42f067c2d]
mar 09 22:11:35 tux kernel: ? nvkms_call_rm+0x4a/0x80 [nvidia_modeset 10b11162e18a7478e606c131b426b2e381b3ecc6]
mar 09 22:11:35 tux kernel: ? _nv003120kms+0x4c/0x60 [nvidia_modeset 10b11162e18a7478e606c131b426b2e381b3ecc6]
mar 09 22:11:35 tux kernel: ? _nv000585kms+0xb4/0x110 [nvidia_modeset 10b11162e18a7478e606c131b426b2e381b3ecc6]
mar 09 22:11:35 tux kernel: ? _nv000585kms+0x8e/0x110 [nvidia_modeset 10b11162e18a7478e606c131b426b2e381b3ecc6]
mar 09 22:11:35 tux kernel: ? __nv_drm_gem_nvkms_map+0x6c/0xd0 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:35 tux kernel: ? __nv_drm_gem_nvkms_mmap+0x16/0x40 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:35 tux kernel: ? nv_drm_mmap+0xda/0x160 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:35 tux kernel: ? __mmap_region+0x745/0xb10
mar 09 22:11:35 tux kernel: ? mmap_region+0x78/0xa0
mar 09 22:11:35 tux kernel: ? do_mmap+0x499/0x690
mar 09 22:11:35 tux kernel: ? vm_mmap_pgoff+0xec/0x1c0
mar 09 22:11:35 tux kernel: ? ksys_mmap_pgoff+0x144/0x1e0
mar 09 22:11:35 tux kernel: ? do_syscall_64+0x82/0x190
mar 09 22:11:35 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:35 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:35 tux kernel: ? vma_node_allow+0xb9/0xf0
mar 09 22:11:35 tux kernel: ? drm_gem_handle_create_tail+0xb3/0x180
mar 09 22:11:35 tux kernel: ? nv_drm_gem_alloc_nvkms_memory_ioctl+0x12c/0x1a0 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:35 tux kernel: ? __pfx_nv_drm_gem_alloc_nvkms_memory_ioctl+0x10/0x10 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:35 tux kernel: ? drm_ioctl_kernel+0xad/0x100
mar 09 22:11:35 tux kernel: ? __check_object_size+0x50/0x210
mar 09 22:11:35 tux kernel: ? drm_ioctl+0x2a1/0x4d0
mar 09 22:11:35 tux kernel: ? __pfx_nv_drm_gem_alloc_nvkms_memory_ioctl+0x10/0x10 [nvidia_drm 83070fb9976dd254908cf431268b2e32d17e9adf]
mar 09 22:11:35 tux kernel: ? sched_mm_cid_remote_clear+0x4f/0xe0
mar 09 22:11:35 tux kernel: ? __rseq_handle_notify_resume+0xa2/0x4d0
mar 09 22:11:35 tux kernel: ? switch_fpu_return+0x4e/0xd0
mar 09 22:11:35 tux kernel: ? arch_exit_to_user_mode_prepare.isra.0+0x79/0x90
mar 09 22:11:35 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:35 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:35 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:35 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:35 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:35 tux kernel: ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
mar 09 22:11:35 tux kernel:  </TASK>
mar 09 22:11:35 tux kernel: task:pool-85 state:D stack:0 pid:31798 tgid:1731 ppid:1558 flags:0x00000006
mar 09 22:11:35 tux kernel: Call Trace:
mar 09 22:11:35 tux kernel:  <TASK>
mar 09 22:11:35 tux kernel: __schedule+0x425/0x12b0
mar 09 22:11:35 tux kernel: ? __pfx_futex_wake_mark+0x10/0x10
mar 09 22:11:35 tux kernel: schedule+0x27/0xf0
mar 09 22:11:35 tux kernel: schedule_preempt_disabled+0x15/0x30
mar 09 22:11:35 tux kernel: rwsem_down_read_slowpath+0x26f/0x4e0
mar 09 22:11:35 tux kernel: down_read+0x48/0xa0
mar 09 22:11:35 tux kernel: do_madvise+0xfe/0x420
mar 09 22:11:35 tux kernel: __x64_sys_madvise+0x2b/0x40
mar 09 22:11:35 tux kernel: do_syscall_64+0x82/0x190
mar 09 22:11:35 tux kernel: ? __x64_sys_rt_sigprocmask+0xdb/0x150
mar 09 22:11:35 tux kernel: ? syscall_exit_to_user_mode+0x37/0x1c0
mar 09 22:11:35 tux kernel: ? do_syscall_64+0x8e/0x190
mar 09 22:11:35 tux kernel: ? do_user_addr_fault+0x36c/0x620
mar 09 22:11:35 tux kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
mar 09 22:11:35 tux kernel: RIP: 0033:0x7c10a1d23aeb
mar 09 22:11:35 tux kernel: RSP: 002b:00007c103e3fc888 EFLAGS: 00000206 ORIG_RAX: 000000000000001c
mar 09 22:11:35 tux kernel: RAX: ffffffffffffffda RBX: 00007c103e3fecdc RCX: 00007c10a1d23aeb
mar 09 22:11:35 tux kernel: RDX: 0000000000000004 RSI: 00000000007fa000 RDI: 00007c103dbfe000
mar 09 22:11:35 tux kernel: RBP: 00007c103e3fc930 R08: 00007c103e3fe6c0 R09: 0000000000000002
mar 09 22:11:35 tux kernel: R10: 0000000000000008 R11: 0000000000000206 R12: 00007c103dbfe000
mar 09 22:11:35 tux kernel: R13: 0000000000801000 R14: 0000000000000000 R15: 00007c109a9fd530
mar 09 22:11:35 tux kernel:  </TASK>
mar 09 22:11:35 tux kernel: OOM killer enabled.
mar 09 22:11:35 tux kernel: Restarting tasks ... done.
mar 09 22:11:35 tux kernel: random: crng reseeded on system resumption
mar 09 22:11:35 tux kernel: PM: suspend exit
mar 09 22:11:35 tux warp-svc[14722]: 2025-03-09T21:11:35.385Z WARN watchdog: warp::watchdog: Watchdog reports hung daemon watchdog_name="main loop" hang_count=2 hang_tick=1
mar 09 22:11:35 tux systemd-sleep[31820]: Failed to put system to sleep. System resumed again: Device or resource busy
mar 09 22:11:36 tux warp-svc[14722]: 2025-03-09T21:11:36.763Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:11:36 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:11:36 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.2;
mar 09 22:11:54 tux warp-svc[14722]: 2025-03-09T21:11:54.043Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:11:54 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.2;
mar 09 22:11:54 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:11:54 tux warp-svc[14722]: 2025-03-09T21:11:54.045Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:11:54 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:11:54 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.2;
mar 09 22:12:04 tux warp-svc[14722]: 2025-03-09T21:12:04.079Z DEBUG collect_device_state: actor_device_state::collector: close time.busy=1.81µs time.idle=2.90µs
mar 09 22:12:38 tux warp-svc[14722]: 2025-03-09T21:12:38.203Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:12:38 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.2;
mar 09 22:12:38 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:12:53 tux warp-svc[14722]: 2025-03-09T21:12:53.990Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:12:53 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:12:54 tux warp-svc[14722]: 2025-03-09T21:12:53.992Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:12:54 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:13:05 tux systemd[1]: systemd-suspend.service: Main process exited, code=exited, status=1/FAILURE
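
Both attempts failed the same way: a task stuck in the nvidia_drm/nvidia call chain kept user space from freezing, so systemd-suspend.service exits with status 1/FAILURE. As a hedged diagnostic sketch only (the helper units and the module parameter are real NVIDIA mechanisms, but whether they relate to this particular hang is an assumption), it is worth confirming that the driver's suspend helpers and video-memory preservation are in effect:

    # Helper units shipped with the NVIDIA driver for suspend/resume
    systemctl status nvidia-suspend.service nvidia-resume.service nvidia-hibernate.service
    # Current value of PreserveVideoMemoryAllocations as the driver sees it
    grep -i preserve /proc/driver/nvidia/params
    # The module parameter behind it (often set via /etc/modprobe.d/)
    modinfo -p nvidia | grep -i preserve
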
mar 09 22:13:37 tux warp-svc[14722]: 2025-03-09T21:13:37.936Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:13:37 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:13:53 tux warp-svc[14722]: 2025-03-09T21:13:53.936Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:13:53 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:13:53 tux warp-svc[14722]: 2025-03-09T21:13:53.938Z DEBUG actor_connectivity::connectivity: Routes changed:
mar 09 22:13:53 tux warp-svc[14722]: NewNeighbour; Destination: 192.168.1.1;
mar 09 22:14:04 tux warp-svc[14722]: 2025-03-09T21:14:04.079Z DEBUG collect_device_state: actor_device_state::collector: close time.busy=1.94µs time.idle=2.88µs
mar 09 22:14:25 tux kernel: [drm:__nv_drm_gem_nvkms_map [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to map NvKmsKapiMemory 0x00000000a13fb1a7
mar 09 22:14:25 tux gnome-shell[1731]: Realizing HW cursor failed: Failed write to gbm_bo: Cannot allocate memory
mar 09 22:14:25 tux gnome-shell[1731]: Failed to set hardware cursor (Failed write to gbm_bo: Cannot allocate memory), using OpenGL from now on
-- Boot aa2f32d4c5cc4454b187bd1db45cac98 --
mar 09 22:14:25 tux warp-taskbar[14310]: 2025-03-09T21:14:25.915Z INFO ThreadId(28) warp_taskbar::ipc: Received status status=Unable(NoNetwork)