• 1 Post
  • 13 Comments
Joined 2 years ago
Cake day: December 2nd, 2023


  • Sure thing, I’ll edit this reply when I get back to my computer. Just note that I also have a tailscale and nginx container in the pod which are not necessary.

    You’ll see my nginx config, which reverse proxies to the port the service is running on. On public servers I have another nginx running with SSL that proxies to the host port I map the pod’s port 80 to.

    I usually run my pods as an unprivileged user with loginctl enable-linger, which starts the enabled systemctl --user services on boot.
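
    A minimal sketch of that part, in case it helps:

    # allow this user's systemd --user services to start at boot without a login session
    sudo loginctl enable-linger "$USER"

    # verify lingering is on
    loginctl show-user "$USER" --property=Linger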

    All that being said, I haven’t publicly exposed Linkwarden yet, mainly because it’s the second most resource-intensive service I run and I have all my public stuff on a shitty VPS.

    Edit: My opsec is so bad hahaha

    Edit2: I just realized the caps I gave were to the tailscale container, not the linkwarden container. Linkwarden can run with no caps :)

    I added the tailscale stuff back

    files:

    linkwarden-pod.kube:

    [Install]
    WantedBy=default.target
    
    [Kube]
    # Point to the yaml in the same directory
    Yaml=linkwarden-pod.yml
    PublishPort=127.0.0.1:7777:80
    AutoUpdate=registry
    
    [Service]
    Restart=always
    

    linkwarden-pod.yml:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: linkwarden
    spec:
      containers:
        - name: ts-linkwarden
          image: docker.io/tailscale/tailscale:latest
          env:
            - name: TS_HOSTNAME
              value: "link"
            - name: TS_STATE_DIR
              value: /var/lib/tailscale
            - name: TS_AUTHKEY
              valueFrom:
                secretKeyRef:
                  name: ts-auth-kube
                  key: ts-auth
          volumeMounts:
            - name: linkwarden-ts-storage
              mountPath: /var/lib/tailscale
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - SYS_MODULE
    
        - name: linkwarden
          image: ghcr.io/linkwarden/linkwarden:latest
          env:
            - name: INSTANCE_NAME
              value: link.mydomain.com
            - name: AUTH_URL
              value: http://linkwarden:3000/api/v1/auth
            - name: NEXTAUTH_SECRET
              value: LOL_I_JUST_PUBLISHED_THIS_I_CHANGED_IT
            - name: DATABASE_URL
              value: postgresql://postgres:password@linkwarden-postgres:5432/postgres
            - name: NEXT_PUBLIC_DISABLE_REGISTRATION
              value: "true"
    
        - name: linkwarden-nginx
          image: docker.io/library/nginx:alpine
          volumeMounts:
            - name: linkwarden-nginx-conf
              subPath: nginx.conf
              mountPath: /etc/nginx/nginx.conf
              readOnly: true
    
        - name: linkwarden-postgres
          image: docker.io/library/postgres:latest
          env:
            - name: POSTGRES_PASSWORD
              value: "password"
          volumeMounts:
            - name: linkwarden-postgres-db
              mountPath: /var/lib/postgresql/data
    
      volumes:
        - name: linkwarden-nginx-conf
          configMap:
            name: linkwarden-nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
        - name: linkwarden-postgres-db
          persistentVolumeClaim:
            claimName: linkwarden-postgres-db-claim
        - name: linkwarden-ts-storage
          persistentVolumeClaim:
            claimName: linkwarden-ts-pv-claim
    
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: linkwarden-nginx-conf
    data:
      nginx.conf: |
        #user  nobody;
        worker_processes  1;
         #pid        logs/nginx.pid;
    
    
        events {
            worker_connections  1024;
        }
    
    
        http {
            include       mime.types;
            default_type  application/octet-stream;
    
    
            sendfile        on;
    
            #keepalive_timeout  0;
            keepalive_timeout  65;
    
            gzip  off;
    
            # set_real_ip_from cw.55.55.1;
            real_ip_header X-Forwarded-For;
            real_ip_recursive on;
    
            server {
                listen       80;
                server_name  _;
    
                location / {
                        proxy_pass http://localhost:3000/;
    
                        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                        proxy_set_header X-Forwarded-Port $server_port;
                        proxy_set_header X-Forwarded-Scheme $scheme;
                        proxy_set_header X-Forwarded-Proto  $scheme;
                        proxy_set_header X-Real-IP $remote_addr;
                        proxy_set_header Accept-Encoding "";
                        proxy_set_header Host $host;
                }
            }
        }
    
    

    I also have a little helper script you might like

    copy.sh:

    #!/bin/bash
    
    SYSTEMD_DIRECTORY="${HOME}/.config/containers/systemd"
    POD_NAME="linkwarden-pod"
    
    mkdir -p "$SYSTEMD_DIRECTORY"
    cp "${POD_NAME}".{kube,yml} "${SYSTEMD_DIRECTORY}"/
    
    systemctl --user daemon-reload
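
    Once the files are copied, starting it is just the generated unit (quadlet names the service after the .kube file, so this assumes it comes out as linkwarden-pod.service):

    ./copy.sh
    systemctl --user start linkwarden-pod.service
    systemctl --user status linkwarden-pod.service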
    
    





  • I’m pretty clueless at this point. I’ve never seen anything like it. Apologies for not actually being able to help out. Hopefully it will be resolved soon, though.

    Right? It’s super strange. On a normal system I would try reinstalling GRUB, maybe manually regenerating the initramfs, maybe compiling a new kernel, but idk if I can do that here.
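
    The closest ostree equivalent I can think of is regenerating the initramfs through rpm-ostree, which queues a new deployment instead of touching the current one (just a sketch):

    # build the initramfs locally for the next deployment
    rpm-ostree initramfs --enable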

    This is probably not the advice you’d like to hear, but I wonder if rebasing to uBlue’s Kinoite would make any difference. Regardless, wish you the best!

    That’s actually really interesting, maybe getting away from the Fedora ecosystem would help. I do think uBlue is downstream, but it wouldn’t hurt to try.
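
    For reference, the rebase would be a one-liner along these lines (the image name/tag is from memory, so double-check uBlue’s docs before running it):

    rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/kinoite-main:latest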


  • Perhaps I should have been clearer and more explicit: I did boot twice. So yeah, it’s not showing anything from the “first” broken deployment. Just to be empirical:

    • I ran date > time.txt && reboot
    • Tried to boot the new deployment, waited about 10 min
    • Reset and booted into the working deployment
    • Ran journalctl -b -1; the last entry was 2 seconds after the date output in step one.

    It is not even getting to systemd.
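
    In command form, the whole check boils down to something like this (a sketch):

    date > time.txt && reboot
    # ...let the new deployment hang, reset, boot the working one...
    journalctl --list-boots            # does the hung boot even register?
    journalctl -b -1 -n 1 --no-pager   # newest entry from the previous boot
    cat time.txt                       # compare the timestamps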

    I tried uninstalling the packages and rebasing to Silverblue. Nothing changed. I think I’m just going to put a pin in it and hope that the next update fixes something haha

    speaking of pins

    If you’re afraid of losing your working deployment, be sure to invoke the sudo ostree admin pin 1 command to pin it.

    This is so important and the first thing I did. I got screwed a few years ago when I first tried Silverblue. Everyone talks about how you can just roll back; no one ever mentions that you can lose your last working deployment hahah. Thanks though
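
    For anyone following along, pinning looks like this (the index matches the order deployments are listed in rpm-ostree status):

    sudo ostree admin pin 1           # pin the deployment at index 1
    rpm-ostree status                 # pinned deployments show "Pinned: yes"
    sudo ostree admin pin --unpin 1   # drop the pin later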


  • No Nvidia, just Intel integrated graphics. It’s a Dell Inspiron 7500.

    My system isn’t pristine but it’s not that bad, and I don’t see how any of the packages would cause this problem. I can upgrade fine, there are no conflicts. I could see it messing up if I tried to layer a different bootloader, but it’s nothing like that. Here are my layered packages anyway:

    LayeredPackages: alacritty bat btop distrobox fish flatpak-builder kpeoplevcard light lsd mako neovim parallel python3-neovim shellcheck swaybg swayfx swayidle swaylock syncthing
                               tailscale tmux virt-manager waybar wlogout wlsunset wofi
    

    journalctl shows nothing. That’s what I meant by no logs; I should have been clearer. It’s not even getting that far. It’s like it’s stuck in GRUB, but only when I try the new kernel.

    The only deviations from the vanilla kargs have been for troubleshooting this, and I made them in the GRUB editor so they aren’t persistent. I tried removing rhgb, changing quiet to verbose, and adding debug and loglevel=7 just to see if anything would happen. It still just hangs.
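
    If you ever want those debug kargs to persist across reboots, rpm-ostree can bake them into a new deployment instead of the one-off GRUB edit (a sketch):

    rpm-ostree kargs --delete=rhgb --delete=quiet --append=loglevel=7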