
clients report: Already Closed #1997

Open
angelAtSequent opened this issue Jun 14, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@angelAtSequent

What happened

immudb running with its storage backend on S3 reports "Already Closed", so clients cannot connect to the database.
Tested from multiple clients, including the web interface:

[screenshot: web interface showing the "Already Closed" error]

Some commands still work:

./immuadmin -a immudb-grpc database create helloissue
database 'helloissue' {replica: false} successfully created

Others don't:

./immuadmin -a immudb-grpc database list 
Error: rpc error: code = Unknown desc = already closed
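To gather more context on the underlying failure, the server-side logs are usually more informative than the gRPC error surfaced to clients. A sketch, assuming the server runs in Kubernetes as a StatefulSet named `immudb` in the current namespace (as in the manifest posted later in this thread):

```shell
# Tail recent server logs; the "already closed" gRPC error normally has a
# more detailed cause (e.g. an S3 read/write failure) logged server-side.
kubectl logs statefulset/immudb --tail=200

# Restart events can also hint at crashes or probe failures:
kubectl describe pod -l app.kubernetes.io/name=immudb | grep -A5 'Last State'
```

The label selector above mirrors the one in the StatefulSet manifest; adjust it if your deployment differs.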

What you expected to happen

No error message at all

How to reproduce it (as minimally and precisely as possible)

Not sure

Environment

immudb 1.9.3
Commit  : 5487dd300655083ff68c4a72c2edb38ca84dd1bb
Built at: Thu, 23 May 2024 10:07:40 UTC
Static  : true

Additional info (any other context about the problem)
As usual, it was running fine on S3; eventually it stopped working.

@angelAtSequent angelAtSequent added the bug Something isn't working label Jun 14, 2024
@angelAtSequent angelAtSequent changed the title Already Closed clients reports: Already Closed Jun 14, 2024
@angelAtSequent angelAtSequent changed the title clients reports: Already Closed clients report: Already Closed Jun 14, 2024
@ostafen
Collaborator

ostafen commented Jun 14, 2024

Hey, @angelAtSequent, I tried to execute the sequence of commands using MinIO, and the error didn't appear.
Could you provide more context, such as the configuration used? Did the error appear after upgrading to 1.9.3?
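For reference, a local reproduction attempt against MinIO might look like the following. This is a sketch, not a confirmed reproduction recipe: the credentials, bucket name, and endpoint are placeholders, the bucket must exist before immudb starts, and the environment variable names are taken from the reporter's StatefulSet manifest below.

```shell
# Start MinIO as a local S3-compatible backend (placeholder credentials).
docker run -d --name minio -p 9000:9000 \
  -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data

# Start immudb 1.9.3 pointed at MinIO; env vars mirror the reporter's setup.
# The "immudb" bucket must be created in MinIO first.
docker run -d --name immudb -p 3322:3322 \
  -e IMMUDB_S3_STORAGE=true \
  -e IMMUDB_S3_ACCESS_KEY_ID=minioadmin \
  -e IMMUDB_S3_SECRET_KEY=minioadmin \
  -e IMMUDB_S3_BUCKET_NAME=immudb \
  -e IMMUDB_S3_PATH_PREFIX=immudb \
  -e IMMUDB_S3_ENDPOINT=http://host.docker.internal:9000 \
  codenotary/immudb:1.9.3

# Exercise the same commands the reporter used:
./immuadmin database create helloissue
./immuadmin database list
```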

@angelAtSequent
Author

This is what the StatefulSet looks like:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: immudb
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: immudb
      app.kubernetes.io/name: immudb
  serviceName: immudb
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: immudb
        app.kubernetes.io/name: immudb
    spec:
      containers:
      - env:
        - name: IMMUDB_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: adminPassword
              name: immudb
        - name: IMMUDB_S3_STORAGE
          value: "true"
        - name: IMMUDB_S3_EXTERNAL_IDENTIFIER
          value: "true"
        - name: IMMUDB_S3_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: AWS_ACCESS_KEY_ID
              name: immudb-s3-backend
        - name: IMMUDB_S3_SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: AWS_SECRET_ACCESS_KEY
              name: immudb-s3-backend
        - name: IMMUDB_S3_BUCKET_NAME
          valueFrom:
            secretKeyRef:
              key: BUCKET_NAME
              name: immudb-s3-backend
        - name: IMMUDB_S3_LOCATION
          valueFrom:
            secretKeyRef:
              key: AWS_REGION
              name: immudb-s3-backend
        - name: IMMUDB_S3_PATH_PREFIX
          value: immudb
        - name: IMMUDB_S3_ENDPOINT
          valueFrom:
            secretKeyRef:
              key: AWS_S3_ENDPOINT
              name: immudb-s3-backend
        image: codenotary/immudb:1.9.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 9
          httpGet:
            path: /readyz
            port: metrics
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: immudb
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 3322
          name: grpc
          protocol: TCP
        - containerPort: 9497
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: metrics
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        securityContext:
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/immudb
          name: immudb-storage
          subPath: immudb
        - mountPath: /mnt/secrets
          name: secrets-store-inline
          readOnly: true
        - mountPath: /mnt/s3-backend-secrets
          name: s3-backend-secrets-store-inline
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 3322
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 3322
        runAsNonRoot: true
        runAsUser: 3322
      serviceAccount: immudb
      serviceAccountName: immudb
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          sizeLimit: 5Gi
        name: immudb-storage
      - csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: immudb
        name: secrets-store-inline
      - csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: immudb-s3-backend
        name: s3-backend-secrets-store-inline
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
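One detail worth noting in the manifest above: `/var/lib/immudb` is backed by an `emptyDir`, so any local state (indexes, caches) is discarded whenever the pod is rescheduled, leaving S3 as the only durable store. A quick way to inspect what the pod actually keeps locally (a sketch, assuming the same StatefulSet name and mount path as above):

```shell
# List local immudb state inside the running pod; with an emptyDir this
# content does not survive pod deletion or rescheduling.
kubectl exec statefulset/immudb -- ls -la /var/lib/immudb
```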

I would say yes, it was updated from 1.9DOM2, but I'm not 100% sure.

@angelAtSequent
Author

I tried downgrading from 1.9.3 to 1.9DOM2 and it "works" again.

The downside: resource utilization is huge on 1.9DOM2.
With a few databases (mostly empty) it consumes 12 GB of RAM and 3 cores.
Is this resource consumption expected?

Does it make sense to dump all the databases while running on 1.9DOM2, then spin up a fresh 1.9.3 server and restore them?

Thanks
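A possible shape for that migration, assuming immuadmin's `hot-backup`/`hot-restore` commands are available on both versions (the server addresses and database name below are placeholders; verify the exact flags with `immuadmin --help` on your version):

```shell
# On the old 1.9DOM2 server: dump a database to a file via stdout.
./immuadmin -a old-immudb login immudb
./immuadmin -a old-immudb hot-backup helloissue > helloissue.dump

# On the fresh 1.9.3 server: restore the dump via stdin.
./immuadmin -a new-immudb login immudb
./immuadmin -a new-immudb hot-restore helloissue < helloissue.dump
```

Each database would need to be dumped and restored individually.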

@angelAtSequent
Author

> Does it make sense to dump all the databases while running on 1.9DOM2, then spin up a fresh 1.9.3 server and restore them?

> Is this resource consumption expected?

WDYT @ostafen
