Actix vs Fiber: what a difference a tweak makes

Introduction

In my previous post I compared the performance of Actix with Rust and Fiber with Go. However, the comparison was not entirely fair, since, for one, I did not use connection pooling on the Fiber side. Let us see whether two simple tweaks can improve the performance considerably.

Test setup

To keep things simple I used the same setup as in my previous article.

First tweak: increasing the number of replicas in the cluster

The first tweak was quite simple: just increase the number of replicas. This is done easily in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: <your dockerhub username>/<your dockerhub image>:<your dockerhub tag>
          imagePullPolicy: IfNotPresent

          env:
            - name: "host"
              valueFrom:
                configMapKeyRef:
                  key: HOST
                  name: db-secret-credentials
            - name: "user"
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_USER
                  name: db-secret-credentials
            - name: "password"
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_PASSWORD
                  name: db-secret-credentials
            - name: "dbname"
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_DB
                  name: db-secret-credentials
            - name: "port"
              valueFrom:
                configMapKeyRef:
                  key: PORT
                  name: db-secret-credentials


You can see that the number of replicas has been increased to 5. Still, each instance only gets one database connection, which leads to our second tweak.
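
If you want to check how many connections the database actually sees, one way is to count the server-side sessions in Postgres. Below is a minimal standalone sketch, not part of the original code; it assumes the InitializeDatabaseConnection() helper from the previous article is available in the same package, and the query itself just reads the standard pg_stat_activity view:

package main

import (
	"fmt"
	"log"
)

func main() {
	// Assumes InitializeDatabaseConnection() from db.go lives in this package.
	db, err := InitializeDatabaseConnection()
	if err != nil {
		log.Fatal(err)
	}

	// Count the sessions Postgres currently has open for this database.
	// If each replica really holds a single connection, this should hover
	// around the number of replicas (plus the session of this query itself).
	var sessions int64
	if err := db.Raw(
		"SELECT count(*) FROM pg_stat_activity WHERE datname = current_database()",
	).Scan(&sessions).Error; err != nil {
		log.Fatal(err)
	}
	fmt.Println("open sessions:", sessions)
}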

Second tweak: increasing the number of database connections

The second tweak is also quite simple: increase the number of database connections and use a form of connection pooling. Information about these functions can be found here.

All we need to do is change the InitializeDatabaseConnection() function in the ‘db.go’ file:

func InitializeDatabaseConnection() (*gorm.DB, error) {
	dsn := ConstructDsn()
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("Recovered in InitializeDatabaseConnection")
		}
	}()
	// Get the underlying *sql.DB so we can configure the connection pool.
	connection, err := db.DB()
	if err != nil {
		return nil, err
	}
	// Allow up to 10 open connections, keep 5 of them idle and ready for
	// reuse, and recycle every connection after an hour.
	connection.SetMaxOpenConns(10)
	connection.SetMaxIdleConns(5)
	connection.SetConnMaxLifetime(time.Hour)
	return db, nil
}

The block with the connection.SetMaxOpenConns(), connection.SetMaxIdleConns() and connection.SetConnMaxLifetime() calls is what has been added.
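
To give an idea of how the pooled connection is used, here is a minimal sketch of wiring the returned *gorm.DB into Fiber. This is not the original benchmark code: the User model, the /users and /dbstats routes and port 3000 are made up for the example, and it again assumes InitializeDatabaseConnection() is in the same package. sqlDB.Stats() is the standard database/sql way to inspect the pool at runtime.

package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

// User is a made-up model for this sketch; substitute whatever model the
// benchmark application actually queries.
type User struct {
	ID   uint   `gorm:"primaryKey"`
	Name string
}

func main() {
	// InitializeDatabaseConnection() now hands back a *gorm.DB backed by a
	// pool of at most 10 open connections.
	db, err := InitializeDatabaseConnection()
	if err != nil {
		log.Fatal(err)
	}

	app := fiber.New()

	// Every concurrent request borrows a connection from the pool and
	// returns it as soon as the query finishes.
	app.Get("/users", func(c *fiber.Ctx) error {
		var users []User
		if result := db.Find(&users); result.Error != nil {
			return c.Status(fiber.StatusInternalServerError).SendString(result.Error.Error())
		}
		return c.JSON(users)
	})

	// Optional: expose the pool statistics (open, idle and in-use
	// connections) so you can watch the pool during a load test.
	app.Get("/dbstats", func(c *fiber.Ctx) error {
		sqlDB, err := db.DB()
		if err != nil {
			return c.Status(fiber.StatusInternalServerError).SendString(err.Error())
		}
		return c.JSON(sqlDB.Stats())
	})

	log.Fatal(app.Listen(":3000"))
}

Hitting the /dbstats endpoint while the load test runs is a quick way to see whether all 10 connections are actually being used.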

Results (or no results)

So, what happens in the test? Let’s start with 20 users, like in the previous article:

Well, not that much of an improvement; perhaps we will see more of one with 100 users:

Wow, here we see an improvement of around 30% in the number of requests: in our previous test it was around 366 RPS, here it is over 450. What is more important is that the performance seems to be stable.

Time to put on a bit more pressure with 250 users.

Again, the performance is much better than in our previous test, where we had around 339 RPS, and it is also very stable, unlike the performance we saw back then. On top of that there were no failures, another improvement.

Now it is time for the big test: 1000 users:

Only a small fall in the RPS number, but still about 30% better than in our previous test. Also, as per usual, very stable performance. Sadly, however, there were some failures:

Well, one failure. I let it run for around 50,000 requests, and all I got was one failure. Not bad.

Conclusion

It looks like the performance improved on all fronts: requests per second, overall stability and the number of failures.

To make this more definitive, more use cases need to be explored. But it looks like these two simple tweaks made Fiber a worthy competitor for Actix when it comes to performance.
