
howmuch

A Tricount-like expense-sharing system written in Go


It is a personal project to learn Go and related technologies.


Gitea action

Project Diary

2024/09/30

The idea comes from a discussion with my mom. I was thinking about doing some personal budget management thing, but she brought up the expense-sharing application, which could be a good idea. I explained why it was a terrible idea and had no value, but in fact it was really a good idea.

First I have to set up a web server. I'm thinking about using gin, since I have played with chi in other projects.

Then I have to add some basic support functions like system logging, versioning, and other stuff.

Next I need to design the API.

  • User management: signup, login, logout.
  • A logged-in user must be able to:
    • create an event
    • add other users to that event
    • A user can only view their own events, not other users' events
    • A user can add an expense to the event (reason, date, who paid how much, who benefited how much)
    • Users in the event can edit or delete an entry
    • Changes are sent to the friends in the event
    • A user can see how much they spent themselves and how much they owe each other
    • A user can also get the total amount or the history.

That is what I thought of for now.

Thus, besides a web server, I must have a database that can store all the data, e.g. PostgreSQL. I need a message queue system (RabbitMQ?) to handle changes for an event, which will result in a messaging service sending emails.

I also want to use Redis for cache management.

What else?

OpenAPI + swagger for API management.

And last but not least, Docker + Kubernetes for the deployment.

That is what I am thinking of for now. I will note down other ideas during the project.

2024/10/01

A Go application has 3 parts:

  • Config
  • Business logic
  • Startup framework

Config

The application provides a command-line tool with options to load configs directly, and it should also be able to read configs from yaml/json files. And we should not keep credentials in those files, for security reasons; those come from environment variables.

To do this, we can use pflag to read command line parameters, viper to read from config files in different formats, os.Getenv to read from environment variables and cobra for the command line tool.

The execution of the program is then just a command like howmuch run.

Moreover, in a distributed system, configs can be stored on etcd.

Kubernetes stores configuration data in etcd for service discovery and cluster management; etcd's consistency is crucial for correctly scheduling and operating services. The Kubernetes API server persists cluster state into etcd. It uses etcd's watch API to monitor the cluster and roll out critical configuration changes.

Business logic

  • init cache
  • init DBs (Redis, SQL, Kafka, etc.)
  • init web service (http, https, gRPC, etc.)
  • start async tasks, such as watching kube-apiserver, pulling data from third-party services, registering /metrics and listening on some port, starting a Kafka consumer queue, etc.
  • Run specific business logic
  • Stop the program
  • others...

Startup framework

When business logic becomes complicated, we cannot cram it all into a simple main function. We need something to handle all those tasks, sync or async. That is why we use cobra.

So for this project, we will use the combination of pflag, viper and cobra.
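
To fix the ideas, here is a minimal sketch of how the three fit together (the flag names and the env prefix are placeholders, not necessarily what the repo uses):

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

func main() {
	runCmd := &cobra.Command{
		Use:   "run",
		Short: "Start the howmuch server",
		RunE: func(cmd *cobra.Command, args []string) error {
			// If a config file is given, let viper read it (yaml, json, toml...).
			if cfg := viper.GetString("config"); cfg != "" {
				viper.SetConfigFile(cfg)
				if err := viper.ReadInConfig(); err != nil {
					return err
				}
			}
			// Flags, config file and environment variables are all merged by viper.
			fmt.Println("listening on port", viper.GetInt("port"))
			return nil
		},
	}

	// pflag: command-line parameters.
	runCmd.Flags().String("config", "", "path to a config file")
	runCmd.Flags().Int("port", 8080, "HTTP listen port")
	// viper: bind the flags and pick up environment variables like HOWMUCH_PORT.
	viper.BindPFlags(runCmd.Flags())
	viper.SetEnvPrefix("howmuch")
	viper.AutomaticEnv()

	// cobra: the command-line tool itself ("howmuch run").
	rootCmd := &cobra.Command{Use: "howmuch"}
	rootCmd.AddCommand(runCmd)
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}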

2024/10/02

Logging

Use zap for the logging system. Logs are written to stdout for dev purposes, but also to files. The log files can then be shipped to Elasticsearch for analysis.
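
A minimal sketch of the idea (the log path is a placeholder): zap's production config already emits JSON and can write to several outputs at once.

package main

import "go.uber.org/zap"

func main() {
	cfg := zap.NewProductionConfig()
	// JSON logs go to stdout for development, and to a file that can later be
	// shipped to Elasticsearch.
	cfg.OutputPaths = []string{"stdout", "/tmp/howmuch.log"}

	logger, err := cfg.Build()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	logger.Info("server starting", zap.String("version", "dev"))
}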

Version

Add versioning into the app.

2024/10/03

Set up the web server with some necessary/nice-to-have middlewares.

  • Recovery, Logger (already included in Default mode)
  • CORS
  • RequestId

Use a channel and signals to gracefully shut down the server.
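
A minimal sketch of that setup (the address is a placeholder, and the CORS/RequestId wiring is left as a comment since the exact middlewares may differ):

package main

import (
	"context"
	"errors"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/gin-gonic/gin"
)

func main() {
	// gin.Default() already includes the Logger and Recovery middlewares;
	// CORS and RequestId middlewares would be added with r.Use(...).
	r := gin.Default()
	r.GET("/healthz", func(c *gin.Context) { c.Status(http.StatusOK) })

	srv := &http.Server{Addr: ":8080", Handler: r}
	go func() {
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			panic(err)
		}
	}()

	// Block until SIGINT/SIGTERM, then give in-flight requests time to finish.
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		panic(err)
	}
}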

A more comprehensible error code design:

  • The classical HTTP status code.
  • A service error code composed as "PlatformError.ServiceError", e.g. "ResourceNotFound.PageNotFound"
  • An error message.

The service error code helps to identify the problem more precisely.
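
A sketch of what such an error could look like (the type and variable names are invented for illustration, not the actual ones in the repo):

package errcode

import "fmt"

// Err carries the three pieces of the design: HTTP status, service error code
// and human-readable message.
type Err struct {
	HTTPStatus int    // classical HTTP status code
	Code       string // "PlatformError.ServiceError"
	Message    string // error message
}

func (e Err) Error() string {
	return fmt.Sprintf("%s: %s", e.Code, e.Message)
}

var ErrPageNotFound = Err{
	HTTPStatus: 404,
	Code:       "ResourceNotFound.PageNotFound",
	Message:    "the requested page does not exist",
}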

2024/10/04

Application architecture design follows Clean Architecture that has several layers:

  • Entities: the models of the product
  • Use cases: the core business rule
  • Interface Adapters: convert data-in to entities and convert data-out to output ports.
  • Frameworks and drivers: Web server, DB.

Based on this logic, we create the following directories:

  • model: entities
  • infra: Provides the necessary functions to set up the infrastructure, especially the DB (output-port), but also the router (input-port). Once set up, we don't touch them anymore.
  • registry: Provides a register function for the main to register a service. It takes a handle to the output-port (e.g. DBs) and gives back a handle (a controller) to the input-port.
  • adapter: Controllers are one kind of adapter: when called, they parse the user input into models and run the usecase rules, then send back the response (input-port). For the output-port part, the repo is the implementation of the interfaces defined in usecase/repo.
  • usecase: With the input from the adapter, do what has to be done and answer with the result. In the meantime, we may have to store things in DBs. Here we use the Repository pattern to decouple the implementation of the repo from its interface; thus in usecase/repo we only define interfaces (see the sketch after this list).
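
A sketch of how the layers connect, with invented names (the real packages may look different): the usecase layer owns the repository interface, the adapter layer implements it, and the registry injects the implementation.

package usecase

import "context"

// User would normally live in the model package; it is inlined here to keep
// the sketch self-contained.
type User struct {
	ID    int
	Email string
}

// UserRepository is the output-port interface, defined next to the use case
// that needs it; the adapter package provides the real DB-backed implementation.
type UserRepository interface {
	Create(ctx context.Context, u User) (User, error)
	GetByEmail(ctx context.Context, email string) (User, error)
}

// UserUsecase only knows the interface; the registry injects the implementation.
type UserUsecase struct {
	repo UserRepository
}

func (uc *UserUsecase) SignUp(ctx context.Context, email string) (User, error) {
	return uc.repo.Create(ctx, User{Email: email})
}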

Then comes the real design for the app.

Following the Agile method, I don't try to define the entire project at the beginning, but build it step by step, starting with the user part.

type User struct {
	CreatedAt   time.Time
	UpdatedAt   time.Time
	FirstName   string
	LastName    string
	Email       string
	Password    string
	ID          int
}

Use Buffalo pop's Soda CLI to create database migrations.

2024/10/06

Implement the architecture design for User entity.

Checked out OpenAPI, and found that it was not that simple at all. It needs a whole package of knowledge about web development!

For the test-driven part,

  • model layer: just model designs, nothing to test
  • infra: routes and db connections, it works when it works. Nothing to test.
  • registry: Just return some structs, no logic. Not worth testing
  • adapter:
    • input-port (controller) test: it is about testing the parsing of input values and the writing of output results. The unit tests of a controller make sure that it behaves as defined in the API documentation. To test it, we have to mock the business service.
    • output-port (repo) test: it is about testing the conversion of business models to database models and the interaction with the database. Testing them means simulating different types of database behaviour (success, timeout, etc.). To test it, we have to mock the database connection.
  • usecase: This is the core part to test, as it's about the core business. We provide the data input and check the data output against a fake repository.

This design, although it may seem overkill for this little project, fits perfectly well with the TDD method.

Concretely, I will do the TDD for my usecase level development, and for the rest, I just put unit tests aside for later.
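
Building on the hypothetical UserUsecase sketch from the 10/04 entry, a usecase test against a fake repository could look roughly like this:

package usecase

import (
	"context"
	"errors"
	"testing"
)

// fakeUserRepo is an in-memory stand-in for the real repository, so the
// usecase can be tested without a database (the duplicate check plays the
// role of the DB unique constraint).
type fakeUserRepo struct {
	users map[string]User
}

func (f *fakeUserRepo) Create(ctx context.Context, u User) (User, error) {
	if _, ok := f.users[u.Email]; ok {
		return User{}, errors.New("email already taken")
	}
	f.users[u.Email] = u
	return u, nil
}

func (f *fakeUserRepo) GetByEmail(ctx context.Context, email string) (User, error) {
	u, ok := f.users[email]
	if !ok {
		return User{}, errors.New("user not found")
	}
	return u, nil
}

func TestSignUpRejectsDuplicateEmail(t *testing.T) {
	repo := &fakeUserRepo{users: map[string]User{"a@b.c": {Email: "a@b.c"}}}
	uc := &UserUsecase{repo: repo}

	if _, err := uc.SignUp(context.Background(), "a@b.c"); err == nil {
		t.Fatal("expected an error for a duplicate email, got nil")
	}
}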

Workflow

  1. OAS Definition
  2. (Integration/Validation test)
  3. Usecase unit test cases
  4. Usecase development
  5. Refactor (2-3-4)
  6. Input-port/Output-port

That should be the correct workflow. But to save time, I will cut off the integration test part (the 2nd point).

2024/10/07

I rethought the whole API design (even though I only have one endpoint so far). I created /signup and /login without thinking too much, but in fact they are not quite RESTful.

REST is all about resources. While /signup and /login are quite comprehensible, and thus service-oriented, they don't follow the REST philosophy, which is resource-oriented.

If we rethink /signup, what it does is create a User resource. Thus, for a backend API, it'd better be named User.Create. But what about /login? It doesn't do anything to a User. It would be strange to declare it as a User-related method.

Instead, what /login really does is create a session. In consequence, we have to create a new struct, Session, that can be created, deleted, or updated.
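
In routing terms, the resource-oriented version could look roughly like this (the handlers are placeholders):

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()

	// /signup becomes "create a User", /login becomes "create a Session",
	// and logout becomes "delete the Session".
	r.POST("/users", func(c *gin.Context) { c.Status(http.StatusCreated) })
	r.POST("/sessions", func(c *gin.Context) { c.Status(http.StatusCreated) })
	r.DELETE("/sessions", func(c *gin.Context) { c.Status(http.StatusNoContent) })

	r.Run(":8080")
}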

It might seem overkill, and in real life, even in the official Pet Store example of OpenAPI, signup and login sit under /user. But it opened my mind and forced me to think and design RESTfully!

That being said, for the user side, we shall still have /signup and /login, because on the front end we must be user-centered. We can even put these two functions on the same page with the same endpoint /login. The user enters the email and the password, then clicks on Login or Signup. If the login is successful, they are logged in. Otherwise, if the user doesn't exist yet, we open up two more inputs (first name and last name) for signup. They can just provide the extra information and click again on Signup.

That, again, being said, I am thinking about doing some Front-end stuff just to make the validation tests of the product simpler.

The choice of the front end framework

I have considered several choices.

If I hadn't purposely made the backend provide a REST API, I might have chosen server-side rendering with templ + htmx, or even Go templates + vanilla JavaScript.

I can still write a rather static Go frontend server to serve HTML and call my Go backend. And it might be a good idea if they communicate over Go's native rpc. It's worth a try.

And I have looked at Svelte, which seems very simple by design, and the whole compile thing makes it really charming. But this is mainly a Go project, and learning something new with a rather small community means potentially more investment. I can learn it later.

Among Angular, React and Vue, I prefer Vue, for several reasons. First, Angular is clearly overkill for this small demo project. Second, React is good, but I personally like the way Vue does things. And I work with Vue at work, so I might get more technical help from my colleagues.

So the plan for this week is to have both the front-end part and the back-end part working, just for user signup and login.

I would like to directly put this stuff on a CI pipeline for tests and deployment, even though I have barely anything yet. It is always good to do this preparation at the early stage of the project, so we can benefit from it all the way along.

Moreover, even if I don't really finish the project, it can still be something presentable that I can show to a future interviewer.

2024/10/08

Gitea action setup ! 🎉🎉🎉

The next step is to run some checks, builds and tests!

2024/10/09

No code today either. But I did some design work for the user story and the database model.

Core user story part 1

Database model

2024/10/11

I spent two days learning some basics of Vue. Learning Vue takes time. There are a lot of concepts, and it needs a lot of practice. Even though I may not need a professional-level web page, I don't want to copy one module from this blog and another from that tutorial. I might just put the front end aside for now and concentrate on my backend Go app.

For now, I will just test my backend with curl.

And today's job is to get the login part done!

2024/10/13

Finally it took more than just one night for me to figure out the JWT.

The JWT token is simple because it doesn't need to be stored in and fetched from a database. But there is no way to revoke it other than waiting for the expiry date.

To do so, we still have to use a database. We can store a logged-out user's jti in Redis and, each time a token is checked, look up the cache to see whether it has been revoked. We set the cache entry's timeout to the expiry time of the token, so that it is removed automatically.

It'd be better to inject the Redis connection dependency into the Authn middleware, so that it's simpler to test.
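
A sketch of the blacklist side with go-redis (key naming and types are illustrative, not necessarily what the repo does): a revoked jti is stored with a TTL equal to the time left before the token expires, so Redis removes it automatically.

package session

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

type Blacklist struct {
	rdb *redis.Client
}

// Revoke is called on logout: remember this jti until the token would have
// expired anyway.
func (b *Blacklist) Revoke(ctx context.Context, jti string, expiresAt time.Time) error {
	return b.rdb.Set(ctx, "blacklist:"+jti, 1, time.Until(expiresAt)).Err()
}

// IsRevoked is called by the Authn middleware on each request carrying a token.
func (b *Blacklist) IsRevoked(ctx context.Context, jti string) (bool, error) {
	n, err := b.rdb.Exists(ctx, "blacklist:"+jti).Result()
	return n > 0, err
}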

2024/10/15

Redis is integrated to keep a blacklist of logged-out users. By the way, memcached is also interesting. In case I later want to switch to another key-value store, I have made an interface. It also helps for testing: I can even drop Redis and use a bare-hand native hashmap.
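
The interface I have in mind looks roughly like this (the names are invented for the sketch, not the repo's actual ones); the in-memory implementation is what makes the tests cheap:

package kv

import (
	"context"
	"sync"
	"time"
)

// Store is the small abstraction the session code depends on; Redis, memcached
// or a plain map can implement it.
type Store interface {
	Set(ctx context.Context, key string, ttl time.Duration) error
	Exists(ctx context.Context, key string) (bool, error)
}

// MapStore is the bare-hand native hashmap version, handy for unit tests.
type MapStore struct {
	mu   sync.Mutex
	data map[string]time.Time // key -> expiry
}

func NewMapStore() *MapStore {
	return &MapStore{data: make(map[string]time.Time)}
}

func (m *MapStore) Set(ctx context.Context, key string, ttl time.Duration) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = time.Now().Add(ttl)
	return nil
}

func (m *MapStore) Exists(ctx context.Context, key string) (bool, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	exp, ok := m.data[key]
	if !ok || time.Now().After(exp) {
		delete(m.data, key)
		return false, nil
	}
	return true, nil
}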

Quite a lot of benefits. And then I realised that I had done something "wrong" with sqlc: I shouldn't have used the pgx driver; the database/sql driver is more universal if I want to switch to SQLite or MySQL later.
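
With pgx's stdlib adapter, the rest of the code only ever sees *sql.DB; something like this (the DSN is a placeholder):

package main

import (
	"database/sql"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
)

func openDB(dsn string) (*sql.DB, error) {
	// Moving to MySQL or SQLite later means changing the driver import and
	// this driver name, nothing else on the callers' side.
	return sql.Open("pgx", dsn)
}

func main() {
	db, err := openDB("postgres://user:pass@localhost:5432/howmuch?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		panic(err)
	}
}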

Well, it's not about changing the technical solution every three days, but a system that can survive those changes elegantly must be a robust system, with functionality well decoupled and interfaces well defined.

I will add some tests for existing code and then it's time to move on to my core business logic.