content-sources-backend

Content Sources

What is it?

Content Sources is an application for storing information about external content (currently YUM repositories) in a central location as well as creating snapshots of those repositories, backed by a Pulp server.

To read more about Content Sources use cases see:

  1. Introspection
  2. Snapshots

Developing

Requirements:

  1. podman & podman-compose installed or docker & docker-compose installed (and docker running)
    • This is used to start a set of containers that are dependencies for content-sources-backend
  2. yaml2json tool installed (pip install json2yaml).

Create your configuration

Create a config file from the example:

$ cp ./configs/config.yaml.example ./configs/config.yaml

Add pulp.content to /etc/hosts for integration tests and client access

$ echo "127.0.0.1 pulp.content" | sudo tee -a /etc/hosts

Import Public Repos

$ make repos-import

Start dependency containers

$ make compose-up

Run the server!

$ make run

Hit the API:

  $ curl -H "$( ./scripts/header.sh 9999 1111 )" http://localhost:8000/api/content-sources/v1.0/repositories/
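The header.sh script generates an identity header for local requests from an org ID and account number. As a rough sketch of what such a header looks like (the JSON field names below are an assumption based on the common console.redhat.com x-rh-identity format; check scripts/header.sh for the real payload):

```shell
# Hypothetical sketch: build an x-rh-identity header for org 9999 /
# account 1111 as a base64-encoded JSON identity blob.
# Field names are an assumption, not taken from scripts/header.sh.
identity='{"identity":{"org_id":"9999","account_number":"1111","type":"User"}}'
header="x-rh-identity: $(printf '%s' "$identity" | base64 | tr -d '\n')"
echo "$header"
```

Passing the result via curl -H attaches the identity the same way the script's output does.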

Stop dependency containers

When it's time to shut down the running containers:

$ make compose-down

To clean the volumes they use (this stops the containers first if they are running):

$ make compose-clean

There are other make rules that may be helpful; run make help to list them. Some are highlighted below.

How to add new migration files

You can add new migration files, with a date prefix in the file name, by running:

$ go run cmd/dbmigrate/main.go new <name of migration>
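For illustration, the date prefix keeps migration files sorted in creation order; a sketch of the idea, assuming a YYYYMMDDHHMMSS-style prefix (the exact format used by cmd/dbmigrate may differ):

```shell
# Hypothetical: build a timestamp-prefixed migration name so that
# files sort lexicographically in creation order.
# The 14-digit prefix format is an assumption.
name="$(date +%Y%m%d%H%M%S)_add_example_column"
echo "$name"
```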

Database Commands

Migrate the Database

$ make db-migrate-up

Seed the database

$ make db-migrate-seed

Get an interactive shell:

$ make db-shell

Or open a postgres client directly by running:

$ make db-cli-connect

Kafka commands

You can open an interactive shell with:

$ make kafka-shell

You can run kafka-console-consumer.sh for a given KAFKA_TOPIC with:

$ make kafka-topic-consume KAFKA_TOPIC=my-kafka-topic
$ make kafka-topic-consume # Uses the first topic in the KAFKA_TOPICS list

Other make rules may also be helpful; run make help to list them.

Start / Stop prometheus

Create the Prometheus configuration, starting from the example file.

Update the configs/prometheus.yaml file to use your hostname instead of localhost at scrape_configs.job_name.targets:

# Note that the targets entry cannot reference localhost; it needs the name of the host where
# the prometheus container runs.
$ sed "s/localhost/$(hostname)/g" ./configs/prometheus.example.yaml > ./configs/prometheus.yaml

To start prometheus run:

$ make prometheus-up

To stop prometheus container run:

$ make prometheus-down

Once the container is up, open the Prometheus web UI by running:

$ make prometheus-ui

Start / Stop mock for rbac

Configuration requirements

Running it

The RBAC mock service is started automatically by make run. To use it when running the service directly, add the mock_rbac option:

$ ./release/content-sources api consumer instrumentation mock_rbac

Migrate your database (and seed it if desired)

$ make db-migrate-up
$ make db-migrate-seed

Run the server!

$ make run

Hit the API:

$ curl -H "$( ./scripts/header.sh 9999 1111 )" http://localhost:8000/api/content-sources/v1.0/repositories/

Generating new openapi docs:

$ make openapi

Generating new mocks:

$ make mock

Live Reloading Server

This is a completely optional way of running the server that is useful for local development. It rebuilds the project after every change you make, so you always have the most up-to-date server running. To set this up, install the "Air" Go tool. The recommended way is:

$ go install github.com/air-verse/air@latest

After that, just run air; it should automatically use this project's config (.air.toml).

$ air

Configuration

The default configuration file in ./configs/config.yaml.example shows all available config options. Any of these can be overridden with an environment variable. For example, "database.name" can be passed in via an environment variable named "DATABASE_NAME".
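The key-to-variable mapping can be sketched as: uppercase the key and replace dots with underscores. For example:

```shell
# Derive the environment variable name for a dotted config key:
# dots become underscores, letters are uppercased.
key="database.name"
env_var=$(printf '%s' "$key" | tr '.' '_' | tr 'a-z' 'A-Z')
echo "$env_var"   # DATABASE_NAME

# The variable can then override the config file value, e.g.:
# DATABASE_NAME=content make run
```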

Linting

To use golangci-lint:

  1. make install-golangci-lint
  2. make lint

To use the pre-commit linter: make install-pre-commit

Code Layout

Path                  Description
api                   OpenAPI docs and doc generation code
db/migrations         Database migrations
pkg/api               API structures used for handling data within our API handlers
pkg/config            Config loading and application bootstrapping code
pkg/dao               Database Access Objects: an abstraction layer that provides an interface and implements it for our default database provider (PostgreSQL); separated out for abstraction and easier testing
pkg/db                Database connection and migration related code
pkg/handler           Methods that directly handle API requests
pkg/middleware        All the middleware components created for the service
pkg/event             Event message logic. More info here
pkg/models            Structs that represent database models (Gorm)
pkg/seeds             Code to help seed the database for both development and testing
pkg/candlepin_client  Candlepin client
pkg/pulp_client       Pulp client
pkg/tasks             Tasking system. More info here
scripts               Helper scripts for identity header generation and testing

More info