Content Sources is an application for storing information about external content (currently YUM repositories) in a central location as well as creating snapshots of those repositories, backed by a Pulp server.
To read more about Content Sources use cases see:
Install json2yaml (for example via pip):
$ pip install json2yaml
Create a config file from the example:
$ cp ./configs/config.yaml.example ./configs/config.yaml
$ echo "127.0.0.1 pulp.content" | sudo tee -a /etc/hosts
$ make repos-import
$ make compose-up
$ make run
Hit the API:
$ curl -H "$( ./scripts/header.sh 9999 1111 )" http://localhost:8000/api/content-sources/v1.0/repositories/
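The header generated by ./scripts/header.sh is presumably an x-rh-identity header (the common console.redhat.com convention): base64-encoded JSON carrying the account and org id. A hypothetical sketch of how such a header could be built — the exact JSON layout is an assumption, not taken from the script itself:

```shell
# Hypothetical sketch of an x-rh-identity header like the one produced by
# ./scripts/header.sh; the JSON field layout here is an assumption.
identity='{"identity":{"account_number":"9999","internal":{"org_id":"1111"}}}'
header="x-rh-identity: $(printf '%s' "$identity" | base64 | tr -d '\n')"
echo "$header"
```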
When it's time to shut down the running containers:
$ make compose-down
To clean the volume it uses (this stops the container first if it is running):
$ make compose-clean
There are other make rules that may be helpful; run
make help
to list them. Some are highlighted below.
You can add new migration files, with the date prefixed to the file name, by running the following:
$ go run cmd/dbmigrate/main.go new <name of migration>
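As a rough illustration of the date-prefixed naming mentioned above — the actual prefix format produced by cmd/dbmigrate/main.go may differ from this YYYYMMDDHHMMSS sketch:

```shell
# Hypothetical illustration only: the real prefix format used by
# cmd/dbmigrate/main.go may differ.
name="add_repositories_table"
migration_file="$(date +%Y%m%d%H%M%S)_${name}.sql"
echo "$migration_file"
```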
Migrate the Database
$ make db-migrate-up
Seed the database
$ make db-migrate-seed
Get an interactive shell:
$ make db-shell
Or open a postgres client directly by running:
$ make db-cli-connect
You can open an interactive shell by running:
$ make kafka-shell
You can run kafka-console-consumer.sh for a given KAFKA_TOPIC by running:
$ make kafka-topic-consume KAFKA_TOPIC=my-kafka-topic
$ make kafka-topic-consume # Use the first topic at KAFKA_TOPICS list
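The default-topic behavior above (using the first entry of the KAFKA_TOPICS list) can be sketched as follows, assuming KAFKA_TOPICS is a comma-separated list; the topic names here are placeholders:

```shell
# Assumes KAFKA_TOPICS is a comma-separated list; topic names are placeholders.
KAFKA_TOPICS="topic-a,topic-b,topic-c"
first_topic="${KAFKA_TOPICS%%,*}"   # strip everything after the first comma
echo "$first_topic"
```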
There are other make rules that may be helpful; run
make help
to list them.
Create the configuration for prometheus, starting from the example one. Update the configs/prometheus.yaml file to set your hostname instead of localhost at scrape_configs.job_name.targets:
# Note that the targets object cannot reference localhost, it needs the name of your host where
# the prometheus container is executed.
$ sed "s/localhost/$(hostname)/g" ./configs/prometheus.example.yaml > ./configs/prometheus.yaml
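To see the effect of the substitution above in isolation (the targets line below is a sample, not taken from the example config):

```shell
# Sample input showing what the sed substitution does to a targets line.
line='      - targets: ["localhost:9000"]'
replaced=$(printf '%s\n' "$line" | sed "s/localhost/$(hostname)/g")
echo "$replaced"
```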
To start prometheus run:
$ make prometheus-up
To stop the prometheus container run:
$ make prometheus-down
To open the prometheus web UI, once the container is up, run:
$ make prometheus-ui
Configuration requirements
To use this you need to enable RBAC in the configs/config.yaml file:
clients:
  rbac_enabled: True
  rbac_base_url: http://localhost:8800/api/rbac/v1
  rbac_timeout: 30
mocks:
  rbac:
    user_read_write: ["jdoe@example.com", "jdoe"]
    user_read: ["tdoe@example.com", "tdoe"]
Running it:
$ make run
or run the built binary directly:
$ ./release/content-sources api consumer instrumentation mock_rbac
Use
./scripts/header.sh 12345 jdoe@example.com
for admin, or
./scripts/header.sh 12345 tdoe@example.com
for viewer. The RBAC mock service is started automatically for
make run
. To use it when running the service directly, add the mock_rbac option as shown above.
$ make db-migrate-up
$ make db-migrate-seed
$ make run
To regenerate the OpenAPI docs:
$ make openapi
To regenerate the mocks:
$ make mock
This is a completely optional way of running the server that is useful for local development. It rebuilds the project after every change you make, so you always have the most up-to-date server running. To set this up, all you need to do is install the "Air" Go tool. The recommended way is:
$ go install github.com/air-verse/air@latest
After that, all that needs to be done is running air; it should automatically use the config defined for this project (.air.toml).
$ air
The default configuration file in ./configs/config.yaml.example shows all available config options. Any of these can be overridden with an environment variable. For example “database.name” can be passed in via an environment variable named “DATABASE_NAME”.
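The key-to-variable mapping described above (dots become underscores, upper-cased) can be sketched as:

```shell
# Maps a config key like "database.name" to its environment variable name
# by replacing dots with underscores and upper-casing, per the convention above.
key="database.name"
env_var=$(printf '%s' "$key" | tr '.' '_' | tr '[:lower:]' '[:upper:]')
echo "$env_var"   # DATABASE_NAME
```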
To use golangci-lint:
make install-golangci-lint
make lint
To use the pre-commit linter:
make install-pre-commit
Path | Description
---|---
api | OpenAPI docs and doc generation code
db/migrations | Database migrations
pkg/api | API structures used for handling data within our API handlers
pkg/config | Config loading and application bootstrapping code
pkg/dao | Database Access Object. An abstraction layer that provides an interface and implements it for our default database provider (PostgreSQL). It is separated out for abstraction and easier testing
pkg/db | Database connection and migration related code
pkg/handler | Methods that directly handle API requests
pkg/middleware | Middleware components created for the service
pkg/event | Event message logic. More info here
pkg/models | Structs that represent database models (Gorm)
pkg/seeds | Code to help seed the database for both development and testing
pkg/candlepin_client | Candlepin client
pkg/pulp_client | Pulp client
pkg/tasks | Tasking system. More info here
scripts | Helper scripts for identity header generation and testing