ktomk / pipelines
bitbucket pipelines runner
Installs: 17 107
Dependents: 0
Suggesters: 0
Security: 0
Stars: 109
Watchers: 7
Forks: 10
Open Issues: 12
Requires
- php: ^5.3.3 || ^7.0 || ^8.0
- ext-json: *
- justinrainbow/json-schema: ^5.2
- ktomk/symfony-yaml: ~2.6.13
Requires (Dev)
- friendsofphp/php-cs-fixer: ~3.2.1
- kubawerlos/php-cs-fixer-custom-fixers: ~3.2.1
- phpunit/phpunit: ^4 || ^5 || ^6.5 || ^7.0 || ^8.0 || ^9.0
- roave/security-advisories: dev-latest
Suggests
- ext-yaml: Preferred YAML parser; highly recommended.
README
Run Bitbucket Pipelines Wherever They Dock
Command line pipeline runner written in PHP. Available from Github or Packagist.
Usage | Environment | Exit Status | Details | References
Usage
From anywhere within a project or (Git) repository with a Bitbucket Pipeline file:
$ pipelines
Runs pipeline commands from bitbucket-pipelines.yml [BBPL].
Memory and time limits are ignored. Press ctrl + c to quit.
The Bitbucket limit of 100 (previously 10) steps per pipeline is ignored.
Exit status is from the last pipeline script command; if a command fails, the following script commands and steps are not executed.
The default pipeline is run; if there is no default pipeline in the file, pipelines says so and exits with a non-zero status.
To execute a different pipeline use the --pipeline <id> option where <id> is one of those listed by the --list option. Even more information about the pipelines is available via --show. Both --list and --show output and exit.
Use --steps <steps> to specify which step(s) to execute (and in which order).
If the next pipeline step has a manual trigger, pipelines stops the execution and outputs a short message on standard error about it. Manual triggers can be ignored with the --no-manual option.
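For example, listing the available pipelines and then running one of them with selected steps only (the pipeline id and step numbers here are exemplary):
$ pipelines --list
$ pipelines --pipeline custom/deploy --steps 1,3 --no-manual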
Run the pipeline as if a tag, branch or bookmark has been pushed with --trigger <ref> where <ref> is tag:<name>, branch:<name>, bookmark:<name> or pr:<branch-name>[:<destination-branch>]. If there is no tag, branch, bookmark or pull-request pipeline with that name, the name is compared against the patterns of the referenced type and if one matches, that pipeline is run.
Otherwise the default pipeline is run; if there is no default pipeline, no pipeline at all is run and the command exits with a non-zero status.
--pipeline and --trigger can be used together: --pipeline overrides the pipeline from --trigger, but --trigger still influences the container environment variables.
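For example (the ref names and pipeline id are exemplary):
$ pipelines --trigger branch:feature/login
$ pipelines --trigger tag:v1.2.3 --pipeline custom/release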
To specify a different file use the --basename <basename> or --file <path> option and/or set the working directory --working-dir <path> in which the file is looked for, unless an absolute path is set by --file <path>.
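For example, running against a pipelines file in another location (the paths are exemplary):
$ pipelines --working-dir ../other-project
$ pipelines --file /path/to/custom-pipelines.yml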
By default pipelines operates on the current working tree which is copied into the container to isolate running the pipeline from the working directory (implicit --deploy copy).
Alternatively the working directory can be mounted into the pipeline container by using --deploy mount.
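For example, mounting instead of copying (typically faster on Linux, at the cost of isolation):
$ pipelines --deploy mount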
Use the --keep flag to keep containers after the pipeline has finished for further inspection. By default all containers are destroyed. Sometimes during development it is useful to keep containers on error only; the --error-keep flag is for that.
In any case, if a pipeline runs again and finds an existing container with the same name (generated from the pipeline name etc.), the existing container is re-used. This can be very useful to iterate quickly.
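For example, keeping containers only when a step fails:
$ pipelines --error-keep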
Manage leftover containers with --docker-list showing all pipeline containers, --docker-kill to kill running containers and --docker-clean to remove stopped pipeline containers. Use in combination to fully clean, e.g.:
$ pipelines --docker-list --docker-kill --docker-clean
Or just run for a more shy clean-up:
$ pipelines --docker-zap
to kill and remove all pipeline containers (w/o showing a list) first. "zap" is pipelines' "make clean" equivalent for --keep.
All containers run by pipelines are labeled to ease maintaining them.
Validate your bitbucket-pipelines.yml file with --show, which highlights errors found.
For schema-validation use --validate [<file>]. Schema validation might show errors that are not an issue when executing a pipeline (--show and/or --dry-run is better for that) but validates against a schema which is aligned with the one Atlassian/Bitbucket provides (the schema is more lax compared to upstream for the cases known to offer a better practical experience). E.g. use it for checks in your CI pipeline or for linting files before push in a pre-commit hook or your local build.
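For example, validating the default file and then an additional one (the extra path is exemplary):
$ pipelines --validate
$ pipelines --validate=test/bitbucket-pipelines.yml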
Inspect your pipeline with --dry-run which will process the pipeline but not execute anything. Combine with -v (, --verbose) to show the commands which would have run verbatim, which allows a better understanding of how pipelines actually works. Nothing to hide here.
Use --no-run to not run the pipeline at all; this can be used to test the utility's options.
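For example, processing the default pipeline verbosely without executing anything:
$ pipelines --dry-run -v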
Pipeline environment variables can be passed/exported to or set for your pipeline by name or file with the -e, --env and --env-file options.
Environment variables are also loaded from dot env files named .env.dist and .env, processed in that order before the environment options. Use --no-dot-env-files to prevent the automatic loading, --no-dot-env-dot-dist for the .env.dist file only.
More information on pipelines environment variables is in the environment section below.
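For example, passing a variable from the current environment, setting another one and adding an environment file (the names and path are exemplary):
$ pipelines -e API_TOKEN -e BUILD_ENV=local --env-file .env.pipeline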
Help
A full display of the pipelines utility options and arguments is
available via -h
, --help
:
usage: pipelines [<options>] --version | -h | --help
pipelines [<options>] [--working-dir <path>] [--file <path>]
[--basename <basename>] [--prefix <prefix>]
[--verbatim] [--[no-|error-]keep] [--no-run]
[(-e | --env) <variable>] [--env-file <path>]
[--no-dot-env-files] [--no-dot-env-dot-dist]
[--docker-client <package>] [--ssh]
[--user[=<name|uid>[:<group|gid>]]]
[--deploy mount | copy ] [--pipeline <id>]
[(--step | --steps) <steps>] [--no-manual]
[--trigger <ref>] [--no-cache]
pipelines [<options>] --service <service>
pipelines [<options>] --list | --show | --images
| --show-pipelines | --show-services
| --step-script[=(<id> | <step>[:<id>])]
| --validate[=<path>]
pipelines [<options>] --docker-client-pkgs
pipelines [<options>] [--docker-list] [--docker-kill]
[--docker-clean] [--docker-zap]
Generic options
-h, --help show usage and help information
--version show version information
-v, --verbose be more verbose, show more information and
commands to be executed
--dry-run do not execute commands, e.g. invoke docker or
run containers, with --verbose show the
commands that would have run w/o --dry-run
-c <name>=<value> pass a configuration parameter to the command
Pipeline runner options
--basename <basename> set basename for pipelines file, defaults to
'bitbucket-pipelines.yml'
--deploy mount|copy how files from the working directory are
placed into the pipeline container:
copy (default) working dir is copied into
the container. stronger isolation as
the pipeline scripts can change all
files without side-effects in the
working directory
mount the working directory is mounted.
fastest, no isolation
--file <path> path to the pipelines file, overrides looking
up the <basename> file from the current
working directory, use '-' to read from stdin
--trigger <ref> build trigger; <ref> can be either of:
tag:<name>, branch:<name>, bookmark:<name> or
pr:<branch-name>[:<destination-branch>]
determines the pipeline to run
--pipeline <id> run pipeline with <id>, use --list for a list
of all pipeline ids available. overrides
--trigger for the pipeline while keeping
environment from --trigger.
--step, --steps <steps>
execute not all but this/these <steps>. all
duplicates and orderings allowed, <steps> are
a comma/space separated list of step and step
ranges, e.g. 1 2 3; 1-3; 1,2-3; 3-1 or -1,3-
and 1,1,3,3,2,2
--no-manual ignore manual steps, by default manual steps
stop the pipeline execution when not the first
step in invocation of a pipeline
--verbatim only give verbatim output of the pipeline, do
not display other information like which step
currently executes, which image is in use ...
--working-dir <path> run as if pipelines was started in <path>
--no-run do not run the pipeline
--prefix <prefix> use a different prefix for container names,
default is 'pipelines'
--no-cache disable step caches; docker always caches
File information options
--images list all images in file, in order of use, w/o
duplicate names and exit
--list list pipeline <id>s in file and exit
--show show information about pipelines in file and
exit
--show-pipelines same as --show but with old --show output
format without services and images / steps are
summarized - one line for each pipeline
--show-services show all defined services in use by pipeline
steps and exit
--validate[=<path>] schema-validate file, shows errors if any,
exits; can be used more than once, exit status
is non-zero on error
--step-script[=(<id> | <step>[:<id>])]
write the step-script of pipeline <id> and
<step> to standard output and exit
Environment control options
-e, --env <variable> pass or set an environment <variable> for the
docker container, just like a docker run,
<variable> can be the name of a variable which
adds the variable to the container as export
or a variable definition with the name of the
variable, the equal sign "=" and the value,
e.g. --env NAME=<value>
--env-file <path> pass variables from environment file to the
docker container
--no-dot-env-files do not pass .env.dist and .env files as
environment files to docker
--no-dot-env-dot-dist do not pass .env.dist as environment file to
docker only
Keep options
--keep always keep docker containers
--error-keep keep docker containers if a step failed;
outputs non-zero exit status and the id of the
container kept and exit w/ container exec exit
status
--no-keep do not keep docker containers; default
Container runner options
--ssh ssh agent forwarding: if $SSH_AUTH_SOCK is set
and accessible, mount SSH authentication
socket read only and set SSH_AUTH_SOCK in the
pipeline step container to the mount point.
--user[=<name|uid>[:<group|gid>]]
run pipeline step container as current or
given <user>/<group>; overrides container
default <user> - often root, (better) run
rootless by default.
Service runner options
--service <service> runs <service> attached to the current shell
and waits until the service exits, exit status
is the one of the docker run service
container; for testing services, run in a
shell of its own or background
Docker service options
--docker-client <package>
which docker client binary to use for the
pipeline service 'docker' defaults to the
'docker-19.03.1-linux-static-x86_64' package
--docker-client-pkgs list all docker client packages that ship with
pipelines and exit
Docker container maintenance options
usage might leave containers on the system. either by interrupting
a running pipeline step or by keeping the running containers
(--keep, --error-keep)
pipelines uses a <prefix> 'pipelines' by default, followed by '-'
and a compound name based on step-number, step-name, pipeline id
and image name for container names. the prefix can be set by the
--prefix <prefix> option and argument.
three options are built-in to monitor and interact with leftovers,
if one or more of these are given, the following operations are
executed in the order from top to down:
--docker-list list prefixed containers
--docker-kill kills prefixed containers
--docker-clean remove (non-running) containers with
pipelines prefix
for ease of use:
--docker-zap kill and remove all prefixed containers at
once; no show/listing
Less common options
--debug flag for trouble-shooting (fatal) errors,
warnings, notices and strict warnings; useful
for trouble-shooting and bug-reports
Usage Scenario
Give your project and pipeline changes a quick test run from the staging area. As pipelines are normally executed far away, setting them up becomes cumbersome. The guide in the Bitbucket Pipelines documentation [BBPL-LOCAL-RUN] has some hints and is of help, but it is not about a Bitbucket Pipelines runner.
This is where the pipelines command jumps in.
The pipelines command closes the gap between local development and remote pipeline execution by executing any pipeline configured on your local development box. As long as Docker is accessible locally, the bitbucket-pipelines.yml file is parsed and all steps and their commands are executed within the container of choice.
Pipelines YAML file parsing, container creation and script execution are done as closely as possible to the Atlassian Bitbucket Pipelines service. Environment variables can be passed into each pipeline as needed. You can even switch to a different CI/CD service like Github/Travis with little integration work, fostering your agility and vendor independence.
Features
Features include:
Dev Mode
Pipeline from your working tree like never before. Pretend to be on any branch, tag or bookmark (--trigger), even in a different repository or none at all.
Check if the reference matches a pipeline or just run the default (default) or a specific one (--list, --pipeline). Use a different pipelines file (--file) or swap the "repository" by changing the working directory (--working-dir <path>).
If a pipeline step fails, the step's container can be kept for further inspection on error with the --error-keep option. The container id is shown then, which makes it easy to spawn a shell inside:
$ docker exec -it $ID /bin/sh
Containers can always be kept for debugging and manual testing of a pipeline with --keep, and with the said --error-keep on error only. Kept containers are re-used by their name regardless of any --keep (, --error-keep) option.
Continue on a (failed) step with the --steps <steps> argument; <steps> can be any step number or sequence (1-3), separate multiple with comma (3-,1-2), you can even repeat steps or reverse their order (4,3,2,1).
For example, if the second step failed, continue with --steps 2- to re-run the second and all following steps (--steps 2 or --step 2 will run only that step, for a step-by-step approach).
Afterwards manage leftovers with --docker-list|kill|clean or clean up with --docker-zap.
Debugging options to dream for; benefit from the local build, the pipeline container.
Container Isolation
There is one container per step, like it is on Bitbucket.
Files are isolated by being copied into the container before the pipeline step script is executed (implicit --deploy copy).
Alternatively files can be mounted into the container instead with --deploy mount, which normally is faster on Linux, but the working tree might be changed by the container script, causing side-effects that may be unwanted. Docker runs system-wide and containers do not isolate users (e.g. root is root).
Better with --deploy mount (and peace of mind) is using Docker in rootless mode, where files manipulated in the pipeline container are accessible to your own user account (as root is automatically mapped to your user).
- Further reading: How-To Rootless Pipelines
Pipeline Integration
Export files from the pipeline by making use of artifacts; these are copied back into the working tree while in (implicit) --deploy copy mode. Artifacts' files are always created by the user running pipelines. This also (near) perfectly emulates the file format's artifacts section, with the benefit/downside that you might want to prepare a clean build in a pipeline step script while you can keep artifacts from pipelines locally. This is a trade-off that has turned out to be acceptable over the years.
Wrap pipelines in a script for clean checkouts (see the sketch below) or wait for future options to stage first (git-deployment feature). In any case, control your build first of all.
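A minimal sketch of such a wrapper: export a clean checkout of HEAD and run pipelines against it (the temporary path is exemplary):
$ mkdir -p /tmp/clean-build
$ git archive HEAD | tar -x -C /tmp/clean-build
$ pipelines --working-dir /tmp/clean-build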
Ready for Offline
On the plane? Riding Deutsche Bahn? Or just a rainy day at a remote location with a broken net? Coding while abroad? Or is Bitbucket just down again?
Before going into offline mode, read about Working Offline; you'll love it.
Services? Check!
The local pipeline runner runs service containers on your local box/system (that is, your pipelines' host). This is similar to using services and databases in Bitbucket Pipelines [BBPL-SRV].
Even before any pipeline step makes use of a service, a service definition can be tested with the --service option, turning setting up services in pipelines into a new experience. A good way to test service definitions and to get an impression of the additional resources being consumed.
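For example, running a single service definition attached to the current shell (assuming a service named redis is defined in the file):
$ pipelines --service redis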
- Further reading: Working with Pipeline Services
Default Image
The pipelines command uses the default image like Bitbucket Pipelines does ("atlassian/default-image"). Get started out of the box, but keep in mind it weighs roughly 1.4 GB.
Pipelines inside Pipeline
As a special feature, and by default, pipelines mounts the docker socket into each container (on systems where the socket is available).
This allows launching pipelines from within a pipeline, as long as pipelines and the Docker client are available in the pipeline's container. pipelines will take care of the Docker client as /usr/bin/docker as long as the pipeline has the docker service (services: [docker]).
This feature is similar to running Docker commands in Bitbucket Pipelines [BBPL-DCK].
The pipelines inside pipeline feature serves pipelines itself well for integration testing the project's build. In combination with --deploy mount, the original working directory is mounted from the host (again). Additional protection against endless loops by recursion is implemented to prevent accidental pipelines-inside-pipeline invocations that would otherwise be endlessly on-going.
- Further reading: How-To Docker Client Binary Packages for Pipelines
Environment
Pipelines mimics "all" of the Bitbucket Pipeline in-container environment variables [BBPL-ENV], also known as environment parameters:
- BITBUCKET_BOOKMARK - conditionally set by --trigger
- BITBUCKET_BUILD_NUMBER - always set to "0"
- BITBUCKET_BRANCH - conditionally set by --trigger
- BITBUCKET_CLONE_DIR - always set to the deploy point in the container
- BITBUCKET_COMMIT - faux as no revision triggers a build; always set to "0000000000000000000000000000000000000000"
- BITBUCKET_REPO_OWNER - current username from environment or, if not available, "nobody"
- BITBUCKET_REPO_SLUG - base name of the project directory
- BITBUCKET_TAG - conditionally set by --trigger
- CI - always set to "true"
All of these (but not BITBUCKET_CLONE_DIR) can be set within the environment pipelines runs in and are taken over into the container environment. Example:
$ BITBUCKET_BUILD_NUMBER=123 pipelines # build no. 123
More information on (Bitbucket) pipelines environment variables can be found in the Pipelines Environment Variable Usage Reference.
Additionally pipelines sets some environment variables for introspection:
- PIPELINES_CONTAINER_NAME - name of the container itself
- PIPELINES_ID - <id> of the pipeline that currently runs
- PIPELINES_IDS - space-separated list of md5 hashes of the so far running <id>s; used to detect pipelines inside pipeline recursion, preventing execution until system failure
- PIPELINES_PARENT_CONTAINER_NAME - name of the container if it was already set when the pipeline started (pipelines inside pipeline, "pip")
- PIPELINES_PIP_CONTAINER_NAME - name of the first (initial) pipeline container; used by pipelines inside pipelines ("pip")
- PIPELINES_PROJECT_PATH - path of the original project as it would be used for --deploy with copy or mount, so that inside a pipeline it is possible to do --deploy mount even when the current container did not mount. A mount always requires the path of the project directory on the system running pipelines; with no existing mount (e.g. --deploy copy) it would otherwise be unknown. Manipulating this parameter within a pipeline leads to undefined behaviour and can have system security implications.
These environment variables are managed by pipelines itself. Some of them can be injected, which can lead to undefined behaviour and can have system security implications as well.
Next to these special-purpose environment variables, any other environment variable can be imported into or set in the container via the -e, --env and --env-file options. These behave exactly as documented for the docker run command [DCK-RN].
Instead of specifying custom environment parameters for each invocation, pipelines by default automatically uses the .env.dist and .env files from each project, supporting the same file format for environment variables as docker.
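For example, a variable set in a project's .env file ends up in the step containers (the variable name is exemplary):
$ printf 'API_TOKEN=example-secret\n' > .env
$ pipelines # API_TOKEN is set in the step containers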
Exit Status
Exit status on success is 0 (zero).
A non-zero exit status denotes an error:
- 1 : An argument supplied (also a missing one) caused the error.
- 2 : An error is caused by the system not being able to fulfill the command (e.g. a file could not be read).
- 127: Running pipelines inside pipelines failed detecting an endless loop.
Example
Not finding a file might cause exit status 2 (two) because a file could not be read; however, with a switch like --show the exit status might still be 1 (one), as there was an error showing that the file does not exist (indirectly) and the error is more prominently about showing all pipelines of that file.
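A quick way to observe the exit status in a shell (the file name is exemplary; per the list above, 2 might be printed as the file could not be read):
$ pipelines --file does-not-exist.yml; echo $?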
Details
Requirements | User Tests | Installation | Known Bugs | Todo
Requirements
Pipelines works best on a POSIX compatible system having a PHP runtime.
Docker needs to be available locally as the docker command as it is used to run the pipelines. Rootless Docker is supported.
A recent PHP version is favored; the pipelines command needs PHP to run. It should work with PHP 5.3.3+. A development environment should be PHP 7+; this is especially suggested for future releases. PHP 8+ is supported as well.
Installing the PHP YAML extension [PHP-YAML] is highly recommended as it greatly improves parsing of the pipelines file; otherwise a YAML parser of the project's own serves as a fall-back, which is not bad at all. There are subtle differences between these parsers, so why not have both at hand?
User Tests
Successful use on Ubuntu (16.04 LTS, 18.04 LTS, 20.04 LTS, 22.04 LTS) and Mac OS X (Sierra and High Sierra) with PHP and Docker installed (incl. Rootless).
Installation
Phar (Download) | Composer | Phive | Source (also w/ Phar) | Project (Development)
Installation is available by downloading the phar archive from Github, via Composer/Packagist or with Phive, and it should always work from source, which includes building the phar file.
Download the PHAR (PHP Archive) File
Downloads are available on Github. To obtain the latest released version, use the following URL:
https://github.com/ktomk/pipelines/releases/latest/download/pipelines.phar
Rename the phar file to just "pipelines", set the executable bit and move it into a directory where executables are found.
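A minimal sketch of those steps (assuming ~/bin exists and is in $PATH; curl is just one way to download):
$ curl -fSL -o pipelines https://github.com/ktomk/pipelines/releases/latest/download/pipelines.phar
$ chmod +x pipelines
$ mv pipelines ~/bin/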
Downloads from Github are available since version 0.0.4. All releases are listed on the following website:
https://github.com/ktomk/pipelines/releases
Install with Composer
Suggested is to install it globally (and to have the global composer vendor/bin in $PATH) so that it can be called with ease and there are no dependencies in a local project:
$ composer global require ktomk/pipelines
This will automatically install the latest available version. Verify the installation by invoking pipelines and output the version:
$ pipelines --version
pipelines version 0.0.19
To uninstall remove the package:
$ composer global remove ktomk/pipelines
Take a look at Composer from getcomposer.org [COMPOSER], a Dependency Manager for PHP. Pipelines has support for composer based installations, which might include upstream patches (composer 2 is supported, incl. upstream patches).
Install with Phive
Perhaps the easiest way to install when phive is available:
$ phive install pipelines
Even if your PHP version does not have the Yaml extension this should work out of the box. If you use composer and you're a PHP aficionado, dig into phive for your systems and workflow.
Take a look at Phive from phar.io [PHARIO], the PHAR Installation and Verification Environment (PHIVE). Pipelines has full support for phar.io/phar based installations, which includes support for the phive utility including upstream patches.
Install from Source
To install from source, check out the source repository and symlink the executable file bin/pipelines into a segment of $PATH, e.g. your $HOME/bin directory or similar.
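A minimal sketch of such a checkout (the clone location and $HOME/bin as link target are exemplary; $HOME/bin is assumed to be in $PATH):
$ git clone https://github.com/ktomk/pipelines.git
$ ln -s "$PWD/pipelines/bin/pipelines" "$HOME/bin/pipelines"
Verify the installation by invoking pipelines and outputting the version: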
$ pipelines --version
pipelines version 0.0.19 # NOTE: the version is exemplary
To create a phar archive from sources, invoke the build script from within the project's root directory:
$ composer build
building 0.0.19-1-gbba5a43 ...
pipelines version 0.0.19-1-gbba5a43
file.....: build/pipelines.phar
size.....: 240 191 bytes
SHA-1....: 9F118A276FC755C21EA548A77A9DBAF769B93524
SHA-256..: 0C38CBBB12E10E80F37ECA5C4C335BF87111AC8E8D0490D38683BB3DA7E82DEF
file.....: 1.1.0
api......: 1.1.1
extension: 2.0.2
php......: 7.2.16-[...]
uname....: [...]
count....: 62 file(s)
signature: SHA-1 E638E7B56FAAD7171AE9838DF6074714630BD486
The phar archive then is (as written in the output of the build):
build/pipelines.phar
Check the version by invoking it:
$ build/pipelines.phar --version
pipelines version 0.0.19-1-gbba5a43
# NOTE: the version is exemplary
Php Compatibility and Undefined Behaviour
The pipelines project aims to support PHP 5.3.3 up to PHP 8.1.
Using any of its PHP functions or methods with named parameters falls into undefined behaviour.
Reproducible Phar Builds
The pipelines project practices reproducible builds since its first phar build. The build is self-contained: the repository ships with all files required to build, with only a few dependencies:
- PHP (for build.php)
- Composer
- Git
Reproducible builds of the phar file would be incomplete without the fine work from the composer project's phar-utils (Seldaek/Jordi Boggiano), which is forked by the pipelines project in Timestamps.php, keeping the original license with the file (MIT) and providing bug-fixes to upstream under that license (see Phar-Utils #2 and Phar-Utils #3).
This file is used to set the timestamps inside the phar file to that of the release as otherwise those would be at the time of build. This is the same as the Composer project does (see Composer #3927).
Additionally, in the pipelines project that file is used to change the access permissions of the files in the phar. That is because the behaviour has changed across PHP versions, so the build is kept backwards and forwards compatible. As this was noticed later in the project's history, the build might show different binaries depending on which PHP version is used (see PHP #77022 and PHP #79082) and on the patch state of the timestamps file.
Install the Project for Development
When working with git, clone the repository and then invoke composer install. The project is then set up for development.
Alternatively it's possible to do the same via composer directly:
$ composer create-project --prefer-source --keep-vcs ktomk/pipelines
...
$ cd pipelines
Verify the installation by invoking the local build:
$ composer ci
It should exit with status 0 when everything went fine and with non-0 when there is an issue; Composer tells which individual script failed.
Follow the instructions in Install from Source to use the development version for pipelines.
Known Bugs
- The command ":" in the pipelines exec layer is never really executed but emulated, having exit status 0 and no standard or error output. It is intended for pipelines testing.
- Brace expansion (used for glob patterns with braces) is known to fail in some cases. This could affect matching pipelines and collecting asset paths, and did affect building the phar file. For the first two, this has never been reported nor experienced; for building the phar file the workaround was to write out the larger parts of the pattern.
- The sf2yaml based parser does not support a backslash at the end of a line to fold without a space in double-quoted strings.
- The libyaml based parser does not support dots (".") in anchor names.
- The libyaml based parser does not support folded scalars (">") as block style indicator. The suggested workaround is to use literal style ("|").
- NUL bytes ("\0") are not supported verbatim in step-scripts due to defense-in-depth protection on passthru in the PHP runtime to prevent null character injection.
- When the project directory is large (e.g. a couple of GBs), copying it into the pipeline container may make it appear as if pipelines hangs while the copy operation is ongoing and taking a long time. Pressing ctrl + c may stop pipelines but not the copy operation; kill the process of the copy operation (tar piped to docker cp) to stop it.
Todo
- Support for private Docker repositories
- Inject docker client if docker service is enabled
- Run specific steps of a pipeline (only) to put the user back into command on errors w/o re-running everything
- Stop at manual steps (--no-manual to override)
- Support BITBUCKET_PR_DESTINATION_BRANCH with --trigger pr:<source>:<destination>
- Pipeline services
- Run as current user with --user (--deploy mount should not enforce the container default user [often "root"] for project file operations any longer); however, the Docker utility still requires you (the current user) to be root-like, so technically there is little win (see Rootless Pipelines for what works better in this regard)
- Have caches on a per-project basis
- Copy local composer cache into container for better (offline) usage in PHP projects (see Populate Caches)
- Run scripts with /bin/bash if available (#17) (bash-runner feature)
- Support for BITBUCKET_DOCKER_HOST_INTERNAL environment variable / host.docker.internal hostname within pipelines
- Count BITBUCKET_BUILD_NUMBER on a per-project basis (build-number feature)
- Option to not mount docker.sock
- Limit projects' paths below $HOME, excluding dot "." directory children
- More accessible offline preparation (e.g. --docker-pull-images, --go-offline or similar)
- Check Docker existence before running a pipeline
- Pipes support (pipe feature):
  - Show scripts with pipe/s
  - Fake run script with pipe/s showing information
  - Create test/demo pipe
  - Run script with pipe/s
- Write about differences from Bitbucket Pipelines
- Write about the file format support/limitations
- Pipeline file properties support:
  - step.after-script (after-script feature)
  - step.trigger (--steps / --no-manual options)
  - step.caches (to disable use the --no-cache option)
  - definitions
  - services (services feature)
  - caches (caches feature)
  - step.condition (#13)
  - clone (git-deployment feature)
  - max-time (never needed this for local run)
  - size (likely neglected for local run, limited support for Rootless Pipelines)
- Get VCS revision from working directory (git-deployment feature)
- Use a different project directory --project-dir <path> to specify the root path to deploy into the container, which currently is the working directory (--working-dir <path> works already)
- Run on a specific revision, reference it (--revision <ref>); needs a clean VCS checkout into a temporary folder which then should be copied into the container (git-deployment feature)
- Override the default image name (--default-image <name>; never needed this for local run)
References
- [BBPL]: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
- [BBPL-ENV]: https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
- [BBPL-LOCAL-RUN]: https://confluence.atlassian.com/bitbucket/debug-your-pipelines-locally-with-docker-838273569.html
- [BBPL-DCK]: https://confluence.atlassian.com/bitbucket/run-docker-commands-in-bitbucket-pipelines-879254331.html
- [BBPL-SRV]: https://confluence.atlassian.com/bitbucket/use-services-and-databases-in-bitbucket-pipelines-874786688.html
- [COMPOSER]: https://getcomposer.org/
- [DCK-RN]: https://docs.docker.com/engine/reference/commandline/run/
- [PHARIO]: https://phar.io/
- [PHP-YAML]: https://pecl.php.net/package/yaml