Content is compiled from official development documentation

A series of

  • Quick Start with the Latest Docker Sentry-CLI – Create a Release in 1 Minute
  • Quick Start with Docker Sentry-CLI – Source Maps in 30 Seconds
  • Sentry for React
  • Sentry for Vue
  • Sentry-CLI Usage Details
  • Sentry Web Performance Monitoring – Web Vitals
  • Sentry Web Performance Monitoring – Metrics
  • Sentry Web Performance Monitoring – Trends
  • Sentry Front-end Monitoring – Best Practices (Official Tutorial)
  • Sentry Back-end Monitoring – Best Practices (Official Tutorial)
  • Sentry Monitoring – Discover: Big Data Query Analysis Engine
  • Sentry Monitoring – Dashboards: Large-screen Data Visualization
  • Sentry Monitoring – Environments: Separating Event Data from Different Deployment Environments
  • Sentry Monitoring – Security Policy Reporting
  • Sentry Monitoring – Search
  • Sentry Monitoring – Alerts
  • Sentry Monitoring – Distributed Tracing
  • Sentry Monitoring – Distributed Tracing 101 for Full-stack Developers
  • Sentry Monitoring – Snuba Data Mid-Platform Architecture (Kafka + Clickhouse)
  • Sentry – Snuba Data Model
  • Sentry Monitoring – Snuba Data Mid-Platform Architecture (Introduction to Query Processing)
  • Sentry Official JavaScript SDK Introduction and Debugging Guide
  • Sentry Monitoring – Snuba Data Mid-Platform Architecture (Writing and Testing Snuba Queries)
  • Sentry Monitoring – Snuba Data Mid-Platform Architecture (Introduction to the SnQL Query Language)
  • Sentry Monitoring – Snuba Data Mid-Platform Local Development Environment Setup
  • Sentry Monitoring – Self-hosted Docker Compose Deployment and Troubleshooting
  • Sentry Developer Contribution Guide – Front End (ReactJS Ecosystem)

Contents

  • Service Management (devservices)
    • View service logs
    • Run CLI clients for Redis, Postgres, and Clickhouse
    • Remove container state
  • Port assignments
    • Find out what is running on your machine
  • Asynchronous Worker
    • Registering tasks
    • Run the Worker
    • Start the Cron process
    • Configure the Broker
      • Redis
      • RabbitMQ
  • Email
    • Outbound email
    • Inbound email
      • Mailgun
  • Nodestore
    • Django backend
    • Custom backends
  • File storage
    • File system backend
    • Google Cloud Storage backend
    • Amazon S3 backend
    • MinIO S3 backend
  • Time series storage
    • RedisSnuba backend (recommended)
    • Dummy backend
    • Redis backend
  • Write buffer
    • Configuration
      • Redis
  • Metrics
    • Statsd backend
    • Datadog backend
    • DogStatsD backend
    • Logging backend
  • Quotas
    • Event quota
      • Configuration
      • System-wide rate limit
      • User-based rate limits
      • Project-based rate limits
    • Notification rate limit
      • Configuration
  • Notification digests
    • Configuration
    • Backends
      • Dummy backend
      • Redis backend
      • Sample configuration
  • Relay
  • Snuba
  • Back-end Chart rendering
    • Use Chartcuterie on the back end of Sentry
    • Configure chart rendering
      • Service initialization
      • Add/remove chart types
    • Run Chartcuterie in development
      • Update chart types locally
  • How it works
    • Chartcuterie startup
    • Render calls from Sentry

Service Management (DevServices)

Sentry provides an abstraction over Docker, called devservices, for running the services required in development.

Usage: sentry devservices [OPTIONS] COMMAND [ARGS]...

  Manage dependent development services required for Sentry.

  Do not use in production!

Options:
  --help  Show this message and exit.

Commands:
  attach  Run a single devservice in foreground, as...
  down    Shut down all services.
  rm      Delete all services and associated data.
  up      Run/update dependent services.

View service logs

# Follow snuba logs
docker logs -f sentry_snuba

Run CLI clients for Redis, Postgres, and Clickhouse

# redis
docker exec -it sentry_redis redis-cli

# clickhouse
docker exec -it sentry_clickhouse clickhouse-client

# psql
docker exec -it sentry_postgres psql -U postgres

Remove container state

If you do mess up a container or volume, you can start over using devservices rm.

# Delete all data (containers, volumes, and networks) associated with all services
sentry devservices rm

For example, suppose you corrupted your Postgres database during a migration and want to reset the Postgres data. You could do the following:

# Delete all data (containers, volumes, and networks) associated with a single service
sentry devservices rm postgres

Port assignments

The following is a simple list of ports used by Sentry services, or by dependencies of Sentry services, in a development setup. It serves two purposes:

  • Finding out why a port is in use on your work machine, and which process to kill to free it.
  • Finding out which ports are safe to assign to a new service.
| Port | Service | Description |
| ---- | ------- | ----------- |
| 9000 | Clickhouse | Devservice clickhouse. Snuba's database. |
| 8123 | Clickhouse | |
| 9009 | Clickhouse | |
| 3021 | Symbolicator | Devservice symbolicator. Used to process stack traces. |
| 1218 | Snuba | Devservice snuba. Used to search events. |
| 9092 | Kafka | Devservice kafka. Used for relay-sentry communication, and optionally for sentry-snuba communication. |
| 6379 | Redis | Devservice redis (or possibly an older Homebrew install). Responsible for caching, Relay project configs, and Celery queues. |
| 5432 | Postgres | Devservice postgres (or possibly an older Homebrew install). |
| 7899 | Relay | Devservice relay. Provides the API that SDKs send events to (a.k.a. event ingestion). Webpack on port 8000 reverse-proxies to this server. Started/stopped by sentry devserver. |
| 8000 | Sentry Dev | Sentry API + frontend. Webpack listens on this port and proxies API requests to the Django app. |
| 8001 | uWSGI | Started/stopped by sentry devserver. Serves the Django app/API. Webpack on port 8000 reverse-proxies to this server. |
| 7999 | Sentry frontend prod proxy | Used to test local UI changes against the prod API. |
| 8000 | Develop docs | The site for this documentation. Conflicts with Sentry Dev. |
| 3000 | User docs | User-facing documentation. May conflict with Relay if the Relay devservice is run outside of devservices. |
| 9001 | Sentry Dev Styleguide server | Bound when running sentry devserver --styleguide. |
| 9000 | sentry run web | The traditional default port of sentry run web, changed to 9001 to avoid conflicting with Clickhouse. |
| 9001 | sentry run web | Frontend without the webpack/Relay-based setup. Sentry Dev is probably better. Conflicts with the Sentry Dev Styleguide server. |
| 8000 | Relay mkdocs documentation | At some point this will be merged into our existing docs repository. Conflicts with Sentry Dev. |
  • Relay
    • getsentry.github.io/relay/
  • Snuba
    • Github.com/getsentry/s…
  • Develop docs
    • Github.com/getsentry/d…
  • User docs
    • Github.com/getsentry/s…

Find out what is running on your machine

  • On macOS, use lsof -nP -i4 | grep LISTEN to find occupied ports.
  • The Docker for Mac dashboard UI shows the Docker containers/devservices you are running, their assigned ports, and start/stop options.

Asynchronous Worker

Sentry comes with a built-in queue for processing tasks asynchronously. For example, when an event comes in, rather than writing it to the database immediately, Sentry sends a job to the queue so the request can return immediately, and a background worker actually processes and saves the data.

Sentry relies on the Celery library to manage its workers.

  • https://docs.celeryproject.org/

Registering tasks

Sentry uses special decorators to configure tasks, giving us more explicit control over callable objects.

from sentry.tasks.base import instrumented_task

@instrumented_task(
    name="sentry.tasks.do_work",
    queue="important_queue",
    default_retry_delay=60 * 5,
    max_retries=None,
)
def do_work(kind_of_work, **kwargs):
    # ...

There are several important points:

  • The task name _must_ be declared.

    The task name is how Celery identifies messages (requests) and determines which function and worker should handle them. If tasks are not named, Celery derives a name from the module and function names, tying the name to the location of the code and making it more fragile under future code maintenance.

  • Tasks _must_ accept **kwargs to handle rolling compatibility.

    This ensures tasks will accept any message that happens to be in the queue rather than failing on unknown arguments. It helps when rolling back changes: deployments are not instantaneous, and messages may be produced with multiple versions of arguments.

    While this allows rolling forward and backward without tasks failing outright, when changing arguments you still have to be aware that workers will process messages with both old and new arguments. This reduces the number of changes required in such a migration and gives operators more flexibility, but losing messages because of unknown arguments is still unacceptable.

  • Tasks _should_ automatically retry on failure.

  • Task arguments _should_ be primitive and small.

    Task arguments are serialized into the message sent through the broker, and the worker needs to deserialize them again. Doing this with complex types is fragile and should be avoided. For example, prefer passing an ID to the task, which it can use to load the data from cache, rather than the data itself.

    Similarly, to keep message brokers and workers operating efficiently, serializing large values into messages should be avoided: it produces large messages, large queues, and more (de)serialization overhead.

  • The task's module must be added to CELERY_IMPORTS.

    Celery workers look tasks up by name, which only works once the module containing the decorated task function has been imported, since that is when the task is registered under its name. Consequently, every module containing a task must be added to the CELERY_IMPORTS setting in src/sentry/conf/server.py.
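The name-based registration described in the bullets above can be sketched with a minimal registry. This is an illustration only, not Sentry's actual implementation; the instrumented_task here is a simplified stand-in:

```python
# Minimal sketch of name-based task registration (illustrative, not Sentry's code).
TASK_REGISTRY = {}

def instrumented_task(name, **options):
    """Register the decorated function under an explicit, stable name."""
    def decorator(func):
        TASK_REGISTRY[name] = {"func": func, "options": options}
        return func
    return decorator

@instrumented_task(name="sentry.tasks.do_work", queue="important_queue")
def do_work(kind_of_work, **kwargs):
    # Unknown kwargs are silently accepted for rolling compatibility.
    return f"did {kind_of_work}"

# A worker looks the task up by its registered name, not its import path.
result = TASK_REGISTRY["sentry.tasks.do_work"]["func"]("cleanup", legacy_flag=True)
```

Because lookup goes through the registry rather than the import path, renaming or moving the function does not change how queued messages resolve, as long as the declared name stays the same.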

Run the Worker

You can run the Worker using the Sentry CLI.

  • docs.sentry.io/product/cli…

$ sentry run worker

Start the Cron process

Sentry uses a cron process to schedule routine jobs:

SENTRY_CONF=/etc/sentry sentry run cron

Configure the Broker

Sentry supports two main brokers that can be adjusted to suit your workload: RabbitMQ and Redis.

Redis

The default broker is Redis, and it works in most cases. The main limitation of using Redis is that all pending work must fit in memory.

BROKER_URL = "redis://localhost:6379/0"

If your Redis connection requires a password for authentication, use the following format:

BROKER_URL = "redis://:password@localhost:6379/0"

RabbitMQ

RabbitMQ is an ideal broker for Sentry workers if you run a heavy workload, or if you are worried about holding the backlog of pending work in memory.

BROKER_URL = "amqp://guest:guest@localhost:5672/sentry"

Email

Sentry provides support for both outbound and inbound email.

The use of inbound email is fairly limited, with only replies to Error and Note notifications currently supported.

Outbound email

You need to configure an SMTP provider for outbound E-mail.

TODO: Write mail preview back end.

mail.backend
Declared in ‘config.yml’.

A back end for sending E-mail messages. The options are SMTP, console, and dummy.

The default value is SMTP. If you want to disable E-mail delivery, use dummy.

mail.from
Declared in ‘config.yml’.

The E-mail address used for outbound E-mail in the From header.

The default value is root@localhost. Changing this value is strongly recommended to ensure reliable E-mail delivery.

mail.host
Declared in ‘config.yml’.

Host name for SMTP connection.

The default is localhost.

mail.port
Declared in ‘config.yml’.

Connection port for SMTP connection.

The default is 25.

mail.username
Declared in ‘config.yml’.

Username used to authenticate with the SMTP server.

The default is (empty).

mail.password
Declared in ‘config.yml’.

Password used to authenticate with the SMTP server.

The default is (empty).

mail.use-ssl
Declared in ‘config.yml’.

Should Sentry use SSL when connecting to SMTP server?

The default is false.

mail.use-tls
Declared in ‘config.yml’.

Should Sentry use TLS when connecting to SMTP server?

The default is false.

mail.list-namespace
Declared in ‘config.yml’.

The mailing-list namespace for emails sent by this Sentry server. This should be a domain you own (often the same domain as the value of the mail.from configuration parameter) or localhost.
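Putting the outbound options above together, a config.yml might look like the following. All values are placeholders for illustration; only the keys come from the reference above:

```yaml
# config.yml - illustrative outbound SMTP settings (values are examples)
mail.backend: "smtp"
mail.from: "sentry@example.com"
mail.host: "smtp.example.com"
mail.port: 587
mail.username: "sentry"
mail.password: "secret"
mail.use-tls: true
mail.use-ssl: false
```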

Inbound email

For configuration, you can choose from different backends.

Mailgun

Start by selecting a domain to handle inbound email. We have found this is easiest if you maintain a domain separate from everything else. In our example, we will choose inbound.sentry.example.com. You will need to configure the DNS records for that domain as described in the Mailgun documentation.

Create a new route in mailgun:

Priority:
  0
Filter Expression:
  catch_all()
Actions:
  forward("https://sentry.example.com/api/hooks/mailgun/inbound/")
Description:
  Sentry inbound handler

Configure Sentry with appropriate Settings:

# your Mailgun API key (used to validate incoming Webhooks)
mail.mailgun-api-key: ""

# Set SMTP hostname to your configured inbound domain
mail.reply-hostname: "inbound.sentry.example.com"

# Tell Sentry to send the appropriate mail headers so that
# replies can be received
mail.enable-replies: true

That’s it! You can now respond to activity notifications about errors through the email client.

Nodestore

Sentry provides an abstraction called ‘nodestore’ for storing key/value blobs.

The default backend simply stores them as gzipped blobs in the ‘nodestore_node’ table of the default database.

Django backend

The Django backend uses gzipped JSON blob-as-text mode to store all data in the ‘nodestore_node’ table.

There are no options on the back end, so you just set it to an empty dictionary.

SENTRY_NODESTORE = 'sentry.nodestore.django.DjangoNodeStorage'
SENTRY_NODESTORE_OPTIONS = {}

Custom backends

If you have a favorite data storage solution, it only needs to support a few operations to work with Sentry's blob storage:

  • Set a key to a value
  • Get a key
  • Delete a key

For more information on implementing your own backend, see sentry.nodestore.base.NodeStorage.
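The three rules above can be sketched with a minimal in-memory backend. The class and method names here are illustrative; see sentry.nodestore.base.NodeStorage for the real interface:

```python
# Minimal sketch of a custom nodestore-style backend (illustrative names).
import json
import zlib

class InMemoryNodeStorage:
    def __init__(self):
        self._blobs = {}

    def set(self, key, data):
        # Store as a compressed JSON blob, mirroring the default Django backend.
        self._blobs[key] = zlib.compress(json.dumps(data).encode())

    def get(self, key):
        raw = self._blobs.get(key)
        return None if raw is None else json.loads(zlib.decompress(raw))

    def delete(self, key):
        self._blobs.pop(key, None)

store = InMemoryNodeStorage()
store.set("event:1", {"message": "boom"})
assert store.get("event:1") == {"message": "boom"}
store.delete("event:1")
```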

File storage

Sentry provides an abstraction called ‘Filestore’ for storing files (such as publishing artifacts).

The default backend stores files in /tmp/sentry-files, which is not suitable for production use.

File system backend

filestore.backend: "filesystem"
filestore.options:
  location: "/tmp/sentry-files"

Google Cloud storage backend

In addition to the following configuration, you also need to ensure that the shell environment sets the variable GOOGLE_APPLICATION_CREDENTIALS. For more information, see the Google Cloud documentation for setting up authentication.

  • Cloud.google.com/storage/doc…
filestore.backend: "gcs"
filestore.options:
  bucket_name: "..."

Amazon S3 backend

The S3 storage back-end supports access keys or IAM instance roles for authentication. With the latter, access_key and secret_key are omitted. By default, S3 objects are created using public-read ACLs, which means that in addition to PutObject, GetObject, and DeleteObject, the account/role used must also have PutObjectAcl permissions. If you do not want your uploaded files to be publicly accessible, you can set default_acl to private.

filestore.backend: "s3"
filestore.options:
  access_key: "..."
  secret_key: "..."
  bucket_name: "..."
  default_acl: "..."

MinIO S3 backend

filestore.backend: "s3"
filestore.options:
  access_key: "..."
  secret_key: "..."
  bucket_name: "..."
  endpoint_url: "https://minio.example.org/"

Time series storage

Sentry provides a service to store time series data. This is primarily used to display summary information about events and projects, and to calculate (in real time) event rates.

RedisSnuba backend (recommended)

This is the only backend that works 100% correctly:

SENTRY_TSDB = 'sentry.tsdb.redissnuba.RedisSnubaTSDB'

This back end communicates with Snuba to get metrics related to Event ingestion and with Redis to get everything else. Snuba needs to run its own outcomes consumer, which is not currently part of devservices.

The wrapped Redis TSDB can be configured as follows (see below for Redis options):

SENTRY_TSDB_OPTIONS = {
    'redis': {...},  # options for the wrapped RedisTSDB go here
}

Dummy backend

As the name implies, this backend discards all TSDB data on write and returns zeros on read:

SENTRY_TSDB = 'sentry.tsdb.dummy.DummyTSDB'

Redis backend

The “naked” Redis back end reads and writes all data to Redis. The columns associated with Organization Stats will show zero data because it is only available in Snuba.

SENTRY_TSDB = 'sentry.tsdb.redis.RedisTSDB'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option as follows:

SENTRY_TSDB_OPTIONS = {
    'cluster': 'tsdb',
}

Write buffer

Sentry manages database row contention by buffering writes and flushing bulk changes to the database over a period of time. This is useful under highly concurrent updates, especially when they frequently target the same rows.

For example, if you happen to receive 100,000 events per second, and 10% of them report a connection issue to the database (so they get grouped together), enabling a buffer backend changes things so that each count update is instead put into a queue, and updates are executed at whatever rate the queue can keep up with.
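The buffering idea described above can be sketched as follows. This is a toy illustration of the concept, not Sentry's Buffer API: increments accumulate cheaply, then one bulk update per key hits the database:

```python
# Toy sketch of write buffering: many increments become one bulk DB update.
from collections import Counter

class WriteBuffer:
    def __init__(self):
        self.pending = Counter()

    def incr(self, key, amount=1):
        # Cheap in-memory update; no database row contention here.
        self.pending[key] += amount

    def flush(self, apply_update):
        # One bulk update per key, at whatever rate the flusher keeps up with.
        for key, delta in self.pending.items():
            apply_update(key, delta)
        self.pending.clear()

db = {}

def apply(key, delta):
    db[key] = db.get(key, 0) + delta

buf = WriteBuffer()
for _ in range(100):
    buf.incr("group:42:times_seen")
buf.flush(apply)
# 100 increments, but only a single "database" write
```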

Configuration

To specify the back end, simply change the SENTRY_BUFFER and SENTRY_BUFFER_OPTIONS values in the configuration:

SENTRY_BUFFER = 'sentry.buffer.base.Buffer'

Redis

The Redis backend requires queues to be enabled, otherwise you won't see any benefit (in fact, you'll only see a negative impact on performance).

The configuration is straightforward:

SENTRY_BUFFER = 'sentry.buffer.redis.RedisBuffer'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option as follows:

SENTRY_BUFFER_OPTIONS = {
    'cluster': 'buffer',
}

Metrics

Sentry provides an abstraction called ‘metrics’ for internal monitoring, typically timing and various counters.

The default backend simply discards them (although some values remain in the internal time series database).
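The backends below all implement the same small interface. As a sketch of the idea (method and class names here are illustrative, not Sentry's exact API), a metrics backend mainly needs counter and timing hooks:

```python
# Illustrative sketch of a metrics backend interface (not Sentry's exact API).
class MetricsBackend:
    def incr(self, key, amount=1, tags=None):
        raise NotImplementedError

    def timing(self, key, value, tags=None):
        raise NotImplementedError

class DiscardingBackend(MetricsBackend):
    """Like the default: drops every metric."""
    def incr(self, key, amount=1, tags=None):
        pass

    def timing(self, key, value, tags=None):
        pass

class InMemoryBackend(MetricsBackend):
    """Handy for tests: keeps counters in a dict."""
    def __init__(self):
        self.counters = {}

    def incr(self, key, amount=1, tags=None):
        self.counters[key] = self.counters.get(key, 0) + amount

    def timing(self, key, value, tags=None):
        pass

backend = InMemoryBackend()
backend.incr("jobs.started")
backend.incr("jobs.started")
```

The Statsd, Datadog, DogStatsD, and logging backends below are concrete implementations of this kind of interface, each forwarding the values to a different destination.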

Statsd backend

SENTRY_METRICS_BACKEND = 'sentry.metrics.statsd.StatsdMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'host': 'localhost',
    'port': 8125,
}

Datadog backend

The Datadog backend requires the datadog package to be installed in your Sentry environment:

$ pip install datadog

In your sentry.conf.py:

SENTRY_METRICS_BACKEND = 'sentry.metrics.datadog.DatadogMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'api_key': '...',
    'app_key': '...',
    'tags': {},
}

After installation, Sentry metrics are sent to the Datadog REST API over HTTPS.

  • Docs.datadoghq.com/api/?lang=p…

DogStatsD backend

Using the DogStatsD backend requires a Datadog Agent to be running with the DogStatsD server enabled (listening on port 8125 by default).

  • docs.datadoghq.com/agent/

You must also install the Datadog Python package into your Sentry environment:

$ pip install datadog

In your sentry.conf.py:

SENTRY_METRICS_BACKEND = 'sentry.metrics.dogstatsd.DogStatsdMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'statsd_host': 'localhost',
    'statsd_port': 8125,
    'tags': {},
}

Once configured, metrics are sent to the DogStatsD server and periodically flushed to Datadog over HTTPS.

Logging backend

LoggingBackend reports all operations to the sentry.metrics logger. In addition to the metric name and value, log messages also include extra data such as the instance and tags values, which can be displayed using a custom formatter.

SENTRY_METRICS_BACKEND = 'sentry.metrics.logging.LoggingBackend'

LOGGING['loggers']['sentry.metrics'] = {
    'level': 'DEBUG',
    'handlers': ['console:metrics'],
    'propagate': False,
}

LOGGING['formatters']['metrics'] = {
    'format': '[%(levelname)s] %(message)s; instance=%(instance)r; tags=%(tags)r',
}

LOGGING['handlers']['console:metrics'] = {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'metrics',
}

Quotas

With the way Sentry works, you might find yourself in a situation where you see too much inbound traffic and don’t have a good way to discard excess messages. There are several solutions to this, and if you encounter this problem, you may want to use them all.

Event quota

One of the main mechanisms for limiting workloads in Sentry involves setting event quotas. These can be configured on a per-project and system-wide basis and allow you to limit the maximum number of events accepted in a 60-second period.
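The general idea of a 60-second-window quota can be sketched as follows. This is a simplified illustration of a fixed-window counter, not the actual RedisQuota implementation:

```python
# Simplified fixed-window quota check (illustrative; real RedisQuota differs).
import time

class WindowQuota:
    def __init__(self, limit, window=60):
        self.limit = limit
        self.window = window
        self.counts = {}

    def is_rate_limited(self, project_id, now=None):
        now = time.time() if now is None else now
        # All events in the same 60-second window share one counter bucket.
        bucket = (project_id, int(now // self.window))
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] > self.limit

quota = WindowQuota(limit=2)
results = [quota.is_rate_limited("proj", now=100) for _ in range(3)]
# results == [False, False, True]: the third event in the window is rejected
```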

Configuration

The main implementation uses Redis, which only requires you to configure the connection information:

SENTRY_QUOTAS = 'sentry.quotas.redis.RedisQuota'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option as follows:

SENTRY_QUOTA_OPTIONS = {
    'cluster': 'quota',
}

If you have additional requirements, you are free to extend the base Quota class just like the Redis implementation.

System-wide rate limit

You can configure a system-wide maximum events-per-minute rate limit:

system.rate-limit: 500

For example, in your project’s sentry.conf.py, you can do the following:

from sentry.conf.server import SENTRY_OPTIONS


SENTRY_OPTIONS['system.rate-limit'] = 500

Alternatively, if you navigate to /manage/settings/, you'll find an admin panel with an option to set the rate limit, which is stored via the quota implementation described above.

User-based rate limits

You can configure a user-based maximum rate per minute limit:

auth.user-rate-limit: 100
auth.ip-rate-limit: 100

Project-based rate limits

To rate limit by project, open your project's Settings. Under the Client Keys (DSN) tab, find the key you want to limit and click the Configure button. This will display key/project-specific rate limit settings.

Notification rate limit

In some cases, you might be concerned about restricting content such as outbound E-mail notifications. To solve this problem, Sentry provides a rate-limiting subsystem that supports arbitrary rate limiting.

Configuration

As with event quotas, the main implementation uses Redis:

SENTRY_RATELIMITER = 'sentry.ratelimits.redis.RedisRateLimiter'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option as follows:

SENTRY_RATELIMITER_OPTIONS = {
    'cluster': 'ratelimiter',
}

Notification digests

Sentry provides a service that collects notifications as they occur and schedules them for delivery as aggregated “digest” notifications.

Configuration

Although the Digest system is configured with a reasonable set of default options, you can use the SENTRY_DIGESTS_OPTIONS setting to fine-tune the Digest backend behavior to suit your unique installation needs. All backends share a common set of options defined below, while some backends may define additional options specific to their respective implementations.

minimum_delay: The minimum_delay option defines the default minimum amount of time (in seconds) to wait between scheduled digest deliveries after the initial scheduling. This can be overridden per project in the notification settings.

maximum_delay: The maximum_delay option defines the default maximum amount of time (in seconds) to wait between scheduled digest deliveries. This can be overridden per project in the notification settings.

increment_delay: The increment_delay option defines how long each observation of an event should delay the next digest, up to maximum_delay after the digest was last processed.

capacity: The capacity option defines the maximum number of items that should be kept in a timeline. Whether this is a hard or soft limit depends on the backend (see the truncation_chance option).

truncation_chance: The truncation_chance option defines the probability that an add operation triggers a truncation of the timeline to bring its size close to the defined capacity. A value of 1 truncates the timeline on every add (effectively making it a hard limit), while a lower probability increases the chance the timeline will exceed its intended capacity, but improves add performance by avoiding truncation, which is a potentially expensive operation, especially on large data sets.
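The interaction of capacity and truncation_chance can be sketched as follows. This is an illustration of the described behavior, not Sentry's digests code:

```python
# Illustrative sketch of capacity + truncation_chance on a timeline.
import random

def add(timeline, item, capacity, truncation_chance, rng=random.random):
    timeline.append(item)
    # With probability truncation_chance, trim the timeline back to capacity,
    # keeping only the most recent items.
    if len(timeline) > capacity and rng() < truncation_chance:
        del timeline[:-capacity]

timeline = []
for i in range(10):
    # truncation_chance=1.0 truncates on every add: an effective hard limit.
    add(timeline, i, capacity=5, truncation_chance=1.0)
# timeline now holds only the 5 most recent items: [5, 6, 7, 8, 9]
```

With a lower truncation_chance (say 0.1), most adds skip the trim, so the timeline can temporarily exceed capacity, trading strictness for cheaper adds.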

Backends

Dummy backend

The dummy backend disables digest scheduling; all notifications are sent as they occur (subject to rate limits). This is the default digest backend for installations created before version 8.

The dummy backend can be specified with the SENTRY_DIGESTS setting:

SENTRY_DIGESTS = 'sentry.digests.backends.dummy.DummyBackend'

Redis backend

The Redis back end uses Redis to store schedule and pending Notification data. This is the default Digest backend for installations created since version 8.

The Redis back end can be specified with the SENTRY_DIGESTS setting:

SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend'

The Redis backend accepts several options beyond the basic set, provided by SENTRY_DIGESTS_OPTIONS:

cluster: The cluster option defines the Redis cluster that should be used for storage. If no cluster is specified, the default cluster is used.

Changing the cluster value or the cluster configuration after data has been written to the digest backend can cause unexpected effects: it creates the possibility of data loss during cluster resizing. This option should be adjusted carefully on running systems.

ttl: The ttl option defines the time-to-live (in seconds) for records, timelines, and digests. This can (and should) be a relatively high value, since records, timelines, and digests should all be removed after processing; the TTL mainly ensures that stale data does not linger too long in the case of misconfiguration. It should be greater than the maximum scheduling delay so that data is not evicted prematurely.

Sample configuration

SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend'
SENTRY_DIGESTS_OPTIONS = {
    'capacity': 100,
    'cluster': 'digests',
}

Relay

Relay is a service for event filtering, rate limiting, and processing. It can serve as:

  • The event ingestion endpoint of a Sentry installation. See the Relay developer documentation:

    • getsentry.github.io/relay/relay…

  • An additional middle layer between your application and Sentry. See the Relay product documentation:

    • docs.sentry.io/product/rel…

Snuba

  • Docs: getsentry.github.io/snuba/
  • Code: Github.com/getsentry/s…

Back-end Chart rendering

Sentry's front end provides users with various types of detailed, interactive charts that are highly consistent with the look and feel of the Sentry product. Historically, these charts existed only inside the web application.

In some cases, however, it can be valuable to display a chart outside the context of the web application. For example:

  • Slack unfurls of Discover charts, metric alert notifications, issue details, or any other link in Sentry where it might be useful to view a chart directly in Slack.

  • Notification and digest emails, where trends can be visualized as charts.

Fortunately, Sentry includes built-in support for the internal Chartcuterie NodeJS service, which can produce charts as images via an HTTP API. Charts are generated with the same ECharts library used by the front end, and Chartcuterie shares code with Sentry's front end, which means the look and feel of charts is easy to keep consistent between front-end-rendered charts and back-end Chartcuterie-rendered charts.

  • Github.com/getsentry/c…
  • Github.com/apache/echa…

Use Chartcuterie on the back end of Sentry

Generating charts using Chartcuterie is very simple.

Import the generate_chart function, provide the chart type and a data object, and get back a public image URL.

from sentry.charts import generate_chart, ChartType

# The shape of data is determined by the RenderDescriptor in the
# configuration module for the ChartType being rendered.
data = {}

chart_url = generate_chart(ChartType.MY_CHART_TYPE, data)

Configure chart rendering

Chartcuterie loads an external JavaScript module from sentry.io that determines how it renders charts. This module directly configures ECharts' option object, including transformations of the series data provided to Chartcuterie in POST /render calls.

The module lives in the getsentry/sentry repository, at static/app/chartcuterie/config.tsx.

  • Echarts.apache.org/en/option.h…
  • Github.com/getsentry/s…

Service initialization

An optional initialization function init can be configured to run at service startup time. This function has access to Chartcuterie’s global Echarts object and can be used to register utilities (such as registerMaps).

Add/remove chart type

Chart rendering is configured per "chart type". Each type of chart has a well-known name that must be declared in both the front-end application and the back end.

  1. On the front end, add a ChartType in static/app/chartcuterie/types.tsx.

    • github.com/getsentry/s…
  2. In static/app/chartcuterie/config.tsx, register the chart's RenderDescriptor, which describes its appearance and series transformations. You can use the register function for this.

    • github.com/getsentry/s…
  3. On the back end, add a matching ChartType to the sentry.charts.types module.

    • github.com/getsentry/s…
  4. Deploy your changes in Sentry. The configuration module is automatically propagated to Chartcuterie within 5 minutes.

    You do not need to deploy Chartcuterie.

Don't deploy code that uses a new chart type at the same time as the configuration module change. Due to the propagation delay, there is no guarantee that the new chart type will be usable immediately after deployment.

The configuration module includes the commit SHA of the sentry.io deployment, which lets Chartcuterie check whether it has received a new configuration module on each polling tick.
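The SHA-gated polling described above can be sketched as follows. The class and field names are illustrative, not Chartcuterie's actual code:

```python
# Illustrative sketch of version-gated config polling (not Chartcuterie's code).
class ConfigPoller:
    def __init__(self):
        self.current_version = None
        self.config = None

    def on_poll(self, fetched):
        """Apply a fetched config module only if its commit SHA changed."""
        if fetched["version"] == self.current_version:
            return False  # nothing new this tick
        self.current_version = fetched["version"]
        self.config = fetched
        return True

poller = ConfigPoller()
first = poller.on_poll({"version": "abc123", "styles": {}})
second = poller.on_poll({"version": "abc123", "styles": {}})
# first is True (new config applied); second is False (unchanged SHA)
```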

Run Chartcuterie in development

To enable Chartcuterie in the local developer environment, first enable it in config.yml:

# enable chartcuterie
chart-rendering.enabled: true

Currently you need to manually build the configuration modules in your development environment.

yarn build-chartcuterie-config

You can then start the Chartcuterie devservice. If the devservice does not start, check that the chart-rendering.enabled key is set to true (use sentry config get chart-rendering.enabled).

sentry devservices up chartcuterie

You can verify that the service started successfully by checking the logs:

docker logs -f sentry_chartcuterie

The output should look something like this:

info: Using polling strategy to resolve configuration...
info: Polling every 5s for config...
info: Server listening for render requests on port 9090
info: Resolved new config via polling: n styles available. {"version":"xxx"}
info: Config polling switching to idle mode
info: Polling every 300s for config...

Your development environment is now ready to invoke a local instance of Chartcuterie.

Update chart types locally

Currently, you need to rerun yarn build-chartcuterie-config each time you change the chart configuration. This may improve in the future.

How it works

Below are some diagrams of the Chartcuterie service and how it interacts with the Sentry application server.

Chartcuterie startup

Render calls from Sentry