Sergey Lysenko | The DevOps Engineer
https://sergey-lysenko.com/

Kaniko Transitions to Chainguard: What It Means for CI/CD
https://sergey-lysenko.com/kaniko-transitions-to-chainguard-what-it-means-for-ci-cd/
Fri, 03 Oct 2025 08:25:47 +0000

For years, Kaniko has been a go-to tool for building container images inside CI/CD pipelines, especially in environments where Docker isn’t available. However, active development on Kaniko by Google has now come to an end.
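
For context, a typical Kaniko invocation inside a CI job looks something like the sketch below (registry, paths, and tag are illustrative); the executor builds an image from a Dockerfile and pushes it without needing a Docker daemon:

/kaniko/executor \
  --context dir:///workspace \
  --dockerfile /workspace/Dockerfile \
  --destination reg.example.com/project/app:1.0.0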

The good news is that the project isn’t disappearing. Chainguard has stepped in, forking Kaniko and taking over its ongoing development. This move ensures that the tool remains available and continues to evolve, with a strong focus on modern security practices and supply chain integrity — areas where Chainguard has already established deep expertise.

From a DevOps perspective, this transition raises some interesting points. On the one hand, it’s reassuring to see a community-driven fork keep a widely used tool alive. On the other hand, it highlights the fragility of relying on single-vendor projects, especially when they underpin critical workflows in CI/CD systems.

This shift also opens the door for teams to re-evaluate their image build strategy. Should they continue with Kaniko under Chainguard’s stewardship, or consider alternatives like BuildKit, Tekton, or other container-native solutions?

Building Multi-Architecture Images with crane index append
https://sergey-lysenko.com/building-multi-architecture-images-with-crane-index-append/
Thu, 02 Oct 2025 13:38:45 +0000

When working with containerized applications, it’s common to target multiple platforms such as amd64 and arm64. Instead of publishing separate tags for each architecture, you can create a multi-architecture manifest (also known as an image index). This allows tools like Docker or Kubernetes to automatically pull the correct image for the host architecture.

One of the simplest ways to achieve this is by using crane, a command-line tool from Google’s go-containerregistry project.

crane index append \
  -m reg.example.com/project/app:1.0.0-amd64 \
  -m reg.example.com/project/app:1.0.0-arm64 \
  -t reg.example.com/project/app:1.0.0 -v

Here’s what happens step by step:

1. Two platform-specific images already exist in the registry: one built for amd64, the other for arm64.
2. The -m flags specify the manifests to include in the new index.
3. The -t flag defines the final multi-arch tag (1.0.0 in this case).
4. The -v flag enables verbose output, useful for debugging.

After running this command, pulling reg.example.com/project/app:1.0.0 will automatically serve the correct architecture-specific image depending on the client’s platform.
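
You can verify the result by inspecting the new index; for example, crane can print the manifest list, and jq (if installed) makes the platforms easy to read:

crane manifest reg.example.com/project/app:1.0.0 | jq '.manifests[].platform'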

Why This Matters

• Seamless developer experience: Users don’t need to worry about which tag matches their machine.
• Kubernetes compatibility: Clusters with mixed node types (x86 and ARM) can run the same deployment manifest.
• Future-proofing: ARM-based infrastructure is growing rapidly (e.g., AWS Graviton, Apple Silicon), so multi-arch support is becoming essential.

Final Thoughts

Using crane index append is a lightweight yet powerful way to unify platform-specific images under a single tag. It fits naturally into CI/CD pipelines and ensures your container images are portable, modern, and ready for diverse runtime environments.

How to Reveal instanceType in AWS Fargate
https://sergey-lysenko.com/how-to-reveal-instancetype-in-aws-fargate/
Tue, 10 Dec 2024 09:36:21 +0000

AWS Fargate is a serverless compute service that abstracts the underlying infrastructure, including the instanceType. This can be challenging when you need to determine the exact type of instance your container is running on. However, there’s a straightforward way to uncover this information.

You can reveal the hidden instance type by accessing the container (or pod) and executing a specific command:

cat /sys/devices/virtual/dmi/id/product_name

Why This Works

The file /sys/devices/virtual/dmi/id/product_name contains hardware information provided via the Desktop Management Interface (DMI). Even in Fargate, this data is passed through the virtualization layer, making it accessible from within the container or pod.
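
If your shell has no direct access to the container, one way to run the command on ECS with Fargate is via ECS Exec; a sketch assuming ECS Exec is enabled on the task, with placeholder cluster, task, and container names:

aws ecs execute-command \
  --cluster my-cluster \
  --task 0123456789abcdef0 \
  --container app \
  --interactive \
  --command "cat /sys/devices/virtual/dmi/id/product_name"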

Understanding the lifetime of Varnish cached objects: TTL, grace, and keep
https://sergey-lysenko.com/understanding-the-lifetime-of-varnish-cached-objects-ttl-grace-and-keep/
Sun, 10 Mar 2024 11:52:21 +0000

In the context of caching, the terms TTL (Time to Live), grace, and keep are often associated with controlling the lifetime of cached objects. These parameters are commonly used in caching systems like Varnish to manage how long an object should be considered valid and under what conditions it can still be served from the cache. Let’s explore each term:

  1. Time to Live (TTL):
    • Definition: TTL is the duration for which a cached object is considered fresh and can be served from the cache without checking the origin server.
    • Usage: When a request is made, the caching system checks if the object is still within its TTL. If yes, it serves the cached version; otherwise, it fetches a fresh copy from the origin server.
  2. Grace:
    • Definition: Grace is an extension of the TTL concept. It represents the additional time during which a cached object can be served even after its TTL has expired, while the caching system attempts to fetch a fresh copy from the origin server.
    • Usage: If a cached object’s TTL has expired but it is still within the grace period, the caching system may serve it while initiating a background request to the origin server to refresh the content.
  3. Keep:
    • Definition: Keep is another extension of the TTL concept, specifying the maximum time a cached object can be retained in the cache, irrespective of whether the TTL has expired.
    • Usage: Keep allows the caching system to retain objects in the cache for a longer duration, even if the TTL has expired. It is useful in scenarios where you want to keep certain objects in the cache for a fixed period, regardless of their freshness.

In summary:

  • TTL: Specifies how long an object remains valid in the cache.
  • Grace: Represents the additional time during which an expired object can still be served while a background fetch is attempted.
  • Keep: Defines the maximum duration an object can be retained in the cache, regardless of its TTL.

These parameters let you balance serving fresh content against minimizing load on the origin server by managing cached objects intelligently. Choosing TTL, grace, and keep values depends on the specific requirements and characteristics of the cached content.
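
In Varnish these lifetimes are typically set per object in vcl_backend_response. A minimal sketch with illustrative durations:

sub vcl_backend_response {
    # Serve fresh for 1 hour, then allow up to 6 hours of stale delivery
    # while a background fetch refreshes the object, and keep it 24 hours
    # longer so it can still be revalidated with a conditional request.
    set beresp.ttl = 1h;
    set beresp.grace = 6h;
    set beresp.keep = 24h;
}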

The Impact of JIT on Performance in PHP 8: Unleashing the True Potential
https://sergey-lysenko.com/the-impact-of-jit-on-performance-in-php-8-unleashing-the-true-potential/
Thu, 23 Nov 2023 17:04:25 +0000

In this article, we’ll explore the revolutionary influence of Just-in-Time (JIT) compilation on PHP 8’s productivity. Brace yourself for an enlightening exploration filled with intriguing facts and compelling numerical evidence.

Understanding JIT in PHP 8: Let’s demystify the concept of JIT compilation in PHP 8. This powerful feature dynamically transforms code into machine instructions at runtime, ushering in a new era of performance optimization. Prepare to unlock the astonishing potential of JIT and witness its significant impact on your PHP 8 projects.

The Power Unleashed: With the introduction of JIT in PHP 8, developers experience remarkable performance gains. The ability to generate optimized machine code on the fly eliminates the need for repetitive interpretation. 🚀

Consider a scenario where a PHP script needs to execute a computationally intensive task many times. Without JIT, the Zend engine interprets the (opcode-cached) code on every run, paying the interpretation overhead each time. With JIT, hot code paths are compiled into highly optimized machine code, so subsequent executions skip that overhead. For CPU-bound workloads this shift can yield performance improvements of 30% or more! 😮
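
JIT ships disabled by default. A minimal sketch of the php.ini settings that enable the tracing JIT, with an illustrative buffer size (JIT requires OPcache to be enabled):

opcache.enable=1
opcache.enable_cli=1
opcache.jit=tracing
opcache.jit_buffer_size=128M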

Real-World Applications: Let’s delve into real-world use cases where the influence of JIT in PHP 8 becomes evident:

  1. E-commerce Websites: Imagine an e-commerce platform with thousands of concurrent users. JIT compilation in PHP 8 can significantly enhance response times, ensuring seamless shopping experiences even during peak traffic periods. This boost in performance directly translates to increased customer satisfaction and improved conversion rates. 💸
  2. Data-Intensive Applications: JIT’s impact extends to data-intensive applications as well. Complex algorithms and calculations can be executed with astonishing speed, enabling businesses to process large datasets more efficiently. This enhanced performance empowers data-driven decision-making, leading to optimized operations and improved productivity. 📊

Conclusion: As we wrap up our exploration of JIT’s influence on PHP 8’s performance, we can’t help but appreciate the transformative power this feature brings to web development. With JIT compilation, PHP 8 unlocks new levels of efficiency that translate into tangible benefits for businesses and end-users alike. Embrace the possibilities, harness the power, and witness the remarkable improvements in your PHP 8 projects. 🌟

SFTP service for Magento
https://sergey-lysenko.com/sftp-service-for-magento/
Wed, 30 Mar 2022 10:59:32 +0000

Some of your suppliers and business partners may want to exchange files with your store via FTP/SFTP, usually orders and stock data in CSV or XML format. So we need to decide how to host this SFTP service.

First of all, a warning about directory structure: keep no more than 100–200 files in any single folder.

For example, this is very bad for performance:

/folder/0001.txt
/folder/0002.txt
/folder/0003.txt
/folder/0004.txt
***
/folder/9999.txt

If there are thousands of files in a single folder, performance will be very poor; simply listing and reading files from such a flat directory structure can take minutes.

It should look like this:

/folder/a/0001.txt
/folder/a/0002.txt
/folder/b/0003.txt
/folder/b/0004.txt

As you can see, the files are grouped into subfolders (the same approach Magento uses for its media folder). Each subfolder holds only a small number of files, so directory reads stay fast.
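
For illustration, a small hypothetical shell sketch that buckets a flat set of files into subfolders by the first hex character of an MD5 hash of the file name (paths are made up for the example):

for f in /folder/*.txt; do
  d=$(basename "$f" | md5sum | cut -c1)   # first hex char of the hash: 16 buckets
  mkdir -p "/folder/$d"
  mv "$f" "/folder/$d/"
done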

So, there are three ways to run an (S)FTP service:

1) Build the (S)FTP service yourself

This can be a simple AWS EC2 instance with FTP/SFTP enabled and a storage volume attached. Here we control all of the functionality, including security hardening. Building and setting up our own SFTP server takes up to two days of work and costs approximately 30–50 USD/month to run.
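
For the do-it-yourself option, a minimal sketch of a chrooted, SFTP-only account using OpenSSH's internal-sftp (the user name and paths are hypothetical; the chroot directory must be owned by root):

# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match User supplier1
    ChrootDirectory /srv/sftp/supplier1
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no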

2) Use AWS Transfer Family

It is a fully managed, professional-grade service, but very expensive: costs start at around 400 USD/month.

3) Use another managed (S)FTP service

My recommendation is to deploy your own (S)FTP service (option 1): the cost is acceptable and you retain full control over it.

But if you don't have the skills in-house, you should of course use a managed FTP service (option 3).

If you need any help in setting up sFTP, you can contact me.

Spontaneous loss of Varnish cache
https://sergey-lysenko.com/spontaneous-loss-of-varnish-cache/
Sat, 26 Mar 2022 13:25:39 +0000

You may notice that Varnish works well, but sometimes it loses the cache completely because the Varnish child process restarts. This can happen regardless of version and is accompanied by various errors, such as this one:

Error: Child (65162) died signal=6
Error: Child (65162) Panic at: Fri, 25 Mar 2022 11:18:31 GMT
Assert error in obj_getmethods(), cache/cache_obj.c line 98:
  Condition((oc->stobj->stevedore) != NULL) not true.
version = varnish-6.5.2 revision NOGIT, vrt api = 12.0

The exact content of the error is not important here; what matters is that the child process died, and with it, naturally, the cache.

If you see this, check the transparent_hugepage setting on the server. If it is set to always, that is a problem, and Varnish warns about it in its documentation.

How to check? Log in to the server and run:

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

You need to set this parameter to madvise by adding transparent_hugepage=madvise to the kernel command line. To do this, edit the GRUB configuration file:

nano /etc/default/grub

Find the parameter there:

GRUB_CMDLINE_LINUX="...

Add a value to it:

transparent_hugepage=madvise

As a result, the parameter will look something like this:

GRUB_CMDLINE_LINUX="resume= ... quiet transparent_hugepage=madvise"

After that run:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

The steps above make the setting permanent. This brings the server in line with Varnish's requirements, and Varnish will run more reliably.
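
If you want the new value to take effect immediately, you can also set it at runtime; note that this alone does not survive a reboot, which is why the GRUB change above is still needed:

echo madvise > /sys/kernel/mm/transparent_hugepage/enabled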

Sync SFTP remote folders via CLI in Linux
https://sergey-lysenko.com/sync-sftp-remote-folders-via-cli-in-linux/
Thu, 17 Mar 2022 13:19:09 +0000

There are many utilities for synchronizing folders over SFTP on Linux, but most of them are graphical (e.g. FileZilla). Good command-line options are scarcer. The standard sftp utility exists, for example, but it is not very convenient: you cannot synchronize a whole folder recursively with it. So here we will look at lftp, a command-line tool that supports a wide range of operations over the SFTP protocol.

Installing lftp is very easy. For example, for Ubuntu:

apt-get install -y lftp;

After installation you can use it right away. The idea is that we pass lftp a small script to run: a connection to the SFTP server followed by the actions we need. For example, suppose we need to copy an entire folder from the local machine to the server over SFTP. Example command:

lftp sftp://username:password@sftp.example.com -p 22 -e 'set sftp:connect-program "ssh -o StrictHostKeyChecking=no -a -x -i /home/.ssh/yourkey.key"; mirror -eRv /var/from-folder/ /var/to-folder; quit;'

This command connects to an SFTP server using an SSH key. The lftp commands are listed in the -e option, separated by semicolons. After connecting, the contents of the local /var/from-folder/ directory are mirrored recursively (with all subfolders) to /var/to-folder on the server: -R reverses the mirror direction (upload instead of download), -e deletes remote files that no longer exist locally, and -v enables verbose output.

Note that to log in with a key, the sftp:connect-program variable is set first; it contains the ssh command and the path to the key. The StrictHostKeyChecking=no option disables verification of the server's fingerprint, on the assumption that we trust it.

Even when using a key, a password must be specified in the URL (username:password); if it is omitted, lftp will prompt for one. When authenticating with a key, any placeholder string will do as the password.
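
For the opposite direction (downloading from the server to the local machine), drop the -R flag; a sketch assuming the same host and key, with hypothetical directory names:

lftp sftp://username:password@sftp.example.com -p 22 -e 'set sftp:connect-program "ssh -o StrictHostKeyChecking=no -a -x -i /home/.ssh/yourkey.key"; mirror -ev /var/remote-folder/ /var/local-copy; quit;'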

Magento 2 Pagespeed Recommendations
https://sergey-lysenko.com/magento-2-pagespeed-recommendations/
Wed, 09 Mar 2022 13:57:08 +0000

Magento is a heavy e-commerce system, and out of the box its PageSpeed score is very low. To raise it you need to take several steps, which I describe below. The post will be updated.

Install the nginx / apache pagespeed module

https://www.modpagespeed.com/

This is a module from Google that does much of the dirty work for you. Imagine a content manager uploads a 15-megabyte image to the site: it will naturally take a long time to load, and visitors may leave. The module compresses the image on the fly according to the visitor's device (mobile, tablet, desktop) and serves an optimized, smaller version.

The pagespeed module automatically converts non-optimal graphic formats (gif, jpeg) to webp, which significantly increases the score.

The pagespeed module can also combine separate CSS and JS files. Each extra HTTP request adds to the page load time, so combining resources makes the page load faster.

The pagespeed module can detect if gzip compression is enabled and enable it automatically if you haven’t enabled it yourself.
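
A minimal sketch of the relevant nginx directives, assuming ngx_pagespeed has been compiled into your nginx build (the cache path and filter selection are illustrative):

# in the http {} or server {} block of nginx.conf
pagespeed on;
pagespeed FileCachePath /var/cache/ngx_pagespeed;
pagespeed EnableFilters rewrite_images,convert_jpeg_to_webp,combine_css,combine_javascript;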

How to Reset Admin password in Magento 2
https://sergey-lysenko.com/how-to-reset-admin-password-in-magento-2/
Tue, 08 Mar 2022 14:15:16 +0000

The easiest, and probably the cleanest, way to reset the Magento admin password is from the command line:

n98-magerun2 admin:user:change-password

As you can see, you need to install the n98-magerun2 utility, since there is no password reset command in the standard Magento command set (php bin/magento).
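
If you don't have n98-magerun2 yet, it is distributed as a phar archive; a sketch of a typical install (check the project's documentation for the current download location):

curl -O https://files.magerun.net/n98-magerun2.phar
chmod +x n98-magerun2.phar
sudo mv n98-magerun2.phar /usr/local/bin/n98-magerun2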

In addition to the password, you may also need to reset two-factor authentication:

n98-magerun2 security:tfa:reset <username> google

If you have access to the Magento admin panel and need to reset the password for someone else, you can do it right from the panel.
