Sergey Lysenko: The Magento Developer
https://sergey-lysenko.com/

Understanding the lifetime of Varnish cached objects: TTL, grace, and keep
(10 March 2024, https://sergey-lysenko.com/understanding-the-lifetime-of-varnish-cached-objects-ttl-grace-and-keep/)

In the context of caching, the terms TTL (Time to Live), grace, and keep are often associated with controlling the lifetime of cached objects. These parameters are commonly used in caching systems like Varnish to manage how long an object should be considered valid and under what conditions it can still be served from the cache. Let’s explore each term:

  1. Time to Live (TTL):
    • Definition: TTL is the duration for which a cached object is considered fresh and can be served from the cache without checking the origin server.
    • Usage: When a request is made, the caching system checks if the object is still within its TTL. If yes, it serves the cached version; otherwise, it fetches a fresh copy from the origin server.
  2. Grace:
    • Definition: Grace is an extension of the TTL concept. It represents the additional time during which a cached object can be served even after its TTL has expired, while the caching system attempts to fetch a fresh copy from the origin server.
    • Usage: If a cached object’s TTL has expired but it is still within the grace period, the caching system may serve it while initiating a background request to the origin server to refresh the content.
  3. Keep:
    • Definition: Keep specifies how long an object is retained in the cache beyond TTL and grace. During this window the object can no longer be served to clients, but its Last-Modified and ETag headers can still be used for conditional requests to the origin.
    • Usage: When a kept object needs refreshing, the caching system can send an If-Modified-Since/If-None-Match request, and the origin can reply 304 Not Modified instead of resending the full body, which makes revalidation much cheaper.

In summary:

  • TTL: Specifies how long an object remains valid in the cache.
  • Grace: Represents the additional time during which an expired object can still be served while a background fetch is attempted.
  • Keep: Defines how long an expired object is retained so it can be revalidated against the origin with conditional requests.

These parameters provide flexibility in balancing the need for serving fresh content and minimizing the load on the origin server by intelligently managing cached objects. Configuring TTL, grace, and keep values requires consideration of the specific requirements and characteristics of the cached content.
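
To make this concrete, here is a minimal VCL sketch showing where these three timers are set in Varnish (the durations are illustrative, not recommendations):

sub vcl_backend_response {
    # Serve from cache without contacting the backend for one hour
    set beresp.ttl = 1h;
    # After TTL expires, allow serving the stale object for up to six
    # more hours while a background fetch refreshes it
    set beresp.grace = 6h;
    # After TTL + grace, retain the object another 24 hours so its
    # Last-Modified/ETag can drive conditional (304) backend fetches
    set beresp.keep = 24h;
}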

The Impact of JIT on Performance in PHP 8: Unleashing the True Potential
(23 November 2023, https://sergey-lysenko.com/the-impact-of-jit-on-performance-in-php-8-unleashing-the-true-potential/)

In this article, we’ll explore the revolutionary influence of Just-in-Time (JIT) compilation on PHP 8’s performance. Brace yourself for an enlightening exploration filled with intriguing facts and compelling numerical evidence.

Understanding JIT in PHP 8: Let’s demystify the concept of JIT compilation in PHP 8. This powerful feature dynamically transforms code into machine instructions at runtime, ushering in a new era of performance optimization. Prepare to unlock the astonishing potential of JIT and witness its significant impact on your PHP 8 projects.

The Power Unleashed: With the introduction of JIT in PHP 8, developers can see remarkable performance gains. Generating optimized machine code on the fly removes the need to interpret the same opcodes over and over. 🚀

Consider a PHP script that executes a computationally intensive task many times. Without JIT, even with opcache enabled, the cached opcodes are re-interpreted by the Zend VM on every execution. With JIT, hot code paths are compiled into highly optimized machine code, and subsequent executions run natively. For CPU-bound workloads this shift can yield performance improvements of 30% or more; typical I/O-bound web requests gain noticeably less. 😮
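
Enabling JIT is a matter of opcache configuration. A minimal php.ini sketch (the buffer size and mode here are illustrative; tune them for your workload):

; JIT lives inside opcache, so opcache must be enabled
opcache.enable=1
opcache.enable_cli=1
; Memory reserved for generated machine code; 0 disables JIT
opcache.jit_buffer_size=128M
; "tracing" is the most aggressive mode (numeric equivalent: 1255)
opcache.jit=tracing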

Real-World Applications: Let’s delve into real-world use cases where the influence of JIT in PHP 8 becomes evident:

  1. E-commerce Websites: Imagine an e-commerce platform with thousands of concurrent users. JIT compilation in PHP 8 can significantly enhance response times, ensuring seamless shopping experiences even during peak traffic periods. This boost in performance directly translates to increased customer satisfaction and improved conversion rates. 💸
  2. Data-Intensive Applications: JIT’s impact extends to data-intensive applications as well. Complex algorithms and calculations can be executed with astonishing speed, enabling businesses to process large datasets more efficiently. This enhanced performance empowers data-driven decision-making, leading to optimized operations and improved productivity. 📊

Conclusion: As we conclude our exploration of JIT’s influence on PHP 8’s performance, we can’t help but appreciate the transformative power this feature brings to web development. With JIT compilation, PHP 8 unlocks new levels of efficiency that translate into tangible benefits for businesses and end-users alike. Embrace the possibilities, harness the power, and witness the remarkable improvements in your PHP 8 projects. 🌟

SFTP service for Magento
(30 March 2022, https://sergey-lysenko.com/sftp-service-for-magento/)

Some of your suppliers and business partners may want to exchange files with your store via FTP/SFTP, usually orders and stock data in CSV or XML format. So we need to decide how to host this SFTP service.

First of all, I have to warn you about using an optimised directory structure: keep no more than 100-200 files in any single folder.

For example, this is very bad for performance:

/folder/0001.txt
/folder/0002.txt
/folder/0003.txt
/folder/0004.txt
***
/folder/9999.txt

If there are thousands of files in a single folder, performance will be very poor: it can take minutes just to read the file list from such a flat directory structure.

It should look like this instead:

/folder/a/0001.txt
/folder/a/0002.txt
/folder/b/0003.txt
/folder/b/0004.txt

As you can see, the files are grouped into subfolders (the same scheme Magento uses for its media folder). Each subfolder holds only a small number of files, so reading them stays fast.
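
If you control the producer side, distributing the files is straightforward. A PHP sketch (the paths and the one-level scheme are illustrative; for purely numeric names you may want two levels, as Magento’s media folder does):

<?php
// Move each file in /folder into a subfolder named after the first
// character of its filename, e.g. /folder/0001.txt -> /folder/0/0001.txt
$dir = '/folder';
foreach (glob($dir . '/*.txt') as $file) {
    $name   = basename($file);
    $bucket = $dir . '/' . $name[0];
    if (!is_dir($bucket)) {
        mkdir($bucket, 0755, true);
    }
    rename($file, $bucket . '/' . $name);
}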

So, I can offer three options for running an (S)FTP service:

1) Build the (S)FTP service yourself

It can be a simple AWS instance with FTP/SFTP enabled and a storage volume attached. Here we can implement whatever functionality we wish, including security hardening. Building and setting up our own SFTP service can take up to two days of work, and it will cost approximately 30-50 USD/month.
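
For reference, a chrooted SFTP-only account can be set up with stock OpenSSH. A minimal sshd_config sketch (the user name and path are illustrative):

# /etc/ssh/sshd_config: restrict the "partner" account to SFTP only
Match User partner
    # The chroot directory must be owned by root and not writable by the user
    ChrootDirectory /srv/sftp/partner
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

After editing, reload sshd (e.g. systemctl reload sshd) for the change to take effect.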

2) Use AWS Transfer Family

It is a fully managed, professional service, but a very expensive one: the costs start from 400 USD/month.

3) Use another managed (S)FTP service

As a result, I suggest deploying our own (S)FTP service (option 1), because it has an acceptable cost and we can do whatever we wish with it.

But if you don’t have enough skills in-house, you should of course use a managed FTP service (option 3).

If you need any help in setting up sFTP, you can contact me.

Spontaneous loss of Varnish cache
(26 March 2022, https://sergey-lysenko.com/spontaneous-loss-of-varnish-cache/)

You may notice that Varnish works well, but sometimes it completely loses the cache because the Varnish child process restarts. This happens regardless of the Varnish version and is accompanied by various errors, such as this one:

Error: Child (65162) died signal=6
Error: Child (65162) Panic at: Fri, 25 Mar 2022 11:18:31 GMT
Assert error in obj_getmethods(), cache/cache_obj.c line 98:
  Condition((oc->stobj->stevedore) != NULL) not true.
version = varnish-6.5.2 revision NOGIT, vrt api = 12.0

The exact content of the error is not important here; the main thing is that the child process has died, and in that case the cache is naturally lost.

If you notice this, pay attention to the transparent_hugepage setting on the server. If it is set to always, that is bad; Varnish explicitly warns about this in its documentation.

How to check? Log in to the server and run this command:

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

You need to set this parameter to madvise via the kernel command line (transparent_hugepage=madvise). To do this, edit the GRUB defaults file:

nano /etc/default/grub

Find the parameter there:

GRUB_CMDLINE_LINUX="...

Add a value to it:

transparent_hugepage=madvise

As a result, the parameter will look something like this:

GRUB_CMDLINE_LINUX="resume= ... quiet transparent_hugepage=madvise"

After that, regenerate the GRUB config and reboot (the commands below are for RHEL/CentOS-style systems; on Debian/Ubuntu run update-grub instead):

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

The above steps make the setting permanent. This brings the server in line with Varnish’s requirements, and Varnish will work more reliably.
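
For a quick test without a reboot, the value can also be changed at runtime as root (it does not survive a reboot, so the GRUB change above is still needed):

echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never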

Sync SFTP remote folders via CLI in Linux
(17 March 2022, https://sergey-lysenko.com/sync-sftp-remote-folders-via-cli-in-linux/)

There are many utilities for synchronizing folders via SFTP on Linux, but most of them are graphical (e.g. FileZilla). There are not many good command-line tools. For example, there is the sftp utility, but it is not very convenient: you cannot properly mirror a whole folder tree with it. Therefore, here we will look at lftp, which works from the command line and supports a wide range of operations over the SFTP protocol.

Installing lftp is very easy. For example, for Ubuntu:

apt-get install -y lftp;

After installation, you can use it immediately. The logic of the program is that we write a small script and run it: the script connects to the SFTP server and performs whatever actions we need. For example, suppose we need to copy an entire folder from the local machine to the server via SFTP:

lftp sftp://username:password@sftp.example.com -p 22 -e 'set sftp:connect-program "ssh -o StrictHostKeyChecking=no -a -x -i /home/.ssh/yourkey.key"; mirror -eRv /var/from-folder/ /var/to-folder; quit;'

This command connects to an SFTP server using an SSH key. Commands for lftp are listed in the -e option, separated by semicolons. After connecting, the contents of the local /var/from-folder/ folder are copied recursively (with all subfolders) into /var/to-folder on the server.

Note that to log in with a key, the sftp:connect-program variable is set first; it contains the ssh connection command and the path to the key. The StrictHostKeyChecking option is added so that the server’s fingerprint is not checked; it is assumed that we trust the server.

Even when using a key, a password must be present in the URL (username:password); if it is omitted, lftp will prompt for one. When authenticating by key, any placeholder string will do as the password.
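
For the opposite direction, pulling a remote folder down to the local machine, drop the -R flag from mirror (the host and key here are the same assumed values as above):

lftp sftp://username:password@sftp.example.com -p 22 -e 'set sftp:connect-program "ssh -o StrictHostKeyChecking=no -a -x -i /home/.ssh/yourkey.key"; mirror -ev /var/remote-folder/ /var/local-folder; quit;'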

Magento 2 Pagespeed Recommendations
(9 March 2022, https://sergey-lysenko.com/magento-2-pagespeed-recommendations/)

Magento is a heavy e-commerce system, and out of the box its pagespeed score is very low. To increase pagespeed, you need to take a number of steps, which I will describe here. This post will be updated.

Install the nginx / apache pagespeed module

https://www.modpagespeed.com/

This is a special module from Google that will do all the dirty work for you. Imagine a content manager uploads a 15-megabyte image to the site: it will naturally take a long time to load, and visitors may leave. This module compresses the image on the fly depending on the visitor’s device (mobile, tablet, desktop) and serves an optimized, smaller version.

The pagespeed module automatically converts non-optimal graphic formats (GIF, JPEG) to WebP, which significantly improves the score.

The pagespeed module can also combine separate CSS and JS files, because each additional HTTP request adds to the page load time; with resources combined, the page loads faster.

The pagespeed module can detect if gzip compression is enabled and enable it automatically if you haven’t enabled it yourself.
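
As a starting point, a minimal ngx_pagespeed configuration sketch for nginx (the cache path and filter list are illustrative; see the module documentation for the full set of filters):

# inside the http {} or server {} block
pagespeed on;
pagespeed FileCachePath /var/cache/ngx_pagespeed;  # must be writable by nginx
pagespeed EnableFilters convert_jpeg_to_webp,convert_to_webp_lossless;
pagespeed EnableFilters combine_css,combine_javascript;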

How to Reset Admin password in Magento 2
(8 March 2022, https://sergey-lysenko.com/how-to-reset-admin-password-in-magento-2/)

The easiest, and probably the most correct, way to reset the Magento administrative password is to use the command line:

n98-magerun2 admin:user:change-password

As you may have noticed, you need the n98-magerun2 utility for this, since there is no password reset command in the standard Magento command set (php bin/magento).
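
If the utility is not installed yet, it is distributed as a single phar file; a typical installation looks like this (check the project documentation for the currently recommended method):

wget https://files.magerun.net/n98-magerun2.phar
chmod +x n98-magerun2.phar
sudo mv n98-magerun2.phar /usr/local/bin/n98-magerun2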

In addition to the password, you may also need to reset two-factor authentication:

n98-magerun2 security:tfa:reset <username> google

If you have access to the Magento admin panel and need to reset the password for someone else, you can do it right from the panel.

Opcache wasted memory
(2 March 2022, https://sergey-lysenko.com/opcache-wasted-memory/)

Opcache is a vital mechanism for caching opcodes, without which no PHP project can function properly. Opcache provides performance by storing compiled opcodes in RAM. Let’s figure out how it works and what wasted memory is.

When we write our program, we store PHP code as plain text, and it reaches the server in the same form; we do not compile anything before deploying the code. PHP is an interpreted programming language, and compilation happens at request time.

When a PHP script is requested for the first time, it is compiled on the fly: converted to bytecode and executed on the processor. If opcache is enabled and configured, the bytecode (opcode) is stored in the cache, the opcache, after compilation. On subsequent requests no compilation takes place and the cached opcode is executed immediately. This provides a huge performance advantage.

A dedicated memory area is configured for opcache, so if the project contains many scripts, you need to allocate enough memory to fit them all. Memory consumption can be inspected by calling the opcache_get_status function, but it is much better to install a status panel to monitor the state of opcache.

One of the values such a panel reports is wasted memory, and whenever possible this value should stay at zero.
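
If you prefer not to install a panel, a small sketch reads the same numbers via opcache_get_status. Note that it must run in the php-fpm (web) context, because the CLI has its own separate opcache:

<?php
// Dump opcache memory usage; run this via a web request, not the CLI.
$status = opcache_get_status(false); // false = skip per-script details
$mem = $status['memory_usage'];
printf(
    "used: %.1f MB, free: %.1f MB, wasted: %.1f MB (%.1f%%)\n",
    $mem['used_memory'] / 1048576,
    $mem['free_memory'] / 1048576,
    $mem['wasted_memory'] / 1048576,
    $mem['current_wasted_percentage']
);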

When a PHP script changes (for example, it is edited by another process, or new scripts are deployed), PHP checks the file for changes and, if it really has changed, creates a new opcache entry for it. The memory area that stored the old version of the opcode is not released. New scripts go into the remaining free memory. To clear this memory, you need to either restart php-fpm or call the opcache_reset function in the php-fpm context (not in the CLI context).

This memory, occupied by outdated versions of changed scripts, is what is called wasted memory. If the scripts keep changing and the memory is never cleared, it fills up: wasted memory grows and there is no room left for new scripts. In this case, the site will continue to work, but without opcache, that is, very slowly.

As a result, for the site to work fast, you need to enable opcache, allocate it the necessary amount of memory, and monitor the wasted memory indicator. After each deployment of new scripts, it is best to restart php-fpm entirely.
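
For reference, a php.ini sketch with opcache sizing in the range often used for large codebases such as Magento (the numbers are illustrative; size them against your own project):

opcache.enable=1
; Shared memory for compiled opcodes, in megabytes
opcache.memory_consumption=512
; Must exceed the number of PHP files in the project
opcache.max_accelerated_files=130000
; Buffer for interned strings, in megabytes
opcache.interned_strings_buffer=32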

Sometimes you may notice that you haven’t changed any scripts, yet wasted memory still grows. In that case, check the value of the opcache.huge_code_pages parameter and enable it.

If you have met all these conditions and wasted memory is still growing, pay attention to your own scripts. For example, cache invalidation may be triggered for specific files, or some files may even be modified on the fly.

Try to find where in the code the opcache_invalidate function is called and do something about it if possible.

Magento, for example, had exactly this situation. If nothing can be done about the code, you need to add those files to the exclusion list:

In php.ini:

opcache.blacklist_filename="/usr/local/etc/php/opcache-blacklist.txt"

In opcache-blacklist.txt:

; Ignore config files
/var/www/html/app/etc/config.php
/var/www/html/app/etc/env.php

Once added to the opcache exclusion list, those scripts are no longer cached in opcache memory and are processed by PHP as usual.

If you have any questions or need help, please leave a reply below.

Magento /app/code vs /vendor
(24 February 2022, https://sergey-lysenko.com/magento-app-code-vs-vendor/)

Modules in Magento can be located in two places, /app/code and /vendor. Let’s consider the difference between them and where it is right to develop your own modules. TL;DR: it depends on the number of projects in your company.

As already mentioned, Magento follows a modular development principle: each piece of functionality should, by its meaning, be isolated in a separate module. Ideally, a module is developed with minimal dependencies on other modules, that is, against pure Magento.

What is the /vendor folder for?

This folder contains third-party modules that were purchased and installed using composer. The vendor folder is entirely controlled by composer and ignored by git (it is listed in .gitignore). To install modules there you run composer install, to update them composer update, and so on.

If your company is large enough and has many projects, you will probably want to reuse the same modules across projects. Such a company can be regarded as a full-fledged vendor itself, and it is advisable to package and distribute its modules through composer as well.

When there are several projects, each module used across them becomes a separate mini-project. The module must have its own repository, changelog, responsible maintainer, and its own CI/CD build procedure. Module versioning in the repository is done with tags.
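
The composer.json of such a module typically looks like this (the vendor and module names are hypothetical):

{
    "name": "acme/module-example",
    "description": "Example feature module",
    "type": "magento2-module",
    "require": {
        "magento/framework": "*"
    },
    "autoload": {
        "files": ["registration.php"],
        "psr-4": {
            "Acme\\Example\\": ""
        }
    }
}

The "type": "magento2-module" field is what lets Magento’s composer installer place and register the package correctly.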

What is the /app/code folder for?

If a company has only one project on the Magento platform, it makes no sense to develop every module as a fully separate product; apart from this project, it will never be used anywhere else. That means there is no need for a separate repository, tag-based versioning, and so on.

In this case, Magento recommends developing modules in /app/code. There you create a folder named after your company (it acts as the vendor name) and place your modules inside.

When developing your modules in /app/code, the entire project lives in one git repository. The version of each module is declared the Magento way (see its module.xml file).
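
Whichever location you choose, every module needs the standard registration file. A minimal sketch for a hypothetical Acme_Example module at app/code/Acme/Example/registration.php:

<?php
use Magento\Framework\Component\ComponentRegistrar;

// Tell Magento that this directory contains the Acme_Example module
ComponentRegistrar::register(
    ComponentRegistrar::MODULE,
    'Acme_Example',
    __DIR__
);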

Performance

Developing modules via composer means we can generate an optimized autoloader dump:

composer dump-autoload --no-dev --optimize --apcu

As you can see, the --apcu flag is present here, which means the class map will be stored in the APCu cache. Loading the list of classes from that cache is extremely fast.

But composer dump-autoload also covers modules located in /app/code, so this mechanism optimizes them too.

What are the nuances? If you decide to put your modules in /app/code, then on every request that is not cached by Varnish, the glob function is called 7 times, once per pattern in the list shown below. The call happens here:

app/etc/NonComposerComponentRegistration.php

<?php
/**
 * Copyright © Magento, Inc. All rights reserved.
 * See COPYING.txt for license details.
 */
declare(strict_types=1);

//Register components (via a list of glob patterns)
namespace Magento\NonComposerComponentRegistration;

use RuntimeException;

/**
 * Include files from a list of glob patterns
 */
(static function (): void {
    $globPatterns = require __DIR__ . '/registration_globlist.php';
    $baseDir = \dirname(__DIR__, 2) . '/';

    foreach ($globPatterns as $globPattern) {
        // Sorting is disabled intentionally for performance improvement
        $files = \glob($baseDir . $globPattern, GLOB_NOSORT);
        if ($files === false) {
            throw new RuntimeException("glob(): error with '$baseDir$globPattern'");
        }

        \array_map(
            static function (string $file): void {
                require_once $file;
            },
            $files
        );
    }
})();

glob selects a list of files by the masks below:

registration_globlist.php

<?php

return [
    'app/code/*/*/cli_commands.php',
    'app/code/*/*/registration.php',
    'app/design/*/*/*/registration.php',
    'app/i18n/*/*/registration.php',
    'lib/internal/*/*/registration.php',
    'lib/internal/*/*/*/registration.php',
    'setup/src/*/*/registration.php'
];

You can see that app/code is present in this list and glob searches for files there. Therefore, the more files and folders /app/code contains, the longer the scan takes; if the disk is not very fast, the search can be significantly longer. It helps a little that filesystem lookups are cached by PHP’s realpath cache mechanism.

On average, a glob lookup operation on a typical project takes about 20-40 milliseconds. This is relatively small, but if you multiply this value by the total number of requests, you get a significant number.

Anyone can verify this by doing some profiling, for example with xhprof.

If there are no files and folders in /app/code at all, this operation takes very little time. This can be an argument in favor of developing modules in /vendor via composer, but whether that loss of performance is acceptable is up to you.

Varnish segfault because of libexecinfo.so
(20 February 2022, https://sergey-lysenko.com/varnish-segfault-because-of-libexecinfo-so/)

Varnish can fail not only because of a misconfigured transient cache; it can also fail because of bugs, for example in its use of libexecinfo. Sometimes Varnish’s child process crashes and you lose the whole cache because of it.

In that case, you’ll see such errors in your system log:

kernel: varnishd[20091]: segfault at 4a82dd8a ip 00007f1c58ed277d sp 00007f1c52293458 error 4 in libexecinfo.so.1[7f1c58ec6000+d000]

This happens because Varnish does not work well with the libexecinfo library. The developers are aware of this, and at some point they decided to drop libexecinfo in favor of libunwind.

If you are using Varnish 4.1.11, you have two options to get rid of the error:

  1. Upgrade Varnish to version 6.5; in that case you will have to rewrite the VCL and check the compatibility of the modules you previously used;
  2. Patch Varnish 4.1.11 yourself, because the developers did not backport this fix to older versions.

I have prepared a patch for Varnish 4.1.11 that enables the --with-unwind configure switch. You can download it, apply it to the Varnish source code, and then rebuild and use the result.
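
Assuming the patch has been applied to the source tree, the rebuild is the usual autotools sequence (the patch file name below is hypothetical):

patch -p1 < varnish-4.1.11-with-unwind.patch
./autogen.sh   # only needed when building from a git checkout
./configure --with-unwind
make
sudo make install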
