


  • Hiring: From CV to Interview - Why do so many get it wrong?

    Writing does not come easily to me. As such, these posts tend to come in a seemingly random order, as their writing is triggered by some event or post that compels me to write. I came across a post today from a developer hiring manager commenting that almost all the CVs that come across their desk make claims of percentage improvements. They gave some examples:

    - Increased process efficiency by X%
    - Reduced bug count by X%
    - Increased test coverage by X%
    - Reduced response time by 35%

    Now I work in the field of DevOps, so I see a slightly different set of claims:

    - Decreased cloud cost by X%
    - Reduced build time by X%
    - Increased uptime by X%

    So far, so normal. What caught me off guard was how the post continued with a lament that those CVs never contain any details as to how the applicant achieved those results. This gave me two successive shocks.

    The first: don't they realize that there isn't enough space in a CV to explain things, and that the explanation is for the interview? After all, part of the process of evaluating an applicant is determining the truth of the claims made in the CV, so discussing those specific accomplishments should always be a part of that process. Certainly that has always been my policy.

    The second came when this led me to start thinking about all the interviews I had been invited to over the past year. In every single one, I initiated the discussion into the accomplishments listed on my CV. Not a single person asked me to explain my claims. The more I probed my memory, the worse it got. In several interviews I had received compliments regarding my cost reductions and/or staff retention, but had been cut off when I tried to talk about how I had achieved those things. The people interviewing me were so focused on their specific process that they essentially ignored the contents of my CV and my cover letter.

    So why is this problem so widespread? I honestly don't have the slightest idea. Out of all the management trends that I track, or read about, or complain about, this one had managed to entirely escape my notice. And stepping back, not validating the achievements claimed in a CV is such odd behavior that I cannot explain why I failed to notice this trend.

    Nevertheless, it is clear that the practice of keeping to a strict interview process that excludes asking questions about the achievements claimed in CVs is wrong. It denies the hiring team the proper opportunity to distinguish high performers from confident tricksters. One should always interrogate applicants about the achievements they claim.

  • My take on the CrowdStrike outage: A culture of hubris creating an inevitable failure

    I realize that this post comes a bit late to the party, as everyone has already had their fun bashing CrowdStrike and decided who to blame. I want to add my take as someone with more than 30 years of coding, who has worked in companies ranging from tiny startups to Fortune 50s. This is fundamentally a failure of process that comes from a bad company culture. Now that we have the conclusion out of the way, let's go over what exactly the failures really were. These can be divided into multiple areas and sub-areas.

    Design

    Way back when I was learning to code drivers and kernel modules (more than 20 years ago) in university, our professor made sure to impart the critical problem with these kinds of modules: they load during boot, when there is no user input allowed. If an application has a failure that causes it (or the system) to crash, then it is very easy to not run that software - but kernel modules are not like that. If they crash, they are likely to do so during boot, and the system can go into a crash loop. This is exactly what happened with the CrowdStrike outage. For this reason we were taught that on installation and update of such a module it was industry standard to have some form of auto-detection/auto-removal/auto-rollback code within the module. This can be done fairly easily: on module load, check whether this is the first time running this version; if so, write a marker to disk to that effect, and set a post-boot job to remove the marker and set another one recording that the module worked. If the system crashes at any time during boot after the module loads, this is detected on the next boot and the module can roll itself back, disable itself, or remove itself as desired. (A sketch of this pattern appears at the end of this post.) So why was an industry standard not followed by CrowdStrike?

    Release day

    Ask anyone knowledgeable about the SaaS industry about pushing things to production/customers, and you will hear concerns about making changes too close to weekends or other times when critical staff are going to be away. Exceptions to this are only made when the change addresses a critical bug already affecting customers, or a security flaw that is putting them at risk. I have found nothing in any release notes or statements showing that either justification existed here. So why was an industry standard not followed by CrowdStrike?

    Rollout

    Another backbone of the SaaS industry is the phased rollout: pushing out updates in waves to selected sites/customers/devices. I can't say for sure exactly how long this has been standard, but it has been so for the 15 years I have spent in SaaS. This is just another easy-to-implement method of reducing overall risk. Like the above, exceptions are made for critical updates, but as previously discussed that was not the case here.

    What NOT to Blame

    Testing/QA

    The reality of testing is that you can only test for the interactions the developers and QA personnel think of. This will always be less than the total number of interactions. In short, no one's testing is perfect, and it is unfair to blame the testers.

    The developer(s) who wrote the code

    Like with QA/Testing, it is unfair to blame developers for making a mistake - we all make them. If the code worked in the development environment and passed all tests, then there is no reason for the developers to doubt their work. Nor do I find it likely that they violated process (as is being intimated) in order to push out the code. Even if they did, it would still ultimately be a reflection of a bad company culture.
    Lower Management

    It is never the team leads who get to decide on official process, nor do they have any control over company culture. These directives/elements always come from the more senior managers, directors and VPs.

    What about DEI?

    I have seen several posts blaming DEI for this outage, given that CrowdStrike publicly declares their commitment to the movement, but this is at best an oversimplification. DEI is a result of the same kind of hubris that caused this outage; it did not produce said hubris. No small company can afford to start looking at anything other than the competence of prospective employees, and those who make such a mistake rapidly go out of business.

    Conclusion: Hubris-powered incompetence

    It was hubris that led CrowdStrike to the incompetent choice not to have automated protection from a kernel boot crash loop, and it was hubris that made them ignore standard rollout practices.
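
    The following is a minimal sketch of the first-boot marker pattern referenced in the Design section above. It is not CrowdStrike's code, and a real implementation would live in the driver and the boot chain rather than in user space; all paths and function names here are hypothetical, chosen only to illustrate the idea.

        # Minimal sketch of the "first-boot marker" auto-rollback pattern.
        # Paths and names are hypothetical; this is illustration, not production code.
        from pathlib import Path

        PENDING = Path("/var/lib/mymodule/first_boot_pending")   # written when a new version loads
        CONFIRMED = Path("/var/lib/mymodule/last_boot_ok")       # written by a post-boot job

        def on_module_load(current_version: str) -> bool:
            """Return True if it is safe to run this version, False if we should roll back."""
            if PENDING.exists():
                # The previous boot loaded this version but the post-boot job never ran,
                # i.e. the machine most likely crashed during boot: disable/roll back.
                return False
            last_ok = CONFIRMED.read_text().strip() if CONFIRMED.exists() else ""
            if last_ok != current_version:
                # First boot with this version: leave a marker so a crash is detectable.
                PENDING.parent.mkdir(parents=True, exist_ok=True)
                PENDING.write_text(current_version)
            return True

        def post_boot_job(current_version: str) -> None:
            """Run once the system has fully booted: record success and clear the marker."""
            CONFIRMED.write_text(current_version)
            PENDING.unlink(missing_ok=True)

    In practice the post-boot step would be wired up through whatever late-boot mechanism the platform offers (a scheduled task, a service that starts after user login, etc.); the point is simply that a missing "boot completed" confirmation is treated as evidence of a crash loop.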

  • Why the travesty of the ICJ ruling may actually be a good thing.

    Well, it is official: the PLO has been given the ultimate excuse to ignore reality and pursue impossible maximalist positions. Others have written on the corruption of the International Court of Justice (ICJ) and the absurdity of their recent ruling, and I see no value in adding my non-expert analysis to them. But for completeness, here is one such analysis: the verdict of the ICJ is a staggering misuse of the tools of justice, and tears up the framework of the Oslo Accords.

    The prospect of any real or meaningful negotiations with the PLO (or Hamas) has long been recognized by those paying attention to be a pipe dream. Anyone who remembers Arafat's response to the peace overtures of 2000/1 and/or Abbas' response to the peace overtures of 2008 has only seen a further hardening of the Palestinian leadership's expectation of eventually destroying Israel and expelling the Jewish population. So far, so hopeless.

    And what has the Israeli response to this reality been? We have done nothing. In the past 20+ years the only Israeli policy has been one of containment and (to a degree) stasis. Forever living in fear of action triggering another war, the governments of Israel have adopted a policy of pretending that meaningful negotiations are going to start any moment now, and that nothing should be done to jeopardize that. Only when events take on epic proportions do we momentarily break away from that inaction in order to respond to them. In the face of absolute Palestinian rejection of our existence we have looked to others to solve the problem. This is the attitude of a people whose minds are still trapped in exile - always looking for the approval of others, and the protection of others.

    This has to end. It is long past time that we look long and hard at the territory we control and decide what portions we wish to annex, and how we are going to manage the Arab intransigents. No one else is going to help us with this; at best we will get support for the solutions we impose upon an unwilling Arab populace. But this should not frighten us, or worry us - if it were up to the "Arab street" then we all would have been killed long ago.

    So I am hoping that this ICJ ruling helps Israelis and Jews everywhere realize that no matter how closely we follow the law, no matter how hard we try for mutual agreement, our efforts will be pointless. The only way towards acceptance is through having the strength to decide on a solution, and the will to impose it.

    Personally, I have always thought we should start implementing some variation of Trump's Peace to Prosperity Plan. Let us start by formally annexing all of the areas that would become Israel under that plan, and begin building the bypass roads so that the Palestinians can travel between their areas freely. If they comply with the requirements of the plan we will slowly transfer them the territories and authority outlined in the plan, and if not, we will at least have drawn clear lines around what communities and territory we are keeping. What do you think?

  • Differing narratives - How rational choices led to the Arab exile in 1948.

    In the Israeli-Palestinian conflict there is one issue that is inescapable and on which it seems there can never be reconciliation. For Palestinians it is called the Nakba, and for them it is the forced exile of hundreds of thousands of their people. For most Israelis it is the voluntary departure of the Arabs. But which narrative is true? People can (and do) go over the evidence for each and every village, town and city, but ultimately that can at best only resolve the proximate cause of the departure. If one really wishes to understand what happened, then one must seek to understand not just that one event, but the entire series of decisions being made by the Arab populaces that left.

    Before we can talk about the decisions being made, we should first be clear that some Arabs were forcefully deported (Benny Morris puts that at ~10% of all the Arabs that left, and for the purposes of this post we will use that number). The discussion of those who were deported is outside the scope of this post, along with any legal and moral obligations that Israel has to those people and their descendants. Nor will this post cover those who left as individuals. (Historians estimate anywhere between 100,000 and 300,000 Arabs left as individuals before a single village was attacked/abandoned/depopulated.) This post will only cover those communities where either all or most of the community left as a single act or as a closely connected series of events.

    Out of those villages/towns that became depopulated (or mostly depopulated) of Arabs we can speak of two categories: (1) those that openly started the conflict in support of the ALA / Army of the Holy War / Arab League, and (2) those that declared a policy of neutrality/non-aggression. For those villages that openly supported the goals and methods of the ALA/AofHW/etc., it is only natural and rational that they would expect to be treated by the Yishuv forces in the same way they had planned to treat the Jews. Thus it was only rational for them to flee the advancing Yishuv forces, as they believed failing to do so would result in their own deaths.

    But what of the villages that (at the start of the conflict at least) had a policy of neutrality or even non-aggression? By most accounts such villages made up the majority of Arab villages at the start of the 1947 war. The Arab forces focused on fortifying Arab villages, and the non-Arab villages (Druze, Circassians and Maronites) were largely left to their own devices. This meant that they could (and did) maintain their own relations with the Yishuv. It also meant that for a Druze village to maintain neutrality, all it had to do was not take part in the hostilities, but for an Arab village to maintain neutrality it had to ensure that no Arab forces used the village as a base for attacking nearby Jewish villages and Jewish convoys. Such a thing might seem simple, but it was neither simple nor necessarily safe to do. From the very beginning the ALA and its allies made it very clear to everyone that this would be a war to the finish - a war in which there could be no neutrals. Therefore any refusal to support the ALA by other Arabs carried with it an inherent risk of being labeled a traitor to the cause. It was (probably) easy enough to decline joining the fight directly, but things were not that simple - especially given that at the time everyone expected the Arab forces to win. It was common for Arab irregulars to show up at a village with an offer to help "defend it from Yishuv attack". For a neutral village to reject such an offer risked being labeled as traitors - which could carry a heavy price - so many made the rational choice to accept. Of course, those that accepted quickly found that the Arab forces did not simply stay as a defensive force, but used the village as a base for attacking nearby Jewish villages and as a vantage point from which to snipe at Jewish farmers and convoys. Thus those villages that accepted "protection" quickly found their neutrality to be a thing of the past. So when their "defenders" later fled, the villagers had every reason to hold the same beliefs as those who had openly supported the genocide of the Jews from the start, and fled expecting the Jews to treat them as the ALA had planned to treat the Jews.

    Even for those who initially rejected Arab forces being billeted in their villages, their rejection was not always accepted. In some cases the Arab forces simply moved in anyway, forcing the villagers to either accept it or fight (not surprisingly, few if any chose to fight). In other cases villages that rejected billeting Arab forces discovered that casualties from Arab attacks on Jewish villages were simply transported to their villages with the expectation that the villagers would help care for the wounded. Refusing to care for the wounded was a certain way to be labeled a traitor. Given how everyone expected the war to end (with Arab victory), it would have been irrational to try to maintain neutrality under those circumstances. Thus their only reasonable choice was to break their neutrality and side with the Arab forces. They too would thus have reason to fear that the Jews would treat them as the Arab forces had planned to treat the Jews.

    Hopefully by now some readers have spotted what looks like a flaw in this line of reasoning: certainly some Arab communities (especially those in mixed communities such as Haifa, where the Jewish community urged the Arabs to stay) knew very well that the Jews had no intent or interest in massacring them. Why then would those people leave of their own accord? The answer is that everyone (from the Arabs themselves, to the major European powers, to the U.S. and Canada) believed that Israel could not survive unless the Arab states agreed to accept it - something the Arab states were adamant they would not do. This meant that as the military theaters came to a close, the Arab residents had to make a choice. They could follow the Arab League's orders to evacuate - which by common belief meant they would be able to return within a few years in the wake of the Arab armies - or they could side with the Jews and risk having their fate tied to them - which by common belief meant death or displacement. The decision of many to leave was thus a reasoned one, based on the information available and the common beliefs of the time. Not a very moving or exciting conclusion, but a very important one. The Arab communities that left did so not based on fear, nor were they forced out by either the Jewish or Arab forces. They left because, based on the available evidence and opinions, it was the rational choice.

    So what does that mean for those of us who are "Pro-Israel" in terms of our legal and/or moral obligations to the Arabs that left? As far as I am concerned they made a choice - they chose to side against us and with the Arab League, and they chose to leave. Thus we have no moral or legal obligation towards them for their departure. Those who legally owned land (which was very few - another post for another day) should be compensated for the lost land. Those who rented are entitled to nothing, nor are any of them entitled to other forms of compensation. That said, I do sympathize with those who left, and do believe we should assist somewhat with their resettlement out of compassion.

  • Why Business Management Books Don't Work (and why this post probably won't help)

    Introduction

    In the realm of business management, the allure of quick-fix solutions and self-help books promising the keys to success is hard to resist. However, to truly understand why these resources often fall short, we must delve into the concepts of System 1 and System 2 thinking. In this post, we explore these two cognitive systems, shed light on their relevance to management, and explain why the combination of misguided System 1 reflexes with well-intentioned System 2 policies leads to failure.

    Understanding System 1 and System 2 Thinking

    Nobel Prize-winning psychologist Daniel Kahneman introduced the concepts of System 1 and System 2 thinking. System 1 represents our intuitive, automatic thought processes. It operates effortlessly and swiftly, allowing us to make snap judgments and decisions based on ingrained biases and heuristics. In contrast, System 2 is our deliberate, analytical thinking system that requires conscious effort, attention, and logical reasoning. It is slower and more deliberative.

    The Majority of Management Books and System 2 Thinking

    The majority of management books primarily target System 2 thinking. They often rely on lengthy mission statements, value statements, and complex objectives and goals. While these frameworks have their merits, they address only the kind of thinking that demands conscious effort and deliberate analysis. These resources may offer valuable insights and techniques, but they often neglect the crucial role of System 1 thinking in effective management.

    The Pitfall of Neglecting System 1 Thinking

    System 1 thinking, with its intuitive and automatic responses, plays a significant role in management. It influences our behaviors, judgments, and decision-making, often operating outside our conscious awareness. Neglecting to address and train System 1 reflexes can lead to incongruence between prescribed policies and actual behaviors, hindering organizational success. But we can train our System 1 reactions: expert chess players, for example, have been shown to use their System 1 as part of their process. Training our System 1 responses is hard, though. We need to break things down into quick principles. By doing so we can cultivate automatic and intuitive behaviors that align with effective management practices. These reflexes become second nature, guiding managers' actions and decisions in real time, even in high-pressure situations.

    Training System 1 Reflexes: Breaking Down Good Management Principles

    To bridge the gap between System 2 principles and System 1 reflexes, it is essential to break down good management principles into actionable and trainable reflexes. Let's explore some examples:

    "It doesn't matter if it is your fault; it is your responsibility": This principle instills accountability in managers, training them to take ownership of outcomes, regardless of who is at fault. It promotes problem-solving and continuous improvement rather than blaming others.

    "Managers don't run the company; their employees do": By internalizing this principle, managers recognize the importance of empowering and supporting their employees. They foster a culture of autonomy and collaboration, focusing on facilitating their team's success rather than asserting dominance.

    "Take care of your people, and they will take care of the company": This principle highlights the significance of prioritizing employee well-being and growth. When managers invest in their team's development and happiness, it fosters loyalty, productivity, and a positive organizational culture.

    "Honesty is always the best management policy": By emphasizing the value of integrity and transparent communication, this principle promotes trust within teams. Managers who lead by example and encourage honesty create an environment where issues can be addressed promptly and conflicts can be resolved effectively.

    Conclusion

    Management books, with their focus on System 2 thinking, often overlook the critical role of System 1 reflexes in effective management. Recognizing and training these reflexes is crucial for aligning behaviors with prescribed policies and achieving organizational success. By breaking down good management principles into actionable reflexes - taking responsibility, empowering employees, prioritizing their well-being, and promoting honesty - managers can cultivate the intuitive responses necessary for effective leadership. Balancing both System 1 and System 2 thinking is key to navigating the complexities of the business world and fostering a culture of success.

  • What is Management?

    I have had an odd career path over the last 20+ years of full-time work. As part of that path, I have been a manager in several different kinds of businesses. Yet when I go on interviews in my current field (Technical Operations/DevOps), I have often been told that my management experience outside the SaaS world "doesn't count". When I have tried to open a conversation into why (or why not) that experience is irrelevant, I have yet to receive anything resembling an answer. Faced with what seems like a hard point of irrational belief, I have shifted into a discussion of what management is, only to discover that the people I am talking with cannot define management either. I found this very odd.

    This led me back into rechecking the books, articles and blogs that are famous in the 'management world', only to find that they don't really ever define management in a simple, comprehensive way. They either focus on specific behaviors, or wax on about System 2 philosophies/approaches. Now if you have read the late Nobel Prize-winning Dr. Daniel Kahneman's book Thinking, Fast and Slow, then you know what I am referring to. If not, then I try to summarize System 1 and System 2 in my previous article.

    This left me wondering if I had a simple definition of management that could fit into a System 1 response. After all, I do consider myself a good manager, one who operates on System 1 responses most of the time. Looking to explain my knee-jerk reactions, this is what I came up with: Management is finding the balance point between the needs of the company and the needs of your employees.

  • What is Infrastructure as Code (IaC)? Why should you care, and when should you do it?

    The term Infrastructure as Code (IaC) is everywhere, but what is it really? Something to help your company grow? Or just another fad created by people trying to sell you something?

    What is Infrastructure as Code?

    Your favorite vendor will tell you that IaC tools have been around since the mid 2000s and that they allow you to define and manage the configuration of your infrastructure as a series of 'code' files. This is nonsense. Configuration Management (CM) tools have been around for a long time, with products like CFEngine dating back to 1993 (and still developed today). Nor are configuration files the same thing as code, not even if those files are in a structured format such as YAML or JSON. The well-known tools in this space are themselves not IaC tools - they are strong CM tools that enable IaC for those who need it. So then, what is IaC? Infrastructure as Code (IaC) is when you have code that builds and manages your Configuration Management (CM) files. Now this is hardly clear, so here are two real-world examples (yes, I have personally dealt with both of these companies): one company that uses IaC, and one that only uses CM.

    Company A

    Company A had over 300 developers/researchers/QA working independently on a single large Software as a Service (SaaS) product. Their lab environments spanned multiple clusters and projects. Each person (or team) required one or more environments for development and testing - all told, approximately 350 environments at any given time. All of these environments were built and maintained through Terraform. Changes to these environments (scale, code version, etc.) were frequent, and the Terraform configuration and deployments required a three-person team to keep up with all the requests.

    Company B

    Company B had over 200 developers/researchers/QA working independently on a single large Software as a Service (SaaS) product. Their lab, PoC, and customer trial environments spanned multiple clusters and projects. Each person (or team) required one or more environments for development and testing - all told, approximately 250 environments at any given time. All of these environments were built and maintained by a piece of in-house code built on a combination of existing automation tools. When a person wanted to deploy/undeploy an environment, they wrote a minimal config file that specified the environment type (out of a small list of valid options: minimal, standard, production-like, etc.), the location (which OpenStack cluster or GCP project) and the code branch(es), and fed it to the CI/CD system. (A sketch of what such a request and its validation might look like appears at the end of this post.) While this IaC system took a long time and several people to build, it required zero people to actively maintain and was used for years, with only minor modifications.

    A versus B

    Both companies had similarly structured products, with large numbers of short-lived environments. Both needed (or had decided) to run completely separate infrastructure for each such environment. But Company A had decided to allow each environment to be defined independently, while Company B decided to standardize what kinds/structures of environments would be allowed. This allowed Company B to codify those rules and automate the entire process of environment creation and destruction - giving the developers/QA staff the ability to spin up what they needed, when they needed it. Company A only used Configuration Management, while Company B built and used an Infrastructure as Code system.

    What about the modern age of Microservices?

    The age of microservices has changed many things about how many companies develop and deploy their products. What once would have required dozens (or hundreds) of separate environments can now often be achieved by having separate namespaces on a single cluster. At Datagen, our IaC system allows developers to easily deploy complete environments to their own personal namespace without needing any understanding of the underlying infrastructure. Whether your infrastructure is shared by using namespaces, service meshes or something else entirely, the principle is the same: IaC systems give developers, QA, and other staff the ability to spin up standardized environments.

    So why use Infrastructure as Code? Use it when:

    - You have a lot of ephemeral (short-lived) environments.
    - You can force the infrastructure design of those environments to conform to a standardized set.
    - The infrastructure design of those environments is expected to remain largely unchanged for several years.
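
    The following is a minimal sketch of the kind of request file and validation described for Company B. It is not Company B's actual code; the field names, environment types and locations are hypothetical, chosen only to illustrate how standardizing the allowed environment shapes makes the whole flow automatable.

        # Hypothetical sketch of a Company-B-style environment request and its validation.
        # None of these names come from the original system; they are for illustration only.
        from dataclasses import dataclass

        ALLOWED_TYPES = {"minimal", "standard", "production-like"}   # the standardized shapes
        ALLOWED_LOCATIONS = {"openstack-lab-1", "openstack-lab-2", "gcp-dev-project"}

        @dataclass
        class EnvRequest:
            env_type: str        # one of ALLOWED_TYPES
            location: str        # which cluster/project to deploy into
            branches: list[str]  # code branch(es) to deploy

        def validate(req: EnvRequest) -> None:
            """Reject anything outside the standardized set before the CI/CD job runs."""
            if req.env_type not in ALLOWED_TYPES:
                raise ValueError(f"unknown environment type: {req.env_type}")
            if req.location not in ALLOWED_LOCATIONS:
                raise ValueError(f"unknown location: {req.location}")
            if not req.branches:
                raise ValueError("at least one code branch is required")

        def deploy(req: EnvRequest) -> None:
            validate(req)
            # In the real system this is where the CI/CD pipeline would be triggered to
            # build the CM files and create the environment; here we just print the plan.
            print(f"deploying {req.env_type} environment to {req.location} from {req.branches}")

        if __name__ == "__main__":
            deploy(EnvRequest(env_type="standard", location="gcp-dev-project", branches=["main"]))

    The design point is the whitelist: because only a handful of environment shapes are allowed, the code behind the request can be generated automatically, which is what removed the need for a dedicated team to maintain it.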

  • Why I Still Use Opsview: Uncovering Its Hidden Strengths

    In a market saturated with monitoring solutions, one might question why I continue to rely on Opsview, a seemingly overlooked company with a shrinking market share. While it's true that Opsview may have struggled to keep pace with technological shifts, it remains at the core of my monitoring, remediation, and alerting system. Here are the reasons why Opsview continues to earn my trust:

    Affordability: Opsview's base feature set may not be the most robust, but its pricing model allows for creative utilization at a remarkably low cost. By strategically leveraging their pricing structure, I have constructed a fully redundant and geographically distributed setup that provides a comprehensive "single pane of glass" view across all our monitoring tools and systems. Remarkably, this has been achieved for less than $5,000 USD per year.

    Flexibility to Monitor Anything and Address Any Failure: Opsview's true power lies in its versatility. With a little customization, it can seamlessly pull data from various sources, making it an all-encompassing monitoring solution. Whether it's monitoring AWS, iDRACs, CMCs, UPSs, PDUs, logz.io, Prometheus, Elasticsearch, or any other source, Opsview provides a platform to monitor it. By supporting the Nagios standard for custom plugin development (a minimal example of such a plugin appears at the end of this post), I have successfully used Opsview to monitor diverse systems and address a wide range of failures. Examples include:

    - Conducting test calls for VoIP and rerouting traffic if they fail.
    - Detecting malicious activity and blocking the originating IPs.
    - Analyzing network traffic to identify early signs of DDoS attacks and emailing a report.
    - Detecting AWS EC2 capacity shortages and automatically trying a different kind of instance.
    - Horizontally auto-scaling monolithic applications, complete with re-configuring databases, load balancers, and security rules.
    - Rerouting traffic in response to public internet routing issues.
    - Detecting customer-side outages and notifying them by email, complete with diagnostic information.
    - Detecting service anomalies and automating analysis and ticket generation for the NOC.

    Intelligent Alerting: Opsview incorporates Business Service Management (BSM) level alerting, an intelligent mechanism that takes redundancy into account to ensure alerts are only sent when customers are genuinely affected. By understanding alert dependencies, Opsview automatically suppresses expected alerts, such as when a Kubernetes cluster goes down, preventing unnecessary notifications. This approach significantly reduces alert fatigue, enabling teams to focus on critical issues impacting end-users. Opsview's intelligent alerting enhances operational efficiency and streamlines incident response, fostering a more reliable and customer-centric monitoring process.

    Conclusion: Despite Opsview's limitations and diminishing market share, it remains a central and invaluable component of my monitoring ecosystem. Its affordability, coupled with its adaptability to monitor any system or use case with a little custom development, sets it apart from other tools. The incorporation of intelligent alerting based on BSM principles further enhances its value. Opsview has proven time and again that it can deliver a cost-effective and comprehensive monitoring solution, allowing me to maintain a reliable and customer-centric IT environment. So, until a more compelling alternative emerges, Opsview will continue to be an integral part of my monitoring strategy.
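
    As an illustration of what "Nagios-standard plugin" means in practice, here is a minimal sketch of a check script of the kind Opsview can run. It is not one of the checks listed above; the URL and thresholds are made up. Nagios-compatible plugins simply print a status line (optionally followed by performance data after a pipe) and exit with 0, 1, 2 or 3 for OK, WARNING, CRITICAL or UNKNOWN.

        #!/usr/bin/env python3
        # Minimal Nagios/Opsview-style check plugin (illustrative only; the URL and
        # thresholds are hypothetical). Exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
        import sys
        import time
        import urllib.request

        URL = "https://example.internal/healthz"   # hypothetical endpoint
        WARN_SECONDS = 1.0
        CRIT_SECONDS = 3.0

        def main() -> int:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(URL, timeout=CRIT_SECONDS) as resp:
                    status = resp.status
            except Exception as exc:
                print(f"CRITICAL - request failed: {exc}")
                return 2
            elapsed = time.monotonic() - start
            perfdata = f"response_time={elapsed:.3f}s;{WARN_SECONDS};{CRIT_SECONDS}"
            if status != 200:
                print(f"CRITICAL - HTTP {status} | {perfdata}")
                return 2
            if elapsed > WARN_SECONDS:
                print(f"WARNING - slow response {elapsed:.2f}s | {perfdata}")
                return 1
            print(f"OK - HTTP 200 in {elapsed:.2f}s | {perfdata}")
            return 0

        if __name__ == "__main__":
            sys.exit(main())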

  • The hidden weakness of AWS Global Accelerator.

    AWS Global Accelerator (GA) is a service that routes incoming traffic to healthy targets across multiple AWS regions. It provides a single entry point for global traffic, which is then distributed to the optimal target endpoint based on health checks and traffic routing policies.

    The problem with the GA's health checking process is that it only inherits the health status of the target - such as an Application Load Balancer (ALB), Elastic Load Balancer (ELB), or EC2 instance - instead of performing its own health check. As a result, if there are any network or security issues that prevent traffic from reaching the target listener, the GA will not be able to detect this and will continue to send traffic there, potentially causing downtime and a poor user experience. For example, if a security group rule blocks incoming traffic from the GA to the target, or if a routing issue prevents traffic from reaching the target, the GA will not know about it and will consider the target to be healthy, even though the target is not accessible.

    To work around this issue, it is important to implement additional health checks that specifically test the connectivity and accessibility of the target from the GA (a sketch of one such check appears at the end of this post). One approach is to use a health checker tool, such as Amazon CloudWatch Synthetics, to perform a custom health check on the target. CloudWatch Synthetics can simulate a user's request to the target and verify that the target is responding correctly. If the target fails the custom health check, CloudWatch Synthetics can mark the target as unhealthy, and the GA will stop sending traffic to it. It is also important to monitor the network and security configuration of the GA and the target to ensure that traffic can flow freely between them. This can be achieved by monitoring network logs, such as AWS VPC Flow Logs, and security logs, such as AWS CloudTrail logs, to detect any potential issues that may block incoming traffic.

    In conclusion, the problem with the Global Accelerator health check is a serious issue that can result in poor performance, slow response times, and dropped connections. To work around this problem, it is important to properly configure the security groups and ACLs for the target listeners, to configure the health checks for the target listeners, and to closely monitor any security or routing changes that affect the target listeners. By taking these steps, you can ensure that the Global Accelerator is functioning optimally and that your applications are delivering the best possible performance for your users.
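
    As one illustration of the kind of end-to-end check described above (not a CloudWatch Synthetics canary itself), here is a minimal sketch that requests the application through the accelerator's DNS name and publishes the result as a custom CloudWatch metric, which an alarm or an automation that adjusts endpoint weights could then act on. The hostname, namespace and metric names are hypothetical.

        # Minimal sketch of an end-to-end health probe through a Global Accelerator
        # endpoint, publishing the result as a custom CloudWatch metric.
        # The hostname, namespace and metric names are hypothetical.
        import time
        import urllib.request

        import boto3

        ACCELERATOR_URL = "https://abcdef1234567890.awsglobalaccelerator.com/healthz"  # hypothetical
        cloudwatch = boto3.client("cloudwatch")

        def probe() -> tuple[int, float]:
            """Return (1 if healthy else 0, response time in seconds)."""
            start = time.monotonic()
            try:
                with urllib.request.urlopen(ACCELERATOR_URL, timeout=5) as resp:
                    healthy = 1 if resp.status == 200 else 0
            except Exception:
                healthy = 0
            return healthy, time.monotonic() - start

        def publish(healthy: int, elapsed: float) -> None:
            cloudwatch.put_metric_data(
                Namespace="Custom/GlobalAccelerator",      # hypothetical namespace
                MetricData=[
                    {"MetricName": "EndToEndHealthy", "Value": healthy, "Unit": "Count"},
                    {"MetricName": "EndToEndLatency", "Value": elapsed, "Unit": "Seconds"},
                ],
            )

        if __name__ == "__main__":
            h, t = probe()
            publish(h, t)
            print(f"healthy={h} latency={t:.3f}s")

    Because the probe goes through the accelerator's own entry point rather than asking the target how it feels, it catches exactly the class of failures described above: security group rules, ACLs, or routing problems that sit between the GA and an otherwise "healthy" target.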
