Domain: firstprincipleresilience.com
DNS Resolutions
Date         IP Address
2024-11-27   3.166.135.12   (Class C)
2026-01-31   3.175.34.105   (Class C)
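The historical resolutions above can be compared with what the domain resolves to today. Below is a minimal sketch using only the Python standard library; the helper name and the idea of diffing against the table are illustrative and not part of the lookup service:

import socket

DOMAIN = "firstprincipleresilience.com"

def current_a_records(domain: str) -> list[str]:
    """Ask the local resolver for the current IPv4 (A record) answers for the domain."""
    results = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, (address, port)); keep unique addresses.
    return sorted({sockaddr[0] for *_, sockaddr in results})

if __name__ == "__main__":
    for ip in current_a_records(DOMAIN):
        # Compare against the historical resolutions listed above (e.g. 3.175.34.105).
        print(ip)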
Port 80
HTTP/1.1 301 Moved Permanently
Server: CloudFront
Date: Sat, 31 Jan 2026 02:33:56 GMT
Content-Type: text/html
Content-Length: 167
Connection: keep-alive
Location: https://firstprincipleresilience.com/
X-Cache: Redirect from cloudfront
Via: 1.1 188e4222daf42f54b9492a395b60fb00.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: HIO52-P3
X-Amz-Cf-Id: uY5zKfOUi6diC3QsXgvVCK3JYlm0GQBRrWEPtAifRHdgDrPN81Fdog

<html><head><title>301 Moved Permanently</title></head><body><center><h1>301 Moved Permanently</h1></center><hr><center>CloudFront</center></body></html>
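The port 80 response above can be reproduced with a request that does not follow redirects. This is a minimal sketch with Python's http.client; the expected values in the comments are simply taken from the banner shown above:

import http.client

HOST = "firstprincipleresilience.com"

# http.client never follows redirects, so the 301 and its headers are visible as-is.
conn = http.client.HTTPConnection(HOST, 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()

print(resp.status, resp.reason)        # expected: 301 Moved Permanently
print(resp.getheader("Location"))      # expected: https://firstprincipleresilience.com/
print(resp.getheader("Server"))        # expected: CloudFront
conn.close()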
Port 443
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 12197
Connection: keep-alive
Date: Sat, 31 Jan 2026 02:33:57 GMT
x-amz-meta-codebuild-content-sha256: 7073379b5fc887409198a749351eab536de4a6665878d02e82d5c70448053d37
x-amz-meta-codebuild-buildarn: arn:aws:codebuild:us-east-2:545212347593:build/CodeBuildProject-QWRUZY9RO6D4:0d01349c-f487-4de2-a36a-a12ddf643199
x-amz-meta-codebuild-content-md5: cb654ab6a7ce45d30d63d4524930bcf0
Last-Modified: Fri, 26 May 2023 23:34:56 GMT
ETag: 267e81a504df9f0012199f4694a50763
Server: AmazonS3
X-Cache: Miss from cloudfront
Via: 1.1 71bf492f0f2662e8c099c2b20c7f4b4e.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: HIO52-P3
X-Amz-Cf-Id: Oyj44ZtA8yHBlC-bvomb7x9xkw-TyzkPtx6-1ZgYC_3E0jHMiTZzXQ

Response body (text/html, 12,197 bytes): the homepage of "Principle Resilience" ("A place for learning distributed systems resilience"), generated with Hugo 0.70.0 using the Mainroad theme. Navigation: About, Articles, Products. The page lists the following articles:

How to Simulate How Cloud Networks Fail
Cloud networks, and most massively scaled networks, are subject to partial failures that will impact some of your application connections but not others. Congestion will lead to intermittent packet loss and is another flavor of partial impact that occurs in multi-tenant environments. How do you know if your monitoring, health checks and failure mitigations, like host removal and retry, will mitigate these kinds of failures? You test! This article will show you how you can use Linux iptables, routing policies and network control to simulate cloud network failures.
Excerpt: Linux has come a long way with tooling for simulating network failures. The traffic control (tc) tool has a wonderful bag of tricks for simulating total packet loss, intermittent packet loss and delays. tc even provides random distributions of packet loss for additional realism in testing. However, tc doesn't support impacting a random percentage of network flows. As a reminder, a network flow is defined by its 5-tuple of IP addresses, ports and protocol.

How Cloud Networks Fail and What to do About It
Massively scaled cloud networks are composed of thousands of network devices. Network failures (usually) show up in tricky ways that your monitoring won't detect. These failures (usually) lead to minor client impacts. However, some critical applications, like strict data consistency database clusters, are sensitive to even these minor disruptions. This article explains the why behind the funny behaviors you might have noticed in your applications running in the cloud or over the Internet, and includes recommendations for improving application resilience to detect and mitigate these (usually) minor failures in the massive networks they depend on.
Excerpt: High availability clusters include things like MySQL or PostgreSQL using synchronous replication. Any implementation of Raft/Paxos is a high availability cluster and is part of services like Aerospike, CockroachDB, Consul, etcd or ZooKeeper. Clusters are often used behind the scenes and there's a good chance they are in your environment. For example, Kafka and Kubernetes both use cluster technology in their management layers. Even if you aren't running high-throughput clusters, this article will help you understand how cloud networks and the Internet behave when failures occur.

Five 9s isn't enough
Go beyond 9s service level objectives, recovery time objectives and recovery point objectives. Improve your resilience decisions with failure impact narratives.
Excerpt: Most organizations set some form of resilience objectives using service level agreements (SLA) in 9s notation. Some organizations formalize recovery time objectives (RTO) to determine how long it should take an application to recover, or recovery point objectives (RPO) to determine how much data can be lost in the event of failure. A few organizations write pages of detailed non-functional resilience requirements. SLAs, RTOs, RPOs and requirements aren't enough information to decide how much effort and money an organization should spend on resilience.

Five Categories of Failure
Failure categories are a simplified approach to understanding distributed systems failures and what to do about them. Without failure categories, failure mode analysis and system design tend to operate off a list of thousands of specific failures, which tends toward one-off approaches to failure detection and mitigation. Failure categories can help you design a few common approaches that detect and mitigate a wider range of failures.
Excerpt: Traditional failure analysis: even simple distributed systems are extremely complex. A single transaction may use hundreds of computers and many networks. Distributed systems need DNS names, SSL certificates, a myriad of security credentials, layers of software and layers of networked devices connecting everything together. Any of these components can fail and impact an application. There are lots and lots of ways for distributed systems to fail. Organizations that have been building and operating distributed systems for any period of time have long lists of failure modes and what to do about them.

Monitoring Design for Critical Applications
Critical applications that need <5 minute recovery times for a wide range of failures need special monitoring. Many teams may not realize their monitoring package could fail due to the same underlying issue that impacts their application, or that their metrics cannot detect and alert them to partial failures. This article explains trade-offs and design recommendations for monitoring critical applications.
Excerpt: Designing a monitoring solution for critical applications can be a little tricky, especially if your failure recovery automation depends on monitoring to detect and respond to failures. Critical applications are those that need to hit 99.999% or better uptimes, are sensitive to error rates of 5% or lower, need to recover from a wide range of failures in less than 5 minutes, or must be resilient against unusual major failures like a total, unrecoverable loss of a datacenter.

Stop Relying on Rollback
When Code & Config changes cause a failure, rollbacks are a slow and unreliable mitigation. The best protection against change-related failures isn't automated rollback; it is better fault isolation and redundant capacity.
Excerpt: Code & Config failures usually occur around a change. It might be a code deployment or a manual configuration change. Many organizations rely on rollback procedures to mitigate failures when problematic changes occur. If you are targeting 99.999% uptime or recovery times of less than 5 minutes, then a rollback isn't an ideal mitigation. Rollbacks won't reliably mitigate change-related failures on <5 minute timelines. This article explains why rollbacks are too slow and unreliable to act as a primary mitigation mechanism for 99…

Footer: © 2023 Principle Resilience. Generated with Hugo and the Mainroad theme.
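The x-amz-meta-codebuild-* headers above advertise digests for the object that CodeBuild published to S3. The sketch below checks whether the bytes served today still match those digests; it assumes CloudFront serves the object unmodified (no recompression or rewriting), which may not hold:

import hashlib
import urllib.request

URL = "https://firstprincipleresilience.com/"

# Fetch the homepage over HTTPS, matching the port 443 probe above.
with urllib.request.urlopen(URL, timeout=10) as resp:
    body = resp.read()
    meta_sha256 = resp.headers.get("x-amz-meta-codebuild-content-sha256")
    meta_md5 = resp.headers.get("x-amz-meta-codebuild-content-md5")

# Assumption: the metadata headers describe exactly the bytes served; if the body
# is transformed in transit, these comparisons will report False.
print("bytes served:", len(body))  # Content-Length above was 12197
print("sha256 match:", hashlib.sha256(body).hexdigest() == meta_sha256)
print("md5 match:", hashlib.md5(body).hexdigest() == meta_md5)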
View on OTX | View on ThreatMiner
Data with thanks to AlienVault OTX, VirusTotal, Malwr and others.