Domain > blog.atakangul.com
More information on this domain is in AlienVault OTX.
DNS Resolutions

Date          IP Address
2024-06-13    104.21.11.168 (Class C)
2025-11-24    216.24.57.251 (Class C)
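Passive-DNS consumers usually care about the most recently observed resolution. A small sketch (the helper name is illustrative, not part of any OTX tooling) using the two records above:

```python
from datetime import date

# Passive-DNS records from the table above: (observation date, IP address).
resolutions = [
    (date(2024, 6, 13), "104.21.11.168"),
    (date(2025, 11, 24), "216.24.57.251"),
]

def latest_resolution(records):
    """Return the IP address from the most recently observed record."""
    return max(records, key=lambda r: r[0])[1]

print(latest_resolution(resolutions))  # 216.24.57.251
```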
Port 80
HTTP/1.1 301 Moved Permanently
Date: Mon, 24 Nov 2025 22:48:54 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 62
Connection: keep-alive
CF-RAY: 9a3c74db7fb26c17-PDX
Location: https://blog.atakangul.com/
cf-cache-status: DYNAMIC
Server: cloudflare
alt-svc: h3=":443"; ma=86400

<a href="https://blog.atakangul.com/">Moved Permanently</a>.
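The capture above arrived as one flattened string. A small parser sketch (a hypothetical helper, not part of any scanner) shows how such a raw capture splits back into status line, headers, and body:

```python
def parse_http_response(raw: str):
    """Split a raw HTTP/1.x response into (status line, header dict, body)."""
    head, _, body = raw.partition("\r\n\r\n")
    status, *header_lines = head.split("\r\n")
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()  # header names are case-insensitive
    return status, headers, body

# A trimmed version of the port-80 response captured above.
raw = (
    "HTTP/1.1 301 Moved Permanently\r\n"
    "Content-Type: text/html; charset=utf-8\r\n"
    "Location: https://blog.atakangul.com/\r\n"
    "Server: cloudflare\r\n"
    "\r\n"
    '<a href="https://blog.atakangul.com/">Moved Permanently</a>.'
)

status, headers, body = parse_http_response(raw)
print(status)               # HTTP/1.1 301 Moved Permanently
print(headers["location"])  # https://blog.atakangul.com/
```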
Port 443
HTTP/1.1 200 OK
Date: Mon, 24 Nov 2025 22:48:55 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
CF-RAY: 9a3c74dcd94eff06-PDX
Cache-Control: s-maxage=300, stale-while-revalidate=31535700
etag: W/"j0bao9z2uu2sqe"
rndr-id: 277bed5b-041e-44b7
vary: Accept-Encoding
x-nextjs-cache: HIT
x-nextjs-prerender: 1
x-powered-by: Next.js
x-render-origin-server: Render
cf-cache-status: DYNAMIC
Server: cloudflare
alt-svc: h3=":443"; ma=86400

[Response body: the blog homepage, a Next.js-prerendered HTML page. Recoverable content follows.]

Title: Atakan Gül | Software Engineering Blog
Description: Atakan Gül – Software engineer from Istanbul, writing about DevOps, Kubernetes, and software workflows.
Analytics: Umami (cloud.umami.is)

Posts listed on the homepage:
- Secure internal service communication with PrivateLink (September 15, 2025, 9 likes)
- AI Agent Systems (July 20, 2025, 16 likes)
- Design Patterns in Software Development: My Journey from Chaos to Structure (December 25, 2024, 9 likes)
- Kubernetes Control Plane and Data Plane Explained: Key Components & Automation (October 12, 2024, 10 likes, 3 comments)
- Mesh Networks Explained: Boosting Connectivity, Coverage, and Reliability (October 6, 2024, 4 likes, 1 comment)
- Simplify Microservice Development with Dapr: Code Abstraction Made Easy (September 24, 2024, 2 likes, 1 comment)
- Building a Cloud Agnostic CI/CD Pipeline with Terraform and Kubernetes (September 13, 2024, 19 likes)
- My First Week at Nevotek: Internship Insights and Experiences (August 22, 2024, 13 likes)
- LogWatcher: Simplifying Docker Image Monitoring with Open Source Software (July 31, 2024, 27 likes)
- Must-Know Free APIs for Developers (July 12, 2024, 21 likes, 1 comment)

Sidebar certifications: Microsoft Cloud Technologies Certification, IBM System Administration Certificate, Google Technical Foundations Program, Google Network Engineering Fundamentals.

Footer, About: "Developer focused on creating innovative solutions and open-source projects." Projects: ProjectPulse, ideaLog, ChatVerse, KubernetesInfra, Linux Diagnostic Agent.
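The Cache-Control header in the port-443 response combines an s-maxage and a stale-while-revalidate directive. A minimal parsing sketch (an illustrative helper, not a library API):

```python
def parse_cache_control(value: str) -> dict:
    """Parse a Cache-Control header into a directive dict (numeric values as int)."""
    directives = {}
    for part in value.split(","):
        name, _, val = part.strip().partition("=")
        # Valueless directives (e.g. "no-store") map to True.
        directives[name] = int(val) if val.isdigit() else (val or True)
    return directives

# The directives captured in the response above.
cc = parse_cache_control("s-maxage=300, stale-while-revalidate=31535700")
print(cc["s-maxage"])  # 300
```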
Footer, Contact: GitHub, LinkedIn, Website. Legal: © 2025 Atakan Gül. All rights reserved.

The embedded __NEXT_DATA__ payload carries the full text of the two most recent posts.

---

![image](https://static.atakangul.com/uploads/b236oofio38ys0e195p1n1ic1)

# Secure internal service communication with PrivateLink

(published September 15, 2025; 9 likes)

## Introduction

Exposing internal services over the public internet is discouraged for security reasons. Most enterprises build their networks on tightly secured connections, usually maintained by cloud providers such as AWS, Azure, and GCP, each of which offers a similar solution to this problem. In this post I focus on one of them: AWS PrivateLink.

## What is AWS PrivateLink

AWS PrivateLink provides connectivity between resources in different VPCs, even across regions. Data transfer is unidirectional and never leaves the AWS-owned fiber network. Connectivity can be configured across zones and regions. By enabling cross-zone load balancing, you allow the Network Load Balancer (NLB) to distribute traffic evenly across all registered targets in all enabled AZs, regardless of the AZ the client is in.

The concept follows a producer-consumer model: the producer sets up the NLB, a target group, and a VPC endpoint service, while the consumer VPC sets up a VPC endpoint backed by an Elastic Network Interface (ENI).

## Prerequisites

To get started, you need two different AWS accounts to experiment with. Configure your AWS credentials first.

Clone the repository:
```bash
❯ git clone https://github.com/atakang7/cross-vpc-private-link.git
```

Run the script:
```bash
❯ bash first-run.sh
```

## Script Execution Flow

The script performs the following steps:

### 1. Scripts/00_check_prereqs.sh

Checks prerequisites:

- **Tofu** is the OSS fork of Terraform; almost everything works as before
- **AWS** is the AWS CLI for managing resources
- **OpenSSL** is needed to generate VPN certificates

### 2. Scripts/10_generate_certs.sh

Creates VPN certificates:

- **CA certificate** (ca.crt): root certificate authority for signing
- **Server certificate** (server.crt): authenticates the VPN endpoint
- **Client certificate** (client.crt): authenticates VPN clients

### 3. Scripts/20_import_acm.sh

Imports the generated certificates into AWS Certificate Manager in the dev account:

- **Server cert + CA chain**: required for VPN endpoint SSL termination
- **Root CA cert**: used for client certificate validation
- **Returns ARNs**: certificate ARNs are passed to Terraform for VPN configuration

### 4. Scripts/30_deploy_prod.sh

Deploys provider infrastructure in the production account:

- **VPC + private subnets**: isolated network environment
- **Network Load Balancer**: routes traffic to backend services
- **VPC Endpoint Service**: exposes the NLB via PrivateLink
- **Demo application**: simple HTTP service for testing

### 5. Scripts/40_deploy_dev.sh

Deploys consumer infrastructure in the development account:

- **Consumer VPC**: separate network for development
- **Interface VPC Endpoint**: connects to the prod PrivateLink service
- **Route53 private zone**: custom DNS (hello.internal.company)
- **Client VPN endpoint**: certificate-based remote access

### 6. Scripts/50_export_vpn_config.sh

Exports the OpenVPN configuration file:

- **Downloads the .ovpn file** from the AWS Client VPN endpoint
- **Includes endpoint details**: server address, protocol, port
- **Ready for the client**: use with an OpenVPN client plus the certificates

### 7. Scripts/60_test_privateline.sh

Tests PrivateLink connectivity:

- **Curls the private service**: http://hello.internal.company:8080
- **Validates DNS resolution**: the Route53 private zone is working
- **Confirms end-to-end**: VPN → private DNS → PrivateLink → backend

### 8. Scripts/70_destroy_all.sh

Cleanly tears down all resources:

- **Dev environment first**: removes consumer dependencies
- **Prod environment second**: safely removes provider resources
- **Prevents dependency errors**: destruction order matters

## Testing the Connection

After first-run.sh completes, connect to the VPN with your certificate:

```bash
❯ sudo openvpn --config ./dev.ovpn --cert scripts/certs/client.crt --key scripts/certs/client.key --ca scripts/certs/ca.crt
```

Successful connection:
```
...
2025-09-15 01:15:40 Incoming Data Channel: Cipher AES-256-GCM initialized with 256 bit key
2025-09-15 01:15:40 net_route_v4_best_gw query: dst 0.0.0.0
2025-09-15 01:15:40 net_route_v4_best_gw result: via 192.168.1.1 dev wlp0s20f3
2025-09-15 01:15:40 ROUTE_GATEWAY 192.168.1.1/255.255.255.0 IFACE=wlp0s20f3 HWADDR=90:cc:df:08:3a:81
2025-09-15 01:15:40 TUN/TAP device tun0 opened
2025-09-15 01:15:40 net_iface_mtu_set: mtu 1500 for tun0
2025-09-15 01:15:40 net_iface_up: set tun0 up
2025-09-15 01:15:40 net_addr_v4_add: 172.16.0.2/27 dev tun0
2025-09-15 01:15:40 net_route_v4_add: 10.10.0.0/16 via 172.16.0.1 dev NULL table 0 metric -1
2025-09-15 01:15:40 Initialization Sequence Completed
```

Try to connect to hello.internal.company:8080:

```bash
❯ curl hello.internal.company:8080
curl: (6) Could not resolve host: hello.internal.company
```

Add the DNS server to /etc/resolv.conf:
```
nameserver 10.10.0.2
```

Try again:
```bash
❯ curl hello.internal.company:8080
{"message": "Hello from provider", "ts": "2025-09-14T22:19:10.640669"}
```

## Learning Points

- Be careful not to overlap CIDRs between your VPN and your VPCs.
- If DNS should only be resolved by the company, set split_tunnel to false:

```hcl
resource "aws_ec2_client_vpn_endpoint" "this" {
  ...
  split_tunnel = false
  ...
}
```

Peace ;)

---

![](https://static.atakangul.com/uploads/075ebe4cb56e6a18988e26c00)

# AI Agent Systems

(published July 20, 2025; 16 likes)

AI agents are the main objective of 2025. Everyone, enterprises included, is building AI agent systems on cloud providers or third-party solutions. These platforms provide reasonable capabilities for building systems of simple to medium complexity.

Take the GitHub issue solver example.
The promise is simple: developer creates an issue, AI agent reads the codebase, understands the problem, writes the fix, tests it, and submits a PR. In practice? The agent misunderstands requirements, writes code that doesnt compile, or changes the code differently :).\n\nThroughout the software development history, as I read and watch, Ive never seen such tendency to rely on high entropy after JS :). Were essentially throwing more complexity at a fundamentally unstable foundation hoping that somehow, adding more moving parts will create reliability.\n\nEven though LLMs are improving over time, this doesnt change the fact that they are just random machines that select the most probable outcome to an input. Therefore, the error margin always exists.\n\n## **Current Optimizations?**\n\nCurrent \solutions\ all follow the same pattern: optimize the LLM. Feed it more relevant context, use more powerful models, add sophisticated prompting techniques. RAG systems try to solve this by retrieving relevant documents, but they still dump everything into the LLM context without considering the order or relationships. But these approaches miss the fundamental issue theyre still treating the LLM as the primary decision-maker.\n\nWorse, theyre creating new problems. Adding more context often causes LLMs to hallucinate and provide poorer results. These hallucinations arent just technical failures theyre potential legal liabilities. OpenAI, Microsoft, and Anthropic are already facing lawsuits for AI outputs that produce defamatory content or violate copyrights. UnitedHealth got sued for AI decisions with 90% error rates that allegedly caused patient deaths\\¹\\. When an AI agent makes decisions that violate regulations or create compliance issues, companies face real penalties.\n\n## **What I Come Up With?**\n\nRecently, I have been building an AI agent orchestration system where developers enter plain language and the system handles agent creation and management. 
Building this system taught me firsthand why current approaches fail.\n\nOne of the first challenges I faced was integrating LLMs into the system. I had to implement different providers and connect them to a factory. At this point it's almost impossible to know how many integrations will be needed tomorrow; the landscape changes too rapidly.\n\nAnother issue arose from decision making. In each run cycle, the system's behavior changed because the models weren't capable enough. After upgrading the models, it partially worked. This time the issue was context management. New models can take up to 1M tokens, but size wasn't the problem. The problem was providing the relevant data for decision making in a way that actually helped rather than hurt.\n\nOne point worth mentioning: this system used threads to create agents in parallel. That parallel creation produced dozens of system messages to be included in the prompt, which bloated the context and confused the LLM.\n\n## **Context Maintenance**\n\nI came up with a different strategy and made the creation sequential. Every component in the system became responsible for registering its state into the context in a sequential manner. This made decision making dramatically easier and more reliable.\n\nThe solution wasn't just about reducing context size; it was about the relationships between the items in the context. When information flows in a logical sequence, LLMs can follow the reasoning much more effectively.\n\n## **Supporting Trends**\n\nOne great example supporting my idea is KIRO, a new IDE created by Amazon that brings a new strategy to agentic software development: it integrates pragmatic software development practices into agentic workflows. I know it feels weird, we are reinventing the wheel again. 
However, this is what it is right now.\n\n!(https://static.atakangul.com/uploads/f8a889cff3e14e9c4c919b705)\n\nIn this image, the agent forces the developer to define requirements as the initial step. It identifies the request and prepares an action proposal. If the developer approves, it creates the plan to be executed in the next step.\n\n!(https://static.atakangul.com/uploads/f8a889cff3e14e9c4c919b704)\n\nIt creates tasks in the sequential order the developer wants them implemented; the developer then clicks "Start task" and executes each one.\n\n## **Academic Validation**\n\nRecently, I found research that supports my approach. The "Chain of Agents" paper from June 2024 shows that sequential agent collaboration works much better than traditional methods, with up to a **10% improvement**\\²\\.\n\nWhat's interesting is that they discovered the same problems I faced. They mention that current approaches "struggle with focusing on the pertinent information", which is exactly the context management issue I was dealing with. Their solution uses "worker agents who sequentially communicate" and "interleaving reading and reasoning", very similar to my sequential pipeline approach.\n\nThe academic community has already started catching up to what practical experience shows: structure and sequence matter more than just using more powerful models. It feels good to see research validating what I discovered while building real systems.\n\n## **Conclusion**\n\nAs KIRO's tagline suggests, "Bring structure to AI coding with spec-driven development": developing in a structured form pays off not only in coding but in most agentic workflows, as my experience also suggests. This approach brings reliability and scalability into these systems, because as reliability increases, there will be more to build on.\n\n* * *\n\n**References:**\n\n\\1\\ International Business Times. 
"Lawsuit Filed Before Killing: UnitedHealthcare CEO Accused Insurance Giant Of Using Faulty AI Tool." Link(https://www.ibtimes.com/lawsuit-filed-before-killing-unitedhealthcare-ceo-accused-insurance-giant-using-faulty-ai-tool-3754323)\n\n\\2\\ Zhang, Y., Sun, R., Chen, Y., Pfister, T., Zhang, R., & Arik, S. Ö. (2024). "Chain of Agents: Large Language Models Collaborating on Long-Context Tasks." arXiv:2406.02818. Link(https://arxiv.org/abs/2406.02818),description:Build advanced AI agent systems in 2025! Explore how enterprises are leveraging cloud solutions to create sophisticated platforms - with some humorous mishaps along the way.,search_keywords:AI agent systems, AI agents, building AI systems, cloud providers, third party solutions, GitHub issue solver, software development, high entropy, complexity, LLMs, error margin, AI optimizations, RAG systems, relevant context,isTechnical:false,AICreated:false,imageURL:https://static.atakangul.com/uploads/075ebe4cb56e6a18988e26c00,isProject:false,views:0,comments:,status:published,likes:16,publishedAt:2025-07-20T22:17:49.388Z,updatedAt:2025-10-31T00:43:42.881Z,createdAt:2025-07-20T22:17:49.392Z,__v:0},{_id:676c1aee7e06bdb74382f7a1,url:design-patterns-my-journey,title:Design Patterns in Software Development: My Journey from Chaos to Structure,content:!(https://static.atakangul.com/uploads/f855b0da785f5472ee876a60c.png)\n\n# **Design Patterns in Software Development: My Journey from Chaos to Structure**\n\nDesigning software is the most important aspect of software development, and I didn't know that when I started coding. I kept trying patterns, figuring out what worked with what, and it exhausted me.\n\nSince I always preferred implementing the actual software directly instead of learning and planning first, the outcome was always unsatisfying. In today's write-up, I'll tell the story of how I first met software design :). 
\n\nHere we go;\n\nMy first real wake-up call about design came when we had to refactor a service that started as a simple API but grew into a tangled mess. We'd added feature after feature without planning, and suddenly we were spending more time fixing bugs than building new features.\n\nI started my coding journey at a startup, and as you may guess, there were many different kinds of work to do. For example, one day you would be digging into Kubernetes issues, the next researching a new AI-related topic.\n\nNo complaints; it was a highly educational environment with plenty of pros and cons. I always liked working with enthusiastic people who weren't there just to be there but to get something out of the experience. This made the environment a little fast-paced and made learning possible for a lazy person like me :).\n\nThe time I spent there taught me a lot about learning fast. It made me a quick thinker and action taker instead of someone who sits and plans for hours.\n\nToday, I realize that this is not sustainable for an engineer. Therefore, I started spending more time on design before implementation: not only the business models, but what the production environment will look like, how secret management will be handled, and more.\n\nIn the design phase, I always ask questions and try to answer them; if I can't, I go straight into a research phase, which is learning and planning at the same time.\n\n---\n\n!(https://static.atakangul.com/uploads/f855b0da785f5472ee876a60b.png)\n\n## **MVC**\n\nModel-View-Controller may be the most widely used design pattern on earth. Simple, straight to the point, and effective. Design your business logic in models, direct the flow in controllers, and present it to the world through views rendered by a view engine. 
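That split can be sketched in a few lines. This is a minimal illustration with hypothetical names (a toy blog app, not code from any of my projects), just to show where each responsibility lives:

```python
# Minimal MVC sketch (hypothetical names): the model owns business data
# and rules, the view only renders, and the controller wires them together.

class PostModel:
    """Model: business data and validation rules live here."""
    def __init__(self):
        self._posts = []

    def add(self, title):
        if not title:
            raise ValueError("title required")  # business rule stays in the model
        self._posts.append(title)

    def all(self):
        return list(self._posts)


def render_posts(posts):
    """View: presentation only; knows nothing about storage or routing."""
    return "\n".join(f"- {p}" for p in posts)


class BlogController:
    """Controller: routes requests to the model and picks a view."""
    def __init__(self, model):
        self.model = model

    def create_post(self, title):
        self.model.add(title)

    def list_posts(self):
        return render_posts(self.model.all())


controller = BlogController(PostModel())
controller.create_post("Hello MVC")
print(controller.list_posts())  # prints: - Hello MVC
```

The point of the boundaries: you can swap the view (HTML template instead of plain text) or the model's storage without touching the controller.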
\n\nWhen I learned this design pattern, my coding days became easier; I knew what to expect from the software because there were boundaries, and the requirements were much clearer than before.\n\nSay you're expected to create a website for one person. It's going to serve only 100 people/month and nothing fancy is expected; you're just building a blog site. This wouldn't be a big deal to start and design as you go (still not recommended).\n\nOn the other hand, let's say you are given a project from a cloud provider with hundreds of requirements to keep up with. What happens when you just start without designing anything? Let me predict: after an hour, life is good and birds are singing; after a day, something is wrong but you have no idea what; and after a week, you are sitting helpless in front of a screen full of red errors. This comes from experience :) don't take it personally.\n\n---\n\n### Real Story:\n\nI was expected to build an application in a producer-consumer context. At first I thought I could do it in just a week. It took... months, more than I can remember :). What I did wrong was making the business logic overly complex, which is forgivable given it was my first experience.\n\n**If I had to do it today, I would:**\n\n- Start with a basic producer-consumer flow \n- Build the simplest working models \n- Add complexity only when needed \n- Keep controllers focused on routing logic \n- Separate view concerns completely \n\n**Instead, I had:**\n\n- Huge controllers doing everything \n- Business logic scattered everywhere \n- Views that knew too much about the system \n\nAfter learning that design matters, my days were more fun than ever. I knew what to expect from the application, and the boundaries were very clear. This way you can also predict the project's due date better.\n\n---\n\n!(https://static.atakangul.com/uploads/f855b0da785f5472ee876a60a.png)\n\n## **Getting Complex... 
(DDD)**\n\nSo far, I was really crawling, not even walking yet. So, it was time to build more complex applications. As I was working on a new project called `linux-diagnostic-agent`, I really felt the urge for a different design than MVC. This was the point where I learned Domain Driven Design (DDD).\n\nLet's dive into a real example. In the `linux-diagnostic-agent` project, we needed to:\n\n1. Collect system logs \n2. Monitor network metrics \n3. Maintain secure tunnels \n4. Handle agent configuration \n\nEach of these components had its own complex logic and requirements. Trying to fit this into MVC would have been a mess. Instead, DDD helped me organize it like this:\n\nEach domain (logs, network, tunnel) lives in its own space with:\n\n- Clear boundaries \n- Independent logic \n- Specific requirements \n\nThe beauty of this approach? When we needed to add new features or fix bugs:\n\n- Changes in log collection didn't affect network monitoring \n- Network updates didn't break the tunnel \n- Each piece could evolve independently \n\nThis structure not only made the code cleaner but also made it easier for different team members to work on different domains without stepping on each other's toes.\n\nDDD isn't just for huge enterprises - even in smaller projects like this, it helps manage complexity by giving each piece of logic its proper home.\n\nI have really adopted this approach in my daily thinking because it's effective even for everyday tasks: let each task have its own processor, and each will resolve in an isolated environment; it's just a matter of time.\n\n---\n\n## **Last Words**\n\nWe all have different stories when starting something new. 
My journey from chaotic code to structured design taught me that patterns aren't just theoretical concepts; they're tools that make our daily work more enjoyable.\n\nFrom MVC to Domain Driven Design, the goal remains the same: building maintainable systems that let developers code with a smile rather than exhaustion.\n,description:Transform chaos into structured software development with design patterns. Learn from the journey shared in this insightful blog post!,search_keywords:Design Patterns, Software Development, Chaos to Structure, Software Design, Coding Journey, Refactoring, Startup Environment, Software Planning, Bug Fixing,AICreated:false,imageURL:https://static.atakangul.com/uploads/f855b0da785f5472ee876a60a.png,isProject:false,views:0,comments:,status:published,likes:9,publishedAt:2024-12-25T14:47:10.583Z,updatedAt:2025-06-04T22:36:59.351Z,createdAt:2024-12-25T14:47:10.584Z,__v:0,isTechnical:true},{_id:670a2638d5583e264bc591f2,url:kubernetes-control-data-plane,title:Kubernetes Control Plane and Data Plane Explained: Key Components \u0026 Automation,content:!Cluster Diagram(https://static.atakangul.com/uploads/image-1728718383960-779743709.png)\\n\\n# Kubernetes - Cluster Control Plane and Data Plane Components Explained\\n\\nKubernetes (short: k8s) is a container orchestration and management solution based on Google's Borg(https://en.wikipedia.org/wiki/Borg_(cluster_manager)) cluster manager.\\n\\n\u003e In the broader perspective, it is an orchestrator for compute and networking: it manages the underlying infrastructure and network configuration for containerized services through complex configuration steps.\\n\\n**In this guide you will get familiar with:**\\n\\n- K8s(https://kubernetes.io/) Architecture Overview\\n - Control Plane Components\\n - Data Plane Components\\n- Automation of Deployments\\n- The Most Important Thing\\n\\n## 1 - K8s Architecture Overview\\n\\nThe architecture consists of:\\n\\n1. 
**Control Plane**\\n - Scheduler\\n - API Server\\n - Controller Manager\\n - Etcd\\n2. **Data Plane**\\n - Worker Nodes\\n - Pods\\n\\n### 1.1 Control Plane\\n\\nThe control plane is responsible for keeping the cluster in its requested state. Each component has a unique responsibility in cluster state management.\\n\\n#### Scheduler\\n\\nResponsible for placing pods onto nodes. As new requests to create pods for a specific service reach the cluster, the API server records them and the scheduler selects a suitable worker node; the kubelet on that node then starts the pod.\\n\\n#### API Server\\n\\nREST API service in the control plane to retrieve and modify cluster state. Used by control plane components and clients. Accessible via the `kubectl` CLI tool or REST requests.\\n\\n#### Controller Manager\\n\\nRuns the control loops for critical cluster controllers:\\n- Node Controller\\n- Deployment Controller\\n- ReplicaSet Controller\\n- StatefulSet Controller\\n- DaemonSet Controller\\n- Job Controller\\n- CronJob Controller\\n\\nEach controller continuously compares the actual state with the desired state and takes corrective action (for example, replacing failed pods) to maintain cluster reliability.\\n\\n#### Etcd\\n\\nA distributed key-value store that keeps track of the cluster state: deployments, node info, pod info, etc. It is crucial for recovery and backup.\\n\\n### 1.2 Data Plane\\n\\n#### Worker Nodes\\n\\nA worker node can be a VM or physical machine that runs cluster workloads. More nodes mean more compute capacity.\\n\\n#### Pods\\n\\nThe smallest deployable unit in Kubernetes. A pod can run one or more containers and is ephemeral by default (no persistent storage unless volumes are mounted).\\n\\nKubernetes offers persistent volumes for long-lived data across pods.\\n\\nKey components on each node:\\n\\n- **Container Runtime**: Runs containers. 
Examples: Docker, containerd(https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd)\\n- **Kubelet**: Runs on each node, manages pod/container lifecycle and reports to the control plane.\\n- **Kube-proxy**: Manages network rules and service discovery/load balancing inside the cluster.\\n\\n## 2 - Automation of Deployments\\n\\nDeployments are managed via YAML files and CLI tools. These tools are critical for delivering applications to your cluster.\\n\\nThe cluster receives deployment requests through files like `deployment.yaml` that describe where the app binaries are and how they should be configured.\\n\\nControl plane persists this in `etcd`, then creates the pod.\\n\\nWhile this simplifies deployment logic, it does **not** track versions (which is critical in CI/CD).\\n\\n### 2.1 Helm\\n\\nHelm is the package manager for Kubernetes. It:\\n- Keeps track of app versions via Helm repos (e.g., GitHub)\\n- Wraps around `kubectl`\\n- Simplifies deployment and rollback\\n\\nInstead of writing new YAML files each time, you define charts and Helm handles version tracking and deployments.\\n\\n**For implementation details see:** Deploying application in a version controlled way.(https://atakangul.com/blogs/cloud-agnostic-ci-cd-pipeline)\\n\\n## 3 - The Most Important Thing\\n\\nEach Kubernetes setup is unique. Teams must be trained regularly and updates should be communicated clearly.\\n\\nTeam-wide understanding leads to better cluster usage, cost reduction, and higher efficiency.\\n\\n## Conclusion\\n\\n**Disclaimer**: written by Claude 3.5 Sonnet\\n\\nKubernetes may seem complex, but it is highly logical. It revolutionized container orchestration through its layered architecture.\\n\\nControl Plane and Data Plane components work together to keep the cluster healthy and functional.\\n\\nThe real power of Kubernetes lies in how teams **use** it. 
Regular education and documentation improve results dramatically.\\n\\nKubernetes evolves with your needs. Learn it, use it wisely, and let it scale with you.\\n\\n#### Continue Reading\\n\\n- Kubernetes deployments(https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)\\n- Helm Overview(https://helm.sh/docs/)\\n- Kubernetes Monitoring(https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)\\n\\nIf you enjoy my posts, consider subscribing to my newsletter(https://atakangul.com/). Drop a comment and help me grow. Thanks for reading!,description:In this post, I break down the core components of Kubernetes control and data planes, focusing on how it manages clusters, automates deployments, and ensures system health using tools like etcd, the API server, and Helm. It’s a straightforward look at Kubernetes architecture for anyone wanting to deepen their understanding.,search_keywords:Kubernetes control plane, Kubernetes data plane, Kubernetes architecture, Kubernetes components, control plane vs data plane, Kubernetes automation, Kubernetes scheduler, Kubernetes etcd, Kubernetes API server, Kubernetes Helm deployments, Kubernetes cluster management, Kubernetes CI/CD, container orchestration,AICreated:false,imageURL:https://static.atakangul.com/uploads/35f0561d-bfb8-47f6-b6fd-a23bdec2c578.png,isProject:false,views:326,comments:67574c78366897bb522793be,67651f07463efc1da9e08f45,67651f09463efc1da9e08f48,status:published,likes:10,publishedAt:2024-10-12T07:33:12.147Z,updatedAt:2025-05-18T12:14:41.699Z,createdAt:2024-10-12T07:33:12.148Z,__v:0,isTechnical:true},{_id:6702c0e29d5737d957d3ae1a,url:mesh-networks-boost-connectivity-reliability,title:Mesh Networks Explained: Boosting Connectivity, Coverage, and Reliability,content:!(https://static.atakangul.com/uploads/image-1728220874185-797258161.png)\n\n# What is a Mesh Network?\n\nA mesh network is a network topology that enables network devices to talk to each other 
across various distances. This topology is used in homes, industrial environments, and large areas to blanket the space with network signal and provide internet to connected devices.\n\nIn a mesh network, devices work together to pass along data, which helps extend the network's reach. This setup is useful in places where a single router might not cover the whole area, like large homes or offices with thick walls.\n\n---\n\n!(https://static.atakangul.com/uploads/image-1728223183743-289703528.png)\n\n## What does it look like?\n\nIn general, mesh networks are built from nodes. These nodes are separate, purpose-built routers that come with pre-installed software. This software ensures that the network is configured and the nodes can communicate with each other.\n\nA key advantage of a mesh network is its flexibility in node placement. Node locations can be decided based on the user's preferences. This flexible placement helps provide internet to areas the main router's Wi-Fi signal can't reach effectively.\n\n---\n\n!(https://static.atakangul.com/uploads/image-1728233619654-877996396.png)\n\n## Self Healing\n\nIn a big mesh network, maintenance can be a significant challenge for network engineers. When one node goes down, the whole network can be affected by the downtime. If the failed node is the only path to the gateway, it could potentially bring down the entire network until that node is restored.\n\nHowever, these node devices typically come with self-healing software. This software ensures that data can reach the gateway through at least one node, even if others fail. The self-healing is achieved through algorithms that automatically reconfigure the network paths. 
These algorithms establish multiple routes to the gateway, so if one node fails, the network can quickly adapt and use alternative paths.\n\nThis self-healing capability significantly improves the reliability and resilience of mesh networks, reducing the impact of individual node failures on overall network performance.\n\n---\n\n## Potential Headaches\n\nThe main challenge is configuring these nodes to communicate with each other and, ultimately, with the main gateway. However, today's technology enables seamless configuration between nodes. Adding a new node is often as simple as opening a mobile application (**which varies by vendor**) and scanning the network to discover it.\n\nAnother potential issue is that some nodes may have limited range, which could require network engineers to add more nodes. As you might expect, this additional hardware comes with increased costs.\n\nOn the other hand, many of these nodes now come with 5 GHz wireless capabilities, which can provide higher-bandwidth connections and help address coverage issues. Speed matters too: many of these devices support connection speeds of up to **5 Gbit/s** when connected through the Ethernet port, though actual speeds vary depending on various factors.\n\n---\n\n## Use Cases\n\nA mesh network can be utilized in various scenarios:\n\n1. **Home Network**: If the house is big, the main Wi-Fi signal may not be sufficient, creating dead zones. A mesh network can extend the signal into those dead zones. \n2. **IoT Devices**: In rural areas, farmers and other operators use sensors to obtain information about their business, which requires internet connectivity for those sensors. Implementing a mesh network brings connectivity to them. \n3. 
**Disaster Recovery**: When a natural disaster damages the network infrastructure, deploying a temporary mesh network can be the fastest solution. \n4. **Schools**: Since school campuses are large, the overall area is too big to cover with one router. A mesh network can cover such distances without issue, providing internet to the whole school.\n\n---\n\n## Monitoring\n\nMesh topology is highly beneficial for providing internet in dead zones or areas with limited connectivity, but it also helps with network monitoring. These nodes come with their own configurable interfaces where the administrator can decide which services each node can access and which it can't.\n\nThis feature adds an extra layer of control and security to the network. Administrators can:\n\n1. Manage traffic: Prioritize certain types of data or limit bandwidth for specific services. \n2. Implement security policies: Block access to potentially harmful or inappropriate content across the entire network. \n3. Monitor usage: Gain insights into network usage patterns, helping to optimize performance and identify potential issues before they become problems. \n4. Customize access: Set up different access levels for various user groups or devices connected to the network. \n5. Troubleshoot efficiently: Pinpoint issues to specific nodes, making it easier to diagnose and resolve network problems.\n\nThis level of control and visibility across the entire network is particularly valuable in larger deployments, such as enterprise environments or smart city initiatives, where managing network security and performance is crucial.\n\n---\n\n## Products to Look At\n\nWhen it comes to implementing mesh networks, several products stand out. Here's an overview of some of the best mesh systems available:\n\n1. **Google Nest Wifi**: Known for its easy setup and integration with Google's ecosystem. 
It offers good coverage and includes smart speakers in its satellite units. \n2. **ASUS ZenWiFi AX (XT8)**: This high-end system provides excellent Wi-Fi 6 performance and coverage. It's particularly good for larger homes and offers robust security features. \n3. **Amazon eero Pro 6**: Offers tri-band Wi-Fi 6 connectivity, easy setup through a smartphone app, and integrates well with other Amazon smart home devices. \n4. **Netgear Orbi**: Known for its high-performance systems, especially in larger homes. The Orbi line includes various models, including Wi-Fi 6 options. \n5. **TP-Link Deco**: Offers a range of affordable mesh systems with good performance. The Deco X20 and X60 are popular Wi-Fi 6 options. \n6. **Linksys Velop**: Provides reliable coverage and easy setup. The MX10 Velop AX system offers Wi-Fi 6 connectivity for faster speeds. \n7. **Ubiquiti AmpliFi**: Popular among tech enthusiasts, offering advanced features and customization options. The AmpliFi Alien is their high-end Wi-Fi 6 system.\n\nWhen choosing a mesh network system, consider factors such as:\n\n- The size of the area you need to cover \n- The number of devices you'll be connecting \n- Whether you need Wi-Fi 6 capabilities \n- Your budget \n- Any specific features you need \n\nRemember, the "best" system depends on your specific needs and environment. It's a good idea to read recent reviews before making a decision.\n\n---\n\n## Conclusion\n\nBriefly, a mesh network is an extended local area network that brings wider network accessibility. This topology doesn't just extend network access; it also provides self-healing, flexible node placement, and better network monitoring. Mesh networks fix connectivity problems in many situations: big houses, schools, IoT setups, and disaster recovery.\n\nWhile setup can be tricky and sometimes costly, new mesh systems are easy to configure and offer fast connections. Mesh networks can adapt to different places and give strong, far-reaching coverage. 
They're a big step forward in network technology, especially in a world where everything is getting more connected.\n,description:A mesh network enhances Wi-Fi coverage and reliability by allowing multiple devices to communicate seamlessly, eliminating dead zones in large homes and offices. With self-healing capabilities and flexible node placement, mesh networks ensure consistent internet access across extended areas. Discover how mesh technology improves network performance for both residential and commercial use.,search_keywords:Mesh network, Wi-Fi coverage, Reliable internet, Network topology, Home network solutions, Self-healing networks, Mesh Wi-Fi systems, Internet connectivity, Flexible node placement, Large area networking, Wi-Fi dead zones, Smart home technology, Mesh network benefits, IoT connectivity, Commercial network solutions,AICreated:false,imageURL:https://static.atakangul.com/uploads/edbf3286-5b57-4a73-9766-53a80a2a8dc9.png,isProject:true,views:295,comments:676b18f0900df2a77e57a178,status:published,likes:4,publishedAt:2024-10-06T16:54:58.292Z,updatedAt:2025-09-17T13:27:51.058Z,createdAt:2024-10-06T16:54:58.292Z,__v:0,isTechnical:false},{_id:66f2a98888b3df0c85e45c0a,url:dapr-simplifies-microservices,title:Simplify Microservice Development with Dapr: Code Abstraction Made Easy,content:!(https://static.atakangul.com/uploads/image-1727178092280-667606963.PNG)\n\n# Simplifying Microservice Development with Dapr\n\nDapr is an abstraction layer that sits between your microservices' core logic and external services; it keeps your core code agnostic and clean by handling the connection functionality for you.\n\nWhen you work with microservices, communication between them can quickly become complicated. 
Dapr (Distributed Application Runtime) was created to solve this problem.\n\nIt simplifies microservice development by handling application communication, letting you focus on business logic instead of the glue that holds everything together.\n\n!(https://static.atakangul.com/uploads/image-1727178176382-480137253.PNG)\n\n---\n\n## Abstraction at Its Core\n\nDapr provides a layer of abstraction over service communication, so you don’t need to constantly update your code or worry about version control for the communication logic. It takes care of the low-level details, like which protocol to use (it supports both gRPC and HTTP) and automatically converts between them without any manual configuration.\n\n---\n\n## Bindings: Code Agnostic and Easier\n\nOne of the standout features of Dapr is how it uses bindings to handle sending and receiving messages. Bindings abstract away even more code, making your services more agnostic and less dependent on specific implementations. With bindings, you don’t have to constantly restart your services or rewrite code to fit new communication patterns. It’s a "set it and forget it" approach, which is especially helpful when scaling.\n\n!(https://static.atakangul.com/uploads/image-1727178284838-570548446.PNG)\n\n---\n\n## No More Message Broker Headaches\n\nDealing with message brokers can be a real pain, especially when you need them to reliably pass messages between services. Dapr handles all of that for you. Whether you’re using **Kafka, RabbitMQ**, or another system, Dapr abstracts it away, giving you a simple API for communication. You don’t need to manage the specifics—Dapr works with your targets and handles message delivery.\n\n---\n\n## Built-In Network Tracing\n\nDapr’s built-in network tracing is another valuable feature. It tracks service-to-service communication by automatically attaching headers to requests, ensuring they reach the right destination. 
This allows you to monitor your microservice traffic easily without manually setting up tracing.\n\nIf backend tracing is set up in the network, this data can be visualized in tracking systems such as:\n\n1. **Jaeger**: A tool for monitoring and troubleshooting microservices. \n2. **Zipkin**: A system that collects timing data to identify latency issues. \n3. **Prometheus**: Primarily for metrics but can be used with tracing data. \n4. **Elastic APM**: Monitors application performance and visualizes traces. \n5. **Grafana**: Creates dashboards to visualize tracing data. \n6. **OpenTelemetry**: Collects trace data for use with various tools. \n7. **Honeycomb**: Provides real-time tracing and visualization of service interactions. \n\n---\n\n## Secret Management\n\nIn addition to communication and bindings, Dapr supports secret management. You can store sensitive data securely using Dapr-supported secret stores like Azure Key Vault or AWS Secrets Manager. This eliminates the need to hardcode sensitive information in your services, keeping your app secure by design.\n\n---\n\n## State Management Simplified\n\nAnother powerful feature is Dapr’s state management. Managing state in a distributed system can be tricky, but Dapr provides simple APIs to work with state stores like Redis or Cosmos DB. Whether you’re handling caching, session data, or other stateful interactions, Dapr makes it seamless, reducing complexity in your code.\n\n---\n\n## Cross-Platform Flexibility\n\nOne of Dapr’s strengths is its cross-platform capability. While it integrates deeply with Kubernetes, you’re not limited to that environment. Dapr can run on any platform, whether on-premises, in the cloud, or even at the edge. This flexibility ensures Dapr can adapt to various environments without locking you into a single solution.\n\n---\n\n## Sidecars in Kubernetes: The Magic Behind Dapr\n\nThe key to how Dapr works is the sidecar pattern. 
Dapr runs a sidecar next to each service within a Kubernetes pod, enabling all the powerful features like communication handling, tracing, and secret management. You don’t have to modify your application code—simply connect to the Dapr sidecar, and everything is taken care of.\n\nSetting it up is easy. When deploying to Kubernetes, you install Dapr using the Dapr CLI. For each service, you add annotations to the Kubernetes deployment YAML file, and Dapr automatically starts the sidecars, connecting them to your services.\n\n---\n\n## Conclusion\n\nDapr isn’t just a tool for microservice communication—it’s an entire platform for simplifying distributed systems. From automatic protocol conversions and network tracing to state management, bindings, and secret handling, it takes care of the heavy lifting.\n\nUsing sidecars, Dapr integrates smoothly into your Kubernetes setup, ensuring your microservices communicate effectively with minimal effort.\n\nSee the [official Dapr documentation](https://docs.dapr.io/) to get started.\n,description:This blog post discusses how Dapr simplifies microservice development through code abstraction, allowing developers to focus on business logic instead of communication complexities. 
It highlights Dapr's features that enhance scalability and resilience in applications.,search_keywords:Dapr, microservices, distributed application runtime, service communication, bindings, message broker, network tracing, secret management, state management, Kubernetes, sidecar pattern, cross-platform, application integration, protocol conversion, cloud-native, resilience, scalability, monitoring, observability, API management.,AICreated:false,imageURL:https://static.atakangul.com/uploads/735f936c-369b-431c-a8ee-5edad5693ef0.png,isProject:true,views:235,comments:66f2c1b988b3df0c85e45d1e,status:published,likes:2,publishedAt:2024-09-24T11:59:04.923Z,updatedAt:2024-12-16T15:03:24.161Z,createdAt:2024-09-24T11:59:04.924Z,__v:1,isTechnical:false},{_id:66e41bbe1fea6eb781499f44,url:cloud-agnostic-ci-cd-pipeline,title:Building a Cloud Agnostic CI/CD Pipeline with Terraform and Kubernetes,content:![](https://static.atakangul.com/uploads/image-1726225334970-943544415.PNG)\n\n# Cloud Agnostic CI/CD Pipeline and Environment\n\nIn today's world, a portable environment with proper version control is key to a healthy production setup. In this article, we will discuss the advantages of cloud-agnostic design and how it can be implemented using **Terraform, Helm and Kubernetes**.\n\n---\n\n> **NOTE** *Agnosticism is the belief that the existence of God or the divine is unknown or unknowable. It holds that it's impossible to prove or disprove the existence of deities.*\n\n---\n\nLet's say your company uses various services from its preferred cloud provider, and the cloud team set up a production environment using that provider's cloud-specific solutions. 
What happens when the company decides to move to a completely different cloud environment instead?\n\nLet's say you used Azure Container Apps to serve your applications and Cosmos DB or similar services for storage. What are the corresponding services in AWS? Perhaps **ECS (Elastic Container Service) for serving applications, and DynamoDB for a NoSQL database similar to Cosmos DB**, with S3 for storage.\n\nBecause those services are cloud-specific, you would need to re-implement the same logic in a completely different cloud environment, in this case moving from Azure to AWS.\n\n---\n\n## Avoiding Waste with Portable Design\n\nWhat would be the logical approach to avoid such waste? You may already be thinking of the answer: why not use a service that works on all cloud providers and can be configured with code, so we can keep the infrastructure and version state wherever we want?\n\nThen let's use VMs: they are available on every cloud provider, and we can simply deploy applications with tools like Ansible. Good try. But what happens when the VM cannot handle the load? YES, scaling is needed. Do VMs support scaling? Actually YES, but personally I don't trust any VMs :).\n\nUsing VMs can be an option, but there’s a big question: how do they handle service discovery when they scale? To manage this, you'd need a service discovery tool like Consul from HashiCorp. But here’s the catch: setting up and managing such tools can get pretty complex and require a lot of engineering effort.\n\nAs a company owner, I’d be wary of diving into a setup that’s both intricate and challenging to manage. Why? Because while a more complex environment might offer flexibility, it can also consume a lot of engineering resources. On the flip side, a simpler setup might lead to higher operational costs. Balancing complexity and cost is key to maintaining a stable and efficient infrastructure.\n\n---\n\n## Cloud Agnostic Environment\n\n**What? 
I thought we were building a CI/CD pipeline.**\n\nYes, we are. However, to create a cloud-agnostic CI/CD pipeline, you also need to make your environment agnostic to specific cloud services. This will become more apparent when we get to the CD (release) stage.\n\n**Hmm, what does that actually mean?**\n\nIn simple terms, it means using services that are available across all cloud platforms, avoiding vendor lock-in. By adopting this approach, you can focus on application development rather than being tied down by infrastructure choices.\n\n**So what services should we use?**\n\nIf we agree to avoid cloud-specific services, VMs are still a good option. The key is to use an orchestrator that continuously monitors and manages the health of the environment, ensuring it runs smoothly.\n\nAs you may have gathered, I don't like unnecessary risks. Therefore, let's just consider Kubernetes (K8s). **BUT...** I didn't say it would be cheap.\n\nKubernetes is a powerful orchestrator that can manage containers across different environments, making it a great fit for cloud-agnostic strategies.\n\nHowever, the trade-off here is cost—both in terms of infrastructure and the expertise needed to set it up and maintain it. You’ll need skilled engineers to manage Kubernetes clusters, ensure scaling, handle updates, and troubleshoot issues.\n\nThe upside? Once it's set up, Kubernetes will give you the flexibility to move between cloud providers with minimal hassle, avoiding vendor lock-in and making your infrastructure more resilient in the long run. It’s an investment in reliability and future-proofing, but it comes with a price tag—whether that’s in terms of resources or personnel. So while it may not be the cheapest option, it's certainly one that provides peace of mind when scaling your operations.\n\n![](https://static.atakangul.com/uploads/image-1726215745341-927164961.PNG)\n\nGreat, now we have a portable environment that can run on any cloud provider. 
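To illustrate the portability argument, here is a minimal, hypothetical Kubernetes Deployment (the names and image are placeholders, not taken from the repository); the same manifest runs unchanged on AKS, EKS, GKE, or a self-managed cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3            # the orchestrator keeps three healthy copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: your-registry/web-app:0.1.0   # placeholder image
          ports:
            - containerPort: 8080
```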
**Hmm, I think it's time to consider how to keep the state of the infrastructure. Why don’t we use Terraform?**\n\n![](https://static.atakangul.com/uploads/image-1726215865410-843696233.PNG)\n\nTerraform is a cloud-agnostic Infrastructure as Code (IaC) tool that allows you to define and manage your infrastructure across multiple cloud providers using a single configuration language. By using Terraform, you can maintain a consistent and version-controlled state of your infrastructure, automate provisioning and updates, and ensure that your setup is reproducible and maintainable. This aligns perfectly with our goal of a cloud-agnostic setup, providing flexibility and control over your infrastructure while avoiding vendor lock-in.\n\n---\n\n## [`KubernetesInfra` GitHub Repository](https://github.com/AtakanG7/KubernetesInfra/tree/main)\n\n![](https://static.atakangul.com/uploads/image-1726216206734-641221120.PNG)\n\nThe `main.tf` file is the entry point for the `terraform apply` command:\n\n```hcl\n# main.tf file\nmodule "kubernetes" {\n  source                  = "./modules/azure"\n  resource_group_name     = var.resource_group_name\n  kubernetes_cluster_name = var.kubernetes_cluster_name\n  location                = var.location\n  node_count              = var.node_count\n  vm_size                 = var.vm_size\n  ARM_CLIENT_ID           = var.ARM_CLIENT_ID\n  ARM_CLIENT_SECRET       = var.ARM_CLIENT_SECRET\n  ARM_TENANT_ID           = var.ARM_TENANT_ID\n  ARM_SUBSCRIPTION_ID     = var.ARM_SUBSCRIPTION_ID\n}\n```\n\n**However, this setup demonstrates the flexibility of a cloud-agnostic environment.** The core idea is that by abstracting infrastructure through Terraform modules and variables, you can easily switch to any other cloud provider.\n\nThe specific module for Azure can be replaced or configured similarly for AWS, Google Cloud, or any other cloud provider, thereby supporting a cloud-agnostic strategy. 
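For example, swapping providers could be as small as pointing the module block at a different source. This is a hypothetical sketch (the repository only ships an Azure module), assuming an AWS/EKS module with analogous input variables:

```hcl
# main.tf - hypothetical AWS variant of the same entry point
module "kubernetes" {
  source                  = "./modules/aws"
  kubernetes_cluster_name = var.kubernetes_cluster_name
  location                = var.location   # e.g. an AWS region such as eu-west-1
  node_count              = var.node_count
  vm_size                 = var.vm_size    # mapped to an EC2 instance type
}
```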
This approach ensures that your infrastructure definitions remain portable and adaptable, fitting seamlessly into different cloud environments as needed.\n\n---\n\n### Azure Module Overview\n\nLet's check the Azure module to see what resources we are creating and what logic we're after.\n\n```hcl\n# Abbreviated overview - resource bodies omitted\nprovider "azurerm" {\n  features {}\n}\n\nresource "azurerm_resource_group" "main" {}\n\nresource "azurerm_kubernetes_cluster" "aks" {\n  default_node_pool {}\n  identity {}\n}\n\nprovider "kubernetes" {}\n\nresource "kubernetes_namespace" "monitoring" {}\nresource "kubernetes_namespace" "production" {}\nresource "kubernetes_namespace" "staging" {}\n\nprovider "helm" {\n  kubernetes {}\n}\n\nresource "helm_release" "prometheus" {}\nresource "kubernetes_config_map" "alertmanager_config" {}\nresource "helm_release" "database" {}\nresource "helm_release" "web_app" {}\nresource "helm_release" "worker" {}\n\nresource "random_password" "grafana_admin_password" {}\n```\n\n---\n\n## Kubernetes Resources Overview\n\nThe Terraform configuration provisions:\n\n- **AKS Cluster**\n- **Kubernetes Namespaces** for `monitoring`, `production`, and `staging`\n- **Helm Charts** for Prometheus, web app, worker, and database\n- **Grafana Admin Password** generated securely\n\nEach namespace is logically separated, and each resource is Helm-managed for clear environment control.\n\n---\n\n## Helm for Versioning and Environment Management\n\n![](https://static.atakangul.com/uploads/image-1726223771383-670095416.PNG) \n![](https://static.atakangul.com/uploads/image-1726222781333-492444382.PNG)\n\n### GH-PAGES (Helm GitHub Repository)\n\nThis repository contains the Helm charts for all environments: production and staging.\n\nHelm chart folders like `/charts/web-app/` contain:\n\n- `deployment.yaml`\n- `values-staging.yaml`\n- `values-production.yaml`\n\nThis allows templating with variables:\n\n```yaml\nimage:\n  repository: your-repo\n  tag: 0.1.22\n```\n\nAnd Helm commands like:\n\n```bash\nhelm upgrade 
--install web-app ./charts/web-app -f ./charts/web-app/values-production.yaml\nhelm upgrade --install web-app ./charts/web-app -f ./charts/web-app/values-staging.yaml\n```\n\n---\n\n![](https://static.atakangul.com/uploads/image-1726217487482-839512376.PNG)\n\n# Creating a Cloud Agnostic CI/CD Pipeline\n\nA cloud-agnostic CI/CD pipeline ensures your build and deployment logic works across all providers.\n\nRead more here: [How to Setup CI/CD Pipeline Using Azure DevOps for AKS](https://atakangul.com/blogs/how-to-setup-cicd-pipeline-using-azure-devops-for-aks)\n\n## Pipeline Configuration and Team Management\n\n- **Azure DevOps** for full-scale team and repo management\n- **Jenkins** as a free alternative with flexible plugin support\n\n![](https://static.atakangul.com/uploads/image-1726224038114-1679746.PNG)\n\n---\n\n## Jenkins Pipeline Stages\n\nTools installed on an AWS free-tier instance:\n\n1. Docker\n2. Azure CLI\n3. `kubectl`\n4. Jenkins\n\n![](https://static.atakangul.com/uploads/image-1726217333159-281428603.PNG)\n\n### Stages:\n\n- Checkout application\n- Clone Helm charts repo\n- Update chart versions\n- Build + push Docker image\n\n![](https://static.atakangul.com/uploads/image-1726224286582-267393690.PNG)\n\n- Mirror prod → staging\n- Deploy to staging\n- Run tests\n- Wait for manual approval\n\n![](https://static.atakangul.com/uploads/image-1726224379618-882010765.PNG)\n\n- Push updated Helm chart\n- Cleanup staging\n- Deploy to production\n\nSee the full pipeline: [`KubernetesInfra/.jenkins`](https://github.com/AtakanG7/KubernetesInfra/blob/main/.jenkins/Jenkinsfile)\n\n---\n\n## Conclusion\n\nBy adopting a cloud-agnostic approach, you can ensure that your infrastructure and CI/CD pipeline are flexible and adaptable to any cloud provider. 
This strategy avoids vendor lock-in, making it easier to scale and manage your environment efficiently.\n\nMore info: \n[`KubernetesInfra/.jenkins`](https://github.com/AtakanG7/KubernetesInfra/blob/main/.jenkins/Jenkinsfile)\n\nThis approach provides a robust, scalable solution for managing deployments and infrastructure, offering peace of mind as you scale your operations across different cloud platforms.\n,description:Learn how to create a cloud-agnostic CI/CD pipeline using Terraform and Kubernetes. This approach ensures flexibility across various cloud providers while avoiding vendor lock-in. Discover how to manage your infrastructure and deployments seamlessly, regardless of the cloud environment.,search_keywords:Cloud Agnostic, CI/CD Pipeline, Terraform, Helm, Kubernetes, Infrastructure as Code (IaC), Vendor Lock-in, VMs (Virtual Machines), Service Discovery, Azure, AWS, Azure Container Applications, Cosmos DB, S3, ECS (Elastic Container Service), DynamoDB, Jenkins, Version Control, GitHub Actions,AICreated:false,imageURL:https://static.atakangul.com/uploads/ce115fd7-213b-4653-af31-bd05ffafb59f.png,isProject:true,views:400,comments:,status:published,likes:19,publishedAt:2024-09-13T11:02:22.060Z,updatedAt:2025-10-21T11:46:03.635Z,createdAt:2024-09-13T11:02:22.061Z,__v:0,isTechnical:true},{_id:66c771fb0dc9040799a3f395,url:nevotek-internship-ci-cd-experience,title:My First Week at Nevotek: Internship Insights and Experiences,content:![](https://static.atakangul.com/uploads/image-1724344111772-833481450.png)\n\n# **Implementing a CI/CD Pipeline with Azure DevOps and Jira: My First Week as an Intern**\n\nThis week marked the start of my internship, and I was immediately thrown into the deep end with a challenging project: creating a **CI/CD pipeline integrated with Jira using Azure DevOps.**\n\nIt was a week full of learning, problem-solving, and a few headaches along the way. 
Here's a breakdown of what I worked on, the challenges I faced, and the insights I gained.\n\n![](https://static.atakangul.com/uploads/image-1724344762027-659284872.png)\n\n## **Setting Up the Development Environment**\n\nI began by setting up my development environment on a Lenovo laptop. Coming from a Linux background, working on Windows required some adjustments, especially when it came to using the Windows command prompt. To maintain my preferred Linux workflow, I set up a virtual machine (VM) running Linux, which allowed me to run Linux commands and scripts while still leveraging the native Windows tools.\n\n![](https://static.atakangul.com/uploads/image-1724344742254-45585937.png)\n\n### Here's what my setup included:\n\n1. **Docker**: Installed on both Windows and the Linux VM, allowing for seamless container management across both environments.\n2. **Azure CLI**: Set up on both platforms, ensuring I could manage Azure resources from either environment as needed.\n3. **Visual Studio Code**: My go-to editor, enhanced with extensions for Docker, Azure, and remote development.\n\n## **Implementing a Local Agent**\n\nOne unexpected challenge I encountered was a limitation in my Azure for Students subscription, which didn’t allow me to use parallelism in Azure Pipelines. This was a significant roadblock because parallelism is essential for running multiple pipeline tasks concurrently.\n\nTo work around this, I had to implement a local agent to run DevOps tasks on a different machine. Setting up the local agent involved configuring it to communicate with Azure DevOps and ensuring it could execute the pipeline tasks just like a cloud-hosted agent. This solution enabled me to continue developing the pipeline without being hindered by the subscription limitations.\n\n## **The CI/CD Pipeline Architecture**\n\nThe main task of the week was designing and implementing a CI/CD pipeline with the following workflow:\n\n1. 
**Code commits to Azure Repos**: Developers push their code to the Azure Repos repository.\n2. **CI pipeline initiation**: After a review and merge to the master branch, the CI pipeline kicks off automatically.\n3. **Artifact creation**: The CI process builds an artifact ready for testing.\n4. **Blue-green deployment**: The artifact is deployed to a development environment in Azure Web Apps, following a blue-green deployment strategy.\n5. **Testing**: Linting and basic unit tests are run to ensure code quality.\n6. **Artifact push to ACR**: If tests pass, the artifact is pushed to Azure Container Registry (ACR).\n7. **Staging deployment**: ACR triggers the automatic deployment to the staging environment.\n8. **Jira issue creation**: The pipeline creates a Jira issue to notify the tester that the build is ready for testing.\n9. **Issue resolution**: Once the tester marks the Jira issue as resolved, the pipeline continues to the next stage.\n10. **Environment swap**: The CD part swaps the staging and production environments, completing the blue-green deployment.\n11. **Manager approval**: Before the changes go live in production, the deployment requires manager approval.\n\n![](https://static.atakangul.com/uploads/image-1724344802750-176109976.png)\n\n## **Technical Hurdles**\n\nSetting up this pipeline wasn't without its challenges. Here are a few of the technical hurdles I encountered:\n\n1. **Service Connections**: Configuring service connections and service principals in Azure was trickier than expected. It required digging deep into Azure's authentication mechanisms and setting the correct permissions to allow seamless integration between Azure DevOps, ACR, and Azure Web Apps.\n2. **Azure Web App Configuration**: Ensuring the Azure Web App exposed the correct port for the application involved a fair bit of trial and error. Debugging these issues was time-consuming but crucial for the pipeline's success.\n3. 
**Jira Integration**: Integrating the pipeline with Jira for automated issue creation turned out to be more complex than anticipated. The API integration required precise configuration to ensure the pipeline and Jira communicated effectively.\n\n![](https://static.atakangul.com/uploads/image-1724344855007-456969970.png)\n\n## **What I Learned**\n\nThis project provided a hands-on introduction to real-world DevOps practices. Key takeaways include:\n\n1. **Blue-Green Deployment Strategy**: Implementing a blue-green deployment strategy was a new experience for me. It's one thing to understand the theory, but actually setting it up gave me a much deeper understanding of its benefits and challenges.\n2. **Service Integration**: I learned the complexities involved in integrating multiple services into a cohesive CI/CD pipeline. Each component needs to be meticulously configured to ensure smooth communication and operation.\n3. **Local Agent Setup**: Working around the Azure for Students subscription limitation by setting up a local agent taught me how to adapt to constraints and find alternative solutions, a valuable skill in any DevOps role.\n\n## **Next Steps**\n\nThe pipeline is functional, but there's still work to be done. My next steps include optimizing the pipeline for performance, adding more comprehensive testing stages, and improving the Jira integration to provide more detailed and automated information.\n\nFor anyone working on similar projects, I recommend getting familiar with Azure’s authentication mechanisms and spending time understanding how the different services in your pipeline communicate. It’s these details that can make or break the smooth operation of a CI/CD pipeline.\n\n![](https://static.atakangul.com/uploads/image-1724346846489-551228793.png)\n,description:In my first week as an intern, I worked on building a CI/CD pipeline with Azure DevOps and Jira. 
It was a hands-on learning experience where I set up my tools, faced technical challenges, and gained practical skills in DevOps. Here's a brief overview of what I did and learned.,search_keywords:Internship, CI/CD pipeline, Azure DevOps, Jira, local agent, Docker, Azure CLI, Visual Studio Code, blue-green deployment, Azure Repos, Azure Container Registry (ACR), Azure Web Apps, service connections, authentication, service integration, DevOps practices, artifact creation, testing, environment swap, manager approval, technical hurdles.,AICreated:false,imageURL:https://static.atakangul.com/uploads/f760c17c-218b-4b64-8618-7f662f6d6464.png,isProject:true,views:348,comments:,status:published,likes:13,publishedAt:2024-08-22T17:14:35.314Z,updatedAt:2025-10-31T00:43:34.470Z,createdAt:2024-08-22T17:14:35.315Z,__v:0,isTechnical:true},{_id:66a9d31778806aa9b87e9599,url:logwatcher,title:LogWatcher: Simplifying Docker Image Monitoring with Open Source Software,content:![](https://static.atakangul.com/uploads/image-1722389557280-444687410.png) \nFigure 1 *- LogWatcher dashboard -*\n\nIn this article, the project created by Atakan G. is discussed, along with its pros and cons.\n\n# Introducing LogWatcher\n\nLogWatcher is an open-source software tool created by [Atakan G](https://www.atakangul.com/portfolio) to simplify application monitoring. 
The main purpose of this tool is to monitor and gain insights about a given [Docker image](https://www.techtarget.com/searchitoperations/definition/Docker-image) application.\n\n[Watch Video](https://www.youtube.com/embed/bLh7QsPede0?showinfo=0)\n\n## Getting Started\n\n1 - Clone the project to your local environment.\n```bash\ngit clone https://github.com/AtakanG7/logWatcher.git\n```\n\n2 - Run the starter script; it will check requirements and set up the environment.\n```bash\nsh logwatcher.sh your-docker-image-name\n```\n\n3 - Wait until the requirements are installed; a [Streamlit](https://streamlit.io/) interface will then pop up in front of you. You can start exploring through the interface.\n\n## 1-1 What is LogWatcher?\n\nManaging everything from the [CLI](https://aws.amazon.com/what-is/cli/) is really easy, but a [GUI](https://en.wikipedia.org/wiki/Graphical_user_interface) makes things really **simple**, and I like **simplicity**. Therefore, I needed such a tool to see the metrics of my application in real time and optimize resource usage accordingly.\n\n## Open Source Philosophy and Benefits\n\nI believe that as open-source quality increases and developers take the time to measure twice and cut once, overall code quality in the space will improve.\n\nOne of the best ways to convey your understanding to the community is by simply showing the capabilities you have. I never hide them; I think hiding them is stupid. **Nevermind**!\n\n## 1-2 Purpose and Benefits\n\nThis project is designed to be beneficial to developers and can be used as [internal company software](https://budibase.com/internal-tools/). 
LogWatcher is beneficial when it's used to:\n\n- Run quick and easy tests against ready-made [Docker images](https://docs.docker.com/reference/cli/docker/image/ls/).\n- Get an idea of the total resource usage of the application.\n- See error logs in real time.\n- Monitor over a period of time to see what alerts it creates.\n\nIn application monitoring, setting up such a system normally requires a lot of research and knowledge about [configuration management](https://en.wikipedia.org/wiki/Configuration_management).\n\nHowever, since LogWatcher abstracts away these details and comes with pre-configured monitoring tools (which will be discussed later), there is no need to manually configure and set up the [exporters or scrapers](https://prometheus.io/docs/instrumenting/exporters/).\n\nOverall, when you need quick results about your ready-made Docker images, this tool is a life saver.\n\n## 2-1 LogWatcher Architecture\n\nLogWatcher is built as a [microservice architecture](https://en.wikipedia.org/wiki/Microservices). Each service is independent of the others, connects only when needed, and has its own responsibilities.\n\nSince the services are [loosely coupled](https://www.techtarget.com/searchnetworking/definition/loose-coupling), each service can scale and change its capacity when required. This is a typical architecture for monitoring solutions, and more exporters can be added to gather information about the network and its devices.\n\n![](https://static.atakangul.com/uploads/image-1722393016583-272663628.png) \nFigure 2 *- Showcasing the LogWatcher system Architecture -*\n\nThe whole system lives on the same simple Docker network and communicates through the Docker network interfaces over the [HTTP/HTTPS protocol](https://www.cloudflare.com/learning/ssl/what-is-https/).\n\n## 2-2 Technology\n\nIn this section, the majority of the technology stack will be discussed. 
The GUI, however, will be discussed first.\n\nThe architecture depicts a typical system design for monitoring, and with this design it's easy to monitor any application node without additional services. However, the main point of this project is to create a user interface that gives insight into a Docker image immediately. No configuration needed.\n\nFor this reason, I chose a Python module called [Streamlit](https://streamlit.io/) for its simplicity and ease of creating such interfaces. Highly abstract modules enable fast interface creation and fast coding.\n\n### Prometheus\n\nPrometheus is an open-source metric scraper used to collect information from registered target network devices. Its fast information processing and query capabilities make it unique.\n\n### Grafana\n\nGrafana is also an open-source tool, providing a wide variety of graphical interfaces in the web browser using Prometheus data. Easy to use and easy to integrate with other solutions.\n\n### Alert Manager\n\nAlertmanager is an open-source tool that listens for alerts from Prometheus and takes action. These actions include email, Telegram, Discord, Slack, and so on.\n\n### Loki & Promtail\n\nThe Loki & Promtail stack is used to collect log information from the target container applications. The collected data is pipelined onward for later access.\n\n### Node Exporter & cAdvisor\n\nNode Exporter and cAdvisor are solutions for collecting information about the local system and the Docker containers.\n\n## 3-1 LogWatcher Features\n\nImportant features can be listed as:\n\n- Monitoring \n- Alert System \n- Benchmarking \n- Configuration Management \n- Real-time Logs \n\n![](https://static.atakangul.com/uploads/image-1722395943133-711927569.png) \nFigure 3 *- LogWatcher monitoring center illustrated -*\n\n### Monitoring\n\nLogWatcher utilizes the technologies mentioned above to gather information about the network and its devices. 
This enables it to create visually appealing dashboards and monitoring systems.\n\n### Alert System\n\nThe alert system listens for events from Prometheus and, in case of any problem, takes immediate action and sends a message to the technical person.\n\n### Benchmarking\n\nBenchmarking is another feature that makes this tool stand out. I believe testing on the fly is stressful for developers (I know). Knowing the results before deployment is crucial for career growth ha ha.\n\n![](https://static.atakangul.com/uploads/image-1722395804599-699041310.png) \nFigure 4 - *LogWatcher configuration management illustrated -*\n\n### Configuration Management\n\nConfiguring any system successfully requires in-depth knowledge of the field. For that reason, I created an interface to easily manipulate the configuration files from a user-friendly interface.\n\nWhen configuring a Docker container, there is a slight problem: you need to restart the container for the changes to take effect. Each container on the monitoring network is observed and visible through the interface.\n\nIf you create another container in the same network as the monitoring stack, it will be detected by the system, which automatically starts fetching metrics from it.\n\n![](https://static.atakangul.com/uploads/image-1722395855967-281576961.png) \nFigure 5 - *LogWatcher real-time log view center illustrated -*\n\n### Real-time Logs\n\nThis one is a little different. 
Normally, there is no need to set up the Loki & Promtail stack to gather logs; simply using the Python Docker module gives exactly the same result.\n\nHowever, with the plain Docker approach the logs are not processable by Prometheus, leaving unprocessed read-only data, which is a waste for such a monitoring system.\n\n![](https://static.atakangul.com/uploads/image-1722396208627-776862648.png) \nFigure 6 - *LogWatcher container management center illustrated -*\n\nFull access to the Docker containers from the dashboard, with easy manipulation, is enabled thanks to the Docker module.\n\n![](https://static.atakangul.com/uploads/image-1722400466909-30432399.png) \nFigure 7 - *Local system resource usage in Grafana -*\n\nThis screenshot depicts the pre-configured Node Exporter dashboard in Grafana. The system uses JSON and YAML templates to configure the whole setup. Especially for Grafana, there are many ready-made dashboard styles available at this [link](https://grafana.com/grafana/dashboards/).\n\n![](https://static.atakangul.com/uploads/image-1722400688696-499393314.png) \nFigure 8 - *Loki & Promtail log metrics in Grafana -*\n\nThis picture showcases the log data scraped from all containers on the monitoring Docker network. Each log is labeled and has a priority, so Prometheus and Grafana can take action according to the given priority.\n\n![](https://static.atakangul.com/uploads/image-1722401164853-358147280.png) \nFigure 9 *- cAdvisor interface showcase -*\n\nAs the name suggests, this is the cAdvisor dashboard. You can reach any container resource metric from this interface too.\n\ncAdvisor gathers Docker container information and sends it to Prometheus, as **Figure 2** depicts. cAdvisor is available at http://localhost:8080/containers/.\n\n## Conclusion\n\nBuilding such a project was genuinely self-teaching, and I would really like to see how people react to it. 
If the project gets attention, the next features will definitely be cloud-platform integration and easier integration with other tools.\n\nThank you for exploring LogWatcher with me, and I look forward to seeing how this tool evolves with the community’s input.\n\nBest regards, [Atakan G.](https://www.atakangul.com/portfolio)\n\nIf you like such projects and talks, I highly recommend subscribing to the newsletter. It's free.\n\n### Resources\n\nhttps://www.techtarget.com/searchnetworking/definition/loose-coupling \nhttps://en.wikipedia.org/wiki/Microservices \nhttps://prometheus.io/docs/instrumenting/exporters/ \nhttps://en.wikipedia.org/wiki/Configuration_management \nhttps://streamlit.io/ \nhttps://www.cloudflare.com/learning/ssl/what-is-https/ \nhttps://docs.docker.com/reference/cli/docker/image/ls/ \nhttps://budibase.com/internal-tools/ \nhttps://aws.amazon.com/what-is/cli/ \nhttps://en.wikipedia.org/wiki/Graphical_user_interface \nhttps://www.techtarget.com/searchitoperations/definition/Docker-image \nhttps://grafana.com/grafana/dashboards/ \n,description:Discover the LogWatcher project created by Atakan G, an open source software tool designed to simplify application monitoring for Docker images. Learn about the benefits, architecture, and technology stack of LogWatcher, which offers easy monitoring and optimization of resource usage. 
# LogWatcher: Simplifying Docker Image Monitoring with Open Source Software

![](http://www.atakangul.com/uploads/2024/07/31/image-1722389557280-444687410.png)

Figure 1 *- LogWatcher dashboard -*

In this article, the project created by Atakan G. is discussed, and its pros and cons are laid out.

# Introducing LogWatcher

LogWatcher is an open source tool created by [Atakan G.](https://www.atakangul.com/portfolio) to simplify application monitoring. Its main purpose is to monitor and gain insights into a given [Docker image](https://www.techtarget.com/searchitoperations/definition/Docker-image) application.

[Demo video on YouTube](https://www.youtube.com/embed/bLh7QsPede0)

## Getting Started

NOTE: As of 31.07.2024, the AtakanG7 account is not publicly reachable (**flagged**).

1 - Clone the project to your local environment.

```
git clone https://github.com/AtakanG7/logWatcher.git
```

2 - Run the starter script; it checks the requirements and sets up the environment.

```
sh logwatcher.sh your-docker-image-name
```

3 - Wait until the requirements are installed; a [Streamlit](https://streamlit.io/) interface will then pop up in front of you.
You can start exploring through the interface.

## 1-1 What is LogWatcher?

Managing everything from the [CLI](https://aws.amazon.com/what-is/cli/) is easy, but a [GUI](https://en.wikipedia.org/wiki/Graphical_user_interface) makes things truly **simple**, and I like **simplicity**. I needed a tool like this to see my application's metrics in real time and optimize its resource usage accordingly.

## Open Source Philosophy and Benefits

I believe that as open source quality rises and developers take the time to measure twice and cut once, overall code quality across the space will improve.

One of the best ways to convey your understanding to the community is simply to show what you are capable of. I never hide my work; I think that would be foolish.

## 1-2 Purpose and Benefits

This project is designed to benefit developers and can be used as [internal company software](https://budibase.com/internal-tools/). LogWatcher is useful when you want to:

- Run quick and easy tests against ready [Docker images](https://docs.docker.com/reference/cli/docker/image/ls/).
- Get an idea of the application's total resource usage.
- See error logs in real time.
- Monitor over a period of time and see which alerts it raises.

Setting up an application-monitoring system normally requires a lot of research and knowledge about [configuration management](https://en.wikipedia.org/wiki/Configuration_management).

However, since LogWatcher abstracts these details away and ships with pre-configured monitoring tools (discussed below), there is no need to manually configure the [exporters or scrapers](https://prometheus.io/docs/instrumenting/exporters/).

Overall, when you need quick results about a ready Docker image, this tool is a lifesaver.

## 2-1 LogWatcher Architecture

LogWatcher is a [microservice architecture](https://en.wikipedia.org/wiki/Microservices): each service is independent of the others, connects only when needed, and has its own responsibilities.

Since the services are [loosely coupled](https://www.techtarget.com/searchnetworking/definition/loose-coupling), each one can scale and change capacity as required. This is a typical architecture for monitoring solutions, and more exporters can be added to gather information about the network and its devices.

![](http://www.atakangul.com/uploads/2024/07/31/image-1722393016583-272663628.png)

Figure 2 *- Showcasing the LogWatcher system architecture -*

The whole system lives on the same Docker network and communicates over it using the [HTTP/HTTPS protocol](https://www.cloudflare.com/learning/ssl/what-is-https/).

## 2-2 Technology

This section covers the majority of the technology stack; the GUI comes first.

The architecture depicts a typical monitoring system design, and with it, it is easy to monitor any application node without additional services.
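In a design like this, the scraper only needs each exporter's address on the shared Docker network. A minimal, illustrative `prometheus.yml` fragment, where the job and host names are assumptions rather than LogWatcher's actual configuration:

```yaml
# prometheus.yml - illustrative scrape config for a shared Docker network
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]   # host-level metrics
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]        # per-container metrics
```

Because containers on the same Docker network resolve each other by service name, no host IPs or extra service discovery are needed for a setup this small.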
However, the main point of this project is a user interface that gives immediate insight into a Docker image. No configuration needed.

For this reason, I chose a Python module called [Streamlit](https://streamlit.io/) for its simplicity and the ease of creating such interfaces. Its highly abstract modules enable fast interface creation and fast coding.

### Prometheus

Prometheus is an open source metrics scraper that collects information from its registered targets. Its fast data processing and query capabilities make it unique.

### Grafana

Grafana is also open source and provides a wide variety of graphical interfaces in the browser on top of Prometheus data. It is easy to use and easy to integrate with other solutions.

### Alert Manager

Alertmanager is an open source tool that listens for alerts from Prometheus and takes action: email, Telegram, Discord, Slack, and so on.

### Loki & Promtail

The Loki and Promtail stack collects log information from the target containers. Promtail ships the collected logs to Loki, where they are stored for later access.

### Node Exporter & Cadvisor

Node Exporter and cAdvisor collect information about the local system and the Docker containers, respectively.

## 3-1 LogWatcher Features

The important features are:

- Monitoring
- Alert System
- Benchmarking
- Configuration Management
- Real-time Logs

![](http://www.atakangul.com/uploads/2024/07/31/image-1722395943133-711927569.png)

Figure 3 *- LogWatcher monitoring center illustrated -*

### Monitoring

LogWatcher uses the technologies above to gather information about the network and its devices. This enables it to build visually appealing dashboards and monitoring views.

### Alert System

The alert system listens for events from Prometheus and, when a problem occurs, immediately takes action and notifies the technical staff.

### Benchmarking

Benchmarking is another feature that makes this tool stand out. Testing on the fly is stressful for developers (I know), and knowing the results before deployment is crucial for career growth, ha ha.

![](http://www.atakangul.com/uploads/2024/07/31/image-1722395804599-699041310.png)

Figure 4 - *LogWatcher configuration management illustrated -*

### Configuration Management

Configuring any of these systems successfully requires deep knowledge of the field. For that reason, I created a user-friendly interface for editing the configuration files.

There is one slight problem when configuring a Docker container: you have to restart the container for the changes to take effect.
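The configuration files managed this way are ordinary YAML; Alertmanager's routing file is a typical example. An illustrative fragment, where the receiver name and address are assumptions rather than LogWatcher's shipped defaults:

```yaml
# alertmanager.yml - illustrative routing config
route:
  receiver: email-oncall        # default receiver for all alerts
  group_by: [alertname]
  group_wait: 30s               # wait before sending the first notification
receivers:
  - name: email-oncall
    email_configs:
      - to: oncall@example.com  # hypothetical address
```

Edits to a file like this only take effect once Alertmanager's container is restarted or reloaded, which is exactly the restart problem described above.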
Each container on the monitoring network is observed and visible through the interface.

If you create another container on the same network as the monitoring stack, the system detects it and automatically starts fetching its metrics.

![](http://www.atakangul.com/uploads/2024/07/31/image-1722395855967-281576961.png)

Figure 5 - *LogWatcher real-time log view center illustrated -*

### Real-time Logs

This one is a little different. Strictly speaking, there is no need to set up the Loki & Promtail stack to gather logs; the Python docker module alone gives exactly the same raw output.

However, logs fetched that way are not processable by Prometheus, leaving unprocessed, read-only data, which is a waste for such a monitoring system.

![](http://www.atakangul.com/uploads/2024/07/31/image-1722396208627-776862648.png)

Figure 6 - *LogWatcher container management center illustrated -*

Full access to the Docker containers from the dashboard, including easy manipulation, is enabled by the docker module.

![](http://www.atakangul.com/uploads/2024/07/31/image-1722400466909-30432399.png)

Figure 7 - *Local system resource usage in Grafana -*

This screenshot shows the pre-configured Node Exporter dashboard in Grafana. The system uses JSON and YAML templates to configure the whole stack.
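The difference between raw and processable logs can be sketched in a few lines: raw `docker logs` output is plain text, while a Promtail-style pipeline attaches labels that downstream tools can filter on. A purely illustrative sketch; the label names and naive priority rule are assumptions, not LogWatcher's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class LabeledLog:
    """A raw log line wrapped with Promtail-style labels."""
    message: str
    labels: dict = field(default_factory=dict)


def label_log(container: str, raw: str) -> LabeledLog:
    """Attach container and priority labels to a raw Docker log line."""
    text = raw.rstrip()
    # Naive priority detection; real pipelines use configurable regex stages.
    priority = "error" if "error" in text.lower() else "info"
    return LabeledLog(message=text,
                      labels={"container": container, "priority": priority})


print(label_log("myapp", "ERROR: connection refused\n").labels["priority"])  # → error
```

Once every line carries labels like these, a store such as Loki can index and query them, which plain read-only `docker logs` output cannot offer.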
Grafana in particular has many ready-made dashboard styles available at [grafana.com/grafana/dashboards](https://grafana.com/grafana/dashboards/).

![](http://www.atakangul.com/uploads/2024/07/31/image-1722400688696-499393314.png)

Figure 8 - *Loki & Promtail log metrics in Grafana -*

This picture shows the log data scraped from all containers on the monitoring Docker network. Each log is labeled and has a priority, so Prometheus and Grafana can act on it accordingly.

![](http://www.atakangul.com/uploads/2024/07/31/image-1722401164853-358147280.png)

Figure 9 *- Cadvisor interface showcase -*

As the name suggests, this is the cAdvisor dashboard; you can reach any container's resource metrics from this interface as well.

cAdvisor gathers Docker container information and sends it to Prometheus, as **Figure 2** depicts. cAdvisor is available at http://localhost:8080/containers/.

## Conclusion

Building this project was genuinely self-teaching, and I would really like to see how people react to it. If the project gets attention, the next features will definitely be integration with cloud platforms and easier integration with other tools.

Thank you for exploring LogWatcher with me; I look forward to seeing how this tool evolves with the community's input.

Best regards, [Atakan G.](https://www.atakangul.com/portfolio)

If you like projects and talks like this, I highly recommend subscribing to the newsletter. It's free.

### Resources

- https://www.techtarget.com/searchnetworking/definition/loose-coupling
- https://en.wikipedia.org/wiki/Microservices
- https://prometheus.io/docs/instrumenting/exporters/
- https://en.wikipedia.org/wiki/Configuration_management
- https://streamlit.io/
- https://www.cloudflare.com/learning/ssl/what-is-https/
- https://docs.docker.com/reference/cli/docker/image/ls/
- https://budibase.com/internal-tools/
- https://aws.amazon.com/what-is/cli/
- https://en.wikipedia.org/wiki/Graphical_user_interface
- https://www.techtarget.com/searchitoperations/definition/Docker-image
- https://grafana.com/grafana/dashboards/
![](https://static.atakangul.com/uploads/image-1720822113935-42046526.jpg)

# Every Developer **Must** Know These **Free APIs**

> _In this article, I discuss the_ **_free_** _and_ **_usable APIs_** _that you will definitely want to use in your side projects._

### **Free API List:**

---

**1 - [Pixabay API](https://pixabay.com/api/docs/):** To fetch related images for AI-created blogs.
_Free Tier: 300 images per minute._
![](https://static.atakangul.com/uploads/image-1720822679961-984051655.png)

---

**2 - [MongoDB Atlas API](https://www.mongodb.com/):** To maintain blog posts and images.
_Free Tier: 512 MB storage per project._
![](https://static.atakangul.com/uploads/image-1720822772435-710762698.png)

---

**3 - [Redis API](https://redis.io/docs/latest/develop/get-started/data-store/):** To ensure fast retrieval of the most-accessed data.
_Free Tier: **30 MB RAM** per account._
![](https://static.atakangul.com/uploads/image-1720822947892-319571271.png)

---

**4 - [NEWS API](https://newsapi.org/):** To fetch the latest articles about X.
_Free Tier: **100 requests** per day._
![](https://static.atakangul.com/uploads/image-1720823583708-203779079.png)

---

**5 - [BREVO SMTP API](https://developers.brevo.com/):** For managing email notifications and updates.
_Free Tier: **300 emails** per day._
![](https://static.atakangul.com/uploads/image-1720823975507-839546645.png)

---

**6 - [Google OAuth2 API](https://developers.google.com/identity/protocols/oauth2):** To enable secure user authentication.
_Free Tier: Free._
![](https://static.atakangul.com/uploads/image-1720824152667-125664921.png)

---

**7 - [Telegram API](https://core.telegram.org/):** For sending logs to the maintainers of the site.
_Free Tier: **no cost**._
![](https://static.atakangul.com/uploads/image-1720824526024-977966237.png)

---

**8 - [Discord API](https://discord.com/developers/docs/intro):** For creating bots.
_Free Tier: **no cost**._
![](https://static.atakangul.com/uploads/image-1720824466295-571045846.png)

---

There are many more free APIs for developers. I encourage you to simply google [free APIs for developers](https://www.google.com/search?q=free+apis+for+developers).
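As a quick illustration of how simple these APIs are to call, here is a hedged sketch for the Pixabay API: per its docs, an image search is a single GET request taking the API key and query as parameters. The key below is a placeholder, and `per_page` is one optional parameter among several:

```python
from urllib.parse import urlencode

PIXABAY_ENDPOINT = "https://pixabay.com/api/"


def pixabay_search_url(api_key: str, query: str, per_page: int = 10) -> str:
    """Build a Pixabay image-search URL from its documented GET parameters."""
    params = {"key": api_key, "q": query, "per_page": per_page}
    return f"{PIXABAY_ENDPOINT}?{urlencode(params)}"


# With a real key, fetch the URL with urllib.request.urlopen() or requests.get();
# the response is JSON with a "hits" array of image records.
print(pixabay_search_url("YOUR_API_KEY", "kubernetes"))
```

Most of the other APIs on the list follow the same pattern: a base endpoint, a key, and a handful of query parameters, which is what makes them so convenient for side projects.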