MariaDB – MySQL server does not start on macOS – kill – No such process

Recently I had to install the MariaDB server on macOS. I followed the installation instructions from the site, and once the installation was over I was able to connect using the mysql client on the command line. The next day I was unable to start the server. I tried the following options:

mysql.server start

brew services start mariadb

brew services restart mariadb

With brew services, the service showed a stopped status whenever I listed the services, and mysql.server start gave the following message:

/usr/local/bin/mysql.server: line 264: kill: (4542) – No such process

I tried with sudo as well, but that did not help. The fix that worked is the one mentioned in the gist linked below: we need to delete the InnoDB log files, as follows.

  • Stop MySQL / MariaDB.
  • Go to /usr/local/var/mysql.
  • Delete the ib_logfile0 and ib_logfile1 files.
  • Start the server again; it should work.
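The steps above can be sketched as a short command sequence. The data directory path assumes a default Homebrew install; verify that /usr/local/var/mysql is really your data directory before deleting anything:

```shell
# Stop the server first (use whichever matches how it was started).
mysql.server stop
# brew services stop mariadb

# Remove the stale InnoDB redo logs; MariaDB recreates them on startup.
cd /usr/local/var/mysql
rm ib_logfile0 ib_logfile1

# Start again.
mysql.server start
# brew services start mariadb
```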

Thanks to the guys who put this together. https://gist.github.com/irazasyed/a74766108b4630fc5c7c822df23526e8

React Native, Firebase – Android Gradle error – cannot find symbol: return BuildConfig.DEBUG, or No matching client found

Whenever we build a React Native application with the Firebase module and use messaging, we follow these steps:

  • Set up a package in the Android source code.
  • Use the same package name to create an application in the Firebase console.
  • Download google-services.json and place it in the android/app folder.
  • Ensure that applicationId in the build.gradle defaultConfig has the right package name (this is for Gradle version 4.2.2).
  • The same package name must be present in AndroidManifest.xml.
  • The MainActivity.java and MainApplication.java package names have to be the same as the one mentioned in AndroidManifest.xml, build.gradle and google-services.json.
  • The files MainActivity.java and MainApplication.java require the same folder structure as mentioned in the package.
  • Ensure that React Native is started with the --reset-cache option.

npm start -- --reset-cache

We can also try removing node_modules and doing an npm install again.

rm -rf node_modules
npm install

The main idea is to keep the package name consistent across AndroidManifest.xml, build.gradle, google-services.json, the Java source references and the folder structure.
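As a sanity check, a small shell function can flag which files are missing the expected package name. The function name `check_pkg`, the package name and the file paths in the usage comment are illustrative assumptions (the paths follow the usual React Native Android layout):

```shell
# check_pkg PKG FILE... : report files that do not mention the package name.
# Returns non-zero if any file is missing it.
check_pkg() {
  pkg="$1"; shift
  status=0
  for f in "$@"; do
    if grep -q "$pkg" "$f"; then
      echo "OK: $f"
    else
      echo "MISMATCH: $f"
      status=1
    fi
  done
  return $status
}

# Typical usage from the project root:
# check_pkg com.example.myapp \
#   android/app/build.gradle \
#   android/app/src/main/AndroidManifest.xml \
#   android/app/google-services.json
```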

https://stackoverflow.com/questions/34990479/no-matching-client-found-for-package-name-google-analytics-multiple-productf

https://stackoverflow.com/questions/46878638/how-to-clear-react-native-cache

https://github.com/invertase/react-native-firebase/issues/3254

https://github.com/facebook/react-native/issues/11228

Matillion data pipeline for BigQuery – long-lived OAuth tokens

Using Matillion to extract, load and transform data into an analytics database is simple, as discussed in a previous blog. Though the OAuth setup worked fine, I always had to reset my client tokens as they used to expire, and there was no option to choose anything other than user accounts. When we do a “Manage OAuth” setup, we end up using the client ID and secret from our Google API credentials setup and authorising the API call using our login. We then use this token when configuring OAuth for BigQuery. Somehow the OAuth refresh token used to expire and I had to redo the steps again. When I searched for answers in the Matillion community, here is what I could find: https://metlcommunity.matillion.com/s/question/0D54G00007dIsVeSAK/refreshing-oauth-token-when-using-with-api-extract. I also tried the Python scripts to configure the OAuth token as a variable, but somehow it was not accepting the token. Then I came across this link from Matillion, which seems to have been written for AdWords but works for BigQuery too: https://documentation.matillion.com/docs/2963740

I had to tweak a few configurations to make mine work.

  • OAuthClientId, OAuthClientSecret, OAuthJWTSubject and Profile are not required; we need not configure them.
  • The P12 file is available under the Keys tab of Credentials -> Service Accounts -> your service account. The password required to open the file is shown when you add a new key.
  • Ensure that this file is available on the machine where Matillion runs.
  • Leave the Create New OAuth Token entry unconfigured.

React Native on iOS – release version – with error “Use of undeclared identifier ‘kindOf'”

Here we are investigating an issue with a React Native application that was taking a long time to load in a debug build when not connected to WiFi. We were trying to produce a release build; with Node.js on a command prompt this is a simple activity. Switching to release builds did reduce the connection time considerably.

react-native run-ios --configuration Release

When we need to use Xcode there are extra steps: in Xcode we go to Product -> Scheme -> Edit Scheme and set the build configuration to Release. https://www.christianengvall.se/react-native-build-for-ios-app-store/

I am using the React Native Firebase messaging components, referenced in my application as:

"@react-native-firebase/app": "^12.2.0", 

"@react-native-firebase/messaging": "^12.2.0",

This worked well for a debug build, but when I changed to release, I started getting the error “Use of undeclared identifier ‘kindOf'”.

After some searching, here was the right solution; please refer to https://github.com/ammarahm-ed/react-native-mmkv-storage/issues/71. Here are the quick steps:

  • Change the iOS platform version to 11 in the Podfile.
  • Delete node_modules and do an npm install.
  • Within the ios folder, do a pod repo update.
  • Do a pod deintegrate; pod clean; pod install.
  • Open the Xcode xcworkspace, choose Pods, and under the targets set the deployment target to 11.
  • Now the build will work fine.

The pod repo update is useful. If stuck with “Library not found – DoubleConversion”, make sure Xcode is opened with the xcworkspace and not the xcodeproj.

Now all should be ok. Recent versions do not require pod clean; pod deintegrate and pod install work fine. pod install creates a new xcworkspace for the project. When using CocoaPods, using the xcworkspace is important; building from the xcodeproj will give errors.
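The clean rebuild sequence above can be sketched as follows (this assumes CocoaPods and npm are installed; pod clean needs the separate cocoapods-clean plugin, so it is omitted here):

```shell
# Reinstall the JavaScript dependencies.
rm -rf node_modules
npm install

cd ios
pod repo update          # refresh the local CocoaPods spec repo
pod deintegrate          # remove existing pod traces from the project
pod install              # reinstall pods; regenerates the xcworkspace
cd ..

# Then open ios/<YourApp>.xcworkspace (not the .xcodeproj) in Xcode
# and set the iOS deployment target to 11 on the Pods targets.
```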

Data pipeline for events from Google Analytics GA4 to Snowflake using Matillion

In this blog I would like to share my experiences in pulling data from Google Analytics (GA4) and loading it into staging tables in Snowflake. The ETL tool I used was Matillion, and my instance was set up on AWS. There are many videos from Matillion itself describing the available ETL components, and I found most of the integration steps in various blogs on the Matillion site: https://www.matillion.com/resources/blog.

In this blog we will focus on pushing data from GA4 to Snowflake. There are three ways this can be done, and we will discuss each of them. Since we are using GA4, the linking to BigQuery is straightforward and is the precondition to start. The Firebase linkage is also present (https://support.google.com/google-ads/answer/6333536?hl=en&ref_topic=3121765).

  • Use the Matillion BigQuery component to directly pull data into Snowflake in basic mode.
  • Use Matillion to pull from BigQuery, stage the data as JSON files in Google Cloud Storage, and finally push it to staging tables and FLATTEN it using Snowflake queries or procedures.
  • Use the Matillion BigQuery component to write queries that UNNEST the data, then pass them on to staging tables.

Before we go deep into each of these methods, we need to understand how our analytics data is structured. The Firebase + GA4 setup allows clients (web browsers, client devices) to send data to the Firebase server. There are numerous standard fields whose values can be set, e.g. device details, location details. In some cases these fields are not sufficient; in my case I was dealing with use cases that need more custom parameters. These custom parameters are set up as name-value pairs in either user properties or event properties. Once BigQuery is linked with GA4, we can query the events table; e.g. a select * on a particular event date will show how these key-value pairs are arranged. This context is required to choose the method used to push data into Snowflake.

One more aspect required for integration is the ability to configure OAuth2 between the systems, mainly the Google Analytics API and the BigQuery API. The Google Analytics API is the interface to GA4 data. In my case, I was provided only an IP address for the Matillion server. This makes it difficult to configure OAuth2 connections in a production environment, which require fully qualified domain names. I was able to overcome this by adding a local domain entry to my etc/hosts file in Windows; the same applies to Unix and macOS systems. (https://support.acquia.com/hc/en-us/articles/360004175973-Using-an-etc-hosts-file-for-custom-domains-during-development)
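As an illustration, a small helper can append such an entry idempotently. The IP address and hostname below are placeholders, not the real server; on Windows the file lives at C:\Windows\System32\drivers\etc\hosts instead of /etc/hosts:

```shell
# add_host_entry IP NAME FILE : append a hosts-file entry unless the
# name is already present in the file.
add_host_entry() {
  grep -q "$2" "$3" || printf '%s  %s\n' "$1" "$2" >> "$3"
}

# Typical usage (needs admin rights for the real hosts file):
# add_host_entry 203.0.113.10 matillion.example.local /etc/hosts
```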

If we are not worried about custom event and user parameters, it is safe to choose the first option: using the BigQuery component to map the available rows in BigQuery to a target staging table in Snowflake. The link here is more than sufficient to get connected to both BigQuery and Snowflake and create the data pipeline (https://www.matillion.com/resources/blog/using-the-bigquery-query-component-in-matillion-etl-for-snowflake). In this case we will observe that event_params and user_properties are created automatically as VARCHAR(2000) columns, each stored as JSON.

In my case I could not use the default size of the event parameters column; these were pretty large documents that could not fit within VARCHAR(2000), and Matillion does not allow copying this data into an existing table with a different data type. A Snowflake table with a VARIANT column allows one to run many different queries that otherwise require JSON parsing. Hence I decided to pull the data from BigQuery and push it to Google Cloud Storage, and then pull it from Cloud Storage and push it to Snowflake; this is a single large JSON file for each day. A separate stored procedure was then written to parse this data and load it into different tables. Here are two links: one presents how JSON can be traversed along a specific path for event parameters or user parameters (https://interworks.com/blog/2020/02/26/zero-to-snowflake-reporting-in-tableau/), and the other shows how the stored procedure can be written (https://www.snowflake.com/blog/automating-snowflakes-semi-structured-json-data-handling/). There are two steps here: we need to load the data into Google Storage and then copy it into the Snowflake staging table (see “COPY INTO <table>” and “Copying Data from a Google Cloud Storage Stage” in the Snowflake documentation). You can also skip the two above and look into the Matillion documentation for the same (https://www.matillion.com/resources/blog/moving-google-analytics-360-data-into-snowflake).

Again, in my case there was another problem: I had many key-value pairs, and the keys had to be converted to columns. The requirement became more complex with the custom parameter JSON structures changing between events. JSON parsing using stored procedures was very time consuming. Hence we decided to use the power of BigQuery to process the data into columns and then map them to the columns in Snowflake using the Matillion BigQuery component itself. Assume we have an event_params JSON of the type below, and we need columns named author and book. The CAST/COALESCE construction handles the fact that we cannot be sure which sub-field holds the value, while we are certain what data type the value has to be.

[
  {
    "key": "author",
    "value": { "string_value": "Amish" }
  },
  {
    "key": "book",
    "value": { "string_value": "Shiva Trilogy" }
  }
]
    
SELECT
  (SELECT CAST(COALESCE(CAST(value.int_value AS STRING),
                        CAST(value.string_value AS STRING),
                        CAST(value.float_value AS STRING),
                        CAST(value.double_value AS STRING)) AS STRING)
     FROM UNNEST(event_params) WHERE key = "author") AS author,
  (SELECT CAST(COALESCE(CAST(value.int_value AS STRING),
                        CAST(value.string_value AS STRING),
                        CAST(value.float_value AS STRING),
                        CAST(value.double_value AS STRING)) AS STRING)
     FROM UNNEST(event_params) WHERE key = "book") AS book
FROM <YOUR BIG QUERY EVENT TABLE>
    

Now this data can be fed into Snowflake as columns which can be used directly by Tableau users. These queries run very fast, and we leverage the power of BigQuery.

React Native setup on Windows 10 – VS Code and an Android device – for beginners

Recently I had to work on setting up notifications to be sent to Android applications, and needed an Android workspace that could run the code on my device. After asking my friends, I decided to go ahead with the Visual Studio Code IDE along with the Android Studio emulator. In this setup we leverage the IDE capabilities of VS Code, which has React Native extensions, and attach it to Android Studio, which allows us to connect to an emulator or device.

    Here are the steps that are required to make this run:

  • Set up a package manager like Chocolatey, which allows us to install dependencies and manage them the way we do in Linux with rpm, apt-get, yum etc.
  • Install the dependencies: Node, Python and the JDK.
  • Install Android Studio.
  • Install the Android SDK and configure ADB.
  • Install Visual Studio Code.
  • Add the React Native extensions.

All the steps are documented in this link: https://www.ryadel.com/en/react-native-visual-studio-code-windows-hello-world-sample-app/. I was able to get the setup ready in an hour. You will find the links to the Chocolatey deployment there, and it works for Windows 10. We can also go directly to the site and download it from there.

    There are few extra steps I had to take to ensure the linkages worked fine.

  • ADB setup and linking it to view devices. With all the steps given above I was able to get the emulator running for React Native, but I was unable to connect my device. This can be resolved by opening Android Studio in privileged mode and installing the AVD device manager add-on; I had to search for packages in Studio to get it installed. We also need to ensure that ADB is on the path; it ships with the platform tools, which on Windows will be in <drive>:\Users\<username>\AppData\Local\Android\Sdk\platform-tools. Please ensure this is on the path; it has already been described in the link shared above, and we need to set it up in the environment variables. After this it was easy to run the sample application using Android Studio.
  • There is a privilege that needs to be set up in PowerShell to allow certain commands to run on the devices. This link explains it completely: https://github.com/Microsoft/vscode-python/issues/2559. We only need to run the following in PowerShell:

PS D:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
    

With the changes mentioned above I was able to get the sample code working. From now on, time to look into notifications.

Update, Sep 2022

Recently I installed a newer version of cmake globally, and my command npm run android failed with the error:

Execution failed for task ':react-native-mmkv-storage:generateJsonModelSharedCppDebug'.
> java.lang.NullPointerException (no error message)

This could be fixed only by uninstalling that cmake. Please refer to the following link for more details.

Rails app log monitoring on Heroku using New Relic and the fluentd log forwarder

Once we deploy our Rails application on Heroku, application monitoring is a crucial aspect before we move to production. We have multiple options for logging; in this blog I will detail the steps to integrate with New Relic. New Relic has easy integration modes with Rails applications and the Heroku platform, but it is important to know what we need to fetch and push.

There are two options available:

a. Using the New Relic infrastructure agent to push the logs from the application to the New Relic server. This is the easiest to use, and requires downloading the gem and applying the license key in the newrelic.yml file. When we do this within the Heroku platform, it is important to navigate to the New Relic dashboard from the Heroku add-ons page. It is better to mention this key in our application configuration. https://docs.newrelic.com/docs/agents/ruby-agent/installation/ruby-agent-heroku. The agent is good for all platform components, but forwarding application logs was not possible for me, so I had to resort to option b.

b. Here we need an intermediate log forwarder, and in this blog I have chosen fluentd. For a bare-metal environment, we can have the daemon run separately and have our loggers connect to it. There are two approaches we can follow:

i. Use ActFluentLoggerRails to directly post the messages to the fluentd port. We just need to ensure that the Rails base logger is assigned to this new logger, and the rest will work fine. https://github.com/actindi/act-fluent-logger-rails

ii. Use the normal Rails logger and have the logs pushed to STDOUT, then pipe this to our fluentd server. This works fine for development or a sandbox, but does not help in Heroku environments.

On Heroku it is not possible to run a separate process in the same dyno and communicate with it over localhost, and we do not have access to log files either. Here it is useful to set up log drains for Heroku, with fluentd set up as a separate application. Here is the link with the easiest way to set this up: https://github.com/aminoz007/NR-Heroku-Logs

Somehow there is a dependency between the New Relic infrastructure gem and the fluentd push in terms of getting the logger initialized in New Relic. What I mean is, in a new setup where fluentd needs to forward to a new account, the logs do not get forwarded unless the newrelic gem is also present. This is weird, since the two are independent. I am following up with the New Relic team to understand this better, and will post the solution once I know it.

Configure RubyMine on Windows with WSL and run rake tasks

RubyMine is an excellent IDE when working on Ruby projects. Though there is a license fee associated with its usage, it is worth it in my opinion. The installation options for RubyMine are fairly detailed here and easy to follow: https://www.jetbrains.com/help/ruby/installation-guide.html. The documentation recommends the use of WSL (Windows Subsystem for Linux), which is available as an application, along with configuration of the Windows developer tools. Even if you do not want to use RubyMine, WSL is a good choice for setting up Ruby projects. The link here provides detailed steps on how to set this up with a Postgres backend (https://gorails.com/setup/windows/10). The link mentions setting up the Postgres database using a Windows installer, but I was able to set it up within the WSL environment itself. We can migrate to WSL2; the recent documentation is available here (https://docs.microsoft.com/en-us/windows/wsl/install-win10). In fact, initially WSL2 could not connect to the network and I had to reset all network adapters as mentioned here (https://stackoverflow.com/questions/62314789/no-internet-connection-on-wsl-ubuntu-windows-subsystem-for-linux/63578387#63578387).

I suggest setting up a running project, either downloaded from Git or made locally, and ensuring it works fine; this is important for debugging the configuration in case of issues. Once you are comfortable running the application from a WSL command line, using rake or rails or other tools like Heroku, we can shift to RubyMine. The documentation mentions pointing the remote Ruby interpreter to rvm or a version manager; I pointed it to the rbenv path for Ruby. If you get errors like “unable to read rbconfig from specified interpreter”, the path to RbConfig is not correct. This requires the Unix path of the folder where Ruby is. Check the output of RbConfig.ruby in an irb session (https://stackoverflow.com/questions/2814077/how-do-i-find-the-ruby-interpreter) and use that; it is also what `whereis ruby` shows in your WSL shell. Once this is set up, Ruby and the SDK will be fine, and from here you will be able to get all the projects and dependencies and select the remote SDK.
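To find the interpreter path to point RubyMine at, something like the following can be run inside the WSL shell (this assumes Ruby is already installed there, e.g. via rbenv):

```shell
# Where the ruby binary lives on the WSL filesystem.
whereis ruby

# The exact interpreter path that RbConfig reports, which is
# what the remote SDK configuration needs.
ruby -e 'require "rbconfig"; puts RbConfig.ruby'
```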

The next part is understanding the paths associated with WSL and how they map to actual Windows paths. In my case I wanted to set up a rake task; you can do this from Edit Configurations as mentioned here (https://www.jetbrains.com/help/idea/run-debug-configuration-rake.html). There is one small change: you will need to map the local and remote paths. The discussion here is pretty useful: in short, map the local path //wsl$/Ubuntu-18.04/home/user/code/test_project to the remote path /home/user/code/test_project, and from then on you are good to run your code in RubyMine. (https://superuser.com/questions/1540786/running-intellij-project-inside-of-wsl-ubuntu)

There might be times when the indexes get corrupted and RubyMine is unable to detect dependencies or perform the run tasks. No need to worry: clean up the project and set it up again. That worked fine for me.

Enjoy RubyMine!

Setting up a local single-node Kubernetes cluster in a Windows environment with volume mounts

The blog below should interest those building tools for CI environments who need to test packaged components in a cloud environment. The readily available environments are shared ones for developer integration, highly constrained by organization policies, and it is difficult to set one up there with the freedom to experiment. It is preferable to have a local desktop environment where we can develop and interact with containers, a registry and servers locally. Linux and Mac environments are easy to set up and use; Windows environments had constraints earlier. With the availability of Docker for Windows integrated with Kubernetes, development is now easy. Some features are still in development, and in this blog we will explore the latest Docker and associated Kubernetes features. We will look at some of the very simple errors that come up during installation, application containerization, pushing to a registry, and pulling an image for a job.

Here are the items we will perform:
1. Install Docker in a Windows 10 environment
2. Set up a Kubernetes environment and create a pod in it
3. Develop scripts that can be pushed into a container and set up as a job

My local Windows version is Windows 10 Professional, with 16 GB RAM and a 64-bit OS, matching the prerequisites for Docker. We install the Docker community edition from here (https://docs.docker.com/docker-for-windows/); during installation there is no need to select Windows containers. The Docker installation is well documented, and requires certain Windows features to be enabled and virtualization to be enabled in the BIOS; check your vendor instructions to set this up. Sometimes certain Windows features will have to be toggled, namely Hyper-V and virtualization. If you already have VirtualBox running, you need to check whether it still works. I did get errors like “Error: Hardware assisted virtualization and data execution protection must be enabled in the BIOS”. Try the following:
1. https://docs.docker.com/docker-for-windows/troubleshoot/#virtualization-must-be-enabled
2. https://stackoverflow.com/questions/39684974/docker-for-windows-error-hardware-assisted-virtualization-and-data-execution-p

What worked for me was resetting the Hyper-V feature, restarting, and then enabling Hyper-V again.

Then I set up Kubernetes as a single-node cluster. All it needs is to check the Enable Kubernetes box in the Docker for Windows UI. You can verify it using kubectl config view and kubectl get nodes.
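A quick way to confirm the cluster is up, assuming kubectl was installed alongside Docker for Windows:

```shell
# The current context should point at the Docker Desktop cluster
# (docker-desktop, or docker-for-desktop on older versions).
kubectl config current-context

# Expect a single node in the Ready state.
kubectl get nodes
```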

Some useful references:
  • A local Docker registry that helps within the Kubernetes cluster: https://microk8s.io/docs/registry-private
  • Issues with the manifest, which require experimental features to be switched on: https://success.docker.com/article/error-pulling-image-no-matching-manifest

Now we will try to start a HelloWorld application in our cluster:
a. Download the helloworld Docker image from Docker Hub
b. Tag it and push it to the local Docker registry
c. Create the Kubernetes YAML file for it
d. Apply it on our node in the Kubernetes cluster
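Steps a to d can be sketched as follows. The registry address localhost:5000 and the file name deployment.yaml are assumptions based on the local-registry setup linked below:

```shell
# a. Pull a hello-world image from Docker Hub.
docker pull hello-world

# b. Tag it for the local registry and push it there.
docker tag hello-world localhost:5000/hello-world
docker push localhost:5000/hello-world

# c./d. Reference localhost:5000/hello-world as the image in your
# deployment.yaml, then apply it to the cluster and watch the pod.
kubectl apply -f deployment.yaml
kubectl get pods
```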

    How to use a local Kubernetes config. https://medium.com/@jonjam/kubernetes-development-environment-using-docker-on-windows-9cd731c776b5

    https://docs.docker.com/registry/

We set up a local Docker registry and push our helloworld Docker image there. We tag it, and this tag is the reference we provide in our Kubernetes configuration file; the cluster will not be able to access local-only images.

Now we create the Kubernetes config file: a deployment.yaml as described here, https://kubernetes.io/docs/concepts/workloads/controllers/deployment/.
Finally we apply the configuration, and we have our helloworld container deployed in a pod in the Kubernetes cluster.

The last part is the volume mount. We need to enable the local disk share option in the Docker UI, then follow the instructions for configuring the path in the volume mounts of the Kubernetes deployment configs.
https://stackoverflow.com/questions/43181654/locating-data-volumes-in-docker-desktop-windows https://docs.microsoft.com/en-us/visualstudio/containers/troubleshooting-docker-errors?view=vs-2019

    And this will allow our container application to access the mount path.

    Multitenant Application Configuration State Management with Terraform and Ansible

Applications leverage multi-tenant architecture when providing services. A new tenant configuration requires setting up the data store and endpoints and registering client callbacks. These changes are automated with REST or equivalent form APIs, which are sequenced and packaged for CI/CD in organizations. Such sequencing can be automated using Jenkins, and the deployment can be managed using any of the orchestration tools like Chef, Puppet and Ansible.
Infrastructure declaration, definition and automation tools come into play when such applications are hosted in the cloud; Infrastructure as Code is one such concept. Cloud infrastructure/platform providers expose interfaces with certain open or proprietary infrastructure definition and control formats; AWS CloudFormation and Google Deployment Manager can be used for this purpose.
When there is a need for a heterogeneous infrastructure that spans multiple cloud infrastructure/platform providers, tools specific to individual platforms are difficult to integrate and manage. In these situations, tools that provide platform-agnostic formats, like Terraform, can be used.
Maintaining a cloud-hosted multi-tenant application requires management of the infrastructure, the platform and the applications. In many cases, the applications themselves might be deployed or configured through multiple channels, like CI/CD pipelines or localized operator scripts. For example, a subset of operators may use Terraform for infrastructure provisioning, while many more might use the cloud provider tools. Hence there is a growing need to control such an environment, or at least to recover quickly from incorrect configurations, by streamlining a process that constrains operators to flow through the same channel via a set of applications or services.

Such management applications will require at least some of the following:

    • Ability to declare Infrastructure, Platform and application configurations and resolve dependencies
    • Allow concurrent usage with access controls
    • Provide secure key management methods
    • Detect configuration drifts

All this functionality is not available in one single open source tool. Terraform is excellent for abstracting infrastructure/platforms, infrastructure provisioning, state management and secrets management, but it does not do well at application configuration. Ansible does well on the configuration side but is not easy when it comes to state management. On top of this, we also need dependencies to be taken care of; there is a need to visualize these dependencies and their impact.

    While designing such applications, the following aspects will have to be addressed:

• When transforming configuration into code, the representation format: JSON or YAML. This will be decided by the choice of tool; Terraform has a specific representation format, HCL (HashiCorp Configuration Language), while Ansible uses YAML.
• The nature of the configuration interfaces; these are programmatic to allow automation. They can be RESTful APIs, messaging queues or file transfer, or there may be a customized need to log in to certain machines and make changes through a CLI (SSH or some other mode).

Now we look at what certain tools can offer once we have a format ready.
Terraform – declares and provisions infrastructure and creates dependency graphs for us; certain visualizations are available out of the box.
Ansible – automates the application configuration sequence.

    The next steps are to integrate all these tools for a complete application.