🏠 Working from home
  • Paytm
  • Noida, India
Abhiknoldur/README.md


My Tech Toolbox 🧰

Python · HTML5 · CSS3 · Java · Node.js · Git · MySQL · Docker · PostgreSQL · AWS · Kubernetes · Jenkins · Terraform · Istio · Ansible

<script src="https://platform.linkedin.com/badges/js/profile.js" async defer type="text/javascript"></script>

  • My primary focus: DevOps & SRE
  • Earned my AWS certification within 6 months of using the platform.
  • Passionate about learning and exploring new tech. I write tech blogs and make educational YouTube videos.
  • I am working on building my online presence and doing my bit to spread knowledge and mentor fellow developers who are starting out on their programming journey.

📊 Github Stats

Abhishek Baranwal | Stats


DevOps-Roadmap

Show some ❤️ by starring some of the repositories!

https://www.credly.com/badges/d67be2e7-35c6-4648-9e5a-3dfb224019ac

My Blog Posts 🌱

➡️ more blog posts...

My Latest YouTube Videos 🌱

Pinned

  1. sparkSession-demo (Public)

     Forked from NashTech-Labs/sparkSession-demo

  2. Dataframe and dataset

     ```scala
     scala> val rdd1 = sc.parallelize(Seq((1, 3.6)))
     rdd1: org.apache.spark.rdd.RDD[(Int, Double)] = ParallelCollectionRDD[0] at parallelize at <console>:24

     scala> val rdd2 = sc.parallelize(Seq((1, 1.1)))
     rdd2: org.apache.spark.rdd.RDD[(Int, Double)] = ParallelCollectionRDD[1] at parallelize at <console>:24
     ```

  3. PowerPlant.scala

     ```scala
     package com.knoldus

     import org.apache.spark.sql.{SaveMode, SparkSession}
     import org.apache.spark.sql.functions._
     import org.apache.spark.sql.types._
     ```

  4. cassandra queries

     ```
     Queries==============>

     1. CREATE TABLE assignment.emp_details (
         emp_id bigint,
     ```

  5. Sample json

     ```json
     { "car": "supercar",
       "manufacturer": "Porsche",
       "model": "911",
       "price": 135000,
       "wiki": "http://en.wikipedia.org/wiki/Porsche_997"
     ```

  6. Spark-asignments

     ```scala
     val rdd_1 = sc.parallelize(Seq((1, 3.6)))
     val rdd_2 = sc.parallelize(Seq((1, 1.1)))

     println(s"Wanted Result:", findSubOfVAlues(rdd_1, rdd_2))
     ```
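The pinned Spark gists above build two RDDs of `(Int, Double)` pairs and pass them to a helper (`findSubOfVAlues`) whose body is not shown. As a rough, hypothetical sketch of that kind of per-key arithmetic, here is a plain-Scala version (no Spark needed) that combines two pair collections and sums the values for each key; the function name `sumByKey` and the sum semantics are my assumptions, not the gist's actual code.

```scala
// Hypothetical sketch: per-key sum over two (Int, Double) pair collections,
// approximating what a Spark union + reduceByKey would do, using plain Scala.
object PairSumSketch {
  def sumByKey(a: Seq[(Int, Double)], b: Seq[(Int, Double)]): Map[Int, Double] =
    (a ++ b)                                   // concatenate both collections
      .groupBy(_._1)                           // group pairs by key
      .map { case (k, vs) => k -> vs.map(_._2).sum } // sum the values per key

  def main(args: Array[String]): Unit = {
    val rdd_1 = Seq((1, 3.6)) // mirrors rdd_1 from the gist preview
    val rdd_2 = Seq((1, 1.1)) // mirrors rdd_2
    println(sumByKey(rdd_1, rdd_2)) // sums 3.6 and 1.1 under key 1
  }
}
```

In Spark itself the equivalent would be `rdd_1.union(rdd_2).reduceByKey(_ + _)`; the plain-collections form is just easier to run outside a cluster.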