This is a two-part series on recursion; please complete part 1 before you proceed further.
Why do we need tail-recursive functions?
Simple recursion creates a new stack frame for every call, so algorithms that require deep recursion will eventually throw a StackOverflowError.
How do we overcome this?
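One common answer is tail recursion: if the recursive call is the very last action in the function, the Scala compiler can rewrite it into a loop, so the stack does not grow. A minimal sketch (the function names here are mine, for illustration):

```scala
import scala.annotation.tailrec

object TailRecDemo {
  // Naive recursion: builds one stack frame per call, so a large n
  // (e.g. sumNaive(1000000)) can throw a StackOverflowError.
  def sumNaive(n: Int): Long =
    if (n == 0) 0L else n + sumNaive(n - 1)

  // Tail-recursive version: the recursive call is the last action,
  // so the compiler (verified by @tailrec) turns it into a loop.
  @tailrec
  def sumTo(n: Int, acc: Long = 0L): Long =
    if (n == 0) acc else sumTo(n - 1, acc + n)
}
```

With this, `TailRecDemo.sumTo(1000000)` runs in constant stack space; the `@tailrec` annotation makes the compiler fail the build if the function is not actually tail-recursive.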
What is a recursive function?
A recursive function is a function that calls itself.
Why do we need to write recursive functions?
The short answer is that algorithms written with for (or any other) loops require var fields, and, as you know, pure functions should not use mutable variables. So to write a pure function, you should use a recursive function instead.
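To see the difference, here is the same sum written both ways (the object and function names are mine):

```scala
object LoopVsRecursion {
  // Imperative version: needs a mutable var to accumulate the result.
  def sumLoop(xs: List[Int]): Int = {
    var total = 0
    for (x <- xs) total += x
    total
  }

  // Pure recursive version: no vars, just a function calling itself.
  def sumRec(xs: List[Int]): Int = xs match {
    case Nil          => 0
    case head :: tail => head + sumRec(tail)
  }
}
```

Both return the same result, but `sumRec` never mutates any state, which is what makes it pure.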
Scala lets you create functions with multiple parameter groups. The syntax is quite simple and similar to a regular function definition:
def sum(x: Int, y: Int, z: Int): Int = x + y + z
//function with multiple parameter groups
def add(x: Int)(y: Int)(z: Int): Int = x + y + z
//difference in function calls
sum(1, 2, 3) //calling a regular function
add(1)(2)(3) //calling a function with multiple parameter groups
Benefits of this approach:
- They let you have both implicit and non-implicit parameters
- A parameter in one group can use a parameter from a previous group as a default value
- They show you how control structures work under the hood, and you can even create your own.
We will see all these benefits with examples.
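As a quick preview, the benefits above can be sketched like this (the object and function names are mine):

```scala
object ParamGroupBenefits {
  // 1. Implicit and non-implicit parameters in separate groups:
  //    callers can omit the second group if an implicit Int is in scope.
  def multiply(x: Int)(implicit factor: Int): Int = x * factor

  // 2. A parameter in one group can be the default value for a
  //    parameter in a later group.
  def greet(name: String)(greeting: String = s"Hello, $name"): String =
    greeting

  // 3. A tiny control structure of our own: a 'repeat' loop, made to
  //    look built-in by putting a by-name body in its own group.
  def repeat(times: Int)(body: => Unit): Unit =
    (1 to times).foreach(_ => body)
}
```

For example, with `implicit val factor: Int = 2` in scope, `ParamGroupBenefits.multiply(10)` returns 20 without passing the second group, and `repeat(3) { ... }` reads like a native loop.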
Programming languages use evaluation strategies to determine when and how the arguments of a function call are evaluated. There are many evaluation strategies, but most of them end up in two categories:
- Strict evaluation, which evaluates all arguments before the function body runs, whether or not they are used. Most mainstream languages use strict evaluation by default.
- Non-strict evaluation, which will defer the evaluation of the arguments until they are actually required/used in the function body. Haskell is probably the most popular functional programming language that uses non-strict evaluation.
There are also languages that support both strict and non-strict evaluation strategies and Scala is one of them.
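In Scala, strict evaluation is the default, and non-strict evaluation is opted into per parameter with the by-name syntax (`=> A`). A small sketch (names are mine):

```scala
object EvalDemo {
  // Strict: both x and y are evaluated before the body runs,
  // even if only one of them is used.
  def strictPick(cond: Boolean, x: Int, y: Int): Int =
    if (cond) x else y

  // Non-strict: '=> Int' defers evaluation of x and y until
  // the moment the parameter is actually used in the body.
  def lazyPick(cond: Boolean, x: => Int, y: => Int): Int =
    if (cond) x else y

  // An argument that would blow up if it were ever evaluated.
  def boom: Int = throw new RuntimeException("evaluated!")
}
```

Calling `EvalDemo.lazyPick(cond = true, 1, EvalDemo.boom)` returns 1, because `boom` is never used and therefore never evaluated; the strict version would throw before entering the body.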
When we look at Cassandra-Spark integration from 1,000 feet up, it seems crystal clear and simple. At a closer look, though, you end up with a lot of unanswered questions (at least I did). I have posted a few such questions below, and they need to be addressed.
For a recent use case I needed to integrate Spark 2 with Hive and then load a Hive table from Spark. The very first solution I found on Google was to move the existing hive-site.xml file into Spark's conf directory, but that alone is not sufficient for complete integration. I spent a couple of hours finding the exact solution, so here are the consolidated steps for you.
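Once the configuration is in place, the Spark side of the integration is short. A minimal sketch, assuming hive-site.xml is on Spark's classpath and the spark-hive dependency is available (the database and table names below are illustrative only):

```scala
import org.apache.spark.sql.SparkSession

object SparkHiveDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-hive-integration")
      .enableHiveSupport() // wires Spark's catalog to the Hive metastore
      .getOrCreate()

    // Hypothetical table, for illustration: any table registered in
    // the Hive metastore becomes queryable through Spark SQL.
    spark.sql("SELECT * FROM my_hive_db.my_table").show()

    spark.stop()
  }
}
```

The key call is `enableHiveSupport()`; without it, Spark uses its own in-memory catalog and never sees the Hive metastore, which is why copying hive-site.xml alone is not enough.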
Hello everyone! Have you ever tried to build your own cluster from scratch instead of using a packaged distribution? If so, you have probably faced a common problem: your machine slows down after installing four or five Linux boxes. A better solution, and the one I prefer, is to set up the Hadoop cluster inside Docker containers.
Hello everyone! This post shows how to integrate Apache Tez with Hive and use it as the processing engine instead of MR (MapReduce).
Before we get to the actual integration, let's install Apache Tez.
First, download the Apache Tez tarball from any of the Apache mirror sites. The latest stable release of Tez at the time of writing is 0.8.5; you can find it here.
Hello everyone! This is Phanidhar Swarna, a Hadoop enthusiast. While learning Hadoop I faced a few hurdles finding the right content on the web (at least for a beginner); one such case was setting up an Apache Hadoop cluster in Docker containers. Whenever I feel a scenario lacks proper sources on the web, I post about it here.