Profile
Data is the most essential commodity for the next generation. I have worked with data across many sectors throughout my career, and it still amazes me how data is extracted, transformed (cleansed), and loaded into powerful engines before being served to the customer. Yet businesses often lack clear visibility into their own operations. I use the right resources and strategy to put all those pieces at your fingertips, so you can make faster, more effective decisions for your business.

Three key elements I always rely on to deliver better outcomes for my clients:
Strategy - I assess, plan, and build a roadmap by beginning with the end in mind.
Architecture - As your partner, I will always discuss the best approach to move to the cloud and to modernise or scale your business solutions, with a clear view to visualising the data.
Implementation - Several methods of delivering high-quality output through Microsoft Power BI and SQL Server Data Tools.

Workflows

  1. PowerApps - App designer, logic flow designer, and app service management.
  2. Microsoft Flow - Triggers, approval logic, condition sets, and process automation.
  3. Power BI - Power Pivot, Power Query, M language, DAX, visualisation tricks, Report Server.
  4. Reporting Process - Report processing, model processing, scheduling.
  5. Analysis Process - MDX queries, facts and dimensions, KPI indicators, partitions, aggregations.
  6. ETL Process - Conditional Split, Merge, Multicast, Aggregate.
  7. T-SQL - Joins, keys, performance tuning, query optimisation techniques, indexes, stored procedures, views, triggers.
  8. Machine Learning - R, ggplot2, Shiny.
  9. Apart from this, I am an ardent fan of literature, books, and novels. I finish reading a book each week, and I always try to rejuvenate my work each day.

You can always find me here:

Memory management with blocks of code

16-May-2019

A solid understanding of R’s memory management will help you predict how much memory you’ll need for a given task and help you to make the most of the memory you have. It can even help you write faster code because accidental copies are a major cause of slow code. The goal of this chapter is to help you understand the basics of memory management in R, moving from individual objects to functions to larger blocks of code. Along the way, you’ll learn about some common myths, such as that you need to call gc() to free up memory, or that for loops are always slow.

Outline
Object size shows you how to use object_size() to see how much memory an object occupies, and uses that as a launching point to improve your understanding of how R objects are stored in memory.
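For instance, here is a minimal sketch with pryr (exact sizes will vary with your platform and R version):

library(pryr)
x <- rnorm(1e6)
object_size(x)       # size of a single object
y <- list(x, x, x)
object_size(y)       # lists store references, so y is barely bigger than x
object_size(x, y)    # combined size of several objects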

Memory usage and garbage collection introduces you to the mem_used() and mem_change() functions that will help you understand how R allocates and frees memory.
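A minimal sketch of both functions, assuming pryr is installed (the exact numbers you see will differ):

library(pryr)
mem_used()                       # total memory used by R objects right now
mem_change(x <- numeric(1e6))    # allocating ~8 MB registers as a positive change
mem_change(rm(x))                # the memory is released without an explicit gc() call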

Memory profiling with lineprof shows you how to use the lineprof package to understand how memory is allocated and released in larger code blocks.
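A hedged sketch of the typical workflow: lineprof needs the profiled code to be source()d from a file so it can map allocations back to line numbers. The file name and function here are illustrative only:

# memory-demo.R (hypothetical file)
f <- function() {
  x <- integer(1e6)    # allocates an integer vector
  y <- as.numeric(x)   # coercion allocates a second, larger double vector
  invisible(NULL)
}

# in the console
library(lineprof)
source("memory-demo.R")    # sourced code carries the line information
l <- lineprof(f())
l                          # prints time, alloc, release, and dups per line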

Modification in place introduces you to the address() and refs() functions so that you can understand when R modifies in place and when R modifies a copy. Understanding when objects are copied is very important for writing efficient R code.
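A small sketch using pryr; note that refs() is only an approximation, and the results can differ between R versions and between the console and RStudio:

library(pryr)
x <- c(1, 2, 3)
address(x)     # where x currently lives in memory
refs(x)        # approximate number of names pointing at this vector

x[2] <- 10     # with a single reference, R can modify in place
address(x)     # often unchanged: no copy was needed

y <- x         # two names now share the same vector
x[2] <- 3      # so this modification forces a copy
address(x)     # a new address: x was copied before being changed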

Prerequisites

In this chapter, we’ll use tools from the pryr and lineprof packages to understand memory usage, and a sample dataset from ggplot2. If you don’t already have them, run this code to get the packages you need:

install.packages("ggplot2")
install.packages("pryr")
install.packages("devtools")
devtools::install_github("hadley/lineprof")

Sources

The details of R’s memory management are not documented in a single place. Most of the information in this chapter was gleaned from a close reading of the documentation (particularly ?Memory and ?gc). The rest I figured out by reading the C source code, performing small experiments, and asking questions on R-devel. Any mistakes are entirely mine.
