
ElasticSearch

For Elasticsearch basics, go to https://www.elastic.co/

Chrome plugin for Elasticsearch: open the Chrome Web Store, search for "Sense", and add the extension.

Enabling request/response debugging in the NEST library:


Create the ElasticClient with the following code:

var node = new Uri(nodeUrl);
var settings = new ConnectionSettings(node)
    .DefaultIndex(index)
    .DisableDirectStreaming() //keep request/response bytes in memory so they can be logged
    .OnRequestCompleted(details =>
    {
        Debug.WriteLine("### ES REQUEST ###");
        if (details.RequestBodyInBytes != null)
            Debug.WriteLine(Encoding.UTF8.GetString(details.RequestBodyInBytes));
        Debug.WriteLine("### ES RESPONSE ###");
        if (details.ResponseBodyInBytes != null)
            Debug.WriteLine(Encoding.UTF8.GetString(details.ResponseBodyInBytes));
    })
    .PrettyJson(); //indent the JSON for readability

elasticClient = new ElasticClient(settings);

int[] units = Array.ConvertAll(unitFilter.Split(','), s => int.Parse(s));

//Query using a SearchDescriptor: eventCount greater than 0 AND unitId in the filter list,
//sorted by deviceTime descending, with paging
var sd = new SearchDescriptor<DataModel.Position>()
    .Query(q => q.Range(r => r.Field(f => f.EventCount).GreaterThan(0))
             && q.Terms(t => t.Field(f => f.UnitId).Terms<int>(units)))
    .Sort(s => s.Descending(d => d.DeviceTime))
    .From(paging.From)
    .Size(paging.Take); //Take() is an alias for Size(); calling both just overwrites the value

Nest.ISearchResponse<DataModel.Position> searchResponse = elasticClient.Search<DataModel.Position>(sd);

//To inspect the JSON query that was sent to Elasticsearch:
//var jsonBody = Encoding.UTF8.GetString(searchResponse.ApiCall.RequestBodyInBytes);

return searchResponse.Hits.Select(h => h.Source);
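
For reference, the descriptor above should serialize to a request body roughly like the following. This is a sketch, not captured output: the field names assume NEST's default camelCase serialization of the Position model, and the index/type and unit values are illustrative.

```
POST http://192.168.0.110:9200/satelocgconnect/position/_search?pretty=true
{
  "from": 0,
  "size": 20,
  "sort": [ { "deviceTime": { "order": "desc" } } ],
  "query": {
    "bool": {
      "must": [
        { "range": { "eventCount": { "gt": 0 } } },
        { "terms": { "unitId": [ 1, 2, 3 ] } }
      ]
    }
  }
}
```

NEST translates the `&&` between the two query lambdas into a bool query with both clauses in `must`.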

Book on Elasticsearch: https://drive.google.com/open?id=0B--gpA0TMqR1dVZfbXJrb2ZIWnM

Basic queries to monitor Elasticsearch and its indices


//Query for the latest records, ordered by deviceTime descending
POST http://192.168.0.110:9200/satelocgconnect/position/_search?pretty=true
{
 "size": 10,
 "sort": [
   {
     "deviceTime": {
       "order": "desc"
     }
   }
 ]
}
//Query to get the top 10 records
GET http://192.168.0.110:9200/runtimeunits/unitrunTime/_search?pretty=true
{
 "size": 10
}
//Node stats: per-index fielddata usage
GET http://192.168.0.110:9200/_nodes/stats/indices/fielddata?level=indices&fields=*

//Index overview: health, doc count, size, etc.
GET http://192.168.0.110:9200/_cat/indices?v
//To list all aliases
GET http://192.168.0.110:9200/*/_alias
//To check cluster health
GET http://192.168.0.110:9200/_cluster/health
//To get the mapping of an index
GET http://192.168.0.110:9200/runtimeunits/_mapping  
//Create an index with number_of_shards and number_of_replicas
PUT http://192.168.0.110:9200/runtimeunits_v1/
{
    "settings" : {
        "index" : {
            "number_of_shards" : 2,
            "number_of_replicas" :1
        }
    }
}
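
Note that number_of_shards is fixed once the index is created, while number_of_replicas can be changed at any time through the index settings endpoint. A sketch, reusing the same index name:

```
PUT http://192.168.0.110:9200/runtimeunits_v1/_settings
{
    "index" : {
        "number_of_replicas" : 2
    }
}
```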

//Delete an index
DELETE http://192.168.0.110:9200/runtimeunits
//Close an index
POST http://192.168.0.16:9200/analyticalgconnect_v1/_close
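
A closed index keeps its data on disk but rejects read and write operations; the counterpart _open endpoint brings it back online:

```
//Reopen a closed index
POST http://192.168.0.16:9200/analyticalgconnect_v1/_open
```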
//Swap an alias from the old index to the new one; the actions run atomically
//(index names here follow the common v1/v2 versioning pattern)
POST http://192.168.0.110:9200/_aliases/
{
    "actions" : [
        { "remove" : { "index" : "runtimeunits_v1", "alias" : "runtimeunits" } },
        { "add" : { "index" : "runtimeunits_v2", "alias" : "runtimeunits" } }
    ]
}
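
An alias can also carry a filter, exposing a subset of one physical index under its own name. A sketch; the alias name, field, and value below are illustrative:

```
//Add a filtered alias for a single unit
POST http://192.168.0.110:9200/_aliases/
{
    "actions" : [
        { "add" : { "index" : "runtimeunits_v1", "alias" : "runtimeunits_unit1",
                    "filter" : { "term" : { "unitId" : 1 } } } }
    ]
}
```

Queries sent to runtimeunits_unit1 are then automatically restricted by the term filter.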


Don't get hurry it's better to wait for another month or two and get it cheaper! The 8GB Google Nexus 7 officially costs USD 199/- (approx Rs. 11,000 ) and 16GB costs USD 249/- (approx Rs. 13,750) in US. Pre-order listing at Grabmore is a 8GB model.  The good part though about this listing is that Rs. 16499 is all inclusive which includes shipping, handling, taxes as well as customs charges. Given that Nexus 7 has a quad core Tegra processor, 1GB ram and comes loaded with Google’s latest Jelly Bean version on Android – it still looks quite attractive compared to other tablets on offer in Indian market currently. Before you hit the order button though, you have to keep couple of things in mind. This is a pre-order listing and expected delivery time is about 4 to 5 weeks. Even in the US, Nexus 7 is currently not available and expected to start shipping from mid July. The expected delivery date on Grabmore is from 13 th  August to 18 th  August. So, if you a...