
Elasticsearch

For the basics of Elasticsearch, see https://www.elastic.co/

Chrome plugin for Elasticsearch: open the Chrome Web Store (Add extension) and search for the Sense extension.

Enable debugging of the NEST library:

Create the ElasticClient with the following code. DisableDirectStreaming() keeps the request and response bytes in memory so that OnRequestCompleted can log them:

var node = new Uri(nodeUrl);
var settings = new ConnectionSettings(node)
    .DefaultIndex(index)
    .DisableDirectStreaming()   // keep request/response bytes available for logging
    .OnRequestCompleted(details =>
    {
        Debug.WriteLine("### ES REQUEST ###");
        if (details.RequestBodyInBytes != null)
            Debug.WriteLine(Encoding.UTF8.GetString(details.RequestBodyInBytes));
        Debug.WriteLine("### ES RESPONSE ###");
        if (details.ResponseBodyInBytes != null)
            Debug.WriteLine(Encoding.UTF8.GetString(details.ResponseBodyInBytes));
    })
    .PrettyJson();

elasticClient = new ElasticClient(settings);
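Once the client is created, a quick connectivity check can be sketched with NEST's Ping API (this assumes the elasticClient built above):

```csharp
// Ping the cluster; IsValid is false when the node is unreachable.
var ping = elasticClient.Ping();
Debug.WriteLine(ping.IsValid
    ? "Elasticsearch node is reachable"
    : ping.DebugInformation); // DebugInformation includes the full audit trail
```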

int[] units = Array.ConvertAll(unitFilter.Split(','), int.Parse);

//Query using a SearchDescriptor: eventCount > 0 AND unitId in the filter list,
//sorted by deviceTime descending, with paging
var sd = new SearchDescriptor<DataModel.Position>()
    .Query(q => q.Range(r => r.Field(f => f.EventCount).GreaterThan(0))
             && q.Terms(t => t.Field(f => f.UnitId).Terms<int>(units)))
    .Sort(s => s.Descending(d => d.DeviceTime))
    .From(paging.From)
    .Size(paging.Take); // Take() is an alias for Size(); call only one of them

var searchResponse = elasticClient.Search<DataModel.Position>(sd);

//The JSON query body that was sent can be inspected after the call:
var jsonBody = Encoding.UTF8.GetString(searchResponse.ApiCall.RequestBodyInBytes);

return searchResponse.Documents.ToList(); // Documents is shorthand for Hits.Select(h => h.Source)
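For reference, the descriptor above serializes to roughly the following request body (a sketch — the exact field names depend on NEST's field-name inference and your mapping, and the unitId values shown are placeholders):

```json
{
  "from": 0,
  "size": 20,
  "sort": [ { "deviceTime": { "order": "desc" } } ],
  "query": {
    "bool": {
      "must": [
        { "range": { "eventCount": { "gt": 0 } } },
        { "terms": { "unitId": [ 1, 2, 3 ] } }
      ]
    }
  }
}
```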

Book on ES: https://drive.google.com/open?id=0B--gpA0TMqR1dVZfbXJrb2ZIWnM

Basic queries to monitor Elasticsearch and its indices:


//Query for the latest records, ordered by deviceTime descending
POST http://192.168.0.110:9200/satelocgconnect/position/_search?pretty=true
{
 "size": 10,
 "sort": [
   {
     "deviceTime": {
       "order": "desc"
     }
   }
 ]
}
//Query to get the top 10 records
GET http://192.168.0.110:9200/runtimeunits/unitrunTime/_search?pretty=true
{
 "size": 10
}
//To check node stats, including fielddata usage per index
GET http://192.168.0.110:9200/_nodes/stats/indices/fielddata?level=indices&fields=*

//To check index health, doc count, size, etc.
GET http://192.168.0.110:9200/_cat/indices?v
//To check all aliases
GET http://192.168.0.110:9200/*/_alias
//To check cluster health
GET http://192.168.0.110:9200/_cluster/health
//To get the mapping of an index
GET http://192.168.0.110:9200/runtimeunits/_mapping
//Create an index with number_of_shards and number_of_replicas
PUT http://192.168.0.110:9200/runtimeunits_v1/
{
    "settings" : {
        "index" : {
            "number_of_shards" : 2,
            "number_of_replicas" : 1
        }
    }
}

//Delete an index
DELETE http://192.168.0.110:9200/runtimeunits
//Close an index
POST http://192.168.0.16:9200/analyticalgconnect_v1/_close
//Move an alias to an index (remove and add in one call, so the swap is atomic)
POST http://192.168.0.110:9200/_aliases/
{
    "actions" : [
        { "remove" : { "index" : "runtimeunits", "alias" : "runtimeunits" } },
        { "add" : { "index" : "runtimeunits_v1", "alias" : "runtimeunits" } }
    ]
}
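An alias change like the one above can also be issued from NEST (a sketch; the index names are illustrative, with `runtimeunits_v2` a hypothetical new version of the index):

```csharp
// Atomically move the "runtimeunits" alias from the old index to the new one.
var aliasResponse = elasticClient.Alias(a => a
    .Remove(r => r.Index("runtimeunits_v1").Alias("runtimeunits"))
    .Add(ad => ad.Index("runtimeunits_v2").Alias("runtimeunits")));
```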

