<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title><![CDATA[Manufacturing Big Data]]></title>
<link href="http://www.manufacturingbigdata.com/atom.xml" rel="self"/>
<link href="http://www.manufacturingbigdata.com/"/>
<updated>2012-07-10T10:40:01+05:30</updated>
<id>http://www.manufacturingbigdata.com/</id>
<author>
<name><![CDATA[System Insights]]></name>
</author>
<generator uri="http://octopress.org/">Octopress</generator>
<entry>
<title type="html"><![CDATA[Energy Regulations in Tamil Nadu, India]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/07/10/tneb-regulations/"/>
<updated>2012-07-10T11:00:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/07/10/tneb-regulations</id>
<content type="html"><![CDATA[<p>Industrial energy consumers in Tamil Nadu, India have seen a sharp increase in energy costs beginning from April 2012. In this post we look at the revised tariff of the Tamil Nadu Electricity Board (TNEB), and examine its impact on the overall energy costs for a manufacturing plant.</p>
<h2>The Tariff</h2>
<p>The revised Tariff for Industrial Consumers (HT1A) is as follows (available <a href="http://tnerc.tn.nic.in/orders/Tariff%20Order%202009/2012/T.O%20No.%201%20of%202012%20dated%2030-03-2012.pdf">here</a>):</p>
<h3>Basic Charges</h3>
<ul>
<li>Demand Charges – INR 300 / kVA / month</li>
<li>Energy Charges – INR 5.50 / kWh</li>
</ul>
<h3>Restrictions and Surcharges</h3>
<h4>Power Factor:</h4>
<ul>
<li>Power Factor >= 0.9 – No Surcharge</li>
<li>0.9 > Power Factor >= 0.85 – 1% of Current Consumption Charges for every 0.01 reduction in PF from 0.9</li>
<li>0.85 > Power Factor >= 0.75 – 1.5% of Current Consumption Charges for every 0.01 reduction in PF from 0.9</li>
<li>Power Factor < 0.75 – 2% of Current Consumption Charges for every 0.01 reduction in PF from 0.9</li>
</ul>
<h4>Billable Demand:</h4>
<ul>
<li>Demand Charges will be levied on the Maximum Demand actually registered for the month or 90% of the Sanctioned Demand, whichever is higher.</li>
</ul>
<h4>Peak Hour:</h4>
<ul>
<li>HT Industrial Consumers will be billed 20% extra on the Energy Charges for the Energy recorded during the Peak hours</li>
<li>Duration of Peak hours will be 6:00am to 9:00am & 6:00pm to 9:00pm</li>
</ul>
<h4>Night Hour:</h4>
<ul>
<li>HT Industrial Consumers will get a reduction of 5% on the Energy Charges for the Energy recorded during the Night hours</li>
<li>Duration of Night hour will be 10:00pm to 5:00am</li>
</ul>
<h4>Demand Integration Period:</h4>
<ul>
<li>Maximum Demand Integration period will be 15 minutes</li>
</ul>
<h4>Harmonics:</h4>
<ul>
<li>Total Voltage Harmonic Distortion should not exceed 5%</li>
<li>Total Current Harmonic Distortion should not exceed 8%</li>
<li>If the harmonic levels are not within these limits, the consumer has to pay 15% of the respective tariff as compensation</li>
</ul>
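<p>The power factor and billable demand rules above can be expressed as a small calculation. The following JavaScript sketch is purely illustrative (the function names are our own), assuming the tier boundaries and rates quoted above:</p>

```javascript
// Illustrative helpers for the HT1A tariff rules quoted above (assumed rates).
const DEMAND_RATE = 300;  // INR / kVA / month
const ENERGY_RATE = 5.50; // INR / kWh

// Demand is billed on the actual maximum demand or 90% of the
// sanctioned demand, whichever is higher.
function billableDemandCharge(maxDemandKVA, sanctionedKVA) {
  const billable = Math.max(maxDemandKVA, 0.9 * sanctionedKVA);
  return billable * DEMAND_RATE;
}

// Power-factor surcharge as a percentage of current consumption charges:
// 1% per 0.01 below 0.9 (down to 0.85), 1.5% per 0.01 (down to 0.75),
// 2% per 0.01 below 0.75 -- all measured as the drop from 0.9.
function pfSurchargePercent(pf) {
  if (pf >= 0.9) return 0;
  const steps = Math.round((0.9 - pf) * 100); // number of 0.01 reductions from 0.9
  if (pf >= 0.85) return steps * 1.0;
  if (pf >= 0.75) return steps * 1.5;
  return steps * 2.0;
}

// Energy charges with the peak (+20%) and night (-5%) adjustments.
function energyCharge(normalKWh, peakKWh, nightKWh) {
  return ENERGY_RATE * (normalKWh + 1.20 * peakKWh + 0.95 * nightKWh);
}
```

<p>For example, a plant running at a power factor of 0.80 would pay a 15% surcharge on its consumption charges under these rules.</p>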
<h2>Power Scenario in the State</h2>
<p>The state of Tamil Nadu has an installed capacity of 10,364.5 MW, while the average power availability is about 8,500 MW. Demand ranges from 11,500 MW to 12,500 MW, leaving the state with a shortage of about 3,000 to 4,000 MW. Moreover, the number of consumers keeps growing at a rate of about 5% every year. Because of this gap between demand and supply, TNEB has taken the following mitigation measures:</p>
<ul>
<li>40% cut on demand and energy for High Tension Industrial and Commercial Services</li>
<li>Load shedding of 2 hrs in Chennai and its suburbs</li>
<li>Load shedding of 4 hrs in other urban and rural areas</li>
<li>10% of power supply during Peak hours for Industrial and Commercial Services</li>
<li>Power Holiday for all HT & LT consumers</li>
</ul>
<p>These restrictions can be relaxed based on the power availability. However, HT consumers are allowed to purchase power from inter- and intra-state Open Access providers, where cheaper power may be available.</p>
<h2>So… What’s the Impact?</h2>
<p>Let’s examine how these revised prices impact an average manufacturing plant.</p>
<p>Let’s consider a manufacturing facility with the following cost structure:</p>
<ul>
<li>Permitted Demand: 1000 kVA</li>
<li>Permitted Energy Quota: 300,000 kWh</li>
</ul>
<p>Based on this structure, let’s assume that the energy consumed and the costs incurred during a representative month are as follows:</p>
<iframe width='410' height='500' frameborder='0' src='https://docs.google.com/spreadsheet/pub?key=0AjFwRioMlxbbdEptNTRoLVdtQUpsa1pIMW9NSXl5S1E&single=true&gid=0&output=html&widget=true'></iframe>
<p>Let’s see how costs change with the new pricing under different scenarios:</p>
<h3>Case 1: Grid Only</h3>
<p>The revised grid costs are Rs. 5.50/kWh and the plant faces a 40% reduction on its demand and energy limits. The revised permitted demand is 600 kVA and the permitted energy is 180,000 kWh. The plant is penalized at twice the price for exceeding the energy or demand limits.</p>
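<p>The penalty rule can be sketched as a small helper. This is illustrative only, under our assumed interpretation: consumption within the reduced quota is billed at the normal rate, and any excess at twice the rate.</p>

```javascript
// Illustrative Case 1 sketch (assumed interpretation of the penalty rule):
// energy within the reduced quota is billed at the normal rate,
// and any excess at twice the rate.
function energyCostWithQuota(consumedKWh, quotaKWh, ratePerKWh) {
  const withinQuota = Math.min(consumedKWh, quotaKWh);
  const excess = Math.max(consumedKWh - quotaKWh, 0);
  return withinQuota * ratePerKWh + excess * 2 * ratePerKWh;
}
```

<p>For the example plant (300,000 kWh consumed against a 180,000 kWh quota at Rs. 5.50/kWh), the energy charges alone come to Rs. 23.1 lakh, against Rs. 16.5 lakh without the quota cut.</p>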
<iframe width='410' height='600' frameborder='0' src='https://docs.google.com/spreadsheet/pub?key=0AjFwRioMlxbbdEptNTRoLVdtQUpsa1pIMW9NSXl5S1E&single=true&gid=1&output=html&widget=true'></iframe>
<p>If the plant is purely dependent on the grid (EB), its monthly energy costs grow by more than 120% – they more than double!</p>
<h3>Case 2: Grid and Diesel</h3>
<p>If the plant offsets 500 kVA of demand and 130,000 kWh of energy by running a diesel generator, which costs Rs. 15/kWh:</p>
<iframe width='410' height='650' frameborder='0' src='https://docs.google.com/spreadsheet/pub?key=0AjFwRioMlxbbdEptNTRoLVdtQUpsa1pIMW9NSXl5S1E&single=true&gid=2&output=html&widget=true'></iframe>
<p>Even with an auxiliary diesel generator supplementing grid energy, the plant spends 82% more on energy.</p>
<h3>Case 3: Grid and Power Purchase</h3>
<p>The plant purchases 130,000 kWh at Rs. 8/kWh, and gets an equivalent deemed demand of 260 kVA:</p>
<iframe width='410' height='700' frameborder='0' src='https://docs.google.com/spreadsheet/pub?key=0AjFwRioMlxbbdEptNTRoLVdtQUpsa1pIMW9NSXl5S1E&single=true&gid=3&output=html&widget=true'></iframe>
<p>The plant still sees an increase of about 60% after purchasing power from third party suppliers.</p>
<h2>What do we do?</h2>
<p>It’s clear that there are no simple ways of reducing, or even maintaining, energy costs at the “pre-hike” levels in Chennai. Simply switching the energy source to diesel is not an option either, and buying third-party power can be just as expensive as using energy from the TNEB grid. What this calls for is more aggressive, hands-on management of energy consumption in the manufacturing facility: looking at which machines and systems consume the most energy, and finding ways to decrease their usage. Our vimana platform does just this, and we will be back with a follow-up post on how <a href="http://www.systeminsights.com/vimana">vimana</a> can be applied to improving energy efficiency and reducing energy costs in a manufacturing facility.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Monitoring MTConnect Streams: MTConnect Graphr]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/07/02/monitoring-mtc-streams-mtconnect-graphr/"/>
<updated>2012-07-02T11:15:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/07/02/monitoring-mtc-streams-mtconnect-graphr</id>
<content type="html"><![CDATA[<p>A few months back I took up the fun task of exploring MTConnect streams and the amazing possibilities which it presented to a developer. That culminated into a web base monitoring app, MTConnect Graphr, which can now be downloaded from <a href="http://github.com/princearora/mtconnect-graphr">Github</a>. In this post I’ll run down through the development process of the same.</p>
<p>The web scene has changed remarkably in the past few years. Web applications are expected to be compatible with smartphones and tablets. They are supposed to be clean and responsive. This requirement was enough to persuade me to use <a href="http://twitter.github.com/bootstrap/">Bootstrap by Twitter</a> as the basic framework for the app. It is tiny, expandable, and has good documentation to get you started.</p>
<p>Coming to the implementation, the first task was to dynamically connect to the XML stream provided by the MTConnect Agent. Though it sounds pretty easy, the task becomes a bit tricky because it requires recurring connections to external URLs. The easiest way out of this situation is to implement a PHP proxy. For this we write a PHP loader script, and then use it every time we need to connect to an external host.</p>
<pre><code><?php
header('Content-type: application/xml'); // specifying the return content type
$q = $_GET['url'];
$handle = fopen($q, "r"); // connecting to the url
if ($handle) {
    while (!feof($handle)) {
        $buffer = fgets($handle, 4096);
        echo $buffer; // reading and returning the content
    }
    fclose($handle);
}
</code></pre>
<p>That sums up our <code>loader.php</code>. Next, all we need to do is write a simple function to load the XML file via the proxy.</p>
<pre><code>function getCurrentXML(conn_url) { // retrieve an XML file synchronously
    var n = "";
    $.ajax({
        url: "loader.php?url=http://" + conn_url, // using the proxy
        cache: false,
        async: false,
        dataType: "text",
        success: function (t) {
            n = t;
        }
    });
    return n;
}
</code></pre>
<p>Once we have access to the XML stream, there are a plethora of tools available to parse and get data out of it. I chose to use a combination of jQuery and JSON for the job. The xml2json plugin available <a href="http://www.fyneworks.com/jquery/xml-to-json/">here</a> provided an easy conversion to JSON.</p>
<pre><code>var xmldata = getCurrentXML(conn_url),
i = $.xml2json(xmldata);
</code></pre>
<p>With JSON, life is easy. It can’t be any simpler to parse data than it is with JSON. All I did was write regular functions to parse conditions and device parameters. But there is a catch: not all XML tags will be meaningful, and to make them appear right, we need to write individual functions for each of them. Here I will explain the working of the function used to parse the conditions for all parameters.</p>
<pre><code>this.getCondition = function (n) {
    var r = new Object();
    r.type = new Array();
    r.value = new Array();
    var count = 0;
    for (var t = 0; t < n.ComponentStream.length; t++) {
        var i = n.ComponentStream[t],
            u = i.name;
        if (i.Condition) {
            var v = i.Condition;
            if (v.Normal) {
                if (v.Normal.length > 1) {
                    for (var f = 0; f < v.Normal.length; f++) {
                        r.type[count] = u + ' ' + v.Normal[f].type;
                        r.value[count++] = "Normal";
                    }
                } else {
                    r.type[count] = u + ' ' + v.Normal.type;
                    r.value[count++] = "Normal";
                }
            }
            ..........
            // similarly for other conditions
        }
        r.len = count;
    }
    return r;
},
</code></pre>
<p>Though this takes away the reusability of the script, the data displayed turns out to be easier to comprehend.</p>
<p>With all these pieces in place, a simple requirement is to refresh the input from the MTConnect stream at a regular interval. A recursive function with a delay takes care of that:</p>
<pre><code>var updateFromMTC = function(){
.....
setTimeout('updateFromMTC("'+conn_url+'")' , 1000);
.....
}
</code></pre>
<p>The task we confront next is to display it elegantly. That is taken care of by using the power of HTML5. The devices in the stream are listed at the top of the page, with the color of each dependent on the availability of the device. To display all the parameters we use two empty divs, in which the parameters and the conditions are populated by a JavaScript function.</p>
<pre><code><div class="container" id='SelStats' position='absolute'">
<div id='conditions' position='absolute'></div>
</div>
var updateSelected = function(){
if(thePage.ActiveShape){
var selShp = thePage.ActiveShape;
var SelDisplay = document.getElementById('SelStats');
if(SelDisplay && selShp){
if(selShp.deviceName != ''){
$(SelDisplay).empty();
$(SelDisplay).append('<b>Machine:</b>' + selShp.text);
......
</code></pre>
<p>So, this completes the basic task of monitoring an MTConnect stream. Next we need to plot it. There are some really advanced open source scripts out there to assist plotting data, but for this particular task <a href="http://smoothiecharts.org/">Smoothie Charts</a> seemed a perfect fit to me. It is a really small charting library designed for live streaming data. Integration with the existing code was easy. A few more lines to the code, and it plots like a charm.</p>
<pre><code>var smoothie = new SmoothieChart();
smoothie.streamTo(document.getElementById("mycanvas"), 3000 /*delay*/);
var line1 = new TimeSeries();
setInterval(function() { line1.append(new Date().getTime(), Math.random()); }, 3000 /*delay*/);
smoothie.addTimeSeries(line1); // attach the series to the chart
</code></pre>
<p>Finally, we need to add an emergency alarm light for the parameter being monitored. A slick form to enter the maximum/minimum value, and a basic function to compare instantaneous values are enough to pull it off. With the div being populated dynamically every few seconds, we need to save some info in a cookie which is made easy by the <a href="https://github.com/carhartl/jquery-cookie/">jQuery-cookie</a> plugin.</p>
<div style="text-align: center;">
<img src="http://www.manufacturingbigdata.com/images/graphr-1.jpg" width=360 height=600 /> <img src="http://www.manufacturingbigdata.com/images/graphr-2.jpg" width=360 height=600 /> </div>
<p>I guess that’s it. The app is ready to roll. I checked it out locally on a PC, an iPod touch, and an Android device. It seems to be working fine for me. Let me know if any of you notice anything off about it.</p>
<p>PS: Please ensure that the application is run on a PHP server; otherwise it will fail to connect to the stream and all you will see is a blank white page. I’d recommend WAMP/LAMP for users trying it on their personal PCs.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Octopress + S3 + Cloudfront]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/06/13/s3/"/>
<updated>2012-06-13T07:03:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/06/13/s3</id>
<content type="html"><![CDATA[<p>We have finally migrated to Octopress from Google Blogger. Feels a lot better to write in Markdown than using Blogger’s editor! The static content is being stored in an S3 bucket and served through AWS Cloudfront.</p>
<p>Thank you: <a href="http://www.jerome-bernard.com/blog/2011/08/20/quick-tip-for-easily-deploying-octopress-blog-on-amazon-cloudfront/">Jerome Bernard</a>, <a href="http://blog.jacobelder.com/2012/03/octopress-and-cloudfront/">Jacob Elder</a>, <a href="http://www.octopress.org">Octopress</a>, and <a href="http://www.github.com">Github</a>.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Why SaaS?]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/06/08/why-saas/"/>
<updated>2012-06-08T11:16:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/06/08/why-saas</id>
<content type="html"><![CDATA[<p>A common question that we always get is why vimana is a software-as-a-service (SaaS) product. Surely, given how much data we are collecting, it must be easier to run it locally inside a plant, right? Well, if all vimana was doing was creating plots of part counts and utilization, then yes, running it locally does make sense. But vimana does a lot more – it helps understand the patterns behind productivity (and the lack thereof), and being able to support these capabilities requires a whole lot more of computational resources.</p>
<p>So lets dig deeper – why SaaS?</p>
<ol>
<li><strong>Keep it Growing</strong>: SaaS allows us to scale product functionality as your operations grow. This means that vimana can scale to support an increasing number of devices, along with the analytical capabilities required to support them. SaaS also allows us to keep the app at the latest version without requiring long downtimes for the updates.</li>
<li><strong>Keep it All</strong>: SaaS allows us to keep historical plant data securely for as long as you want us to. This makes it possible to baseline against historical data to put current performance in context, and to make better decisions about the future based on past usage and operational patterns.</li>
<li><strong>Keep it Lean</strong>: SaaS enables simple, annual, pay-as-you-go pricing, where you pay based on the number of devices you have connected to vimana, and the kind of analysis being performed on the devices. Since the app is delivered over the web, any number of users can access it (even simultaneously!).</li>
</ol>
<p>SaaS deployments also allow us to farm out specific analytical processes to elastic clusters, using map reduce and other big-data-crunching technologies. We will be talking about this in detail in upcoming posts.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[BEC Article in Livebetter Magazine]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/06/08/livebetter-article/"/>
<updated>2012-06-08T08:23:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/06/08/livebetter-article</id>
<content type="html"><![CDATA[<p>The June issue of the <a href="http://livebettermagazine.com">Livebetter Magazine</a> features an article about the BEC standard by Ralph Resnick from the National Center for Defense Manufacturing and Machining and myself. You can read the article <a href="http://livebettermagazine.com/eng/magazine/article_detail.lasso?id=307">here</a>.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Powered by Octopress]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/05/30/powered-by-octopress/"/>
<updated>2012-05-30T10:00:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/05/30/powered-by-octopress</id>
<content type="html"><![CDATA[<p>We are moving to the Octopress framework. The blog will be hosted on Github Pages.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[MongoDB and Replica Sets]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/05/08/mongo-replica-sets/"/>
<updated>2012-05-08T10:50:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/05/08/mongo-replica-sets</id>
<content type="html"><![CDATA[<h2>Why do we care</h2>
<p>We use MongoDB to persist data from the vimana tenants. All historical data from our customers is stored in Mongo, and it’s very important that we can continuously persist into Mongo without losing data. We are using replica sets to make sure that our data is stored redundantly and to ensure that we have a failover mechanism when one of the MongoDB nodes loses its connection.</p>
<h2>Introduction to Replica Sets</h2>
<p>Replica sets are a form of asynchronous master/slave replication, adding automatic failover and automatic recovery of member nodes. A replica set consists of two or more nodes that are copies of each other (i.e., replicas). The replica set automatically elects a primary (master). Drivers (and mongos) can automatically detect when a replica set primary changes and will begin sending writes to the new primary.</p>
<h2>Those GOTCHAs</h2>
<p>In order to enable replica sets, you need to pass the “replSet” parameter when starting the mongod processes.
Replica sets cannot be initiated on mongod instances that were started without the “replSet” parameter; stop and restart those mongod processes with the replSet parameter.
When replica sets are configured, all writes go only to the primary! Mongo has its own algorithms for syncing the data across the nodes.</p>
<h2>That setup</h2>
<p>Start by running multiple mongo instances:</p>
<pre><code>$ mongod --dbpath mongo_rpl/data1 --replSet set1 --port 27018
$ mongod --dbpath mongo_rpl/data2 --replSet set1 --port 27019
$ mongod --dbpath mongo_rpl/data3 --replSet set1 --port 27020
</code></pre>
<p>This starts 3 different instances of mongo running on different ports.</p>
<h2>Setting up the replica config</h2>
<pre><code>➜ ~ mongo localhost:27018
MongoDB shell version: 1.8.2
Mon Apr 30 11:13:59 *** warning: spider monkey build without utf8 support. consider rebuilding with utf8 support
connecting to: localhost:27018/test
> config = {_id: "set1", members: [{_id: 0, host: "localhost:27018"}, {_id: 1, host: "localhost:27019"},{_id: 2, host:"localhost:27020"}]}
{
"_id" : "set1",
"members" : [
{
"_id" : 0,
"host" : "localhost:27018"
},
{
"_id" : 1,
"host" : "localhost:27019"
},
{
"_id" : 2,
"host" : "localhost:27020"
}
]
}
> rs.initiate(config)
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
>
</code></pre>
<p>This config enables the replicas to talk to each other: they communicate among themselves who is the primary and who are the secondaries.</p>
<h2>That app</h2>
<p>We wrote a <a href="https://github.com/deepakprasanna/mongo_oplog_watcher">small app</a> which would tail the oplog of our test servers and insert some interesting records into the replicas.</p>
<h2>Those observations</h2>
<p>The most interesting part of playing with replica sets is understanding how MongoDB behaves and handles a node failure.
I have classified my observations into two categories: a node failure during a read, and a node failure during a write.</p>
<h3>Reading Scenarios</h3>
<p>MongoDB does not serve reads from the secondaries by default; all reads are served from the primary. Reads from secondaries can be configured, which is a 2-step process. The first step is to set “slaveOk” in the mongo console. This tells mongo it is okay to serve reads from the secondaries. The second step is to instantiate the MongoReplConnection with the :read => :secondary option.
This tells the driver that it is okay to send reads to the secondaries. The Mongo driver will randomly select one of the secondaries to serve each read; the distribution of reads across the secondaries is handled by the driver.</p>
<ul>
<li><p><strong>Secondary goes down:</strong> <br/>
<strong>slaveOk:</strong> <code>Mongo::ConnectionFailure</code> will be raised when there is a failure. The Mongo driver is intelligent: when it sees a <code>Mongo::ConnectionFailure</code> it prevents subsequent reads from going to that dead secondary.
The driver has its own algorithm to find out whether the dead secondary is back alive. As far as a read is concerned, we need to catch <code>Mongo::ConnectionFailure</code> and retry the read (assuming that another secondary will be up).
If the secondaries are configured to serve reads, then the primary is not touched at all until all the other secondaries are dead. But there is no real way to find out which mongod instance served the read. <br/>
<strong>Without slaveOk:</strong> The rest of the world goes on as usual.</p></li>
<li><p><strong>Primary goes down:</strong> <br/>
<strong>slaveOk:</strong> The rest of the world goes on as usual. <br/>
<strong>Without slaveOk:</strong> All reads will fail, since only the primary can serve them. There are 2 ways to solve this problem: catch the exception and throw an error message, or keep polling the server until one of the secondaries becomes the primary and the read succeeds. If the client decides to retry, it’s not guaranteed that another member of the replica set will have been promoted to primary right away, so it’s still possible that the driver will raise another <code>Mongo::ConnectionFailure</code>.</p></li>
</ul>
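<p>The catch-and-retry approach described above can be sketched generically. This is shown in JavaScript purely for illustration (the blog’s own code uses the Ruby driver), and <code>doRead</code> is a placeholder for whatever read call your driver exposes:</p>

```javascript
// Illustrative retry loop: retry a read until it succeeds or we give up.
// `doRead` is a placeholder for a driver call that may throw on connection
// failure (e.g. when the primary is down during an election).
function readWithRetry(doRead, maxAttempts, onRetry) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return doRead();
    } catch (e) {
      // A new primary may not have been elected yet, so the next attempt
      // can fail again; keep polling up to maxAttempts.
      if (attempt === maxAttempts) throw e;
      if (onRetry) onRetry(attempt, e);
    }
  }
}
```

<p>A real implementation would also add a delay between attempts so the replica set has time to finish its election.</p>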
<h3>Writing scenarios</h3>
<ul>
<li><strong>Secondary goes down:</strong> When replica sets are configured in MongoDB, all writes go to the primary, so there is no problem at all if a secondary goes down.
When the secondary comes back up, mongo will take care of replicating the records it missed while it was down. Perfect!</li>
<li><strong>Primary goes down:</strong> The Ruby mongo driver raises <code>Mongo::ConnectionFailure</code>. Check out the oplog watcher: we catch this exception and do a <code>puts</code> saying that the connection is lost.
However, after a few seconds, when one of the other secondaries gets elected as the primary, writes become successful again. The interesting fact is that all the writes that failed
during this recovery process are lost. Since we are able to catch <code>Mongo::ConnectionFailure</code>, it is up to the client to roll up its sleeves and persist the data somewhere else until another
secondary becomes the primary. While testing, we lost about 20-30 records when the primary was down. I guess this number would vary depending on the latency we would face in real time.</li>
<li><strong>Last mongod instance does not become primary:</strong> As you can see from the config above, we have 3 replicas, so we start with one primary (27018) and two secondaries (27019 and 27020). We stop the primary (27018), and one of the secondaries becomes the primary (say 27019). Now we have one primary (27019) and one secondary (27020). We stop the primary (27019) again. But the leftover secondary (27020) will not become the primary! This causes all writes to fail. However, if we bring one of the dead mongod instances (27018) back up, the leftover secondary (27020) becomes the primary and from then on writes succeed. This was found while digging into the mongo logs.</li>
</ul>
<p><code>[rs Manager] replSet can't see a majority, will not try to elect self:</code></p>
<p>From what I understand, a replica set will hold an election only if a majority of its members are available: in our 3-member set, at least 2 replicas. If only one replica is alive, the election will not happen and all writes will fail because there will be no primary. We need at least 2 of the 3 replicas alive at any time for writes to succeed. This is interesting.</p>
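<p>The “can’t see a majority” log line quoted above captures the general rule: a member stands for election only when it can see a strict majority of the configured set. As a sketch (illustrative, not MongoDB’s actual code):</p>

```javascript
// A node can be elected primary only if a strict majority of the
// configured replica-set members (counting itself) is reachable.
function canElectPrimary(visibleMembers, totalMembers) {
  return visibleMembers > totalMembers / 2;
}
```

<p>This is also why the last surviving member of our 3-node set refused to promote itself: 1 of 3 is not a majority.</p>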
<p>Happy hacking, <br/>
Deepak.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Welcome Deepak!]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/04/14/welcome-deepak/"/>
<updated>2012-04-14T10:48:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/04/14/welcome-deepak</id>
<content type="html"><![CDATA[<p>We would like to welcome Deepak Prasanna to System Insights. Deepak starts this month as a Software Developer working from our Chennai office. You will be hearing from him soon in this blog.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[McKinsey on (Manufacturing) Big Data - Part 2 - What do do with it]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/03/21/McKinsey-Part-2-What-do-do-with-it/"/>
<updated>2012-03-21T13:47:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/03/21/McKinsey-Part-2-What-do-do-with-it</id>
<content type="html"><![CDATA[<p>We are back studying McKinsey’s report on Big Data following the last post, and here lets take a closer look at what we can do with Big Data. The report identifies several “levers” where data can be used to improve manufacturing performance (see below), and of these levers, we are primarily interested in these two:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-03-21-McKinsey-Part-2-What-do-do-with-it-levers.png"></p>
<ol>
<li>Implementing Lean Manufacturing (#5)</li>
<li>Using sensor data-driven operations analytics (#6)</li>
</ol>
<p>Let’s look at the first of these in this post.</p>
<h2>Lean Manufacturing and the Digital Factory</h2>
<p>McKinsey identifies using Big Data to “create process transparency, develop dashboards, and visualize bottlenecks”. Our vimana application is one example of applying Big Data to create realtime dashboards of manufacturing equipment (to learn more about vimana, please visit www.systeminsights.com/vimana.) The idea here is to provide a shopfloor user both a high-level “macro” view of the shopfloor, as well as a low-level “micro” view of a single device. Questions the dashboards can help answer include:</p>
<ul>
<li>Which devices are producing parts today?</li>
<li>Which parts are being made right now?</li>
<li>How many parts have I made?</li>
<li>What has my device been doing today?</li>
<li>What is my efficiency?</li>
<li>Why have I been in a downtime?
(This is a much more interesting question to answer than just “What are my downtimes?”. Knowing <em>why</em> downtimes occur can directly help in reducing or eliminating them, and is a step beyond simply knowing <em>that</em> a downtime has occurred.)</li>
</ul>
<p>The data for vimana is streamed using the MTConnect Agent associated with each machine tool. The application itself runs in the cloud, aggregating hundreds of events every second from each machine tool. The analysis is done in realtime, and the visualizations are rendered instantly. Process transparency is achieved here because the application serves as a central repository of what is going on in the shopfloor, and multiple stakeholders (production, engineering, maintenance, management) can all use it to support decisions that come under their purview. This also takes us one step closer to the Digital Factory, where detailed operational data from the factory equipment is applied in building a complete digital model of the factory’s operations, which can then be applied in optimizing its performance.</p>
<p><em>Macro Dashboard: Shopfloor</em>
<img class="center" src="http://www.manufacturingbigdata.com/images/2012-03-21-McKinsey-Part-2-What-do-do-with-it-dashboard.png"></p>
<p><em>Micro Dashboard: Device</em>
<img class="center" src="http://www.manufacturingbigdata.com/images/2012-03-21-McKinsey-Part-2-What-do-do-with-it-details.png"></p>
<p>Coming back to McKinsey, they estimate a 10 to 50% reduction in costs from applying Big Data to implement Lean Manufacturing and the Digital Factory, accompanied by a marginal (2%) increase in revenue. We have already seen vimana help improve device utilization by over 25%, which directly leads to cost reductions. The real impact of Big Data here lies not so much in enabling us to ask new questions about a shop’s productivity as in helping us find even better answers to the questions we have been asking for a long time, and thus driving down costs. Of course, that’s not to say that we cannot ask new questions based on the data – this is where data from ubiquitous, low-cost sensors can play a role, which we will examine in a future post.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Setting up a MTConnect Agent on a Linux (Ubuntu) machine]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/03/16/Setting-up-a-MTConnect-Agent-on-a-Linux-machine/"/>
<updated>2012-03-16T16:10:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/03/16/Setting-up-a-MTConnect-Agent-on-a-Linux-machine</id>
<content type="html"><![CDATA[<p>This was blogged first at my personal blog <a href="http://princearora.wordpress.com/2012/02/24/setting-up-a-mtconnect-agent-on-a-unixubuntu-machine/">here</a>.</p>
<p>While working on my internship project, I got a chance to test out an MTConnect agent built in C++ for Linux (Ubuntu). I was surprised to find that absolutely no documentation existed for setting up the Agent in a Linux environment. Although it didn’t turn out to be a big hassle in the end, I thought it would be a good idea to document the process of setting up an MTConnect agent on Linux. So, here you go:</p>
<ul>
<li>Download the zip archive of the latest version of MTConnect C++ Agent SDK from MTConnect Github and extract its contents onto your local disk.</li>
<li><p>Download & install the libxml2 and libxml2-dev packages from the apt repository.</p>
<pre><code> $ sudo apt-get install libxml2
 $ sudo apt-get install libxml2-dev
</code></pre></li>
<li><p>Now you need to prepare a Makefile in order to compile the agent. This can be done using the CMake package. Download & install cmake if you don’t already have it.</p>
<pre><code> $ sudo apt-get install cmake
</code></pre></li>
<li><p>Open the ‘agent’ folder in the terminal and run cmake and make.</p>
<pre><code> $ cd agent/
$ cmake .
$ make
</code></pre></li>
<li><p>If everything went right, your agent should now be built. You can start it as a background service.</p>
<pre><code> $ ./agent daemonize
</code></pre></li>
<li><p>If you are unsure whether the process is running, you can check out the process status:</p>
<pre><code> $ ps aux | grep agent
</code></pre></li>
</ul>
<p>The agent service should be up and running. You may change the agent.cfg file in any text editor based on the instructions here.</p>
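<p>A quick sanity check is to request the agent’s probe endpoint (by default <code>http://localhost:5000/probe</code>) and list the devices it advertises. Here is a minimal Python sketch; the sample XML below is a trimmed-down illustration, since a real probe response carries the full MTConnectDevices namespace and device models:</p>

```python
import xml.etree.ElementTree as ET

def device_names(probe_xml):
    """List device names from an MTConnect probe response,
    regardless of which namespace the agent declares."""
    root = ET.fromstring(probe_xml)
    return [el.attrib["name"]
            for el in root.iter()
            if el.tag.split("}")[-1] == "Device"]

# Trimmed-down illustration of a probe document
sample = """<MTConnectDevices>
  <Devices>
    <Device id="d1" name="VMC-3Axis" uuid="000"/>
    <Device id="d2" name="Lathe-1" uuid="001"/>
  </Devices>
</MTConnectDevices>"""

print(device_names(sample))  # ['VMC-3Axis', 'Lathe-1']
```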
<p>Have fun!</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Introducing our Intern - Prince Arora]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/03/16/Introducing-our-Intern-Price-Arora/"/>
<updated>2012-03-16T09:30:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/03/16/Introducing-our-Intern-Price-Arora</id>
<content type="html"><![CDATA[<p>I want to introduce Prince Arora, a student from IIT Madras, who is interning with us at our Chennai office. Prince is developing a suite of MTConnect-based tools to track the maintenance status of machine tools. He is blogging about his experiences at System Insights at his blog, <a href="http://princearora.in">The 20four hour log</a>.</p>
<p>Prince started out by building a simple webapp to display various maintenance-related parameters from an MTConnect data stream. Specifically <a href="http://princearora.wordpress.com/2012/02/14/mtconnect-the-problem-statement/">he developed an app</a> to:</p>
<ul>
<li>Connect to any MTConnect Stream specified by the user</li>
<li>Recognize all the devices within the stream</li>
<li>Continuously read & display multiple parameters for each of the devices</li>
<li>Display condition of all components</li>
<li>Plot a curve of the variation of a parameter over time</li>
<li>Allow the user to monitor a parameter and raise an alarm if it moves outside the entered range of values</li>
</ul>
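<p>The last item on that list, watching a parameter and alarming when it leaves a user-entered range, reduces to a simple bounds check. Here is a hypothetical Python sketch; the bounds and readings below are made up for illustration, not taken from Prince’s app:</p>

```python
def check_bounds(value, low, high):
    """Return 'OK' while the sample stays within [low, high],
    'ALARM' once it moves outside the user-entered range."""
    return "OK" if low <= value <= high else "ALARM"

# Commanded Y Position bounded between -2 and 3, as in the screenshot
readings = [0.5, 2.9, 3.4, -1.0]
statuses = [check_bounds(r, -2, 3) for r in readings]
print(statuses)  # ['OK', 'OK', 'ALARM', 'OK']
```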
<p>Here is a screenshot of the app viewing realtime data from an MTConnect Agent. The app plots the value of an MTConnect Sample DataItem in realtime. The screen below shows the app plotting the X position, but it can also be used to plot maintenance-related data like vibration or temperature.</p>
<p>You can also load up the status of various conditions active in the machine tool, and see if the parameter that is being plotted is within some user-determined bounds. The screen below shows bounds set for the DataItem Commanded Y Position between 3 and -2. A green indication is shown because the data item is operating within bounds.</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-03-16-Introducing-our-Intern-Price-Arora-pic-2.jpeg"></p>
<p>Prince will be posting more about his internship in these pages. Stay tuned!</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[MTConnect Screencasts]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/02/28/MTConnect-Screencasts/"/>
<updated>2012-02-28T02:27:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/02/28/MTConnect-Screencasts</id>
<content type="html"><![CDATA[<p>I will be producing a series of MTConnect screencasts in the near future on various topics. The first in the series will cover extending the MTConnect standard by adding custom Data Items and Components. The tutorial will take you through all the steps necessary to develop an extension using the open source tools and deploy an integrated solution with all the schema files deployed on the MTConnect Agent. I am working on the screencasts right now and learning my way through the various technologies necessary to make a usable online education series.</p>
<p>If this is a success, we will be producing additional tutorials on adapter development, demonstrating how to get data from machine tools, as well as many more. Another tutorial I’m considering will provide guidance on collecting tooling data from your controller and publishing it through MTConnect in compliance with part 1.2. All the source code, tools, and documentation will be made freely available to the community. I am currently looking for an XMLSpy equivalent, or at least a good schema validator, since XMLSpy has a high price tag. If anyone knows of anything, please reply; any help in that area will be appreciated. I have used XMLSpy for many years, but for the purposes of these tutorials, I don’t want to burden everyone with the cost, even though it is a great tool.</p>
<p>If there are any tutorials you would be interested in seeing, I will be taking suggestions and will prioritize based on the responses I get. Otherwise I will make up my own priorities and guess as best I can what the community would like to see…</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Presentation on Baseline Energy Consumption]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/02/17/Presentation-on-BEC/"/>
<updated>2012-02-17T12:52:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/02/17/Presentation-on-BEC</id>
<content type="html"><![CDATA[<p>Following up on the previous posts (here and here) on Baseline Energy Consumption, here is a presentation about the standard:</p>
<script src="http://speakerdeck.com/embed/4f3dd5bcdf5d29001f0047d8.js"></script>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[McKinsey on (Manufacturing) Big Data - Part 1 - How Much Data?]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/02/14/McKinsey-Part-1-How-Much-Data/"/>
<updated>2012-02-14T12:20:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/02/14/McKinsey-Part-1-How-Much-Data</id>
<content type="html"><![CDATA[<p>McKinsey recently published a report about Big Data, going into considerable detail about its impact on different fields, including manufacturing. In the next few posts, I will be digging into this report, looking at the specific impacts of big data on machining-related manufacturing and focusing on ways to improve productivity and efficiency.
First, let’s start with how “Big Data” is defined:</p>
<blockquote><p>“Big data” refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze. This definition is intentionally subjective and incorporates a moving definition of how big a dataset needs to be in order to be considered big data—i.e., we don’t define big data in terms of being larger than a certain number of terabytes (thousands of gigabytes).</p></blockquote>
<p>This definition applies remarkably well to Manufacturing Big Data. The “bigness” of the data is not necessarily its absolute size, because process data from a machine tool might be in the tens of gigabytes, which is paltry compared to Internet data like website click-throughs or ad impressions. The bigness comes from the fact that traditional manufacturing decision making systems (spreadsheets, sticky-notes-on-whiteboards, MES systems) deal with very small sets of data – perhaps in the megabytes – and we are now looking at harnessing data that is several orders of magnitude larger than that.</p>
<p>So how much data exactly are we talking about? Let’s take the case of collecting MTConnect-based data streams from manufacturing equipment. With Basic monitoring (looking at production efficiency, part count, alarms, messages, and overrides), we can estimate a data rate of about 10 samples a second, with each sample consisting of 10 data items. This generates over 400MB of data daily, or more than 150GB annually per device. With Advanced monitoring (which complements Basic monitoring with data from embedded and external sensors), this grows to over 21 GB a day, or close to 8 TB a year. With these estimates, a small manufacturing shop with about 10 devices will generate over 2 TB of data a year with Basic monitoring, or close to 80 TB with Advanced monitoring. Moving up to a multi-facility enterprise with about 500 devices, we are looking at about 80 TB a year with Basic monitoring, and close to 4 PB (Petabytes) with Advanced monitoring. The table below gives a few more examples.</p>
<iframe frameborder="0" height="360" src="https://docs.google.com/spreadsheet/pub?key=0AnJkF2eeISMAdG5yTjZ4TkgyVEVEWE5rM2E5bmE1eUE&single=true&gid=0&output=html&widget=true" width="500"></iframe>
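<p>The Basic-monitoring arithmetic above can be reproduced directly. The figure of roughly 50 bytes per stored data item is my own assumption, chosen because it reproduces the per-device numbers quoted above:</p>

```python
SAMPLES_PER_SEC = 10     # sampling rate per device (Basic monitoring)
ITEMS_PER_SAMPLE = 10    # data items captured in each sample
BYTES_PER_ITEM = 50      # assumed average stored size per data item

items_per_day = SAMPLES_PER_SEC * ITEMS_PER_SAMPLE * 86_400
mb_per_day = items_per_day * BYTES_PER_ITEM / 1e6
gb_per_year = mb_per_day * 365 / 1e3

print(f"{mb_per_day:.0f} MB/day, {gb_per_year:.0f} GB/year per device")
# 432 MB/day, 158 GB/year per device
```

<p>Multiplying the annual per-device figure by a shop’s device count gives the fleet-level estimates in the table.</p>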
<p>If we extrapolated this further to look at the total number of machine tools currently installed in the United States today (approximately 1.2 Million, based on estimates from AMT), Basic monitoring will generate over 189 PB of data, while Advanced monitoring will generate over 9,400 PB of data (over 9.4 Exabytes). Of course, this does overstate the total data load since not all of these machines can be readily addressed, but it gives a sense of the scale of manufacturing data, and – more importantly – the opportunity to harness it.</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-02-14-McKinsey-Part-1-How-Much-Data-Sectoral-Data-Storage.png"></p>
<p>McKinsey estimates about 966 PB of stored data in the Discrete Manufacturing sector (see above). Adding Process data to that, we are looking at a great increase in the total storage requirements of the sector. Manufacturing has the largest storage needs of all the surveyed sectors, and the potential of Big Data analytics on process data will further increase them.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Big Data in the news]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/02/13/Big-Data-in-the-news/"/>
<updated>2012-02-13T13:49:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/02/13/Big-Data-in-the-news</id>
<content type="html"><![CDATA[<p>The New York Times <a href="http://www.nytimes.com/2012/02/12/sunday-review/big-datas-impact-in-the-world.html">talks</a> about the impact of Big Data across a variety of fields, including retailing, voice recognition, and public health. They cite a report by Prof. Erik Brynjolfsson from MIT Sloan that</p>
<blockquote><p>studied 179 large companies and found that those adopting “data-driven decision making” achieved productivity gains that were 5 percent to 6 percent higher than other factors could explain.</p></blockquote>
<p>Certainly we can expect an even larger impact on productivity gains in manufacturing. Traditional “data-driven” techniques like process monitoring have themselves brought productivity gains of over 10%. Big Data Analytics gives us the opportunity to wring out further gains because of its ability to handle unstructured data, which the article mentions:</p>
<blockquote><p>Data is not only becoming more available but also more understandable to computers. Most of the Big Data surge is data in the wild — unruly stuff like words, images and video on the Web and those streams of sensor data. It is called unstructured data and is not typically grist for traditional databases.</p>
<p>But the computer tools for gleaning knowledge and insights from the Internet era’s vast trove of unstructured data are fast gaining ground. At the forefront are the rapidly advancing techniques of artificial intelligence like natural-language processing, pattern recognition and machine learning.</p></blockquote>
<p>We apply similar tools and techniques to look at unstructured manufacturing data. More about this coming up!</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Baseline Energy Consumption - Case Study 1]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/02/08/BEC-Case-Study-1/"/>
<updated>2012-02-08T16:06:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/02/08/BEC-Case-Study-1</id>
<content type="html"><![CDATA[<p>The BEC standard discussed in the last post was applied in comparing the performance of three similar small-sized Lathes. The three devices were as follows (all brand names anonymized to protect the innocent!):</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-02-08-BEC-Case-Study-1-A.png"></p>
<p>The test conditions were as follows:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-02-08-BEC-Case-Study-1-B.png"></p>
<p>The BEC metric for the three devices is as follows:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-02-08-BEC-Case-Study-1-C.png"></p>
<p>Here is how the metric looks broken down into its components:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-02-08-BEC-Case-Study-1-D.png"></p>
<p>The results showed that Device A consumes significantly less energy than Device B and Device C. Extrapolating the BEC metric, the estimated annual energy usage for these machine tools can be calculated assuming that each device operates for 8,000 hours per year at a cost of US $0.09 per kWh of energy; similarly, the Carbon Footprint of the devices is also calculated, assuming a footprint of 0.6 kg CO2-eq/kWh. As we can see in the table below, selecting Device A saves more than $800/year compared to Device C, and $400/year compared to Device B.</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2012-02-08-BEC-Case-Study-1-E.png"></p>
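<p>The extrapolation follows directly from the stated assumptions (8,000 operating hours per year, US $0.09 per kWh, 0.6 kg CO2-eq per kWh). A small Python sketch; the BEC value used below is hypothetical, since the measured figures live in the tables above:</p>

```python
HOURS_PER_YEAR = 8_000     # assumed annual operating hours
COST_PER_KWH = 0.09        # US dollars per kWh
CO2_PER_KWH = 0.6          # kg CO2-eq per kWh

def annualize(bec_kwh_per_hour):
    """Scale a BEC figure (kWh per representative hour of operation)
    to annual energy, cost, and carbon footprint."""
    energy = bec_kwh_per_hour * HOURS_PER_YEAR      # kWh per year
    return energy, energy * COST_PER_KWH, energy * CO2_PER_KWH

# hypothetical device with a BEC of 2.5 kWh per representative hour
energy, cost, co2 = annualize(2.5)
print(f"{energy:.0f} kWh/yr, ${cost:.0f}/yr, {co2:.0f} kg CO2-eq/yr")
```

<p>Comparing <code>annualize</code> for two candidate machines gives the dollar and carbon deltas shown in the table.</p>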
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Baseline Energy Consumption]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/01/17/Baseline-Energy-Consumption/"/>
<updated>2012-01-17T13:09:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/01/17/Baseline-Energy-Consumption</id>
<content type="html"><![CDATA[<p>Measuring machine tool energy consumption has always been tricky because of the diversity and complexity of machine tools. We took a first stab at developing a standard to measure machine tool energy consumption as part of a project supported by General Dynamics – OTS, National Center for Defense Manufacturing and Machining (NCDMM), and AMT. We developed the Baseline Energy Consumption (BEC) metric as a standard measure of the energy consumption of machine tools along with a companion test methodology. Applications of this metric include:</p>
<ul>
<li>Estimating the approximate energy requirements of operating a machine tool to manufacture a specific part</li>
<li>Comparing the energy requirements of two machine tools that are being applied in similar activities</li>
<li>Performing Return-on-Investment calculations to justify energy efficiency improvements in machine tool and machining technologies</li>
<li>Assessing environmental impact of manufacturing processes from equipment energy consumption</li>
</ul>
<p>A key consideration in designing the test methodology was to ensure that the tests could be performed within a reasonable amount of time in a standard industrial setting, without requiring any special workpieces, fixtures, or equipment. This pragmatic approach was explicitly selected because the goal of this metric is to serve as a figure of merit for the energy consumption of machine tools and manufacturing equipment. This metric does not intend to serve as a precise indication of the energy required by a machine tool to manufacture a specific part or to execute a specific operation. The first edition of the standard focused on measuring the BEC of lathe-type machine tools based on recording the power consumption during a series of controlled tests.</p>
<p>The energy consumption of a machine tool in a factory environment is determined by the relative time it spends in different states, including idle, axes movement, and cutting. To capture the effect of these states, the baseline tests measure the average power consumption during tare usage, axes / component usage, and machining usage. Tare usage is measured based on the power consumption of the machine tool when it is in standby with its peripheral units turned on. Component usage is measured based on the power consumption when the machine tool components are being exercised, without any loaded workpiece (analogous to “air-cutting”). Machining usage is measured based on power consumption during metal cutting at a fixed spindle speed at varying material removal rates on an arbitrary workpiece. These tests are based on the observation that the energy required to remove a unit volume of material in a machine tool is largely dependent on the volumetric rate of removal, or material removal rate, and not on the process parameters.</p>
<p>The BEC metric is based on the average power consumed during these states. The three terms are weighted by duration factors to estimate the energy consumed by the machine during one hour of representative operation, where it is assumed that the machine tool spends 25% of the duration in tare usage, 25% in component usage (air-cutting/warmup etc.,) and 50% in machining usage.</p>
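<p>Written out, the metric is a fixed blend of the three measured power levels. A minimal sketch of the 25/25/50 weighting described above; the power figures in the example are invented:</p>

```python
# Duration weights for one representative hour of operation
TARE, COMPONENT, MACHINING = 0.25, 0.25, 0.50

def bec(p_tare, p_component, p_machining):
    """Baseline Energy Consumption for one representative hour:
    average power (kW) in each state, weighted by time share -> kWh."""
    return TARE * p_tare + COMPONENT * p_component + MACHINING * p_machining

# invented figures: 1 kW idle, 2 kW air-cutting, 4 kW machining
print(bec(1.0, 2.0, 4.0))  # 2.75
```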
<p>We will be following up with case studies looking at how this standard was applied in comparing different machine tools.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[It's all about Parts]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/01/11/It%27s-all-about-Parts/"/>
<updated>2012-01-11T00:30:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/01/11/It’s-all-about-Parts</id>
<content type="html"><![CDATA[<p>As we begin work on the 1.3 version of the MTConnect standard, the next big area to tackle is Parts. A part can be roughly defined as a piece of material that will have some transformative processes done to it to obtain another shape, composition, or structure. Along the way information will be collected about these transformative processes as they are performed and validated.</p>
<p>A part itself is an asset, but for the treatment in the standard we will consider it as a collection of assets. These assets are composed of the various programs, measurement plans, and processes that will be applied to it. They may also indicate the schedule and operation, but that is questionable. If we consider a part an asset that references other assets, then the problem becomes one of asset references, instead of having a master document that has everything.</p>
<p>One consideration is to make parts have the following information:</p>
<ol>
<li>Descriptive information</li>
<li>Reference to part programs for each device</li>
<li>Reference to quality execution program for each device</li>
</ol>
<p>Another area that needs to be considered is the quality measurement data collected from gauges and coordinate measurement devices and from process metrology. The information collected for the part measurements will be associated with the part itself through a parent reference. The part may or may not need to be updated to refer back to the measurement data, but this could be done. The measurements may reference a quality execution plan for that step and may replicate the tolerance information as well as the geometric information.</p>
<p>We will be talking more about Parts in MTConnect as we work on the standard. Watch this space!</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Reports of Manufacturing's Death Greatly Exaggerated]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2012/01/03/Reports-of-Manufacturing%27s-Death-Greatly-Exaggerated/"/>
<updated>2012-01-03T09:46:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2012/01/03/Reports-of-Manufacturing’s-Death-Greatly-Exaggerated</id>
<content type="html"><![CDATA[<p>Scott Anderson, a senior economist at Wells Fargo, in an interview with the Star Tribune (hat tip to @JHJackoCMO for tweeting about this):</p>
<blockquote><p>… before 1980 there was a strong positive correlation between manufacturing and jobs, but since then that correlation has basically been turned upside down. What that means is that we’re producing more and more, yet we’re actually losing jobs. The extent of that shift tells us there’s a structural change that’s gone on that deserves a deeper look.</p></blockquote>
<p>The interesting implication here is that companies are finding ways of increasing their output even as their labor size shrinks. This places greater importance on the tools that manufacturers have available to analyze and improve the efficiency of their operations. Greater automation means that more data is being generated by more equipment, and we will need ways of reasoning over it both in realtime and over longer (historical) periods. Big Data to the rescue? We think so!</p>
<p>A good example is again going back to the MTConnect Alarms discussed in the previous post. While previously shops might have had one operator per machine tool at all times, with greater automation and multi-tasking machines, one operator might be supervising a cell of several machine tools. Instead of barraging the operator with all the Alarms these machines generate, we can apply filtering and classification techniques to only display pertinent Alarms. We will examine some of these techniques in an upcoming post.</p>
]]></content>
</entry>
<entry>
<title type="html"><![CDATA[Analyzing MTConnect Alarm Data]]></title>
<link href="http://www.manufacturingbigdata.com/blog/2011/12/28/Analyzing-MTConnect-Alarm-Data/"/>
<updated>2011-12-28T17:53:00+05:30</updated>
<id>http://www.manufacturingbigdata.com/blog/2011/12/28/Analyzing-MTConnect-Alarm-Data</id>
<content type="html"><![CDATA[<p>Machine Tools produce enormous quantities of Alarm data, but analyzing this data can be a challenge. We are primarily interested in finding out how Alarms can help us understand Production disruptions and downtimes. While Machine Tools tend to be chatty with alarms, the alarms contain only limited information of value. Part of the problem is the lack of descriptive alarm text. Take a look at some examples from a modern, multi-axis CNC-controlled machine tool:</p>
<ul>
<li>OVERTRAVEL ( SOFT 1 )</li>
<li>1-ROT MOTOR SENSOR ERROR 81</li>
<li>HYDRAULIC PRESSURE DOWN</li>
<li>HIGH PRESSURE COOLANT CLEAN TANK OIL LOW LEVEL</li>
<li>OIL-MATIC TEMP./FILTER ALARM</li>
<li>Y AXIS HOME POSITION RETURN REQUEST</li>
</ul>
<p>While some of these alarms are self-explanatory (like ”<code>HYDRAULIC PRESSURE DOWN</code>”), other alarms can be a little harder to understand without any context. One way of adding context to Alarms is to look at the ControllerMode and the ExecutionStatus when the Alarm fired, and how they changed across the duration the alarm was active.</p>
<p>We took several months of data from a multi-axis CNC-controlled Lathe, and we looked at the different Alarms that occurred in this period. To keep the list manageable, we only looked at Alarms which had the severity level “Fault”.
Here are the alarms that occurred during this period:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2011-12-28-Analyzing-MTConnect-Alarm-Data-AlarmCount.png"></p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2011-12-28-Analyzing-MTConnect-Alarm-Data-AlarmDuration.png"></p>
<p>We can see that the Chuck Barrier alarm occurred the greatest number of times, but was not active the longest (that distinction went to the two Spindle-related alarms). To place these alarms in a better light, we can look at the <code>ControllerMode</code> and <code>ExecutionStatus</code> when these alarms were active:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2011-12-28-Analyzing-MTConnect-Alarm-Data-AlarmDurationByExec.png"></p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2011-12-28-Analyzing-MTConnect-Alarm-Data-AlarmDurationByMode.png"></p>
<p>We are most interested in Alarms that have stopped part production, so we can focus on those that occurred when the <code>ExecutionStatus</code> was <code>STOPPED</code> or <code>INTERRUPTED</code>. Similarly, Alarms that occur when the Controller was in <code>Manual</code> or <code>MDI</code> mode are probably a by-product of something a user was doing manually on the machine, so we can ignore those and focus on alarms that occurred when the mode was <code>AUTOMATIC</code>. With this filter, we can see, for example, that the <code>CHUCK BARRIER</code> alarm, which occurred the most times, seems to have occurred exclusively when the ExecutionStatus was <code>READY</code> (implying that it was not interrupting program execution) and when the ControllerMode was <code>MANUAL</code> (implying that a user was manually operating the machine when the alarm fired). The two Spindle Alarms now look more interesting, since they occurred when the ExecutionStatus was <code>STOPPED</code>, and the ControllerMode was both <code>AUTOMATIC</code> and <code>MANUAL</code>. This implies that the alarm did interrupt production, and that it led to the Mode being changed from <code>AUTOMATIC</code> to <code>MANUAL</code>.</p>
<p>We can dig one level deeper, and look at the combined state mapped by the <code>ControllerMode</code> and <code>ExecutionStatus</code>:</p>
<p><img class="center" src="http://www.manufacturingbigdata.com/images/2011-12-28-Analyzing-MTConnect-Alarm-Data-AlarmDurationByState.png"></p>
<p>This gives us even more clarity: we can see that the two Spindle Alarms switched between two states – <code>STOPPED/AUTOMATIC</code> and <code>STOPPED/MANUAL</code> – further confirming that this Alarm did stop program execution, and that the mode changed from <code>AUTOMATIC</code> to <code>MANUAL</code>.</p>
<p>Given the lack of clarity in alarm data, looking at the <code>ControllerMode</code> and <code>ExecutionStatus</code> gives us a better understanding of the Alarms that can have an impact on Production. In vimana, we filter Alarms based on the <code>ControllerMode</code> and <code>ExecutionStatus</code> so that we can look at only those that interrupt Production. We also look at temporal patterns between alarms, so that the ordering of alarms can be studied to better understand the phenomena they describe. Downtimes are classified based on these characteristics of alarms. These will be discussed in an upcoming post on this blog.</p>
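<p>The filtering described here can be sketched in a few lines of Python: keep only the alarms that fired while execution was stopped or interrupted under automatic control. The record layout is an illustrative assumption, not vimana’s internal representation:</p>

```python
PRODUCTION_STATES = {"STOPPED", "INTERRUPTED"}

def production_alarms(alarms):
    """Filter to alarms likely to have interrupted production:
    execution halted while the controller was in AUTOMATIC mode."""
    return [a for a in alarms
            if a["execution"] in PRODUCTION_STATES
            and a["mode"] == "AUTOMATIC"]

alarms = [
    {"name": "CHUCK BARRIER", "execution": "READY", "mode": "MANUAL"},
    {"name": "SPINDLE ALARM", "execution": "STOPPED", "mode": "AUTOMATIC"},
    {"name": "DOOR OPEN", "execution": "INTERRUPTED", "mode": "MDI"},
]
print([a["name"] for a in production_alarms(alarms)])  # ['SPINDLE ALARM']
```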
]]></content>
</entry>
</feed>