Saturday 5 August 2017

RRDtool Moving Average


I work with a large amount of time series data. These time series are basically network measurements coming in every 10 minutes, and some of them are periodic (i.e. bandwidth), while some others aren't (i.e. the amount of routing traffic). I would like a simple algorithm for doing online outlier detection. Basically, I want to keep in memory (or on disk) the whole historical data for every time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results? I'm currently using a moving average in order to remove some noise, but then what next? Simple things like standard deviation or MAD against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more accurate, ideally a black box like: double outlier_detection(double* vector, double value), where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample value. Asked Aug 2 '10 at 20:37

Yes, I have assumed the frequency is known and specified. There are methods to estimate the frequency automatically, but that would complicate the function considerably. If you need to estimate the frequency, try asking a separate question about it, and I will probably provide an answer. But it needs more space than is available in a comment. – Rob Hyndman Aug 3 '10 at 23:40

A good solution will have several ingredients, including: use a resistant, moving-window smooth to remove nonstationarity; re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed (given the nature of your data, it is likely that their square roots or logarithms would give symmetric residuals); and apply control-chart methods, or at least control-chart thinking, to the residuals. As far as the last one goes, control-chart thinking shows that conventional thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly because they trigger too many false out-of-control signals. People usually use 3 SD in control-chart work, whence 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point. I have more or less outlined the nature of Rob Hyndman's solution while adding two major points: the potential need to re-express the data and the wisdom of being more conservative in signaling an outlier. I am not sure that Loess is good for an online detector, though, because it does not work well at the endpoints. You might instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If the outliers do not come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5). Once you have performed the analysis to determine a good re-expression of the data, it is unlikely you will need to change the re-expression. Therefore, your online detector really only needs to reference the most recent values (the latest window) because it will not use the earlier data at all. If you have really long time series, you could go further to analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to refine the procedure.
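As a rough illustration of those ingredients (not code from the thread), here is a small Python sketch that combines a resistant moving-median smooth, a square-root re-expression (assuming non-negative measurements such as bandwidth or counts), and a conservative IQR-based threshold on the residuals; the 5-point window and the 3×IQR multiplier are assumptions that would need tuning on real data.

    import numpy as np

    def flag_outliers(y, window=5, k=3.0):
        """Flag points whose square-root residual from a trailing moving median
        exceeds k times the IQR of the residuals seen so far (a conservative,
        control-chart-style threshold)."""
        y = np.sqrt(np.asarray(y, dtype=float))    # re-express to symmetrize residuals
        flags = np.zeros(len(y), dtype=bool)
        residuals = []
        for i in range(window, len(y)):
            smooth = np.median(y[i - window:i])    # resistant moving-window smooth
            r = y[i] - smooth
            residuals.append(r)
            q1, q3 = np.percentile(residuals, [25, 75])
            iqr = q3 - q1
            if iqr > 0 and (r > q3 + k * iqr or r < q1 - k * iqr):
                flags[i] = True
        return flags

In a live setting the same computation runs on each new sample as it arrives, using only the trailing window of recent values; the residual history is kept unbounded here purely for brevity.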
Answered Aug 26 '10 at 18:02

John, 1.5 IQR is Tukey's original recommendation for the longest whiskers on a boxplot, and 3 IQR is his recommendation for marking points as "far out" outliers (a riff on a popular '60s phrase). This is built into many boxplot algorithms. The recommendation is analyzed theoretically in Hoaglin, Mosteller, & Tukey, Understanding Robust and Exploratory Data Analysis. – whuber Oct 9 '12 at 21:38

This confirms the time series data I have been trying to analyse: window average and also window standard deviation. ((x - avg) / sd) > 3 seems to be the point at which I want to flag a value as an outlier, or at least warn about it; I flag anything higher than 10 sd as an extreme error outlier. The problem I run into is what the ideal window length is; I am playing with anything between 4-8 data points. – NeoZenith Jun 29 '16 at 8:00

Neo, your best bet may be to experiment with a subset of your data and confirm your conclusions with tests on the rest. You could conduct a more formal cross-validation too (but special care is needed with time series data because of the interdependence of all the values). – whuber Jun 29 '16 at 12:10

(This answer responded to a duplicate (now closed) question on detecting outstanding events, which presented some data in graphical form.) Outlier detection depends on the nature of the data and on what you are willing to assume about them. General-purpose methods rely on robust statistics. The spirit of this approach is to characterize the bulk of the data in a way that is not influenced by any outliers and then point to individual values that do not fit that characterization. Because this is a time series, it adds the complication of needing to (re)detect outliers on an ongoing basis. If this is to be done as the series unfolds, then we are only allowed to use older data for the detection, not future data. Moreover, as protection against the many repeated tests, we want to use a method that has a very low false positive rate. These considerations suggest running a simple, robust moving-window outlier test over the data. There are many possibilities, but one that is simple, easily understood and easily implemented is based on a running MAD: the median absolute deviation from the median. This is a strongly robust measure of variation within the data, akin to a standard deviation. An outlying peak would be several MADs or more greater than the median. There is still some tuning to be done: how much of a deviation from the bulk of the data should be considered outlying, and how far back in time should one look? Let us leave these as parameters for experimentation. An R implementation of this, applied to data x = (1, 2, ..., n) (with n = 1150 to emulate the data) with corresponding values y, and run on a dataset like the red curve illustrated in the question, produces this result: the data are shown in red, the 30-day window of median + 5*MAD thresholds in gray, and the outliers (which are simply those data values above the gray curve) in black. (The threshold can only be computed beginning at the end of the initial window. For all data within this initial window, the first threshold is used: that is why the gray curve is flat between x = 0 and x = 30.)
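The answer's original R code is not reproduced above; purely as an illustration of the same running-MAD idea, a Python sketch might look like this, with the 30-point window and the 5×MAD threshold being exactly the tuning parameters discussed next.

    import numpy as np

    def running_mad_limits(y, window=30, threshold=5.0):
        """Running median plus threshold*MAD, computed over a trailing window.
        Values above their limit are flagged as outlying peaks."""
        y = np.asarray(y, dtype=float)
        limits = np.empty(len(y))
        for i in range(len(y)):
            w = y[i - window:i] if i >= window else y[:window]   # first window reused at the start
            m = np.median(w)
            mad = np.median(np.abs(w - m))
            limits[i] = m + threshold * mad
        return limits

    # outliers are simply the values above the running limit:
    # outliers = y > running_mad_limits(y)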
The effects of varying the parameters are that (a) increasing the window will tend to smooth out the gray curve and (b) increasing the threshold will raise the gray curve. Knowing this, one can take an initial segment of the data and quickly identify values of the parameters that best separate the outlying peaks from the rest of the data, then apply those parameter values to checking the rest of the data. If a plot shows the method is worsening over time, that means the nature of the data is changing and the parameters might need re-tuning. Notice how little this method assumes about the data: they do not have to be normally distributed; they do not need to exhibit any periodicity; they do not even have to be non-negative. All it assumes is that the data behave in reasonably similar ways over time and that the outlying peaks are visibly higher than the rest of the data. If anyone would like to experiment (or compare some other solution to the one offered here), the original answer also included the code used to produce data like those shown in the question.

I am guessing a sophisticated time series model will not work for you because of the time it takes to detect outliers using such a methodology. Therefore, here is a workaround: first, establish baseline "normal" traffic patterns for a year based on manual analysis of historical data that accounts for time of day, weekday versus weekend, month of the year, etc. Use this baseline along with some simple mechanism (e.g., the moving average suggested by Carlos) to detect outliers. You may also want to review the statistical process control literature for some ideas.

Yes, this is exactly what I am doing: until now I manually split the signal into periods, so that for each of them I can define a confidence interval within which the signal is supposed to be stationary, and therefore I can use standard methods such as standard deviation. The real problem is that I cannot decide the expected pattern for all the signals I have to analyze, and that is why I am looking for something more intelligent. – gianluca Aug 2 '10 at 21:37

Here is one idea: Step 1: implement and estimate a generic time series model on a one-time basis based on historical data. This can be done offline. Step 2: use the resulting model to detect outliers. Step 3: at some frequency (perhaps every month?), re-calibrate the time series model (this can be done offline) so that your step 2 detection of outliers does not go too far out of step with current traffic patterns. Would that work for your context? – user28 Aug 2 '10 at 22:24

Yes, this might work. I was thinking about a similar approach (recomputing the baseline every week, which can be CPU-intensive if you have hundreds of univariate time series to analyze). BTW the really difficult question is "what is the best blackbox-style algorithm for modeling a completely generic signal, considering noise, trend estimation and seasonality?". AFAIK, every approach in the literature requires a really hard "tuning" phase, and the only automatic method I found is the ARIMA model by Hyndman (robjhyndman.com/software/forecast).
Am I missing something? – gianluca Aug 2 '10 at 22:38

Again, this works pretty well if the signal is supposed to have that kind of seasonality, but if I use a completely different time series (i.e. the average TCP round-trip time over time), this method will not work (since it would be better to handle that one with a simple global mean and standard deviation using a sliding window containing historical data). – gianluca Aug 2 '10 at 22:02

Unless you are willing to implement a general time series model (which brings its cons in terms of latency etc.), I am pessimistic that you will find a general implementation which at the same time is simple enough to work for all sorts of time series. – user28 Aug 2 '10 at 22:06

Another comment: I know a good answer might be "so you could estimate the periodicity of the signal, and decide the algorithm to use according to it", but I did not find a really good solution to this other problem (I played a bit with spectral analysis using the DFT and time analysis using the autocorrelation function, but my time series contain a lot of noise and such methods give crazy results most of the time). – gianluca Aug 2 '10 at 22:06

A comment to your last comment: that is why I am looking for a more generic approach, but I need a kind of "black box" because I cannot make any assumption about the analyzed signal, and therefore I cannot create a "tuned" parameter set for the learning algorithm. – gianluca Aug 2 '10 at 22:09

Since it is time series data, a simple exponential filter (en.wikipedia.org/wiki/Exponential_smoothing) will smooth the data. It is a very good filter since you do not need to accumulate old data points. Compare every newly smoothed data value with its unsmoothed value. Once the deviation exceeds a certain predefined threshold (depending on what you believe an outlier in your data is), then your outlier can be easily detected. Answered Apr 30 '15 at 8:50

You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is from the moving average. Answered Aug 2 '10 at 20:48

Thanks for your response, but what if the signal exhibits a high seasonality (i.e. a lot of network measurements are characterized by a daily and a weekly pattern at the same time, for example night vs day or weekend vs working days)? An approach based on the standard deviation will not work in that case. – gianluca Aug 2 '10 at 20:57

For example, if I get a new sample every 10 minutes, and I am doing outlier detection of a company's network bandwidth usage, basically at 6pm this measure will fall down (this is an expected, totally normal pattern), and a standard deviation computed over a sliding window will fail (because it will trigger an alert for sure). At the same time, if the measure falls down at 4pm (deviating from the usual baseline), this is a real outlier. – gianluca Aug 2 '10 at 20:58

What I do is group the measurements by hour and day of the week and compare the standard deviations of that. It still does not correct for things like holidays and summer/winter seasonality, but it is correct most of the time. The downside is that you really need to collect around a year's worth of data so that there is enough for the standard deviations to start making sense.
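A minimal Python sketch of that grouping idea, assuming samples arrive as (timestamp, value) pairs and that each (weekday, hour) bucket has accumulated enough history for its standard deviation to be meaningful; treating roughly 3 bucket standard deviations as the warning level is an assumption, not something the answer prescribes.

    from collections import defaultdict
    from datetime import datetime, timezone
    import math

    class SeasonalBuckets:
        """Track mean/stddev per (weekday, hour) bucket and score new samples."""
        def __init__(self):
            self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # n, mean, M2 (Welford)

        def update(self, ts, value):
            key = self._key(ts)
            n, mean, m2 = self.stats[key]
            n += 1
            delta = value - mean
            mean += delta / n
            m2 += delta * (value - mean)
            self.stats[key] = [n, mean, m2]

        def score(self, ts, value):
            """How many bucket standard deviations the value is from the bucket mean."""
            n, mean, m2 = self.stats[self._key(ts)]
            if n < 2:
                return 0.0
            sd = math.sqrt(m2 / (n - 1))
            return abs(value - mean) / sd if sd > 0 else 0.0

        @staticmethod
        def _key(ts):
            dt = datetime.fromtimestamp(ts, tz=timezone.utc)
            return (dt.weekday(), dt.hour)

Scoring a new sample before updating its bucket keeps the detector online: the unexpected 4pm drop in the example above would score high against its own bucket, while the routine 6pm drop would not.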
Spectral analysis detects periodicity in stationary time series. The frequency-domain approach based on spectral density estimation is the approach I would recommend as your first step. If for certain periods irregularity means a much higher peak than is typical for that period, then the series with such irregularities would not be stationary and spectral analysis would not be appropriate. But assuming you have identified the period that has the irregularities, you should be able to determine approximately what the normal peak height would be, and you can then set a threshold at some level above that average to designate the irregular cases. Answered Sep 3 '10 at 14:59

I suggest the scheme below, which should be implementable in a day or so: collect as many samples as you can hold in memory; remove obvious outliers using the standard deviation for each attribute; calculate and store the correlation matrix and also the mean of each attribute; and calculate and store the Mahalanobis distances of all your samples. Calculating the outlier score: for the single sample of which you want to know its "outlierness", retrieve the means, the covariance matrix and the Mahalanobis distances of the training data; calculate the Mahalanobis distance d for your sample; and return the percentile in which d falls (using the Mahalanobis distances from the training data). That will be your outlier score: 100% is an extreme outlier. PS. In calculating the Mahalanobis distance, use the correlation matrix, not the covariance matrix. This is more robust if the sample measurements vary in unit and number.
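A compact Python sketch of that scheme follows; the NumPy implementation and the function names are mine, the initial obvious-outlier trimming step is omitted for brevity, and, per the PS above, distances are taken against the correlation matrix of the standardized training samples.

    import numpy as np

    def train(samples):
        """samples: (n, d) array of training data. Returns the per-attribute means
        and stddevs, the pseudo-inverse of the correlation matrix, and the sorted
        Mahalanobis distances of the training samples themselves."""
        samples = np.asarray(samples, dtype=float)
        mean = samples.mean(axis=0)
        std = samples.std(axis=0)
        std[std == 0] = 1.0
        z = (samples - mean) / std
        inv_corr = np.linalg.pinv(np.corrcoef(z, rowvar=False))
        dists = np.sqrt(np.einsum('ij,jk,ik->i', z, inv_corr, z))
        return mean, std, inv_corr, np.sort(dists)

    def outlier_score(x, mean, std, inv_corr, train_dists):
        """Percentile (0-100) of the sample's Mahalanobis distance among training distances."""
        z = (np.asarray(x, dtype=float) - mean) / std
        d = np.sqrt(z @ inv_corr @ z)
        return 100.0 * np.searchsorted(train_dists, d) / len(train_dists)

    # mean, std, inv_corr, train_dists = train(history)
    # score = outlier_score(new_sample, mean, std, inv_corr, train_dists)

outlier_score returns a value between 0 and 100, with 100 meaning the new sample lies farther from the bulk than anything seen during training.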
Graphite 1 does two fairly simple things: storing numbers that change over time and graphing them. There has been a lot of software written over the years to do these same tasks. What makes Graphite unique is that it provides this functionality as a network service that is both easy to use and highly scalable. The protocol for feeding data into Graphite is simple enough that you could learn to do it by hand in a few minutes (not that you would actually want to, but it is a decent litmus test for simplicity). Rendering graphs and retrieving data points are as easy as fetching a URL. This makes it very natural to integrate Graphite with other software and enables users to build powerful applications on top of it. One of the most common uses of Graphite is building web-based dashboards for monitoring and analysis. Graphite was born in a high-volume e-commerce environment and its design reflects this: scalability and real-time access to data are key goals. The components that allow Graphite to achieve these goals include a specialized database library and its storage format, a caching mechanism for optimizing I/O operations, and a simple yet effective method of clustering Graphite servers. Rather than simply describing how Graphite works today, I will explain how Graphite was initially implemented (quite naively), what problems I ran into, and how I devised solutions to them.

7.1. The Database Library: Storing Time-Series Data. Graphite is written entirely in Python and consists of three major components: a database library named whisper, a back-end daemon named carbon, and a front-end webapp that renders graphs and provides a basic UI. While whisper was written specifically for Graphite, it can also be used independently. It is very similar in design to the round-robin database used by RRDtool, and only stores time-series numeric data. Usually we think of databases as server processes that client applications talk to over sockets. However, whisper, much like RRDtool, is a database library used by applications to manipulate and retrieve data stored in specially formatted files. The most basic whisper operations are create, to make a new whisper file; update, to write new data points into a file; and fetch, to retrieve data points. Figure 7.1: Basic Anatomy of a whisper File. As shown in Figure 7.1, whisper files consist of a header section containing various metadata, followed by one or more archive sections. Each archive is a sequence of consecutive data points, i.e. (timestamp, value) pairs. When an update or fetch operation is performed, whisper determines the offset in the file where the data should be written to or read from, based on the timestamp and the archive configuration.

7.2. The Back End: A Simple Storage Service. Graphite's back end is a daemon process called carbon-cache, usually referred to simply as carbon. It is built on Twisted, a highly scalable event-driven I/O framework for Python. Twisted enables carbon to talk efficiently to a large number of clients and to handle a large amount of traffic with low overhead. Figure 7.2 shows the data flow among carbon, whisper and the webapp: client applications collect data and send it to the Graphite back end, carbon, which stores the data using whisper. This data can then be used by the Graphite webapp to generate graphs. Figure 7.2: Data Flow. The primary function of carbon is to store data points for metrics provided by clients. In Graphite terminology, a metric is any measurable quantity that can vary over time (such as the CPU usage of a server or the number of sales of a product). A data point is simply a (timestamp, value) pair corresponding to the measured value of a particular metric at a point in time. Metrics are uniquely identified by their name, and the name of each metric as well as its data points are provided by the client applications. A common type of client application is a monitoring agent that collects system or application metrics and sends its collected values to carbon for easy storage and visualization. Metrics in Graphite have simple hierarchical names, similar to filesystem paths, except that a dot is used to delimit the hierarchy rather than a slash or backslash. Carbon will respect any legal name and creates a whisper file for each metric to store its data points. The whisper files are stored within carbon's data directory in a filesystem hierarchy that mirrors the dot-delimited hierarchy in each metric's name, so that (for example) servers.www01.cpuUsage maps to …/servers/www01/cpuUsage.wsp. When a client application wishes to send data points to Graphite, it must establish a TCP connection to carbon, usually on port 2003 2. The client does all the talking; carbon does not send anything over the connection. The client sends its data points in a simple plain-text format while the connection is left open and re-used as needed. The format is one line of text per data point, where each line contains the metric name, the value, and a Unix epoch timestamp, separated by spaces.
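As an illustration of this plain-text protocol (the concrete example lines from the original chapter are not reproduced in this copy), a hypothetical client in Python could look like the following; the host name is made up, while the port number and the servers.www01.cpuUsage metric name come from the text above.

    import socket
    import time

    def send_datapoint(sock, metric, value, timestamp=None):
        """Send one 'metric value timestamp' line to carbon over an open TCP connection."""
        if timestamp is None:
            timestamp = int(time.time())
        sock.sendall(f"{metric} {value} {timestamp}\n".encode("ascii"))

    # Hypothetical usage: the host is made up; 2003 is carbon's usual plain-text port.
    sock = socket.create_connection(("graphite.example.com", 2003))
    send_datapoint(sock, "servers.www01.cpuUsage", 42)
    sock.close()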
At a high level, all carbon does is listen for data in this format and try to store it on disk as quickly as possible using whisper. Later on we will discuss the details of some tricks used to ensure scalability and to get the best performance we can out of a typical hard drive.

7.3. The Front End: Graphs On-Demand. The Graphite webapp allows users to request custom graphs with a simple URL-based API. Graphing parameters are specified in the query string of an HTTP GET request, and a PNG image is returned in response. For example, one such URL requests a 500×300 graph for the metric servers.www01.cpuUsage and the past 24 hours of data. Actually, only the target parameter is required; all the others are optional and use default values if omitted. Graphite supports a wide variety of display options as well as data manipulation functions that follow a simple functional syntax. For example, we could graph a 10-point moving average of the metric in the previous example by wrapping the target in the appropriate function. Functions can be nested, allowing for complex expressions and calculations. Another example gives the running total of sales for the day using per-product sales-per-minute metrics: the sumSeries function computes a time series that is the sum of every metric matching the pattern products.*.salesPerMinute, and integral then computes a running total rather than a per-minute count. From here it is not hard to imagine how one could build a web UI for viewing and manipulating graphs. Graphite comes with its own Composer UI, shown in Figure 7.3, which does this by using Javascript to modify the graph's URL parameters as the user clicks through menus of the available features. Figure 7.3: Graphite's Composer Interface.

7.4. Dashboards. Graphite has been used as a tool for creating web-based dashboards since its inception; the URL API makes this a natural use case. Making a dashboard is as simple as making an HTML page full of embedded graph-image tags. However, not everyone likes crafting URLs by hand, so Graphite's Composer UI provides a point-and-click method to create a graph from which you can copy and paste the URL. When coupled with another tool that allows rapid creation of web pages (such as a wiki), this becomes easy enough that non-technical users can build their own dashboards quite easily.

7.5. An Obvious Bottleneck. Once my users started building dashboards, Graphite quickly began to have performance issues. I investigated the web server logs to see what requests were bogging it down. It was fairly obvious that the problem was the sheer number of graphing requests: the webapp was CPU-bound, rendering graphs constantly. I noticed that there were a lot of identical requests, and the dashboards were to blame. Imagine you have a dashboard with 10 graphs on it and the page refreshes once a minute. Each time a user opens the dashboard in their browser, Graphite has to handle 10 more requests per minute. This quickly becomes expensive. A simple solution is to render each graph only once and then serve a copy of it to each user.
The Django web framework (which Graphite is built on) provides an excellent caching mechanism that can use various back ends, such as memcached. Memcached 3 is essentially a hash table provided as a network service. Client applications can get and set key-value pairs just as with an ordinary hash table. The main benefit of using memcached is that the result of an expensive request (such as rendering a graph) can be stored very quickly and retrieved later to handle subsequent requests. To avoid returning the same stale graph forever, memcached can be configured to expire cached graphs after a short period. Even if this is only a few seconds, the load taken off Graphite is tremendous because duplicate requests are so common. Another common case that creates lots of rendering requests is when a user is tweaking the display options and applying functions in the Composer UI. Each time the user changes something, Graphite must redraw the graph. The same data is involved in each request, so it makes sense to put the underlying data in memcache as well. This keeps the UI responsive to the user because the step of retrieving data is skipped.

7.6. Optimizing I/O. Imagine that you have 60,000 metrics that you send to your Graphite server, and each of these metrics has one data point per minute. Remember that each metric has its own whisper file on the filesystem. This means carbon must perform one write operation to each of 60,000 different files every minute. As long as carbon can write to one file every millisecond, it should be able to keep up. That is not too unreasonable, but say you have 600,000 metrics updating every minute, or your metrics are updating every second, or perhaps you simply cannot afford storage that is fast enough. Whatever the case, assume the rate of incoming data points exceeds the rate of write operations your storage can keep up with. How should this situation be handled? Most hard drives these days have slow seek time 4, that is, the delay between doing I/O operations at two different locations, compared with writing a contiguous sequence of data. This means the more contiguous writing we do, the more throughput we get. But if we have thousands of files that need to be written to frequently, and each write is very small (one whisper data point is only 12 bytes), then our disks are definitely going to spend most of their time seeking. Working under the assumption that the rate of write operations has a relatively low ceiling, the only way to increase data-point throughput beyond that rate is to write multiple data points in a single write operation. This is feasible because whisper arranges consecutive data points contiguously on disk. So I added an update_many function to whisper, which takes a list of data points for a single metric and compacts contiguous data points into a single write operation. Even though this makes each write larger, the difference in the time it takes to write ten data points (120 bytes) versus one data point (12 bytes) is negligible. It takes quite a few more data points before the size of each write starts to noticeably affect the latency. Next I implemented a buffering mechanism in carbon. Each incoming data point gets mapped to a queue based on its metric name and is then appended to that queue. Another thread repeatedly iterates through all of the queues and, for each one, pulls all of the data points out and writes them to the appropriate whisper file with update_many.
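The following is a simplified sketch of that queueing strategy rather than carbon's actual code: incoming points are appended to a per-metric queue, and a separate writer thread drains each queue into a single bulk write. The write_many callable stands in for whisper's update_many.

    import threading
    import time
    from collections import defaultdict, deque

    class DatapointBuffer:
        """Buffer (timestamp, value) points per metric and flush each metric in bulk."""
        def __init__(self, write_many):
            self.queues = defaultdict(deque)   # metric name -> queued data points
            self.lock = threading.Lock()
            self.write_many = write_many       # stand-in for whisper's update_many

        def add(self, metric, point):
            with self.lock:
                self.queues[metric].append(point)

        def writer_loop(self):
            while True:
                with self.lock:
                    batches = [(m, list(q)) for m, q in self.queues.items() if q]
                    for m, _ in batches:
                        self.queues[m].clear()
                for metric, points in batches:
                    self.write_many(metric, points)   # one write operation, many points
                time.sleep(0.1)                       # crude pacing; carbon itself is event-driven

A real daemon would run writer_loop on its own thread while the network reader calls add; the larger the backlog grows, the larger (and therefore more efficient) each write becomes, which is exactly the property described below.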
Going back to our example, if we have 600,000 metrics updating every minute and our storage can only keep up with 1 write per millisecond, then the queues will end up holding about 10 data points each, on average. The only resource this costs us is memory, which is relatively plentiful since each data point is only a few bytes. This strategy dynamically buffers as many data points as necessary to sustain a rate of incoming data points that may exceed the rate of I/O operations your storage can keep up with. A nice advantage of this approach is that it adds a degree of resiliency to handle temporary I/O slowdowns. If the system needs to do other I/O work outside of Graphite, then the rate of write operations is likely to decrease, in which case carbon's queues will simply grow. The larger the queues, the larger the writes. Since the overall throughput of data points is equal to the rate of write operations times the average size of each write, carbon is able to keep up as long as there is enough memory for the queues. carbon's queueing mechanism is depicted in Figure 7.4. Figure 7.4: Carbon's Queueing Mechanism.

7.7. Keeping It Real-Time. Buffering data points was a nice way to optimize carbon's I/O, but it did not take long for my users to notice a rather troubling side effect. Revisiting our example again, we have 600,000 metrics that update every minute and we are assuming our storage can only keep up with 60,000 write operations per minute. This means we will have approximately 10 minutes' worth of data sitting in carbon's queues at any given time. To a user this means that the graphs they request from the Graphite webapp will be missing the most recent 10 minutes of data: not good! Fortunately the solution is quite straightforward. I simply added a socket listener to carbon that provides a query interface for accessing the buffered data points, and then modified the Graphite webapp to use this interface each time it needs to retrieve data. The webapp then combines the data points it retrieves from carbon with the data points it retrieves from disk, and voilà, the graphs are real-time. Granted, in our example the data points are updated to the minute and thus not exactly "real-time", but the fact that each data point is instantly accessible in a graph once it is received by carbon is real-time.

7.8. Kernels, Caches, and Catastrophic Failures. As is probably apparent by now, a key characteristic of system performance that Graphite's own performance depends on is I/O latency. So far we have assumed our system has consistently low I/O latency, averaging around 1 millisecond per write, but this is a big assumption that requires a little deeper analysis. Most hard drives simply aren't that fast; even with dozens of disks in a RAID array there is very likely to be more than 1 millisecond of latency for random access. Yet if you were to try and test how quickly even an old laptop could write a whole kilobyte to disk, you would find that the write system call returns in far less than 1 millisecond.
Why? Any time software has inconsistent or unexpected performance characteristics, usually either buffering or caching is to blame. In this case, we are dealing with both. The write system call does not technically write your data to disk; it simply puts it in a buffer which the kernel then writes to disk later. This is why the write call usually returns so quickly. Even after the buffer has been written to disk, it often remains cached for subsequent reads. Both of these behaviors, buffering and caching, require memory, of course. Kernel developers, being the smart folks that they are, decided it would be a good idea to use whatever user-space memory is currently free rather than allocating memory outright. This turns out to be a tremendously useful performance booster, and it also explains why no matter how much memory you add to a system, it will usually end up with almost zero "free" memory after doing a modest amount of I/O. If your user-space applications are not using that memory, then your kernel probably is. The downside of this approach is that this "free" memory can be taken away from the kernel the moment a user-space application decides it needs to allocate more memory for itself. The kernel has no choice but to relinquish it, losing whatever buffers may have been there. So what does all of this mean for Graphite? We just highlighted carbon's reliance on consistently low I/O latency, and we also know that the write system call only returns quickly because the data is merely being copied into a buffer. What happens when there is not enough memory for the kernel to continue buffering writes? The writes become synchronous and thus terribly slow. This causes a dramatic drop in the rate of carbon's write operations, which causes carbon's queues to grow, which eats up even more memory, starving the kernel even further. In the end, this kind of situation usually results in carbon running out of memory or being killed by an angry sysadmin. To avoid this kind of catastrophe, I added several features to carbon, including configurable limits on how many data points can be queued and rate limits on how quickly various whisper operations can be performed. These features can protect carbon from spiraling out of control and instead impose less harsh effects, like dropping some data points or refusing to accept more data points. However, proper values for those settings are system-specific and require a fair amount of testing to tune. They are useful, but they do not fundamentally solve the problem. For that, we'll need more hardware.

7.9. Clustering. Making multiple Graphite servers appear to be a single system from a user perspective isn't terribly difficult, at least for a naïve implementation. The webapp's user interaction primarily consists of two operations: finding metrics and fetching data points (usually in the form of a graph). The find and fetch operations of the webapp are tucked away in a library that abstracts their implementation from the rest of the codebase, and they are also exposed through HTTP request handlers for easy remote calls. The find operation searches the local filesystem of whisper data for things matching a user-specified pattern, just as a filesystem glob like *.txt matches files with that extension.
Being a tree structure, the result returned by find is a collection of Node objects, each deriving from either the Branch or the Leaf sub-class of Node. Directories correspond to branch nodes and whisper files correspond to leaf nodes. This layer of abstraction makes it easy to support different types of underlying storage, including RRD files 5 and gzipped whisper files. The Leaf interface defines a fetch method whose implementation depends on the type of leaf node. In the case of whisper files it is simply a thin wrapper around the whisper library's own fetch function. When clustering support was added, the find function was extended to be able to make remote find calls via HTTP to other Graphite servers specified in the webapp's configuration. The node data contained in the results of these HTTP calls gets wrapped as RemoteNode objects, which conform to the usual Node, Branch, and Leaf interfaces. This makes the clustering transparent to the rest of the webapp's codebase. The fetch method for a remote leaf node is implemented as another HTTP call to retrieve the data points from the node's Graphite server. All of these calls are made between the webapps the same way a client would call them, except with one additional parameter specifying that the operation should only be performed locally and not be redistributed throughout the cluster. When the webapp is asked to render a graph, it performs the find operation to locate the requested metrics and calls fetch on each to retrieve their data points. This works whether the data is on the local server, remote servers, or both. If a server goes down, the remote calls time out fairly quickly and the server is marked as being out of service for a short period, during which no further calls to it will be made. From a user standpoint, whatever data was on the lost server will be missing from their graphs unless that data is duplicated on another server in the cluster.

7.9.1. A Brief Analysis of Clustering Efficiency. The most expensive part of a graphing request is rendering the graph. Each rendering is performed by a single server, so adding more servers does effectively increase capacity for rendering graphs. However, the fact that many requests end up distributing find calls to every other server in the cluster means that our clustering scheme is sharing much of the front-end load rather than dispersing it. What we have achieved at this point, however, is an effective way to distribute back-end load, as each carbon instance operates independently. This is a good first step, since most of the time the back end is a bottleneck far before the front end is, but clearly the front end will not scale horizontally with this approach. In order to make the front end scale more effectively, the number of remote find calls made by the webapp must be reduced. Again, the easiest solution is caching. Just as memcached is already used to cache data points and rendered graphs, it can also be used to cache the results of find requests. Since the location of metrics is much less likely to change frequently, this should typically be cached for longer. The trade-off of setting the cache timeout for find results too long, though, is that new metrics that have been added to the hierarchy may not appear as quickly to the user.
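A tiny sketch of that idea, assuming nothing about Graphite's actual cache keys or back end: find results are memoized under the query pattern with a longer expiry than the data-point cache, which reproduces the trade-off just described (new metrics only become visible once the entry expires).

    import time

    class TTLCache:
        """Minimal expiring cache in the spirit of memcached's get/set with a timeout."""
        def __init__(self):
            self.entries = {}   # key -> (expires_at, value)

        def get(self, key):
            entry = self.entries.get(key)
            if entry and entry[0] > time.time():
                return entry[1]
            return None

        def set(self, key, value, ttl):
            self.entries[key] = (time.time() + ttl, value)

    find_cache = TTLCache()

    def cached_find(pattern, do_find, ttl=300):
        """Serve find results from the cache; fall back to the expensive distributed find."""
        result = find_cache.get(pattern)
        if result is None:
            result = do_find(pattern)       # e.g. local plus remote find calls
            find_cache.set(pattern, result, ttl)
        return result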
7.9.2. Distributing Metrics in a Cluster. The Graphite webapp is rather homogeneous throughout a cluster, in that it performs the exact same job on each server. carbon's role, however, can vary from server to server, depending on what data you choose to send to each instance. Often there are many different clients sending data to carbon, so it would be quite annoying to couple each client's configuration with your Graphite cluster's layout. Application metrics may go to one carbon server, while business metrics may get sent to multiple carbon servers for redundancy. To simplify the management of scenarios like this, Graphite comes with an additional tool called carbon-relay. Its job is quite simple: it receives metric data from clients exactly like the standard carbon daemon (which is actually named carbon-cache), but instead of storing the data, it applies a set of rules to the metric names to determine which carbon-cache servers to relay the data to. Each rule consists of a regular expression and a list of destination servers. For each data point received, the rules are evaluated in order and the first rule whose regular expression matches the metric name is used. This way, all the clients need to do is send their data to the carbon-relay and it will end up on the right servers. In a sense carbon-relay provides replication functionality, though it would more accurately be called input duplication, since it does not deal with synchronization issues. If a server goes down temporarily, it will be missing the data points for the time period in which it was down but otherwise function normally. There are administrative scripts that leave control of the re-synchronization process in the hands of the system administrator.

7.10. Design Reflections. My experience in working on Graphite has reaffirmed a belief of mine that scalability has very little to do with low-level performance but instead is a product of overall design. I have run into many bottlenecks along the way, but each time I look for improvements in design rather than speed-ups in performance. I have been asked many times why I wrote Graphite in Python rather than Java or C, and my response is always that I have yet to come across a true need for the performance that another language could offer. In Knu74, Donald Knuth famously said that premature optimization is the root of all evil. As long as we assume that our code will continue to evolve in non-trivial ways, then all optimization 6 is in some sense premature. One of Graphite's greatest strengths and greatest weaknesses is the fact that very little of it was actually "designed" in the traditional sense. By and large Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural. However, it can be useful to avoid solving problems you do not actually have yet, even if it seems likely that you soon will. The reason is that you can learn much more from closely studying actual failures than from theorizing about superior strategies. Problem solving is driven both by the empirical data we have at hand and by our own knowledge and intuition. I've found that doubting your own wisdom sufficiently can force you to look at your empirical data more thoroughly. For example, when I first wrote whisper I was convinced that it would have to be rewritten in C for speed and that my Python implementation would only serve as a prototype. If I weren't under a time-crunch I very well may have skipped the Python implementation entirely. It turns out, however, that I/O is a bottleneck so much earlier than CPU that the lesser efficiency of Python hardly matters at all in practice.
As I said, though, the evolutionary approach is also a great weakness of Graphite. Interfaces, it turns out, do not lend themselves well to gradual evolution. A good interface is consistent and employs conventions to maximize predictability. By this measure, Graphite's URL API is currently a sub-par interface in my opinion. Options and functions have been tacked on over time, sometimes forming small islands of consistency, but overall lacking a global sense of consistency. The only way to solve such a problem is through versioning of interfaces, but this too has drawbacks. Once a new interface is designed, the old one is still hard to get rid of, lingering around as evolutionary baggage like the human appendix. It may seem harmless enough until one day your code gets appendicitis (i.e. a bug tied to the old interface) and you're forced to operate. If I were to change one thing about Graphite early on, it would have been to take much greater care in designing the external APIs, thinking ahead instead of evolving them bit by bit. Another aspect of Graphite that causes some frustration is the limited flexibility of the hierarchical metric naming model. While it is quite simple and very convenient for most use cases, it makes some sophisticated queries very difficult, even impossible, to express. When I first thought of creating Graphite I knew from the very beginning that I wanted a human-editable URL API for creating graphs 7. While I'm still glad that Graphite provides this today, I'm afraid this requirement has burdened the API with excessively simple syntax that makes complex expressions unwieldy. A hierarchy makes the problem of determining the primary key for a metric quite simple because a path is essentially a primary key for a node in the tree. The downside is that all of the descriptive data (i.e. column data) must be embedded directly in the path. A potential solution is to maintain the hierarchical model and add a separate metadata database to enable more advanced selection of metrics with a special syntax.

7.11. Becoming Open Source. Looking back at the evolution of Graphite, I am still surprised both by how far it has come as a project and by how far it has taken me as a programmer. It started as a pet project that was only a few hundred lines of code. The rendering engine started as an experiment, simply to see if I could write one. whisper was written over the course of a weekend out of desperation to solve a show-stopper problem before a critical launch date. carbon has been rewritten more times than I care to remember. Once I was allowed to release Graphite under an open source license in 2008, I never really expected much response. After a few months it was mentioned in a CNET article that got picked up by Slashdot, and the project suddenly took off and has been active ever since. Today there are dozens of large and mid-sized companies using Graphite. The community is quite active and continues to grow. Far from being a finished product, there is a lot of cool experimental work being done, which keeps it fun to work on and full of potential.

Footnotes: 1. launchpad.net/graphite. 2. There is another port over which serialized objects can be sent, which is more efficient than the plain-text format. This is only needed for very high levels of traffic. 3. memcached.org. 4. Solid-state drives generally have extremely fast seek times compared to conventional hard drives. 5. RRD files are actually branch nodes because they can contain multiple data sources; an RRD data source is a leaf node.
6. Knuth specifically meant low-level code optimization, not macroscopic optimization such as design improvements. 7. This forces the graphs themselves to be open source. Anyone can simply look at a graph's URL to understand it or modify it.

BSD Planet

February 24, 2017. The second release candidate of NetBSD 7.1 is now available for download. Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC2 tag. Most changes made since 7.1_RC1 have been security fixes. See src/doc/CHANGES-7.1 for the full list. Please help us out by testing 7.1_RC2. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is also welcome.

February 23, 2017. Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to keep your macOS system up-to-date and secure. This tutorial is specifically targeted at macOS and relies on the macOS-specific self-installer package. For a more generic tutorial that uses the pkg_comp-cron package in pkgsrc, see "Keeping NetBSD up-to-date with pkg_comp 2.0".

Getting started. First download and install the standalone macOS installer package. To find the right file, navigate to the releases page on GitHub, pick the most recent release, and download the file with a name of the form pkg_comp-<version>-macos.pkg. Then double-click on the file you downloaded and follow the installation instructions. You will be asked for your administrator password because the installer has to place files under /usr/local; note that pkg_comp requires root privileges anyway to run (because it uses chroot(8) internally), so you will have to grant permission at some point or another. The installer modifies the default PATH (by creating /etc/paths.d/pkg_comp) to include pkg_comp's own installation directory and pkgsrc's installation prefix. Restart your shell sessions to make this change effective, or update your own shell startup scripts accordingly if you don't use the standard ones. Lastly, make sure to have Xcode installed in the standard /Applications/Xcode.app location and that all components required to build command-line apps are available. Tip: try running cc from the command line and see if it prints its usage message.

Adjusting the configuration. The macOS flavor of pkg_comp is configured with an installation prefix of /usr/local, which means that the executable is located at /usr/local/sbin/pkg_comp and the configuration files are in /usr/local/etc/pkg_comp. This is intentional, to keep the pkg_comp installation separate from your pkgsrc installation so that it can run no matter what state your pkgsrc installation is in. The configuration files are as follows: /usr/local/etc/pkg_comp/default.conf: this is pkg_comp's own configuration file, and the defaults configured by the installer should be good to go for macOS. In particular, packages are configured to go into /opt/pkg instead of the traditional /usr/pkg. This is a necessity because the latter is not writable starting with OS X El Capitan, thanks to System Integrity Protection (SIP). /usr/local/etc/pkg_comp/sandbox.conf: this is the configuration file for sandboxctl, the support tool that pkg_comp uses to manage the compilation sandbox. The default settings configured by the installer should be good. /usr/local/etc/pkg_comp/extra.mk.conf: this is pkgsrc's own configuration file.
In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that is handled internally by pkg_comp as specified in default.conf. /usr/local/etc/pkg_comp/list.txt: this determines the set of packages you want to build automatically (either via the auto command or your periodic cron job). The automated builds will fail unless you list at least one package. Make sure to list pkgin here to install a better binary package management tool; you'll find this very handy to keep your installation up-to-date. Note that these configuration files use the /var/pkg_comp directory as the dumping ground for the pkgsrc tree, the downloaded distribution files, and the built binary packages. We will see references to this location later on.

The cron job. The installer configures a cron job that runs as root to invoke pkg_comp daily. The goal of this cron job is to keep your local packages repository up-to-date so that you can do binary upgrades at any time. You can edit the cron job configuration interactively by running sudo crontab -e. This cron job won't have an effect until you have populated the list.txt file as described above, so it is safe to leave it enabled until you have configured pkg_comp. If you want to disable the periodic builds, just remove the pkg_comp entry from the crontab. On slow machines, or if you are building a lot of packages, you may want to consider decreasing the build frequency from daily to weekly.

Sample configuration. Here is what the configuration looks like on my Mac Mini as dumped by the config subcommand. Use this output to get an idea of what to expect; I'll be using the values shown here in the rest of the tutorial.

Building your own packages by hand. Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job comes in. The simplest usage form, which involves full automation and assumes you have listed at least one package in list.txt, is a single invocation of the auto command. This trivially-looking command will: clone or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the /var/pkg_comp/packages directory. If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all the stages separately, bypassing auto? Go ahead and play with the individual stage commands. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details. Lastly, note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log.

Installing the resulting packages. Now that you have built your first set of packages, you will want to install them. This is easy on macOS because you did not use pkgsrc itself to install pkg_comp. First, unpack the pkgsrc installation; you only have to do this once. That's it: you can now install any packages you like. The commands above assume you have restarted your shell to pick up the correct path to the pkgsrc installation.
If the call to pkg_add fails because of a missing binary, try restarting your shell or explicitly running the binary as /opt/pkg/sbin/pkg_add.

Keeping your system up-to-date. Thanks to the cron job that builds your packages, your local repository under /var/pkg_comp/packages will always be up-to-date; you can use it to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin as recommended above (and why not?), configure your local repository, and from then on all it takes to upgrade your system is a single pkgin invocation.

February 22, 2017. At the obvious risk of this post getting downvoted and eventually closed as too biased/opinionated, I'd nevertheless ask this question. The NetBSD project's tagline is "of course it runs NetBSD". I understand that one of the main goals is to run on every possible piece of hardware out there (pages on the internet are full of possible hyperbole, such as "anything with a computing chip in it, even a toaster, shall run NetBSD"). However, if you examine the webpages of IoT hardware from the mid-2010s, there is poor visibility of NetBSD as the first choice of OS. E.g. on the Raspberry Pi, Raspbian is regarded as the go-to starter OS. Arduino's Wikipedia page says that it runs either Windows, macOS or Linux. Snappy Ubuntu Core and even Win10 IoT (gasp) are staking a claim as leading OSes in the IoT market. While I understand that the last two OSes mentioned above have corporate muscle-power behind them, even open-source job listings do not place much emphasis on NetBSD expertise. The question distills down to: why is NetBSD not considered a first-rate choice on this IoT hardware? This seems like an anti-pattern given the project's canonical goals.

All of a sudden (read: without changing any parameters) my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling. From my laptop I launch an ssh tunnel and then, in another shell, try to use it; the ssh debug output did not point me to a cause. I tried also with localhost:80 to connect to the (remote) web server, with identical results. The remote host runs NetBSD. I am a bit lost. I tried running tcpdump on the remote host, and I spotted a number of "bad chksum" packets. I tried restarting the ssh daemon to no avail. I haven't rebooted yet; perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh.

February 20, 2017. Introduction. I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical cdrom) on NetBSD. I did not think at the time it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this; other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh), which is a fully portable cross-build system. This build system has given us a head start in the reproducible builds work. I would also like to acknowledge the work done by the Debian folks who have provided a platform to run, test and analyze reproducible builds. Special mention to the diffoscope tool, which gives an excellent overview of what's different between binary files by finding out what they are (and, if they are containers, what they contain) and then running the appropriate formatter and diff program to show what's different for each file.
Finally, I would like to acknowledge the other developers who started, motivated and did a lot of the work getting us here, like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh.

Sources of difference. Here is what we found that we needed to fix, how we chose to fix it and why, and where we are now. There are many reasons why two separate builds from the same sources can be different. Here's an (incomplete) list:

- timestamps: Many things like to keep track of timestamps, especially archive formats (tar(1), ar(1)), filesystems, etc. The way to handle each is different, but the approach is to make them either produce files with a 0 timestamp (where it does not matter, as with ar), or with a specific timestamp when using 0 does not make sense (it is not useful to the user).
- dates/times/authors etc. embedded in source files: Some programs like to report the date/time they were built, the author, the system they were built on, etc. This can be done either by programmatically finding and creating source files containing that information during build time, or by using standard macros such as __DATE__, __TIME__, etc. Usually putting a constant time or eliding the information (as we do with kernels and bootblocks) solves the problem.
- timezone-sensitive code: Certain filesystem formats (iso9660 etc.) don't store raw timestamps but formatted times; to achieve this they convert from a timestamp to localtime, so they are affected by the timezone.
- directory order/build order: The build order is not constant, especially in the presence of parallel builds; neither is directory scan order. If those are used to create output files, the output files will need to be sorted so they become consistent.
- non-sanitized data stored into files: Writing data structures into raw files can lead to problems. Running the same program on different operating systems or using ASLR makes those issues more obvious.
- symbolic links/paths: Having paths embedded into binaries (specially for debugging information) can lead to binary differences. Propagation of the logical path can prove problematic.
- general tool inconsistencies: gcc(1) profiling uses a PROFILE_HOOK macro on RISC targets that utilizes the current function number to produce labels, and the processing order of functions is not guaranteed. gpt(8) creation involves uuid generation; these are generally random. Block allocation on msdos filesystems had a random component. makefs(8) uses timezones with timestamps (iso9660), randomness for block selection (msdos), and stores stray pointers in the superblock (ffs). Every program that is used to generate other output needs to have consistent results. In NetBSD this is done with build.sh, which builds a set of tools from known sources before it can use those tools to build the rest of the system. There is a large number of tools. There are also internal issues with the tools that make their output non-reproducible, such as nondeterministic symbol creation or capturing parts of the environment in debugging information.
- build information / tunables / environment: There are many environment settings, or build variable settings, that can affect the build. These need to be kept constant across builds, so we've changed the list of variables that are reported in Makefile.params.
making sure that the source tree has no local changes.

Variables controlling reproducible builds. Reproducible builds are controlled on NetBSD with two variables: MKREPRO (which can be set to "yes" or "no") and MKREPRO_TIMESTAMP, which is used to set the timestamp of the build's artifacts. This is usually set to the number of seconds from the epoch. The build.sh -P flag handles reproducible builds automatically: it sets the MKREPRO variable to "yes", then finds the latest source file timestamp in the tree and sets MKREPRO_TIMESTAMP to that.

Handling timestamps. The first thing that we needed to understand was how to deal with timestamps. Some of the timestamps are not very useful (for example inside random ar archives), so we chose to zero them out. Others, though, become annoying if they are all 0: what does it mean when you mount install media and all the dates on the files are Jan 1, 1970? We decided that a better timestamp would be that of the most recently modified file in the source tree. Unfortunately this was not easy to find on NetBSD, because we are still using CVS as the source control system, and CVS does not have a good way to provide it. For that we wrote a tool called cvslatest that scans the CVS metadata files (CVS/Entries) and finds the latest commit. This works well for freshly checked out trees (since CVS uses the source timestamp when checking out), but not with updated trees (because CVS uses the current time when updating files, so that make(1) thinks they've been modified). To fix that, we've added a new flag to the cvs(1) update command, -t, that uses the source checkout time. The build system now needs to evaluate the tree for the latest file by running cvslatest(1) and find the latest timestamp in seconds from the Epoch, which is set in the MKREPRO_TIMESTAMP variable. This is the same as SOURCE_DATE_EPOCH. Various Makefiles use this variable and MKREPRO to determine how to produce consistent build artifacts. For example, many commands (tar(1), makefs(8), gpt(8), ...) have been modified to take a --timestamp or -T command line switch to generate output files that use the given timestamp instead of the current time. Other software (am-utils, acpica, bootblocks, kernel) used __DATE__ or __TIME__, or captured the user, machine, etc. from the environment, and had to be changed to a constant time, user, machine, etc. roff(7) documents that used the td macro to generate the date of formatting in the document have been changed to use the macro conditionally, based on register R (for example as in intro.me), and then the Makefile was changed to set that register for MKREPRO.

Handling order. We don't control the build order of things, and we also don't control the directory order, which can be filesystem dependent. The collation order is also environment specific, and sorting needs to be stable (we have not encountered that problem yet). Two different programs caused us problems here: file(1), with the generation of the compiled magic file using directory order (fixed by changing file(1)), and install-info(1)/texinfo(5) files, which have no specific order; for those we developed another tool called sortinfo(1) that sorts those files as a post-processing step. Fortunately the filesystem builders and tar programs usually work with input directories that appear to have a consistent order so far, so we did not have to fix things there.

Permissions. NetBSD already keeps permissions for most things consistent in different ways: the build system uses install(8) and specifies ownership and mode, and
the mtree(8) program creates build artifacts using consistent ownership and permissions. Nevertheless, the various architecture-specific distribution media installers used cp(1) and mkdir(1) and needed to be corrected.

Most of the issues found had to do with capturing the environment in debugging information. The two biggest issues were DW_AT_producer and DW_AT_comp_dir. Here you see two changes we made for reproducible builds: we chose to allow variable names (and have gcc(1) expand them) for the source of the prefix map, because the source tree location can vary; others have chosen to omit -fdebug-prefix-map from the variables to be listed. We added -fdebug-regex-map so that we could handle the NetBSD-specific objdir build functionality; object directories can have many flavors in NetBSD, so it was difficult to use -fdebug-prefix-map to capture that. DW_AT_comp_dir presented a different challenge. We got non-reproducibility when building on paths where either the source or the object directories contained symbolic links. Although gcc(1) does the right thing handling logical paths (it respects PWD), we found that there were problems both in the NetBSD sh(1) (fixed here) and in the NetBSD make(1) (fixed here). Unfortunately we can't depend on the shell to obey the logical path, so we decided to go with: This works because make(1) is a tool (part of the toolchain we provide) whereas sh(1) is not. Another weird issue popped up on sparc64, where a single file in the whole source tree does not build reproducibly. This file is asn1_krb5_asn1.c, which is generated here. The problem is that when profiling on RISC machines gcc uses the PROFILE_HOOK macro, which in turn uses the function number to generate labels. This number is assigned to each function in a source file as it is being compiled. Unfortunately this number is not deterministic because of optimization (a bug), but fortunately turning optimization off fixes the problem.

Status and future work. As of 2017-02-20 we have fully reproducible builds on amd64 and sparc64. We are planning to work on the following areas: vary more parameters on the system build (filesystem types, build OSes); verify that cross building is reproducible; verify that unprivileged builds work; test on all the platforms.

February 19, 2017

At the second annual PillarCon, I facilitated a workshop called "Fundamentals of C and Embedded using Mob Programming". On a Mac, we test-drove toggling a Raspberry Pi's onboard LED. Before and after. Before: ACT LED off. Here are the takeaways we wrote down: Could test the return type of main(). Why wasn't num_calls 0 to begin with? Maybe provide the mocks in advance (maybe use CMock). Fun idea: fake GPIO device. Vim tricks: cool, but maybe use an easier editor for the target audience. Appropriate amount of effort; need a bigger payoff. Mob programming supported the learning process/objective. My own thoughts for next time I do this material: Try: providing the mocks in the starting state. Keep: providing a multi-target Makefile and a prebuilt cross compiler. Try: using a more discoverable (e.g. non-modal) text editor. Keep: being prepared with a test list. Try: providing already-written test cases to uncomment one at a time (one of the aspects of James Grenning's training course I especially loved). Keep: being prepared with corners to cut if time gets short. Try: knowing more of the mistakes we might make when cutting corners. Keep: mobbing. Participants who already knew some of this stuff liked the mobbing (new to some of them) and appreciated how I structured the material to unfold.
Participants who were new to C and/or embedded (my target audience) came away feeling that they needn't be intimidated by it, and that programming in this context can be as fun and feedbacky as they're accustomed to. Play along at home: then follow the steps outlined in the README. Further learning: you're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it. Or if you'd like me to come facilitate it for your company, meetup group, etc., let's talk.

February 18, 2017

This is a tutorial to guide you through the shiny new pkg_comp 2.0 on NetBSD. Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to keep your NetBSD system up-to-date and secure. This tutorial is specifically targeted at NetBSD but should work on other platforms with some small changes. Expect, at the very least, a macOS-specific tutorial as soon as I create a pkg_comp standalone installer for that platform.

Getting started. First install the sysutils/sysbuild-user package and trigger a full build of NetBSD so that you get usable release sets for pkg_comp. See sysbuild(1) and pkg_info sysbuild-user for details on how to do so. Alternatively, download release sets from the FTP site and later tell pkg_comp where they are. Then install the pkgtools/pkg_comp-cron package. The rest of this tutorial assumes you have done so.

Adjusting the configuration. To use pkg_comp for periodic builds, you'll need to make some minimal edits to the default configuration files. The files can be found directly under /var/pkg_comp/, which is pkg_comp-cron's home:

/var/pkg_comp/pkg_comp.conf: This is pkg_comp's own configuration file, and the defaults installed by pkg_comp-cron should be good to go. The contents here are divided into three major sections: a declaration of how to download pkgsrc, the definition of the file system layout on the host machine, and the definition of the file system layout for the built packages. You may want to customize the target system paths, such as LOCALBASE or SYSCONFDIR, but you should not have to customize the host system paths.

/var/pkg_comp/sandbox.conf: This is the configuration file for sandboxctl. The default settings installed by pkg_comp-cron should suffice if you used the sysutils/sysbuild-user package as recommended; otherwise tweak the NETBSD_NATIVE_RELEASEDIR and NETBSD_SETS_RELEASEDIR variables to point to where the downloaded release sets are.

/var/pkg_comp/extra.mk.conf: This is pkgsrc's own configuration file. In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE), because that's handled internally by pkg_comp as specified in pkg_comp.conf.

/var/pkg_comp/list.txt: This determines the set of packages you want to build in your periodic cron job. The builds will fail unless you list at least one package. WARNING: Make sure to include pkg_comp-cron and pkgin in this list so that your binary kit includes these essential package management tools. Otherwise you'll have to deal with some minor annoyances after rebootstrapping your system.

Lastly, review root's crontab to ensure the job specification for pkg_comp is sane. On slow machines, or if you are building many packages, you will probably want to decrease the build frequency from daily to weekly.
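To make that concrete - purely as an illustration, since the exact entry installed by pkg_comp-cron may differ and the script path below is a placeholder - a weekly variant of such a crontab job could look like this:

    # Run the periodic pkg_comp build early every Sunday morning instead of daily.
    # The path to the build script is hypothetical; check root's crontab for the
    # real one installed by pkg_comp-cron.
    0 3 * * 0    /usr/pkg/libexec/pkg_comp-cron/build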
Sample configuration. Here is what the configuration looks like on my NetBSD development machine, as dumped by the config subcommand. Use this output to get an idea of what to expect. I'll be using the values shown here in the rest of the tutorial:

Building your own packages by hand. Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job comes in. The simplest usage form, which involves full automation, is something like this: This trivial-looking command will: check out or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the directory you set in PACKAGES, which in the default pkg_comp-cron installation is /var/pkg_comp/packages. If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to: Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details. Lastly, note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log.

Installing the resulting packages. Now that you have built your first set of packages, you will want to install them. On NetBSD, the default pkg_comp-cron configuration produces a set of packages for /usr/pkg, so you have to wipe your existing packages first to avoid build mismatches. WARNING: Yes, you really have to wipe your packages. pkg_comp currently does not recognize the package tools that ship with the NetBSD base system (i.e. it bootstraps pkgsrc unconditionally, including bmake), which means that the newly-built packages won't be compatible with the ones you already have. Avoid any trouble by starting afresh. To clean your system, do something like this: Now, rebootstrap pkgsrc and reinstall any packages you previously had: Finally, reconfigure any packages where you had previously made custom edits. Use the backup in /root/etc.old to properly update the corresponding files in /etc. I doubt you made a ton of edits, so this should be easy. IMPORTANT: Note that the last command in this example includes pkgin and pkg_comp-cron. You should install these first to ensure you can continue with the next steps in this tutorial.

Keeping your system up-to-date. If you paid attention when you installed the pkg_comp-cron package, you should have noticed that it configured a cron job to run pkg_comp daily. This means that your package repository under /var/pkg_comp/packages will always be up-to-date, so you can use it to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin (and why not?), configure your local repository: And, from now on, all it takes to upgrade your system is:

Lots of storage this week.

February 17, 2017

After many (many) years in the making, pkg_comp 2.0 and its companion sandboxctl 1.0 are finally here! Read below for more details on this launch. I will publish detailed step-by-step tutorials on setting up periodic package rebuilds in separate posts.
What are these tools? pkg_comp is an automation tool to build pkgsrc binary packages inside a chroot-based sandbox. The main goal is to fully automate the process and to produce clean and reproducible packages. A secondary goal is to support building binary packages for a different system than the one doing the builds: e.g. building packages for NetBSD/i386 6.0 from a NetBSD/amd64 7.0 host. The highlights of pkg_comp 2.0, compared to the 1.x series, are: multi-platform support, including NetBSD, FreeBSD, Linux, and macOS; use of pbulk for efficient builds; management of the pkgsrc tree itself via CVS or Git; and a more robust and modern codebase. sandboxctl is an automation tool to create and manage chroot-based sandboxes on a variety of operating systems. sandboxctl is the backing tool behind pkg_comp. sandboxctl hides the details of creating a functional chroot sandbox on all supported operating systems; in some cases, like building a NetBSD sandbox using release sets, things are easy, but in others, like on macOS, they are horrifyingly difficult and brittle.

Storytelling time. pkg_comp's history is a long one. pkg_comp 1.0 first appeared in pkgsrc on September 6th, 2002 as the pkgtools/pkg_comp package. As of this writing, the 1.x series is at version 1.38 and has received contributions from a bunch of pkgsrc developers and external users; even more, the tool was featured in the BSD Hacks book back in 2004. This is a long time for a shell script to survive in its rudimentary original form: pkg_comp 1.x is now a teenager at 14 years of age and is possibly one of my longest-living pieces of software still in use.

Motivation for the 2.x rewrite. For many of these years, I have been wanting to rewrite pkg_comp to support other operating systems. This all started when I first got a Mac in 2005, at which time pkgsrc already supported Darwin but there was no easy mechanism to manage package updates. What would happen - and still happens to this day - is that, once in a while, I'd realize that my packages were out of date (read: insecure), so I'd wipe the whole pkgsrc installation and start from scratch. Very inconvenient; I had to automate that properly. Thus the main motivation behind the rewrite was primarily to support macOS, because this was, and still is, my primary development platform. The secondary motivation came after writing sysbuild in 2012, which trivially configured daily builds of the NetBSD base system from cron; I wanted the exact same thing for my packages.

One, two... no, three rewrites. The first rewrite attempt was sometime in 2006, soon after I learned Haskell in school. Why Haskell? Just because that was the new hotness in my mind, and it seemed like a robust language to drive a pretty tricky automation process. That rewrite did not go very far, and that's possibly for the better: relying on Haskell would have decreased the portability of the tool, made it hard to install, and been guaranteed to alienate contributors. The second rewrite attempt started sometime in 2010, about a year after I joined Google as an SRE. This was after I became quite familiar with Python at work and wanted to use that language to rewrite this tool. That experiment didn't go very far either, though I can't remember why - probably because I was busy enough at work and creating Kyua. The third and final rewrite attempt started in 2013, while I had a summer intern and I had a little existential crisis. The year before I had written sysbuild and shtk,
so I figured recreating pkg_comp using the foundations laid out by these tools would be easy. And it was, to some extent. Getting the bare bones of a functional tool took only a few weeks, but that code was far from being stable, portable, and publishable. Life and work happened, so this fell through the cracks until late last year, when I decided it was time to close this chapter so I could move on to some other project ideas. To create the focus and free time required to complete this project, I had to shift my schedule to start the day at 5am instead of 7am - and, many weeks later, the code is finally here and I'm still keeping up with this schedule. Granted: this third rewrite is not a fancy one, but it wasn't meant to be. pkg_comp 2.0 is still written in shell, just as 1.x was, but this is a good thing because bootstrapping on all supported platforms is easy. I have to confess that I also considered Go recently, after playing with it last year, but I quickly let go of that thought: at some point I had to ship the 2.0 release, and 10 years since the inception of this rewrite was about time.

The launch of 2.0. On February 12th, 2017, the authoritative sources of pkg_comp 1.x were moved from pkgtools/pkg_comp to pkgtools/pkg_comp1 to make room for the import of 2.0. Yes, the 1.x series only existed in pkgsrc, while the 2.x series exists as a standalone project on GitHub. And here we are. Today, February 17th, 2017, pkg_comp 2.0 saw the light!

Why sandboxctl as a separate tool? sandboxctl is the supporting tool behind pkg_comp, taking care of all the logic involved in creating chroot-based sandboxes on a variety of operating systems. Some are easy, like building a NetBSD sandbox using release sets, and others are horrifyingly difficult, like macOS. In pkg_comp 1.x, this logic used to be bundled right into the pkg_comp code, which made it pretty much impossible to generalize for portability. With pkg_comp 2.x, I decided to split this out into a separate tool to keep responsibilities isolated. Yes, the integration between the two tools is a bit tricky, but it allows for better testability and understandability. Lastly, having sandboxctl as a standalone tool, instead of just a separate code module, gives you the option of using it for your own sandboxing needs. I know, I know: the world has moved on to containerization and virtual machines, leaving chroot-based sandboxes as a very rudimentary thing, but that's all we've got in NetBSD, and pkg_comp targets primarily NetBSD. Note, though, that because pkg_comp is separate from sandboxctl, there is nothing preventing the addition of different sandboxing backends to pkg_comp.

Installation. Installation is still a bit convoluted unless you are on one of the tier 1 NetBSD platforms or you already have pkgsrc up and running. For macOS in particular, I plan on creating and shipping an installer image that includes all of pkg_comp's dependencies - but I did not want to block the first launch on this. For now, though, you need to download and install the latest source releases of shtk, sandboxctl, and pkg_comp, in this order; pass the --with-atf=no flag to the configure scripts to cut down the required dependencies. On macOS, you will also need OSXFUSE and the bindfs file system. If you are already using pkgsrc, you can install the pkgtools/pkg_comp package to get the basic tool and its dependencies in place, or you can install the wrapper pkgtools/pkg_comp-cron package to create a pre-configured environment with a daily cron job to run your builds.
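For the source-release route, the sketch below captures the idea; the tarball version numbers are placeholders (check the GitHub release pages for the real ones), and the only flag taken from the text above is --with-atf=no:

    # Build and install shtk, sandboxctl and pkg_comp from their source releases,
    # in dependency order, skipping the ATF test dependencies as suggested above.
    for pkg in shtk-X.Y sandboxctl-X.Y pkg_comp-X.Y; do    # X.Y: placeholder versions
        tar xzf "${pkg}.tar.gz"
        ( cd "${pkg}" && ./configure --with-atf=no && make && make install )
    done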
See the packages MESSAGE (with pkginfo pkgcomp-cron ) for more details. Documentation Both pkgcomp and sandboxctl are fully documented in manual pages. See pkgcomp(8). sandboxctl(8). pkgcomp. conf(5) and sandbox. conf(5) for plenty of additional details. As mentioned at the beginning of the post, I plan on publishing one or more tutorials explaining how to bootstrap your pkgsrc installation using pkgcomp on, at least, NetBSD and macOS. Stay tuned. And, if you need support or find anything wrong, please let me know by filing bugs in the corresponding GitHub projects: jmmvpkgcomp and jmmvsandboxctl . February 16, 2017 I claim an IPv6 address using ifconfig in a script. This address is then immediately used to listen on a TCP port. When I write the script like this, it fails because the service is unable to listen: However, it succeeds when I do it like this: I tried writing the output of ifconfig directly after running the add - operation. It appears that ifconfig reports the IP-address as being tentative . which apparently prevents a service from listening on it. Naturally, waiting exactly one second and hoping that the address has become available is not a very good way to handle this. How can I wait for a tentative address to become available, or make ifconfig return later so that the address is all set up I finally registered, have been reading the forum for years. Ill simply copy this from LQ. Already have written to a couple of lists (including netbsd-users) but without results. Running 7.0.2 with out of the box kernel. All my GTK2 apps segfault on keyboard input. lxappearance for example, when looking for a theme you can start pressing keys and it will search. But in my case it dumps core with usrliblibpthread. so.1 . usrliblibc. so.12 and usrpkgliblibXcursor. so.1 . The same thing happens when typing something into a GTK2 text editor, leafpad, or looking for something in CtrlO window in firefox or gimp or any other programme. gimp cant even run inside gdb because of: Program received signal SIGTRAP, Tracebreakpoint trap. 0x00007f7fea49f6aa in lwppark60 () from usrliblibc. so.12 (gdb) bt 0 0x00007f7fea49f6aa in lwppark60 () from usrliblibc. so.12 1 0x00007f7fec808f2b in pthreadcondtimedwait () from usrliblibpthread. so.1 2 0x00007f7feb880b80 in gcondwait () from usrpkgliblibglib-2.0.so.0 3 0x00007f7feb81d7cd in gasyncqueuepopinternunlocked () from usrpkgliblibglib-2.0.so.0 4 0x00007f7feb86742f in gthreadpoolthreadproxy () from usrpkgliblibglib-2.0.so.0 5 0x00007f7feb866a7d in gthreadproxy () from usrpkgliblibglib-2.0.so.0 6 0x00007f7fec80a9cc in. () from usrliblibpthread. so.1 7 0x00007f7fea483de0 in. () from usrliblibc. so.12 8 0x0000000000000000 in. () Firefox also has problems in libc. so.12 and libpthread. so.1 but doesnt say about lwppark60. It also cant run inside gdb. lxappearance also dumps core when clicking Apply after changing something (themes, cursor or icon themes, fonts etc.) with another output: 0 0x00007f7fefcb27ba in. () from usrliblibc. so.12 1 0x00007f7fefcb2bc7 in malloc () from usrliblibc. 
so.12 2 0x00007f7ff1849782 in gmalloc () from usrpkgliblibglib-2.0.so.0 3 0x00007f7ff185ef1c in gmemdup () from usrpkgliblibglib-2.0.so.0 4 0x00007f7ff18356b8 in ghashtableinsertnode () from usrpkgliblibglib-2.0.so.0 5 0x00007f7ff1835823 in ghashtableinsertinternal () from usrpkgliblibglib-2.0.so.0 6 0x00007f7ff183ccb1 in gkeyfileflushparsebuffer () from usrpkgliblibglib-2.0.so.0 7 0x00007f7ff183cf62 in gkeyfileparsedata () from usrpkgliblibglib-2.0.so.0 8 0x00007f7ff183d0e1 in gkeyfileloadfromfd () from usrpkgliblibglib-2.0.so.0 9 0x00007f7ff183d99e in gkeyfileloadfromfile () from usrpkgliblibglib-2.0.so.0 10 0x0000000000405532 in start () Apart from these programmes I receive SIGILL in mplayer when trying to play videos. Backtrace doesnt tell anything useful. sxiv, an image viewer, segfaults with this: 0 0x00007f7ff64b209f in. () from usrliblibc. so.12 1 0x00007f7ff64b3983 in free () from usrliblibc. so.12 2 0x000000000040729c in removefile () 3 0x0000000000409a92 in main () Previously, if built from local pkgsrc tree it worked but now stopped working at all at all. mpg321 dumps core and says Memory fault with this backtrace: 0 0x00007f7ff78068b1 in sempost () from usrliblibpthread. so.1 1 0x000000000040afe0 in. () 2 0x0000000000403695 in. () 3 0x00007f7ff7ffa000 in. () 4 0x0000000000000002 in. () 5 0x00007f7ffffffdb0 in. () 6 0x00007f7ffffffdb7 in. () 7 0x0000000000000000 in. () I did memtests, once for four hours (two passes) and once for eight hours (eight passes). I did Dells ePSA tests (diagnostic utility accessed from BIOS), it has own memtest, apart from monitoring the hard drive, the power supply, the keyboard, the fans, the CPU all of them returned no errors. I rebuilt gtk2 with debug symbols but it changed nothing. On LQ it was suggested that I have hardware problems, but I am not convinced. Every programme described above worked inside Ubuntu LiveUSB and Void Linux LiveUSB on the same machine (picked because they have different libcs). Before I had NetBSD with X11 a couple of months ago (and earlier) and I didnt have these errors. In the Interwebs I found similar messages on Arch forum and Launchpad. Is there a need for a 24 hour memtest Should I just remove each of the two memory modules and try Is it hardware related after all Thanks everyone for any kind of help. February 14, 2017 The LLVM project is a quickly moving target, this also applies to the LLVM debugger -- LLDB. Its actively used in several first-class operating systems, while - thanks to my spare time dedication - NetBSD joined the LLDB club in 2014, only lately the native support has been substantially improved and the feature set is quickly approaching the support level of Linux and FreeBSD. During this work 12 patches were committed to upstream, 12 patches were submitted to review, 11 new ATF were tests added, 2 NetBSD bugs filed and several dozens of commits were introduced in pkgsrc-wip, reducing the local patch set to mostly Native Process Plugin for NetBSD. What has been done in NetBSD 1. Triagged issues of ptrace(2) in the DTraceNetBSD support Chuck Silvers works on improving DTrace in NetBSD and he has detected an issue when tracer signals are being ignored in libproc . The libproc library is a compatibility layer for DTrace simulating proc capabilities on the SunOS family of systems. Ive verified that the current behavior of signal routing is incorrect. The NetBSD kernel correctly masks signals emitted by a tracee, not routing them to its tracer. 
On the other hand, the masking rules in the inferior process blacklist signals generated by the kernel, which is incorrect and turns a debugger into a deaf listener. This is the case for libproc: signals were masked, so software breakpoints triggering INT3 on i386/amd64 CPUs and SIGTRAP with the TRAP_BRKPT si_code weren't passed to the tracer. This isn't limited to turning a debugger into a deaf listener; regular execution of software breakpoints also requires rewinding the program counter register by a single instruction, removing the trap instruction and restoring the original instruction. When an instruction isn't restored, further code execution is affected pretty randomly, which resulted in execution anomalies and breakage of the tracee. A workaround for this is to disable signal masking in the tracee. Another change inspired by the DTrace code is to enhance PT_SYSCALL handling by introducing a way to distinguish syscall entry and syscall exit events. I'm planning to add dedicated si_codes for these scenarios. While there, there are users requesting PT_STEP and PT_SYSCALL tracing at the same time in an efficient way, without involving heuristics. I've filed the mentioned bug. I've added new ATF tests:

Verify that masking a single unrelated signal does not stop the tracer from catching other signals
Verify that masking SIGTRAP in the tracee stops the tracer from catching this raised signal
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching software breakpoints
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a single-step trap
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching an exec() breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching the PTRACE_FORK breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching the PTRACE_VFORK breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching the PTRACE_VFORK_DONE breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching the PTRACE_LWP_CREATE breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching the PTRACE_LWP_EXIT breakpoint

2. ELF Auxiliary Vectors

The ELF file format permits transferring additional information for a process in a dedicated container of properties, named the ELF Auxiliary Vector. Every system has its own dedicated way for a debugger to read this information from a tracee. The NetBSD approach is to transfer this vector with the ptrace(2) API PIOD_READ_AUXV. Our interface shares the API with OpenBSD. I filed a bug that our interface returns a vector size of 8496 bytes, while OpenBSD has a constant 64 bytes. It was diagnosed and fixed by Christos Zoulas: we were incorrectly counting bits and bytes, and this enlarged the data stream. The bug was harmless and had no known side effects besides a large chunk of zeroed data. There is also a prepared local patch extending NetBSD platform support to read the information in this vector; it's primarily required for correct handling of PIE binaries. At the moment there is no interface similar to GDB's "info auxv". Unfortunately, at the current stage, this code is still unused by NetBSD. I will return to it once the Native Process Plugin is enhanced. I've filed the mentioned bug. I've added a new ATF test: Verify PIOD_READ_AUXV called for a tracee.

What has been done in LLDB

1. Resolving executables' names with sysctl(7)

In the past, the way to retrieve a specified process's executable path name was to use the Linux-compatible feature in procfs (/proc).
The canonical solution on Linux is to resolve the path of /proc/PID/exe. Christos Zoulas added, in the DTrace port enhancements, a solution similar to FreeBSD's that retrieves this property with sysctl(7). This new approach removes the dependency on /proc being mounted and on the Linux compatibility functionality. Support for this has been submitted to LLDB and merged upstream:

2. Real-Time Signals

A key feature of the POSIX standard, along with Asynchronous I/O, is support for Real-Time Signals. One of their use cases is in debugging facilities. Support for this set of signals was developed during Google Summer of Code 2016 by Charles Cui and was reviewed and committed by Christos Zoulas. I've extended the LLDB capabilities for NetBSD to recognize these signals in the NetBSDSignals class. Support for this has been submitted to LLDB and merged upstream:

3. Conflict removal with system-wide six.py

The transition from Python 2.x to 3.x is still ongoing and will take a while. The current support deadline for the 2.x generation has been extended to 2020. One of the ways to keep both generations supported in the same source code is to use the six.py library (py2 x py3 = 6.py). It abstracts commonly used constructs to support both language families. The issue for packaging LLDB in NetBSD was that it installs this tiny library unconditionally to a system-wide location. There were several possible solutions: drop Python 2.x support, install six.py into a subdirectory, or make the installation of six.py conditional. The first solution would turn the discussion into a flamewar; the second one happened to be too difficult to implement properly, as the changes were invasive and Python is used in several places of the code base (tests, bindings, ...). The final solution was to introduce a new CMake option, LLDB_USE_SYSTEM_SIX, disabled by default to retain the current behavior. To properly implement LLDB_USE_SYSTEM_SIX, I had to dig into installation scripts combining CMake and Python files. It wasn't helping that the Python scripts were reinventing getopt(3) functionality, and I had to alter them in order to introduce a new option, --useSystemSix. Support for this has been submitted to LLDB and merged upstream:

4. Do not pass non-POD type variables through variadic functions

There was a long-standing local patch in pkgsrc, added by Tobias Nygren and detected with Clang. According to the C++11 standard (5.2.2/7): "Passing a potentially-evaluated argument of class type having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics." A short example triggering a similar warning was presented by Joerg Sonnenberger: This code compiled against libc++ gives: Support for this has been submitted to LLDB and merged upstream:

5. Add NetBSD support in Host::GetCurrentThreadID

Linux has a very specific thread model, where a process is mostly equivalent to a native thread and to a POSIX thread - it's completely different on other mainstream general-purpose systems. That said, the fallback support that translated pthread_t on NetBSD to retrieve the native integer identifier was incorrect. The proper NetBSD function to retrieve the lightweight process identifier is _lwp_self(2). Support for this has been submitted to LLDB and merged upstream:

6. Synchronize PlatformNetBSD with Linux

The old PlatformNetBSD code was based on the FreeBSD version.
While the current FreeBSD one is still similar to the one from a year ago, it's inappropriate for a remote process plugin approach. This forced me to base the refreshed code on Linux. After realizing that the PlatformPlugin on POSIX platforms suffers from code duplication, Pavel Labath helped out to eliminate common functions shared by other systems. This resulted in a shorter patch synchronizing PlatformNetBSD with Linux; this step opened room for FreeBSD to catch up. Support for this has been submitted to LLDB and merged upstream:

7. Transform ProcessLauncherLinux to ProcessLauncherPosixFork

It is UNIX-specific that signal handlers are global per application. This introduces issues with wait(2)-like functions called in tracers, as these functions tend to conflict with real-life libraries, notably GUI toolkits (where SIGCHLD events are handled). The current best approach to this limitation is to spawn a forkee and establish a remote connection over the GDB protocol with a debugger frontend. ProcessLauncherLinux was prepared with this design in mind, and I have added support for NetBSD. Once FreeBSD catches up, they might reuse the same code. Support for this has been submitted to LLDB and merged upstream: reviews.llvm.org/D29347 - "Add ProcessLauncherNetBSD to spawn a tracee", renamed to "Transform ProcessLauncherLinux to ProcessLauncherPosixFork", committed in r293768.

8. Document that LaunchProcessPosixSpawn is used on NetBSD

Host::GetPosixspawnFlags was built for most POSIX platforms - however, only Apple, Linux, FreeBSD and other GLIBC-based ones (I assume Debian kFreeBSD to be GLIBC-like) were documented. I've added NetBSD to this list. Support for this has been submitted to LLDB and merged upstream: "Document that LaunchProcessPosixSpawn is used on NetBSD", committed in r293770.

9. Switch std::call_once to llvm::call_once

There is a long-standing bug in libstdc++ on several platforms whereby std::call_once is broken for cryptic reasons. This motivated me to follow the approach from LLVM and replace it with the homegrown fallback implementation llvm::call_once. This change wasn't as simple as it looked at first sight, as the original LLVM version used different semantics that disallowed a straight definition of a non-static once_flag. Thanks to cooperation with upstream, the proper solution was coined, and LLDB now works without known regressions on libstdc++ out of the box. Support for this has been submitted to LLVM and LLDB and merged upstream:

10. Other enhancements

I had planned to push more code in this milestone besides the tasks mentioned above. Unfortunately not everything was testable at this stage. Among the rescheduled projects: in the NetBSD platform code, conflict removal in GetThreadName/SetThreadName between pthread_t and lwpid_t (it looks like another bite from the Linux thread model; a proper solution requires pushing forward the Process Plugin for NetBSD); in Host::LaunchProcessPosixSpawn, properly setting ::posix_spawnattr_setsigdefault on NetBSD - currently untestable; and fixing false positives - premature before adding more functions to the NetBSD Native Process Plugin. On the other hand, I've fixed a build issue in one test on NetBSD:

Plan for the next milestone

I've listed the following goals for the next milestone:
mark exect(3) obsolete in libc
remove libpthread_dbg(3) from the base distribution
add new APIs in ptrace(2): PT_SET_SIGMASK and PT_GET_SIGMASK
add new APIs in ptrace(2) to resume and suspend a specific thread
finish the switch of the PT_WATCHPOINT API in ptrace(2) to PT_GETDBREGS and PT_SETDBREGS
validate proper support of the new interfaces on i386, amd64 and Xen
upstream to LLDB the accessors for debug registers on NetBSD/amd64
validate PT_SYSCALL and add functionality to detect and distinguish syscall-entry and syscall-exit events
validate accessors for general-purpose and floating-point registers

Post mortem. FreeBSD is catching up after the NetBSD changes, e.g. with the following commit: This move allows further reduction of code duplication. There is still a lot of room for improvement. Another benefit for other software distributions is that they can now appropriately resolve the six.py conflict without local patches. These examples clearly show that streamlining the NetBSD code results in improved support for other systems and creates a cleaner environment for introducing new platforms. A purely NetBSD-oriented gain is the improvement of system interfaces in terms of quality and functionality, especially since DTrace/NetBSD is a quick adopter of new interfaces, and indirectly a sandbox to sort out bugs in ptrace(2). The tasks in the next milestone will bring NetBSD's ptrace(2) on par with Linux and FreeBSD, this time with marginal differences. To render it more clearly: NetBSD will have more interfaces in read/write mode than FreeBSD has (and be closer to Linux here); on the other hand, not as many properties will be available in the thread-specific field, under the PT_LWPINFO operation, for the thread that caused suspension of the process. Another difference is that FreeBSD allows tracing only one type of syscall event: on entry or on exit. At the moment this is not needed by existing software, although it's on the long-term wishlist in the GDB project for Linux. It turned out that I was overly optimistic about the feature set in ptrace(2): while the basic interfaces from the first milestone were enough to implement basic support in LLDB, it would have required major work on heuristics, as modern tracers no longer want to guess what might have happened in the code and what the source of a signal interruption was. This was the final motivation to streamline the interfaces for monitoring capabilities, and now I'm adding the remaining interfaces as they are also needed - if not readily in LLDB, there is DTrace and other software waiting for them now. Somehow I suspect that I will need them in LLDB sooner than expected. This work was sponsored by The NetBSD Foundation. The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue to fund projects and services for the open-source community. Please consider visiting the following URL, and chip in what you can:

February 09, 2017

We became tired of waiting. File Info: 7Min, 3MB. Ogg Link: archive.org/download/bsdtalk266/bsdtalk266.ogg

February 08, 2017

Background: I am using a sparc64 Sun Blade 2500 (silver) as a desktop machine - for my pretty light desktop needs. Besides the usual developer tools (editors, compilers, subversion, hg, git) and admin stuff (all text based), I need mpg123 and mserv for music queues, Gimp for image manipulation, and of course Firefox. Recently I updated all my installed pkgs to pkgsrc-current and, as usual, the new Firefox version failed to build.
Fortunately the issues were minor, as they all had been handled upstream for Firefox 52 already, all I needed to do was back-porting a few fixes. This made the pkg build, but after a few minutes of test browsing, it crashed. Not surprisingly this was reproducible, any web site trying to play audio triggered it. A bit surprising though: the same happened on an amd64 machine I tried next. After a bit digging the bug was easy to fix, and upstream already took the fix and committed it to the libcubeb repository. So I am now happily editing this post using Firefox 51 on the Blade 2500. I saw one crash in two days of browsing, but unfortunately could not (yet) reproduce it (I have gdb attached now). There will be future pkg updates certainly. Future Obstacles You may have read elsewhere that Firefox will start to require a working Rust compiler to build. This is a bit unfortunate, as Rust (while academically interesting) is right now not a very good implementation language if you care about portability. The only available compiler requires a working LLVM back end, which we are still debugging. Our auto-builds produce sparc sets with LLVM, but the result is not fully working (due to what we believe being code gen bugs in LLVM). It seems we need to fix this soon (which would be good anyway, independent of the Rust issue). Besides the back end, only very recently traces of sparc64 support popped up in Rust. However, we still have a few firefox versions time to get it all going. I am optimistic. Another upcoming change is that Cairo (currently used as 2D graphics back end, at least on sparc64) will be phased out and Skia will be the only supported software rendering target. Unfortunately Skia does (as of now) not support any big endian machine at all. I am looking for help getting Skia to work on big endian hardware in general, and sparc64 in particular. Alternatives Just in case, I tested a few other browsers and (so far) they all failed: NetSurf Nice, small, has a few tweaks and does not yet support JavaScript good enough for many sites MidoriThey call it lightweight but it is based on WebKit, which alone is a few times more heavy than all of Firefox. It crashes immediately at startup on sparc64 (I am investigating, but with low priority - actually I had to replace the hard disk in my machine to make enough room for the debug object files for WebKit - it takes So, while it is a bit of a struggle to keep a modern browser working on my favorite odd-ball architecture, it seems we will get at least to the Firefox 52 ESR release, and that should give us enough time to get Rust working and hopefully continue with Firefox. February 07, 2017 So finally Ive moved all services from my old server to my Christmas Xen box. This was not without problems due to the fact it had to run NetBSD - current gcc toolchain is broken for some packages which affected running any PHP build clang toolchain was broken for my config (USESSP yes and . February 04, 2017 Note the end this week of pc98, the most focused of niche platforms. January 31, 2017 What has been done in NetBSD What has been done in LLDB Plan for the next milestone Accidental theme this week: books. What are the techniques generally people follow to dump full core dump if the size of core dump is more than the RAM and flash. Say, kernel core is of 2GB size but we have exactly 2GB of RAM and 1GB of disk space. I am aware external USB and tftp options. But, reliability and stability matters when we choose these options. 
How do embedded people handle these type of issues and what are the techniques available Platform: NetBSD, ARM7 January 18, 2017 Previously This is the sixth in a series of Nifty and Minimally Invasive qmail Tricks, following Losing services (and eventually restoring them) When my Mac mini s hard drive died in the Great Crash of Fall 2008. taking this website and my email offline with it, I was already going through a rough time, and my mental bandwidth was extremely limited. I expended some of it explaining to friends what they could do about their hosted domains until such time as my brain became available again (as I assumed andor hoped it eventually would). I expended a bit more asking a friend to do a small thing to keep my email flowing somewhere I could get it. And then I was spent. The years where I used Gmail and had no website felt like years in the wilderness. That feeling could mostly have been about how I missed the habit of reflecting about my life now and again, writing about it, and sharing. But when the website returned four years ago (in order to remember Aaron Swartz ), the feeling didnt go away. All I got was a small sense of relief that my writings and recordings were available and that I could safely revive my old habit. After a year and half of reflecting, writing, and sharing, the feels-needle hadnt rebounded much further. It was only after painstakingly restoring all my old email (from Mail. apps cache, using emlx2maildir ), moving it up to my IMAP server, carefully merging six years worth of Gmail into that, accepting SMTP deliveries for schmonz. and not needing Gmail at all for several weeks that I noticed my long, strange sojourn had ended. Hypothetically speaking If it so happened that Id instead fixed email first, Id also have felt a tiny bit weird till my website was back. But only a tiny bit. When my web servers down, you might not hear from me when my mail servers down, I cant hear from you or, as happened in 2008, from my professors during finals week. So while web hosting can be interesting. mail hosting keeps me attached to what it feels like to be responsible for a production service. Keeping it real I value this firsthand understanding very, very highly. I started as a sysadmin, Im often still a developer, and thats part of why Im sometimes helpful to others. But since Im always in danger of forgetting lessons I learned by doing it, Im always in danger of being harmful when I try to help others do it . As a coach, one of my meta-jobs is to remind myself what it takes to know the risks, decide to ship it, live with the consequences, tighten the shipping-it loop until its tight enough, and notice when that stops being true. And thats why I run my own mail server. Whats new this week My 2014 mail server was configured just about identically with my 2008 one, for which it was handy to consult the earlier articles in this series . Then, recently, my weekly build broke on the software Ive been using to send mail. It was a trivial breakage, easy to fix, but it reminded me about a non-trivial future risk that I didnt want hanging over my head anymore. (For more details, see my previous post .) Now Im sending mail another way. Clients are unchanged, the server no longer needs TMDA or its dependencies, and I no longer have a specific expectation for how this aspect of my mail service will certainly break in the future. 
(Just some vague guesses, like a newly discovered compromise in the TLS protocol or OpenSSL's implementation thereof, or in STARTTLS or Stunnel's implementation thereof.)

A couple of iterations. First, I tried the smallest change that might work: replacing tmda-ofmipd with the original ofmipd from mess822 (by the author of qmail, the software around which my mail service is built), wrapped in SMTP AUTH by spamdyke (a new use of an existing tool), wrapped in STARTTLS by stunnel (as before). It worked! TMDA no longer needed. I committed an update to my qmail-run package with a new shell script to manage this ofmipd service, uninstalled TMDA, and removed its configuration files. Next, I tried a change that might shorten the chain of executables: It worked! Second instance of spamdyke no longer needed. To start a mail submission service on localhost port 26, these are the lines I added to /etc/rc.conf: To make the service available on the network, this is the config from /etc/stunnel/stunnel.conf: (It already had this stanza, but with 8025, where tmda-ofmipd was listening. I simply changed the port number and restarted stunnel.) I'm still relying on spamdyke for other purposes, but I'm comfortable with those. I'm still relying on stunnel for STARTTLS, but I'm relatively comfortable keeping OpenSSL contained in its own address space and user account.

Refactoring for mail hosting. The present configuration is a refactoring: no externally visible change to email clients, yes internally visible change to the email administrator (moi). I believe this refactoring was one of the best kind, able to be performed safely and reducing the risk I was worried about. The current configuration is much more likely to meet my future need to not have a production outage that interrupts my work for an arbitrary duration while I scramble to understand and fix it. I don't have any more cheap ideas for lowering my risk, and it feels low enough anyway. So I'm comfortable that this is the right place to stop.

Conclusion. Want to learn to see the consequences of your choices and/or help other people do the same? Consider productionizing something important to you.

January 14, 2017

I'm trying to compile a program with clang and libc++ on NetBSD. The Clang version is 3.9.0, and the NetBSD version is 7.0.2. The compile is failing with: <cstddef> is present, but it appears to be GCC's: If I am parsing "Index of pub/NetBSD/NetBSD-release-7/src/external/bsd/libc++" correctly, the library is available. When I attempt to install libc++ or libcxx: Is Clang with libc++ a supported configuration on NetBSD? How do we use Clang and libc++ on NetBSD?

January 11, 2017

I'll install NetBSD on an old computer, but I am sure I'll have a hard time getting wireless internet working one way or another. I figured I could do that easily if I managed to install things for this computer on another one, the one I am using now, by cross-compiling. And it would be good training, wouldn't it? For now, while pkg_add and so on are recognized, I still can't pkg_add pkgin or any other software: it says it doesn't know that package. How come? I see it, it's there. Thank you. Here's my PATH variable: PATH=/usr/pkg/sbin:/usr/pkg/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games. P.S.: some might remember me. Indeed, I have failed using this system many times, but I am a romantic, and I can't stop feeling something in my heart any time I read "pkgsrc" or "NetBSD", I just don't know why. So here I am again :D

January 09, 2017

NetBSD's scheduler was recently changed to better distribute the load of long-running processes on multiple CPUs.
So far, the associated sysctl tweaks were not documented, and this has now been changed, documenting the kern.sched sysctls. For reference, here is the text that was added to the sysctl(7) manpage.

Well, the subject says it all. To quote from Soren Jacobsen's email: The first release candidate of NetBSD 7.1 is now available for download at: Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC1 tag. There have been quite a lot of changes since 7.0. See src/doc/CHANGES-7.1 for the full list. Please help us out by testing 7.1_RC1. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at [email protected]

I've installed NetBSD 7.0.1 in a KVM virtual machine under libvirt on a Fedora 25 Linux host. I want to use spice, so I specified the requisite qxl graphics device in the virtual machine, then installed xf86-video-qxl-0.1.4nb1 with pkgin in the NetBSD guest. But both /var/log/xdm.log and /var/log/Xorg.0.log complained that they couldn't find the qxl module. Then I realized they were looking in /usr/X11R7/lib/modules, but the qxl package put it in /usr/pkg/lib/xorg/modules. To solve that, I manually added a symbolic link. And indeed, that solved the "not found" problem. (But why the two directories?) Now they complain that it's the wrong driver. Both xdm.log and Xorg.0.log gripe: (EE) module ABI major version (20) doesn't match the server's version (10) (EE) Failed to load module "qxl" (module requirement mismatch, 0). Why are things out of sync in the NetBSD code base? How can anyone get X to work? What can I do to solve this?

January 08, 2017

I'm trying to install nzbget. I think it was in pkgsrc way back, but it's not there anymore. So I tried this: (1) I downloaded the source from the nzbget website. (2) Then ./configure said "A compiler with support for C++14 language features is required", so I installed gcc6 using "pkgin in gcc6". (3) Then I tried "PATH=/usr/pkg/gcc6/bin:$PATH ./configure" and it said the compiler is OK, but now I got "configure: error: ncurses library not found". (4) I have the ncurses library in /usr/pkg/include/ncurses; how do I let ./configure know the location of the ncurses library?

Is it normal, when I use zlib from pkgsrc or base as a reference via its buildlink3 include for a project (like the current supertuxkart version 0.9.2), that within the .buildlink/include directory no symlinks exist for zlib.h and zconf.h? I never saw this behaviour before and it breaks the compilation.

January 05, 2017

Last night, mere moments from letting me commit a new package of Test::Continuous (continuous testing for Perl), my computer acted as though it knew its replacement was on the way and didn't care to meet it.
Having recently spent a few weeks deliberately not paying attention to any of that, Im quite sure that I prefer paying attention to it, and am once again doing so. Learning to make my health a priority required that I make other things non-priorities, notably Agile in 3 Minutes. It no longer requires that. Ive recently invested in making the site easier for me to publish, and you may notice that its easier for you to browse. I didnt have enough slack to do these things when I was writing and recording a new episode every week. Now that enough of them have been taken care of, I feel prepared to take new steps with the podcast. And tech debts Earlier this week I noticed a broken link in a comment on Refactorings for web hosting. so I took a moment to check for other broken links on this site (ikiwiki makes it easy ). Before that, I inspected and minimized the differences between dev (my laptop) and prod (my server, where youre reading this), updated prod with the latest ikiwiki settings, and (because its all in Git) rebased dev from prod. In so doing, I observed that more config differences could be easily harmonized by adjusting some server paths to match those on my laptop. (When Apple introduced System Integrity Protection. pkgsrc on Mac OS X could no longer install under usr. and moved to opt. With my automated NetBSD package build. I can easily build the next batch for optpkg as well, retaining usrpkg as a symlink for a while. So I have.) Ive been running lots of these builds in the past week anyway, because a family of packages I maintain in pkgsrc had been outdated for quite a while and I finally got around to catching them up to upstream. Once they built on OS X, I committed the updates to the cross-platform package system. only to notice that at least one of them didnt build on NetBSD. So I fixed it, ran another build, saw what else I broke, and repeated until green. And taking on patience debt telling you about more of this crud Due to another update that temporarily broke the build of TMDA. I was freshly reminded that thats a relatively biggish liability in my server setup. I use TMDA to send mail. which is not mainly what its for, and I never got around to using it for what its for (protecting against spam with automated challenge-response), and it hasnt been maintained for years, and is stuck needing an old version of Python. On the plus side, running a weekly build means that when TMDA breaks more permanently, Ill notice pretty quickly. On the minus side, when that happens, Ill feel pressure to fix or replace it so I can (1) continue to send email like a normal person and (2) restart the weekly build like a me-person. If I can reduce the liability now, maybe I can avoid feeling that pressure later. Investigating alternatives, I remembered that Spamdyke. which I already use for delaying the SMTP greeting. blacklisting from a DNSBL as well as To: addresses that only get spam anymore, and greylisting from unknown senders, can provide SMTP AUTH. So Ill try keeping stunnel and replacing tmda-ofmipd with a second instance of spamdyke. If thats good, Ill remove mailtmda from the list of packages I build every week. then build spamdyke with OpenSSL support and try letting it handle the TLS encryption directly. If thats good, Ill remove securitystunnel from the list of packages too, leaving me at the mercy of fewer pieces of software breaking. Leaning more heavily on Spamdyke isnt a clear net reduction of risk. When a bad bug is found, itll impact several aspects of my mail service. 
And if and when NetBSD moves from GCC to Clang, I'll have to add a lang/gcc package to my list of packages and instruct pkgsrc to use it when building Spamdyke, or else come up with a patch to remove Spamdyke's use of anonymous inner functions in C. (That could be fun. I recently started learning C.) I could go on, but I'm a nice person who cares about you. That's enough of that.

So what? All these builds pushing my soon-to-be-replaced laptop through its final paces as a development machine might have had something to do with triggering its misbehavior last night. And all this work seems like, well, a lot of work. Is there some way I could do less of it? Yes, of course. But given my interests and goals, it might not be a clear net improvement. For instance, when Tim Ottinger drew my attention to that Test::Continuous Perl module, being a pkgsrc developer gave me an easy way to uninstall it if I wound up not liking it, which meant it was easy to try, which meant I tried it. I want conditions in my life to favor trying things. So I'm invested in preserving and extending those conditions. In Gary Bernhardt's formulation, I'm aiming to maximize the area under the curve.

No new resolutions, yes new resolvings: I'm not looking to add new goals for myself for 2017. I'm not even trying to make existing things good enough (there are too many things, and as a recovering perfectionist I have trouble setting a reasonable bar). I'm just trying to make them good enough that I can expect small slices of time and attention to permit small improvements. Jessica Kerr has a thoughtful side blog named "True in software, true in life." Here's something that'd qualify: when conditions are expected to change, smaller batch size helps us adjust; reducing batch size takes time and effort.

Paying down my self-debts (technical and otherwise) feels like resolving. I have, at times, felt quite out of position at managing myself. Lately I'm feeling much more in position, and much more like I can expect to continue to make small improvements to my positioning. When you want the option to change your body's direction, you take smaller steps, lower your center, concentrate on balance. That's Agile. Moi? My current best understanding is that a balanced life is a small-batch-size life. If that's the case, I'm getting there.

Further repositioning: This coming Monday, I'll be switching to one of these weird new MacBook Pros with the row of non-clicky touchscreen keys. If my current computer survives till then, that'll be one smooth step in a series of transitions. (In other news, Bekki defends her dissertation that day.) The following Monday, I'll be starting my next project, a mostly-remote gig pairing in Python to deliver software for a client while encouraging and supporting growth in my Pillar teammates. I'll be in Des Moines every so often; if you're there and/or have recommendations for me, I'd love to hear from you. The Monday after that, we'll pack up a few things the movers haven't already taken away, and our time in Indiana will come to an end. We're headed back to the New York area to live near family and friends.

No resolutions, yes intentions: For 2017, I declare my intentions to: continue to improve my health and otherwise attend to my own needs; help more people understand what software development work is like; help more people feel heard. I hope to see and hear you along the way.

January 04, 2017

So over the holidays, I managed to get in some good quality family time and find some time to work on some Open Source stuff.
I meant to work mainly on dhcpcd, but it turned out I spent most of my time working on the NetBSD curses library so that Python curses now works with it. Now, most people r.

Adding and removing hardware components in operation is common in today's commoditized computing environments. This was not always the case: in the past century, one had to power down a machine in order to change network cards, hard disks or RAM. A major step towards changing a system's configuration at runtime came with USB, but that's not where it ends; other systems like PCI support hotplugging as well. Another area where the system's configuration can change is the amount of Random Access Memory (RAM). Usually this is fixed, determined at system start time, and then managed by the operating system's memory management subsystem. But especially with today's virtualized hardware, even the amount of RAM assigned to a system can easily be changed. For example, a VM can be assigned more RAM when needed, without even rebooting the system, leading to increased system performance without introducing swapping/paging overhead. Of course this requires support from the operating system and its memory management subsystem. For NetBSD, the UVM virtual memory system has now been changed to support this via the uvm_hotplug(9) API, and a first user of this is the Xen balloon(4) driver.

Quoting from the balloon(4) manpage: The balloon driver supports the memory ballooning operations offered in Xen environments. It allows shrinking or extending a domain's available memory by passing pages between different domains.

The uvm_hotplug(9) manpage gives us more information on the UVM hotplug functionality: When the kernel is compiled with options UVM_HOTPLUG, memory segments are handled in a dynamic data structure (rbtree(3)) compared to a static array when not. This enables kernel code to add or remove information about memory segments at any point after boot - thus hotplug.

To answer more questions for portmasters who want to change their ports, Cherry G. Mathew has now posted a uvm_hotplug(9) port masters' FAQ. It covers questions on the background, affected files, and needed changes. For more information on UVM, see Charles "Chuck" Cranor's PhD dissertation on the Design and Implementation of UVM (PDF) as well as his USENIX talk on the UVM Virtual Memory System (PS). There is also plenty of information available on Xen ballooning - check it out and share your experiences on NetBSD's port-xen mailing list.

December 29, 2016

My brother got me some very tasty presents for Christmas (and my upcoming birthday), namely the GIGABYTE BRIX J1900 and a Samsung EVO 750 250G. Santa also brought me 8G of Crucial memory. Putting them all together makes a nice new machine to install NetBSD Xen. The key part is this is a low.

December 22, 2016

After my last blog postings on the NetBSD scheduler, some time went by. What has happened is that the code to handle process migration was rewritten to give more knobs for tuning, and some testing was done. The initial problem stated in PR kern/51615 is solved by the code. To reach a wider audience and get more testing, the code was committed to NetBSD-current today. Now, two things remain to be seen:

More testing. This best involves situations that compare the system's behaviour without and with the patch. Situations to test include pure computation jobs that involve multiple parallel processes, a mix of CPU-crunching and input/output again with a number of concurrent processes, and full build.sh examples.
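A rough sketch of what such comparison runs could look like, to be repeated on kernels without and with the patch; the job counts and paths are illustrative assumptions, not part of the original request:

    # List the kern.sched tuning knobs (names vary between kernel versions).
    sysctl kern.sched

    # Pure computation: several parallel CPU-bound jobs.
    time sh -c '
        for i in 1 2 3 4; do
            dd if=/dev/zero bs=1m count=2048 2>/dev/null | gzip > /dev/null &
        done
        wait
    '

    # Mixed CPU-crunching and I/O, plus a full source build with build.sh:
    cd /usr/src && time ./build.sh -j8 -U release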
If you have time and an interesting set of numbers, please feel free to let us know on tech-kern.

Documentation. There is already a number of undocumented sysctls under kern.sched, which has now been extended by one more, average_weight. While it's obvious to add the knob from the formula, testing it under various real-life conditions and seeing how things change is left to be determined by a PhD thesis or two - be sure to drop us your patches for src/share/man/man7/sysctl.7 if you can come up with a comprehensible description of all the scheduler sysctls. So just when you thought there was no more research to be done in scheduling algorithms, here is your chance for fame and glory. :-)

December 17, 2016

How can I activate a Latin American keyboard layout on NetBSD? During installation I never saw the Latin American keyboard, only Spanish.

December 09, 2016

Where can I find and install an AR9271 driver for the latest NetBSD? The target machine does not have Internet access and I need to set up the WiFi dongle first. UPDATE: wpa_supplicant was already written, but I didn't see my device. When I plug in the dongle it's shown as: ifconfig shows only re0 and lo0 interfaces. UPDATE: I saw on some Linux forums that the dongle uses an Atheros chip, but I checked in Windows and see Ralink. The ral driver is also integrated in NetBSD, but the situation doesn't change - I see no ra device in dmesg.boot.

December 08, 2016

So, I've installed NetBSD 7 and the device showed up again as ugen (ugein, lol). Then I installed FreeBSD 10.2, and ugen again. usbconfig gives me: ugen4.3: <product 0x7601 vendor 0x148f> at usbus4, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (90mA). So, what's next? Buying a new dongle is the last thing I'll do. UPD: the NDIS driver does not work.

December 07, 2016

At Agile Testing Days, I facilitated a workshop called DevOps Dojo. We role-played Dev and Ops developing and operating a production system, then figured out how to do it better together. You're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it.

Some firsts: I've spoken at several instances of pkgsrcCon (including twice in nearby Berlin), but that's more like a hackathon with some talks. Agile Testing Days was a proper conference, with hundreds of people and plenty of conferring. If someone asks whether I'm an international speaker, or claims I am one, I now won't feel terribly uncomfortable going along with it.

What I expected from many previous Lean Coffees: I'd have to control myself to not say all the ideas and suggestions that come to mind. What happened at this Lean Coffee: it was very easy to listen, because I didn't have many ideas or suggestions, because the topics came from people who were mostly testers. Conclusions I immediately drew: come to think of it, I have not played every role on a team. I don't know what it's like to be a tester. Maybe my guesses about what it's like are less wrong than some others, but they're still gonna be wrong. This is evidently my first conference that's more testing than Agile. Cool! I bet I can learn a lot here.

Thanks to Troy Magennis, Markus Gärtner, and Cat Swetel, I decided to try a new idea and spend a few slides drawing attention to the existence and purpose of the Agile Testing Days Code of Conduct. I can't tell yet how much good this did, but it took so little time that I'll keep trying it in future conference presentations and workshops.
Some nexts: My next gig will be remote coaching, centered around what we notice as we're pair programming and delivering working software. I've done plenty of coaching and plenty of remote work, but not usually at the same time. Thanks to Lean Coffee with folks like Janet and Alex Schladebeck, I got some good advice on being a more effective influencer when it takes more intention and effort to have face-to-face interactions. Alex: for a personal connection, start meetings by unloading your baggage (whatever's on your mind today that might be dividing your attention) and inviting others to unload theirs. (Ideally, establish this practice in person first.) Janet: ask questions that help people recognize their own situation. (Helping people orient themselves in their problem spaces is one of my go-to strengths. I'm ready to be leaning harder on it.)

As I learn about remote coaching, I expect to write things down at Shape My Work, a wiki about distributed Agile that Alex Harms and I created. You'll notice it has a Code of Conduct. If it makes good sense to you, we'd love to learn what you've learned as a remote Agilist.

I found Agile Testing Days to be a lovingly organized and carefully tuned mix of coffee breaks, efficiency, flexibility, and whimsy. The love and whimsy shone through. I'm honored to have been part of it, and I sure as heck hope to be back next year. We'd be back next year anyway; we visit family in Germany every December. Someday we might choose to live near them for a while. It occurs to me that having participated in Agile Testing Days might well have been an early investment in that option, and the thought pleases me. (As does the thought of hopping on a train to participate again.) I'm in Europe through Christmas. I consult, coach, and train. Do you know of anyone who could use a day or three of my services?

One aspect of being a tester I do identify with is being frequently challenged to explain their discipline or justify their decisions to people who don't know what the work is like (and might not recognize the impact of their not knowing). In that regard, I wonder how helpful Agile in 3 Minutes is for testers. Let's say I could be so lucky as to have a few guest episodes about testing. Who would be the first few people you'd want to hear from? Who has a way with words and ideas, knows the work, and can speak to it in their unique voice to help the rest of us understand a bit better?

December 01, 2016

November 24, 2016

Interesting news comes in via Slashdot: Apple Releases macOS 10.12 Sierra Open Source Darwin Code. Apple has released the open source Darwin code for macOS 10.12 Sierra. The code, located on Apple's open source website, can be accessed via direct link now, although it doesn't yet appear on the site's home page. The release builds on a long-standing library of open source code that dates all the way back to OS X 10.0. There, you'll also find the Open Source Reference Library, developer tools, along with iOS and OS X Server resources. The lowest layers of macOS, including the kernel, BSD portions, and drivers, are based mainly on open source technologies, collectively called Darwin. As such, Apple provides download links to the latest versions of these technologies for the open source community to learn from and to use.

This may not only be of interest to the OpenDarwin folks (or rather their successors in PureDarwin), but more investigation, not only of the code itself but also of the license it is released under, is necessary to learn whether anything can be gained back for NetBSD.
Why "back"? As you may or may not remember, macOS includes some parts of NetBSD (besides lots of FreeBSD, probably some OpenBSD, much other open source software, and of course a big lot of Apple's own code).

My first job was in Operations. When I got to be a Developer, I promised myself I'd remember how to be good to Ops. I've sometimes succeeded. And when I've been effective, it's been in part due to my firsthand knowledge of both roles.

DevOps is two things (hint: they're not Dev and Ops). Part of what people mean when they say DevOps is automation. Once a system or service is in operation, it becomes more important to engineer its tendencies toward staying in operation. Applying disciplines from software development can help. These words are brought to you by a Unix server I operate. I rely on it to serve this website, those of a few friends, and a tiny podcast of some repute. Oh yeah, and my email. It has become rather important to me that these services tend to stay operational. One way I improve my chances is to simplify what's already there.

If it hurts, do it more often: Another way is to update my installed third-party software once a week. This introduces two pleasant tendencies: it's much less likely, at any given time, that I'm running something dangerously outdated, and more likely, when an urgent fix is needed, that I'll have my wits about me to do it right. Updating software every week also makes two strong assumptions about safety (see Modern Agile's "Make Safety a Prerequisite"): that I can quickly and easily roll back to the previous versions, and build and install new versions. Since I've been leaning hard on these assumptions, I've invested in making them more true.

The initial investment was to figure out how to configure pkgsrc to build a complete set of binary packages that could be installed at the same time as another complete set. My hypothesis was that then, with predictable and few side effects, I could select the active software set by moving a symbolic link. It worked. On my PowerPC Mac mini, the best-case upgrade scenario went from half an hour's downtime (bring down services, uninstall old packages, install new packages, bring up services) to less than a minute (install new packages, bring down services, move symlink, bring up services, delete old packages after a while). The worst case went from over an hour to maybe a couple of minutes.

Until it hurts enough less: I liked the payoff on that investment a lot, and I've been adding incremental enhancements ever since. I used to do builds directly on the server: in place for low-risk leaf packages, as a separate full batch otherwise. It was straightforward to do, and I was happy to accept an occasional reduction in responsiveness in exchange for the results. After the Mac mini died, I moved to a hosted Virtual Private Server that was much easier to mimic. So I took the job offline to a local VirtualBox running the same release and architecture of NetBSD (32-bit i386 to begin with, 64-bit amd64 now, both under Xen). The local job ran faster by some hours (I forget how many), during which the server continued devoting all its I/O and CPU bandwidth to its full-time responsibilities.

The last time I went and improved something, it was to fully automate the building and uploading, leaving myself a documented sequence of manual installation steps. Yesterday I extended that shell script to generate another shell script that's uploaded along with the packages. When the upload's done, there's one manual step: run the install script.
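A minimal sketch of what such a build-and-upload script might look like. The host name, directory layout, pbulk invocation, and service names are assumptions for illustration, not the actual script described above:

    #!/bin/sh -e
    # Hypothetical weekly job: build a dated package set offline, generate a
    # small install script, and upload both to the server.
    DATE=$(date +%Y%m%d)
    PKGS=/usr/pkgsrc/packages            # where the local bulk build leaves packages
    SERVER=www.example.net               # assumed server name

    /usr/pbulk/bin/bulkbuild             # build the full binary package set (pbulk assumed)

    cat > /tmp/install-$DATE.sh <<EOF
    #!/bin/sh -e
    # One manual step on the server: install the new set, then switch over.
    # (Assumes this set was built with LOCALBASE=/usr/pkg-$DATE.)
    /etc/rc.d/nginx stop                 # bring down services (example name)
    pkg_add /packages/$DATE/All/*.tgz    # install the freshly built set
    ln -hsf /usr/pkg-$DATE /usr/pkg      # select the active set by moving a symlink
    /etc/rc.d/nginx start                # bring services back up
    EOF

    rsync -av "$PKGS/" "$SERVER:/packages/$DATE/"
    scp /tmp/install-$DATE.sh "$SERVER:"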
If you can read these words, it works.

DevOps is still two things: Applying Dev concepts to the Ops domain is one aspect. When I'm acting alone as both Dev and Ops, as in the above example, I've demonstrated only that one aspect. The other, bigger half is collaboration across disciplines and roles. I find it takes some not-tremendously-useful effort to distinguish this aspect of DevOps from BDD or from anything else that looks like healthy cross-functional teamwork. It's the healthy cross-functional teamwork I'm after. There are lots of places to start having more of that. If your team's context suggests to you that DevOps would be a fine place to start, go after it! Find ways for Dev and Ops to be learning together and delivering together. That's the whole deal.

Here's another deal: Two weeks from today, at Agile Testing Days in Potsdam, Germany, I'm running a hands-on DevOps collaboration workshop. Can you join us? It's not too late, and you can save 10% off the price of the conference ticket. Just provide my discount code when you register. I'd love to see you there.

November 22, 2016

According to NetBSD's wiki I can use pkg_add -uu to upgrade packages. However, when I attempt to use pkg_add -uu it results in an error. I've tried to parse the pkg_add man page, but I can't tell what the command is to update everything. I can't use pkg_chk because it's not installed, and I can't get the package system to install it: What is the secret command to get the OS to update everything? Please forgive my ignorance with this question. I only have NetBSD systems for testing software. It gets used a few times a year, and I don't know much about it otherwise.

October 27, 2016

A LAN has been set up with IP/subnet mask 192.48.1.0/255.255.255.224. What is the maximum number of machines that can be set up in this LAN, and why? (This comes under a class C network, so the maximum would be 255 or less - correct me if I'm wrong.) Suresh ([email protected]) sends a mail to my friend Rahul ([email protected]) with these three files as separate attachments: march-reports.ppt, a PowerPoint file of size 256 KB; locations.rar, a RAR archive file of size 460 KB; and me-snap.tiff, a TIFF picture file of size 2970 KB. a) What is the size of the outgoing mail including mail headers? b) What is the outgoing mail size if all three files are archived as one single .rar file and sent out as one single attachment? c) Show the MIME-based mail structure of the outgoing mail. Show the NetBSD-based C code for sending a text message "Hello. This works" to a remote server running on IP 122.250.110.14 on port 5050 and getting back an acknowledgement.

October 10, 2016

The FreeBSD Release Engineering Team is pleased to announce the availability of FreeBSD 11.0-RELEASE. This is the first release of the stable/11 branch. Some of the highlights: OpenSSH DSA key generation has been disabled by default. It is important to update OpenSSH keys prior to upgrading. Additionally, Protocol 1 support has been removed. OpenSSH has been updated to 7.2p2. Wireless support for 802.11n has been added. By default, the ifconfig(8) utility will set the default regulatory domain to FCC on wireless interfaces. As a result, newly created wireless interfaces with default settings will have less chance to violate country-specific regulations. The svnlite(1) utility has been updated to version 1.9.4. The libblacklist(3) library and applications have been ported from the NetBSD Project. Support for the AArch64 (arm64) architecture has been added.
Native graphics support has been added to the bhyve(8) hypervisor. Broader wireless network driver support has been added. The release notes provide an in-depth look at the new release, and you can get it from the download page.

September 14, 2016

Many programming guides recommend beginning scripts with the /usr/bin/env shebang in order to automatically locate the necessary interpreter. For example, for a Python script you would use #!/usr/bin/env python, and then, the saying goes, the script would just work on any machine with Python installed. The reason for this recommendation is that /usr/bin/env python will search the PATH for a program called python and execute the first one found, and that usually works fine on one's own machine. Unfortunately, this advice is plagued with problems and assuming it will work is wishful thinking. Let me elaborate. I'll use Python below for illustration purposes, but the following applies equally to any other interpreted language.

i) The first problem is that using /usr/bin/env lets you find an interpreter, but not necessarily the correct interpreter. In our example above, we told the system to look for an interpreter called python, but we did not say anything about the compatible versions. Did you want Python 2.x or 3.x? Or maybe exactly 2.7? Or at least 3.2? You can't tell, right? So the computer can't tell either; regardless, the script will probably run with whichever version happens to be called python, which could be any thanks to the alternatives system. The danger is that, if the version is mismatched, the script will fail, and the failure can manifest itself at a much later stage (e.g. a syntax error in an infrequent code path) under obscure circumstances.

ii) The second problem, assuming you ignore the version problem above because your script is compatible with all possible versions (hah), is that you may pick up an interpreter that does not have all prerequisite dependencies installed. Say your script decides to import a bunch of third-party modules: where are those modules located? Typically, the modules exist in a centralized repository that is specific to the interpreter installation (e.g. a lib/python2.7/site-packages directory that lives alongside the interpreter binary). So maybe your program found a Python 2.7 under /usr/local/bin, but in reality you needed it to find the one in /usr/bin because that's where all your Python modules are. If that happens, you'll receive an obscure error that doesn't properly describe the exact cause of the problem you got.

iii) The third problem, assuming your script is portable to all versions (hah again) and that you don't need any modules (really?), is that you are assuming that the interpreter is available via a specific name. Unfortunately, the name of the interpreter can vary. For example: pkgsrc installs all Python binaries with explicitly-versioned names (e.g. python2.7 and python3.0) to avoid ambiguity, and no python symlink is created by default, which means your script won't run at all even when Python is seemingly installed.

iv) The fourth problem is that you cannot pass flags to the interpreter. The shebang line is intended to contain the name of the interpreter plus a single argument to it. Using /usr/bin/env as the interpreter name consumes the first slot, and the name of the interpreter consumes the second, so there is no room to pass additional flags to the program.
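A tiny illustration of that limitation; the -B flag and the temporary path are arbitrary choices for the demonstration:

    # A script that tries to smuggle a flag to the interpreter through env.
    cat > /tmp/demo.py <<'EOF'
    #!/usr/bin/env python -B
    print("hello")
    EOF
    chmod +x /tmp/demo.py
    /tmp/demo.py   # may fail with something like "env: python -B: No such file or directory"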
What happens with the rest of the arguments is platform-dependent: they may all be passed as a single string to env, or they may be tokenized as individual arguments. This is not a huge deal though: one argument for flags is too restricted anyway, and you can usually set up the interpreter later from within the script.

v) The fifth and worst problem is that your script is at the mercy of the user's environment configuration. If the user has a "misconfigured" PATH, your script will mysteriously fail at run time in ways that you cannot expect and in ways that may be very difficult to troubleshoot later on. I quote "misconfigured" because the problem here is very subtle. For example: I do have a shell configuration that I carry across many different machines and various operating systems; such configuration has complex logic to determine a sane PATH regardless of the system I'm on, but this, in turn, means that the PATH can end up containing more than one version of the same program. This is fine for interactive shell use, but it's not OK for any program to assume that my PATH will match their expectations.

vi) The sixth and last problem is that a script prefixed with /usr/bin/env is not suitable for being installed. This is justified by all the other points illustrated above: once a program is installed on the system, it must behave deterministically no matter how it is invoked. More importantly, when you install a program, you do so under a set of assumptions gathered by a configure-like script or prespecified by a package manager. To ensure things work, the installed script must see the exact same environment that was specified at installation time. In particular, the script must point at the correct interpreter version and at the interpreter that has access to all package dependencies.

So what to do? All this considered, you may still use /usr/bin/env for the convenience of your own throwaway scripts (those that don't leave your machine), and also for documentation purposes and as a placeholder for a better default. For anything else, here are some possible alternatives to using this harmful shebang: Patch up the scripts during the build of your software to point to the specific chosen interpreter, based on a setting the user provided at configure time or one that you detected automatically. Yes, this means you need make or similar for a simple script, but these are the realities of the environment they'll run under. Or rely on the packaging system to do the patching, which is pretty much what pkgsrc does automatically (and I suppose pretty much any other packaging system out there). Just don't assume that the magic /usr/bin/env foo is sufficient or even correct for the final installed program.

Bonus chatter: There is a myth that the original shebang prefix was so that the kernel could look for it as a 32-bit magic cookie at the beginning of an executable file. I actually believed this myth for a long time until today, as a couple of readers pointed me at "The magic, details about the shebang/hash-bang mechanism on various Unix flavours", which has interesting background that contradicts this.

August 24, 2016

I'm running NetBSD in a virtual machine. Documentation and explanations on how to use pkgsrc are scarce. Let's say I want to install vim for NetBSD. What would I type? Do I need a URL? Do I need a specific version? Do I need to set up a directory for building the source of vim?

July 08, 2016

Here are some notes on installing and running NetBSD/evbarm on the AllWinner A20-powered CubieBoard2.
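As for the pkgsrc question above about installing vim: no URL or particular version is needed. A minimal sketch, assuming either a pkgsrc tree under /usr/pkgsrc or a configured binary package repository:

    # Build and install from the pkgsrc source tree:
    cd /usr/pkgsrc/editors/vim
    make install clean

    # Or install a pre-built binary package instead:
    pkgin install vim      # with pkgtools/pkgin configured
    pkg_add vim            # or plain pkg_add, with PKG_PATH pointing at a repository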
I bought this board a few weeks ago for its SATA capabilities, despite the fact that there are now cheaper boards with more powerful CPUs. The required steps for creating a bootable micro SD card are detailed on the NetBSD wiki, and a NetBSD installation is required to run mkubootimage. I used a USB to TTL serial cable to connect to the board and create user accounts. Do not be afraid of serial; it has in fact only advantages: there is no need to connect a USB keyboard nor an HDMI display, and it also brings back nice memories.

Connecting using cu (from my OpenBSD machine):
(The device name might be different when using cu on other operating systems.)
Adding a regular user in the wheel group:
Adding a password to the newly created user and changing the default shell to ksh:
Installing and configuring pkgin:
Finally, here is a dmesg for reference purposes:

June 30, 2016

I've been itching to go wireless on my office desk for some time. The final wires to eradicate are from my Mac into a USB hub connected to two hard discs for backups. Years ago I had an Apple Time Capsule. The Time Capsule is an AirPort Wi-Fi base station with a hard disc for Macs to back up to using the Time Machine backup software. It was pretty solid kit for a couple of years. Under the hood, it runs NetBSD, and as an aside, I have had a few beers with the guy who ported the operating system. The power supply decided to give up, a very common fault apparently. (I will clean the cables up. I promise.)

When I was on my travels and living in two places, I had hard discs in both locations. The Mac supports multiple discs for backups, and I encrypted the backups in case the discs were stolen. But now that I'm in one home, I want to be able to move around the house with the Mac but still back up without having to go to the office. We are a two-Mac house, so we need something more convenient. I already have a base station and I don't really want to shell out loads of money for an Apple one. There are several options to set up a Time Capsule equivalent. If you have a spare Mac, get a copy of Mac OS X Server. It will support Time Machine backups for multiple Macs and also supports quotas so that the size of the backups can be controlled. I don't have a spare stationary Mac. Anything that speaks the Apple file sharing protocol reasonably well will do. Enter the Raspberry Pi. I have a Raspberry Pi 3, and within minutes one can install the Netatalk software. This has been available for years on Linux and implements the Apple file sharing protocols really well. With an external drive added, I was able to get a Time Machine backup working using this article. I could not use my existing backup drive as is. Linux will read and write Mac OS drives, but there is a bit of to-ing and fro-ing, so it is best to start with a fresh native Linux filesystem. Even if you can get it to work with the Mac OS drive, it will not be able to use a Time Machine backup from a drive previously directly connected. I've been using this setup for the last couple of weeks. I have not had to do a serious restore yet, and I should caveat that I still have a hard drive I use directly with the machine just in case. The first rule of backups: a file doesn't exist unless there are three copies on different physical media. (The Raspberry Pi is also set up to be a MiniDLNA server. It will stream media to Xboxes and other media players.)

June 12, 2016

I installed sudo on NetBSD 7.0 using pkg. I copied /usr/pkg/etc/sudoers to /etc/sudoers because the docs say /etc/sudoers and possibly /etc/sudoers.local is used.
I uncommented the line "%wheel ALL=(ALL) ALL". I then added myself to the wheel group. I verified I am in wheel with groups. I then logged off and back on. When I attempt to run sudo <command>, I get the standard: What is wrong with my sudo installation, and how can I fix it?

May 31, 2016

A brief description of playing around with SunOS 4.1.4, which was the last version of SunOS to be based on BSD. File info: 17 min, 8 MB. Ogg link: archive.org/download/bsdtalk265/bsdtalk265.ogg

April 30, 2016

Playing around with the gopher protocol. Description of gopher from the 1995 book "Student's Guide to the Internet" by David Clark. Also, at the end of the episode is audio from an interview with Mark McCahill and Farhad Anklesaria that can be found at youtube.com/watch?v=oR76UI7aTvs. Check out gopher.floodgap.com/gopher. File info: 27 min, 13 MB. Ogg link: archive.org/download/bsdtalk264/bsdtalk264.ogg

March 23, 2016

This episode is brought to you by ftp, the Internet file transfer program, which first appeared in 4.2BSD. An interview with the hosts of the Garbage Podcast, joshua stein and Brandon Mercer. You can find their podcast at garbage.fm. File info: 17 min, 8 MB. Ogg link: archive.org/download/bsdtalk263/bsdtalk263.ogg

via these fine people and places: This planet is operated by Kimmo Suominen. Hosting provided by Global Wire Oy.
