Zero Downtime / High Availability with Catalyst & FastCGI external servers

Intro: Here's the idea - you simultaneously run two FastCgiExternalServers: a production one and a staging one. The staging one you muck around with to your heart's content. Start, stop, restart it - whatever - it won't affect your production environment. At some point you want to promote the staging stuff into production…
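The two-backend setup above can be sketched as a mod_fastcgi config fragment; all the paths, socket names, and hostnames below are made up for illustration, and the Catalyst launch commands in the comments are the stock `myapp_fastcgi.pl` script:

```apache
# Two external FastCGI backends for the same Catalyst app.
# Start each one out-of-band, e.g.:
#   script/myapp_fastcgi.pl -l /tmp/myapp-prod.socket    -n 5 -d
#   script/myapp_fastcgi.pl -l /tmp/myapp-staging.socket -n 2 -d
FastCgiExternalServer /var/www/myapp-prod.fcgi    -socket /tmp/myapp-prod.socket
FastCgiExternalServer /var/www/myapp-staging.fcgi -socket /tmp/myapp-staging.socket

<VirtualHost *:80>
    ServerName www.example.com
    Alias / /var/www/myapp-prod.fcgi/
</VirtualHost>

<VirtualHost *:80>
    ServerName staging.example.com
    Alias / /var/www/myapp-staging.fcgi/
</VirtualHost>
```

Because Apache only talks to the sockets, restarting the staging backend never touches production; promotion amounts to repointing the production alias (or socket) at the newly blessed backend.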
I'm trying to put some neat cookbook things using Web::Scraper on this journal. They'll eventually be incorporated into module documentation like Web::Scraper::Cookbook, but I'll post here for now since it's easy to update and gives a permalink. The easiest way to keep up with these hacks would be to subscribe to the RSS feed of this journal, or look at my del.icio.us links tagged 'webscraper'…
Don't get me wrong, I love perldoc. But for documentation that spans several manpages, I often catch myself dreaming of tables of contents, indexes, real dead-tree margins on which to annotate stuff. I'm funny that way. So I went back to the script that I wrote to create the XPathScript manual and tinkered a bit with it. I then used that newly-concocted black magick…
I've been using Devel::REPL for a while now. Like all good modules (Perl::Critic, POE, Plagger, etc.), it's very extensible. Devel::REPL's design is worth studying: keep a simple core and ship all the fancy behavior as plugins. Moose amplifies the power and convenience of this design with roles, method modifiers, and general awesomeness. There are plugins to dump output with Data::Dump::Streamer…
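For flavor, here's roughly what wiring those plugins up might look like in a startup rcfile. LexEnv, History, and DDS (the Data::Dump::Streamer dumper) are real Devel::REPL plugins, but treat the file location and this particular selection as an assumption:

```perl
# ~/.re.pl/repl.rc - evaluated by re.pl at startup; $_REPL is the REPL object.
# LexEnv  : 'my' variables persist across entered lines
# History : readline-style command history
# DDS     : dump results with Data::Dump::Streamer
$_REPL->load_plugin($_) for qw(LexEnv History DDS);
```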
Web::Scraper with filters, and thoughts about Text filters

A developer release of Web::Scraper has been pushed to CPAN, with "filters" support. Let me explain how this filters stuff is useful for a bit. Since an early version, Web::Scraper has had a callback mechanism, which is pretty neat: you can extract "data" out of HTML, not limited to strings. For instance, if you have an HTML…
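A short sketch of what the filter syntax looks like, assuming the developer-release API where a filter follows the extraction rule in an array ref; the HTML snippet, class name, and the prefix being stripped are all invented for illustration:

```perl
use strict;
use warnings;
use Web::Scraper;

# Hypothetical HTML; the "posted at:" prefix is what the filter strips.
my $html = <<'HTML';
<div><span class="date">posted at: 2007-10-10</span></div>
HTML

my $s = scraper {
    # 'TEXT' extracts the text content; the trailing code ref is a
    # filter that post-processes the extracted string via $_.
    process 'span.date', date => [ 'TEXT', sub { s/^posted at:\s*// } ];
};

my $res = $s->scrape($html);
print $res->{date}, "\n";    # "2007-10-10"
```

The same slot accepts named filter classes too, so common cleanups can be shared instead of re-written as inline callbacks.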
Some websites require you to log in with your credentials to view their content. That's easily scriptable with WWW::Mechanize, but if you visit the site frequently with your browser, why not reuse the browser's cookies, so that you don't need to script the login process? Web::Scraper allows you to call methods on, or entirely swap out, its UserAgent object when it scrapes a website. Here's how to…
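One way this might look, as a sketch: build an LWP::UserAgent whose cookie jar reads a Netscape-format cookies.txt exported from the browser, then swap it in via the scraper's `user_agent` accessor. The cookie-file path and the members-only URL are made up; point them at your own browser profile and target site:

```perl
use strict;
use warnings;
use URI;
use Web::Scraper;
use LWP::UserAgent;
use HTTP::Cookies::Netscape;

# A UserAgent that carries the browser's cookie jar. The path below is
# hypothetical - use wherever your browser keeps its cookies.txt.
my $ua = LWP::UserAgent->new;
$ua->cookie_jar(
    HTTP::Cookies::Netscape->new( file => "$ENV{HOME}/.mozilla/cookies.txt" )
);

my $s = scraper {
    process 'title', title => 'TEXT';
};

# Swap in the cookie-carrying agent before scraping.
$s->user_agent($ua);

# With the browser's session cookies in place, scrape as usual:
# my $res = $s->scrape( URI->new('http://example.com/members-only') );
```

Since the login session lives in the jar, the scraper sees the same logged-in pages the browser does, with no scripted login step.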
After a long wait, Test::Harness 3.0 is now available for download. Currently, the default distribution still points to Test::Harness 2.65_02, but hopefully that will be resolved soon. As it stands, Andy Armstrong did the bulk of the work finishing things up, and the team is quite grateful for this. There are now only six bugs left in the RT queue, most of which are wishlist items or unlikely to have…