
ALC: Day 1 June 5, 2012 at 5:00 am

Today was the first day of AIDS LifeCycle. I spent much of the night nervous and unable to sleep because, ironically, I was worried I would not get enough sleep and the day would be a disaster.

After getting out of bed at 4am, my wonderful friend Matt drove me and another rider, Russell, to the Cow Palace for the first day of the ride.

Opening Ceremonies at the Cow Palace included one of the most moving speeches I’ve ever heard about why people ride in ALC. A young lady described how her mother had lived quietly with HIV for 10 years. The mother had finally told her daughters that she was not going to live much longer. That was in 1994. Then HIV cocktail therapies came out and prolonged her mom’s life further. Her mom was going to help on ALC last year as a Roadie but, as if she had not been through enough, was diagnosed with cancer. Her daughter announced proudly on stage: “My mom kicked cancer’s ass, and is here to join us this year on ALC 2012.”

Such was the mood for the ceremony, but then it was time to head out on the road. Nobody told me it would be this cold in Spandex at 4am!!!! Brrrr. But it was OK – there were tons and tons of people cheering us on along the route as we exited. It was really fantastic.

As we wound our way up to Skyline Drive and beyond, I realized that I don’t often take time off work anymore to just watch and take in the scenery around me. My lord, what a wonderful gift California’s nature is!!! The views from Skyline overlooking the water as the sun came up on the horizon were awesome.

There were definitely some climbs today, but I had a ton of energy. I spent most of the day biking with Paul and Russell. It’s really easy to bike 82 miles when you have someone funny and interesting to talk to. I also biked with a fella named Greg who came to the ride from Chicago, and another group, Eva, Todd and Rand, who came from L.A. Everyone was amazingly friendly and supportive.

Along the route there were rest stops on the ocean. How cool it was to be biking along the ocean!!! The last time I’d really, really taken in the ocean via Route 1 was in 2003, when I drove cross-country, so this was a huge treat.

We ended the day in Santa Cruz at a little campground. Everything was very well organized. My knees were a little sore, the right one in particular, so I went to the medical tent, which was staffed with 20 or so chiropractors and sports med techs. They were super helpful and taught me all sorts of strange stretches to try to loosen up my knees. We’ll see if they work tomorrow!

Tomorrow is a century ride, 109 miles to King City. I’m excited to do my first 100+ mile ride. More updates to come! Pictures will be uploaded once I figure out how, hehe.

AIDS LifeCycle 2012 – Day 0 June 3, 2012 at 5:47 am

Today, I will be restarting this blog. We’ll begin with my journey on AIDS LifeCycle – a week-long bike ride from San Francisco to Los Angeles which I will embark on tomorrow.

Stay tuned for pictures and thoughts from the road. And in the meantime, thanks to those of you who supported me getting this far. I’m nervous, excited, anxious and so many other emotions right now – but most definitely thankful for your help in making this a reality.

ClueCon – Best Engineering Telephony Event of 2010 August 10, 2010 at 7:40 pm

This past week, we announced our new project – the 2600hz Project – at the annual ClueCon Telephony Developers Conference in Chicago, IL. ClueCon continues to be the number one event I look forward to year after year.

Let me explain why this conference is so special to me. First off, the FreeSWITCH team goes to great lengths to line up speakers who know telephony inside and out. These folks don’t just talk about trends related to PBXes and such. They discuss – in tremendous detail – things that most companies would consider trade secrets. Likely topics include building a hosted cloud-based phone system that scales to millions of nodes, or how to tweak parameters in FreeSWITCH to improve performance on Amazon EC2. I am not sure how long this trend will continue, but for now, the idea that I can sit in front of a company’s CTO and top developers and have them explain deep technical information about the inner workings of their product – that’s pretty amazing and incredibly helpful. At the end of the day, it feels like everyone is just trying to build cool new VoIP tools – and once they get going, they’re excited to share them with the world.

Second, the crowd who participates in this conference is unique. They are mostly developers. There are no vendor tracks. There are no sales tracks. It’s all engineers, and they are all really passionate about what they do. Most of them are also a lot of fun. This conference really has a community feel. It is not unusual for people to know at least ten or twenty others at the conference (mostly thanks to the insanely active FreeSWITCH IRC channel). At most conferences, I find the after-party events a bit awkward and stale. Most people don’t stay very long. At the FreeSWITCH after-party (put on by iCall both last year and this year), people hung out, chatted and really engaged with other folks. I look forward to the after-parties as much as the talks, although by Thursday I’m admittedly pretty exhausted and don’t have much of a voice left.

Finally, the last thing I love about ClueCon is the networking. Many people who attend ClueCon are actively seeking partners, jobs or other engagements. The ideas being presented are regularly fresh and creative, so it makes sense that conversations about partnering and building long-term relationships happen at ClueCon. And because companies continue to send their top-level folks to the conference, you actually get to talk to the people who make executive-level decisions. That avoids the extra rounds of introductions and conference-call scheduling you often have to go through with firms that send non-CTO-level staff to their events.

I do have some concerns about ClueCon’s ability to remain this open. This year, the conference saw a lot more paid sponsorships – I hope this is not the precursor to a tradeshow or exhibit hall being introduced. There are plenty of other events to fill that gap. In addition, I noticed several companies who flew in staff for just a day or so to give their talk and then depart. This is a bit different from the “community” feel that comes with folks who fly out and stay for the week. Finally, the conference is simply getting more popular – without good strategies around icebreakers, and possibly more personalized events, I think people will slowly become more reluctant to share ideas.

All in all, the conference remains the top-notch event I continue to look forward to every year. I don’t expect this to change anytime soon and I’ll continue to attend, watch and listen as time goes on.

Special thanks to Michael Collins and Brian West for doing such a wonderful job organizing this year’s ClueCon. It was, as usual, better than last year’s. And an extra special thanks to Meraki, who provided wireless Internet stations throughout the conference floor. This is the first computer-related conference I’ve ever been to where the Internet actually worked reliably. Awesome.

See you next year.

Why I Love Git (or Git vs. Subversion) July 26, 2010 at 11:55 pm

I have used Git before but only in small trials. Due to the ancient servers I was previously interacting with, I was unable to upgrade from SVN to Git. However, some recent changes have finally allowed for this.

I will never go back.

First off, I must say that there is not one thing I could do in SVN that I can’t do in Git. And to be fair, there are very few things that Git can do that SVN can not. Every branch, commit, tag and other feature in SVN seems to be available in Git and vice versa. But Git provides some secret sauce on top that has changed my life forever.

Branching as a Way of Life

Branches, tags and the release-management intricacies around them used to be cumbersome. Everyone had their own way of doing it. Helper scripts were available for SVN (like svnmerge) that would track your commits and help make friendly merges across branches. But they were all a bit cryptic and hackish – usually requiring you to download a separate script and learn some secret command-line incantations. Generally, it was a pain. And it was slow and, sometimes, insecure.

Git makes branching incredibly simple. INCREDIBLY simple.

If you want to create a branch, you make sure you have checked out whatever you are branching from, and then you just type:

git checkout -b new-branch-name

All done. To keep life organized, you can make up your own fun branch name with versions. We use modulename/1.x to designate a module/feature and a version. Easy.
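
For example, cutting a 1.x branch for a hypothetical callmanager module (the name is invented for illustration) is just:

git checkout -b callmanager/1.x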

Switching branches is just as easy. Same command as above, but without the -b. Sweet. Git will automatically switch to the other branch, assuming you haven’t changed any files in the current branch. If you have, it will politely list which files and ask you to fix the problem – with lots of verbose instructions. Yum.
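
To make that concrete, hopping between the made-up branch above and master looks like this (git stash is one way to set aside dirty files that would block the switch):

git checkout master              # hop back to the main line
git checkout callmanager/1.x     # hop over to the feature branch again
git stash                        # set aside uncommitted changes first, if needed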

But here’s what’s really nuts… Because this is so easy, I actually find myself using Git’s branch feature more regularly. Like, every 30 minutes or so. Working on open-source projects makes you a bit ADD – people are chatting all day with suggestions, features and bugs, and some are quick to implement, but you have to be able to hop around – quickly. I would never dream of tracking things like this using SVN. It would take hours.

Committing Locally

Committing locally has changed my life as well. We used to commit a lot of broken code to the repository. We’d do this because, midstream through an idea, we sometimes thought “gee, we should do this differently.” But in case the “different” way didn’t work, we wanted a way to go back. Or sometimes we’d want to share what we’d done so far with a teammate. This was hard to do when committing was constantly coupled with merging and branching. We could be doing this 20 times a day, since our contributors and programmers are spread across the US. Nobody can “run over to my screen” to look through things. And since we use IDEs to program in, screen sharing wasn’t an option.

Git’s ability to let you commit locally – or to a private shared repo – before pushing to a public repository is, again, so unbelievably easy that you sort of have to be brain-dead not to get it. And it has, again, changed our workflow. We regularly commit our work now – tracking each individual change – and we revert more often, which saves us time when we make mistakes. Before, we’d just have “code we deleted.” Now it’s all tracked.
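
A rough sketch of what this workflow looks like in practice (the branch name is invented):

git checkout -b wild-idea                    # spin off a local branch for the experiment
git commit -am "checkpoint: half working"    # local commit – nothing leaves my machine yet
git reset --hard HEAD~1                      # the idea flopped? throw away the last commit
git push origin wild-idea                    # or, if it panned out, share it with the team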

Infrastructure Be Gone!

In addition, setting up SVN with stricter access controls, or behind Apache with any sort of security, usually required loading mod_dav_svn and some other funky stuff into Apache. Annoying. Git, however, uses SSH. There are WAY MORE tools for SSH key management, and it’s inherently secure without any extra modules on the server. We’re using gitolite so we can go even further, but even that isn’t required. Stupidly simple AGAIN.
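
For flavor, wiring a repo up to an SSH-accessible server is about this much work (hostname and path invented):

git remote add origin git@git.example.com:project.git
git push origin master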

Oh, and all this branching stuff is WAY fast in Git. Local commits and branch creation are near instant, even on a large project. Only when we’re ready to be “finished” and have to pull or push changesets to the server do we see any delays.

Not All Roses

Subversion is still the de facto standard in many places – and it shows. Our IDE doesn’t support Git yet. Some of Git’s features aren’t exposed in our hosting provider’s interface (Assembla), such as submodules – which we also find complex in their own right.

We’re still learning.

That said, I’ll never look back. Git is an amazing upgrade to an already amazing version control world we live in. Three cheers.

Read More On Git:

Ten Git Tips & Tricks for Beginners

Gitolite (from Pro Git)

Managing Multiple Git Clients

Where’s Darren been? June 16, 2010 at 4:34 pm

Darren has quit his job at bandwidth.com to pursue new opportunities. And so, this blog will come alive again.

After starting at Bandwidth.com, I pretty much posted here once and otherwise abandoned the blog – mostly to avoid conflicts of interest, and in the hopes of making the FreePBX.org and bandwidth.com blogs my home. Since I’m not with bandwidth.com anymore, this will return to being my home.

So we’ll start this blog back up. More posts coming shortly.

FreeSWITCH gets a free GUI (and a paid PBX platform) August 5, 2009 at 8:07 pm

FreePBX FreeSWITCH GUI

Update: The FreeSWITCH GUI project that temporarily became the FreePBX v3 project is now actively maintained as the 2600hz Project.

What happened to TCAPI and the FreeSWITCH GUI project?

I’m pleased to announce the general availability of the developer release of FreePBX v3.0. I designed the code from the ground up – with the help and feedback of the folks at bandwidth.com – starting with my TCAPI project, which has now merged with FreePBX. This work is the result of years of experience with telephony systems. The last two years in particular have included tireless development, coding late into the night and through the weekends, to produce a flexible, modular, open-source PBX system.

Finally, that PBX software gets to see the light of day. Thanks to backing from bandwidth.com and the FreePBX project, you can now see the documentation and code I’ve been working on at http://www.freepbx.org/v3 .

I also strongly recommend you check out voicebus.net – a hosted service that is beginning to offer free sandbox development installations for learning and using the new FreePBX v3.0. VoiceBus.net is run by one of the core supporters and developers of the FreePBX v3 open-source community, Michael Phillips. The site also offers hosted virtualized instances of FreePBX that work great and cost almost nothing.

Speaking of core developers, we would be nowhere without the help of Karl Anderson. Karl is a more recent addition to the team, but he’s committed so much code he might as well have been here since day 1. Karl is part of the GoCentrix.com team, and there’s no doubt in my mind that FreePBX v3.0 will make it into his company’s service offering thanks to his efforts. If you need hosted VoIP with a premise-based service contract, check out GoCentrix.com. Kudos to Karl for his awesome work.

So what’s in FreePBX v3.0? Here’s just a short list:

  • A solid MVC framework design
  • CRUD for device management, number management, IVR management, voicemail, user management, etc.
  • Central number database, to avoid conflicting dialplans
  • Pluggable, modular architecture – tailor the product to do what you want
  • Tie-ins to the FreeSWITCH architecture, including the ability to monitor Sofia registrations and turn message waiting lights on/off via web-based voicemail
  • Internationalization support
  • jQuery/AJAX based grid and navigation systems
  • Completely skinnable CSS & layout system – put your brand or vendor logo on the pages, or redo it completely!
  • Automated installer
  • Module management system via the web
  • Advanced hook and event system in both the database driver and the rendering system
  • Ability to send SMS text messages from the UI
  • Ability to make phone calls from the UI
  • Play voicemails via the web
  • XML/Curl support as well as static config file generation
  • The start of an Asterisk driver

For those of you who were interested in TCAPI, I hope you will join me in the move to the FreePBX name and a newer codebase. The concepts of TCAPI (and ironically of the original AMP) are being revived and refined in the new FreePBX v3. Feel free to join us on irc.freenode.net in #freepbx-dev to tag along for the ride.

I’ll be demoing the new FreePBX v3.0 today at ClueCon (in just a few minutes actually). I’ll post the video as well, once it’s ready. In addition, it’s worth noting that the FreeSWITCH team just announced their first corporate sponsorship and paid product. So now you have the option of a fully supported PBX system made by the developers of the core FreeSWITCH project.

And let the games begin…!

Announcing mod_nibblebill – a FreeSWITCH module that does billing! January 15, 2009 at 12:30 pm

I am pleased to announce the submission of mod_nibblebill for review and hopefully acceptance into the trunk of the FreeSWITCH project.

OK, it’s rev. 1 and needs a bunch of work. Got it. But it’s very functional – and it does in fact function!

Take a peek at the documentation I wrote up on the FreeSWITCH wiki for details. The module handles real-time billing for lots of simultaneous calls: you give people a credit balance, their calls draw it down as they talk, and the calls get terminated when the money runs out.

Soooo many uses for this!

Let me know if you have questions, as usual! Drop me an email.

Adventures with XML and YUI and Dojo January 13, 2009 at 3:32 pm

Grids, grids, grids – everyone wants to make a cool grid rendering engine. Both YUI and Dojo seem to include out-of-the-box functionality to make a cool datagrid that’s similar to, dare I say it, .NET’s built-in grid rendering engine. But cooler, of course, and in JavaScript.

I love the grids they’ve come up with – specifically Dojo’s, where they’ve actually managed to render only the portions of the grid you are viewing until you scroll elsewhere. This makes it reasonable to support grids with rather large sets of data and save on rendering time until you need to render an area. Cool stuff.

What I’m not impressed with is how inflexible these grids are about inbound datasets. Not a one seems to believe in complex XML anymore – all the cool kids now seem to be using JSON. Which is great if you’re writing your application from scratch and have that much control. But if you don’t, you’re sort of screwed. The XML parsing engines behind the datastores that feed these grids are basically non-existent in both YUI and Dojo unless you have a very flat XML structure. Neither framework will go deeper than a single element without overriding the default data stores or writing your own parsing tool (for which you won’t find any directions).

It boggles my mind that XML, in particular, has such poor support when the browser itself provides such great support for XML as a data type. Aside from the parsing functions natively required to handle HTML, you have XPath, XQuery and the DOMNode/DOMDocument structure itself built into every single browser out there. Why not adapt this data structure to work with these DataGrids using some of these available tools? It seems completely reasonable to me to allow an XPath query to find certain nodes in a dataset and turn them into your datagrid – and because we’re just talking about DOMNodes being returned, you still retain the cool editing functionality and whatnot. And of course, you can pass all this good stuff back to any XHR call you wish in its native form.

So I guess I’m making the case for more formats (especially XML) to work with datasets inside DataGrids. I can only hope we’re not all force-fed JSON, because it’s not quite there yet as a reliable, expected output option from every app we may want to interact with. Sometimes we are writing a UI for a back-end that’s already been written and just needs an overhaul. Not all of us get to write something brand new and sparkling in Ruby with JSON support.

Here’s to hoping…

Ruby modules are awesome January 12, 2009 at 5:00 am

If you read my previous post, well – I told you this was a roller coaster! I’m starting to feel manic.

Ruby, on the other hand, is pretty solid, well documented, and cool.

After fighting with stupid gems all day, I decided to just let people include their own with a simple wrapper. So I started playing with modules.

Modules really rock. They are so freakin simple it’s unbelievable. Modules are sort of like namespaces but you can build upon already loaded ones. Basically, what this means to me is that I can have a base/core module that loads all the files in some module directory, and then figures out dynamically what’s been loaded.

Here’s a practical example (from the previous post).

Let’s say we want to let a system administrator installing our software decide where users authenticate from. The available options *we* thought of were Active Directory, Local Database and Linux PAM. The reality is that not everyone will need all of these options, but someone might need two of them (like local DB and Active Directory – so that when A.D. is down you can still get into your machine). How do we do this?

First, the individual modules would look something like this:

module MyAuthFramework
  module AuthViaLDAP
    # "self." makes Login a module method, callable as AuthViaLDAP.Login
    def self.Login
      # Do login validation here, possibly through a gem
    end
  end
end

You’d save that to a folder somewhere, along with maybe another module, like this:

module MyAuthFramework
  module AuthViaPAM
    # Same method name, different nested module
    def self.Login
      # Do login validation here, possibly through a gem
    end
  end
end

Note that the two modules share the same method name and base module, but the module namespace in the middle is different. Loading both the above files from the same Ruby script effectively mixes them together, creating this:

module MyAuthFramework
  module AuthViaPAM
    def self.Login
      # Do login validation here, possibly through a gem
    end
  end

  module AuthViaLDAP
    def self.Login
      # Do login validation here, possibly through a gem
    end
  end
end

Now, in a base file somewhere, we can use the nifty constants method built into modules to see exactly what’s been loaded, cycle through each nested module, and call its Login method with the credentials we received. Whoever returns success could be declared the winner!

module MyAuthFramework
  def self.Login
    MyAuthFramework.constants.each do |modulename|  # cycle through all nested modules
      mod = MyAuthFramework.const_get(modulename)   # look up the module object by name
      mod.Login()                                   # call Login on each module
    end
  end
end

This routine will effectively call the Login() methods in both the included modules, LDAP and PAM.
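
For completeness, here’s a minimal sketch of the base/core file that loads everything, as described earlier (the auth_modules directory name is made up):

# Load every auth module dropped into auth_modules/, then authenticate.
Dir[File.join(File.dirname(__FILE__), "auth_modules", "*.rb")].each do |file|
  require file
end

MyAuthFramework.Login   # dispatches to whichever modules got loaded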

cool, huh?

Check out the docs for more goodness.

Why Ruby on Rails frustrates me… January 11, 2009 at 11:00 pm

So I am now in week 4 of trying to switch to Ruby on Rails from CakePHP. It is truly a roller coaster ride.

On the upside, I have been very, very impressed with the truly object-oriented nature of Ruby. Really, I can’t say enough here. The fact that you can override and extend pretty much anything in the language to your liking is just awesome. Everything is an object – just like in Java or JavaScript – but without the annoyance of an overwhelming number of required definitions, or the memory concerns. You can even overload symbols and other operators. The built-in introspection and modularity is just slick.

On the downside, I am continuing to have trouble taking advantage of this “goodness” so many speak of with Rails. I don’t think it’s for lack of trying. I think it’s due to a lack of good documentation.

As a practical example, let’s say I want to do something as simple as create a login generator that can integrate with LDAP as well as a local database. This is a practical scenario I’ve run into in the PHP world – I want Windows users to use the same login/password on an intranet site as they do for their Windows credentials. But I also want a fallback mechanism so I can log in when LDAP is broken, or when I need to create a special account for, say, a contractor who only needs temporary access and should not be allowed onto the Active Directory network. In PHP, you simply go to www.php.net/ldap for the LDAP pieces, or maybe search for a PECL library that handles LDAP for you. Then you cobble together a quick model, view and controller using CakePHP’s scaffolding and get the login and logout stuff done. Or you extend the CakePHP authentication modules that are well documented right in the CakePHP manual. Probably about two hours of work.

Now let’s apply this to Rails. Not knowing where to start, and not having a search box on the Ruby on Rails API page (there isn’t one – how silly), I try some Google-fu to find the equivalent in Rails. The first relevant hit is a page that seemed like a match – http://wiki.rubyonrails.org/rails/pages/Authentication . An authentication wiki page on Rails’ own site. Seems legit.

But then I load the page and the first thing I am greeted with is:

“This article is part of the confusing world of Authentication in Rails. Feel free to help: AuthenticationNeedsHelp.”

ehh

Then I start examining the list of available plug-ins, gems and solutions to authentication that people have listed. Almost all of them are labeled either deprecated, incomplete or “good for beginning Ruby on Rails user.” Fine, I think, maybe one will work and I just have to find which one. So I start clicking into each page.

LoginGenerator seems relevant.

But scrolling through that page, the text and comments suggest that it no longer works for versions past 2.0.2. But don’t worry – it links to ANOTHER site that swears to be the real solution I am looking for! That’s here – http://wiki.rubyonrails.com/rails/pages/Acts_as_authenticated .

Woohoo! This page starts by proclaiming “Yay! (read why)” and then explains that THIS is the right place for an authentication system generator. As if the page already knew that all those OTHER pages one might stumble upon prior were total crap. But before I get too excited, I click on the link which states that “…you really want to see the official Acts As Authenticated Github.” So I bite – I click the link, and am whisked away to the project page, which states right at the top:

“Please note that acts_as_authenticated, is no longer developed/maintained”

WTF?!?!

So we go back to the drawing board – all the way back to the Wiki page we started with – and look for more. Restful_authentication seems to be the top item, so maybe I should have started there. Again, I click it, and the first comment mentions that some of the material links to the wrong source. Ugh. It then lists four locations to get information. I start with the first one – the official plug-in homepage. It says it’s for Rails 1.2. But I’m on Rails 2.2.2. [Sigh] Do I try it? Or go back to the drawing board?

Maybe I am missing something, but one strong part of PHP was the manual and its comments – the manual matched the methods and classes that were actually available almost 98% of the time, and was never incomplete or broken. Does such a resource exist for Rails, where I can go to a webpage to find plug-ins that reliably work and are available? This has to be my #1 frustration with Rails at this point.

If you have comments, please add them. I’d love to hear solutions to how to better manage rails plug-ins, gems and other “goodies” that seem to just be scattered everywhere.