iSCSI for Mac
SANmp and Xsan are Fibre Channel-based applications, whereas globalSAN/iSANmp are Ethernet-based. While I cannot find iSANmp on the SNS web site, a third-party offering for it listed a $199 price, considerably more than globalSAN ($89). I mentioned that when using globalSAN, it automatically discovers ALL available iSCSI Targets and presents them in its Preferences interface; users then need to manually select which one to use, and must be very careful to choose the correct one. One interesting note about Synology DiskStation-level NAS/SAN servers (at least for the DS216 series) is that their configuration has an explicit 'allow/disallow multiple connections' setting for each defined Target, so multiple connections to the same available Target can be managed at the iSCSI server level. Of course, if multiple connections are enabled, then the iSCSI user(s) are responsible for whatever happens to the Target data. Since the original question was about a (client) iSCSI Initiator rather than file sharing (MP), I did not include information about the other SNS products, including Xtarget, which is an iSCSI server application.
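Since picking the wrong Target can be destructive, it helps to know how targets are named. Per RFC 3720, an iSCSI Qualified Name looks like `iqn.2004-01.com.example:storage.disk0`: a date, a reversed domain for the naming authority, and an optional unique suffix. A small sketch for sanity-checking which target you are about to connect to (the Synology-style target name below is invented for illustration):

```python
import re

# iSCSI Qualified Name (IQN) format per RFC 3720:
#   iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>]
IQN_RE = re.compile(r"^iqn\.(\d{4})-(\d{2})\.([^:]+)(?::(.+))?$")

def parse_iqn(name: str) -> dict:
    """Split an IQN into its parts; raises ValueError if malformed."""
    m = IQN_RE.match(name)
    if not m:
        raise ValueError(f"not a valid IQN: {name!r}")
    year, month, authority, unique = m.groups()
    return {
        "year": int(year),
        "month": int(month),
        "authority": authority,  # reversed DNS name of the naming authority
        "unique": unique,        # target-specific suffix, may be None
    }

# Hypothetical DiskStation target name, purely for illustration.
info = parse_iqn("iqn.2000-01.com.synology:ds216.Target-1")
print(info["authority"], info["unique"])
```

Reading the authority and suffix back before logging in is a cheap guard against selecting a Target that belongs to somebody else's volume.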
There is no mention of iSCSI on Apple's site, and I have never heard of Apple supporting iSCSI.
You cannot run Linux drivers on Mac OS X; drivers are one area where Darwin is very different from Linux. You might have more luck with BSD drivers. Running drivers from the command line makes little sense: drivers don't have much of a user interface.
You might need the command line to start or install drivers. Assuming an iSCSI driver existed for Mac OS X / Darwin, the system could see a remote device and handle it. Assuming the file system on this device were supported by Darwin, then all applications, with or without a GUI, would 'see' this file system. File-system drivers for Mac OS X would have to be written as a kext and would be I/O Kit-based. Totally un-BSD. If it were a file system, you would be wrong (since the VFS layer is basically BSD), but it isn't a file system; it's a block device.
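The block-device vs. file-system distinction being argued here can be sketched in a few lines. This is a toy model, not any real kernel API: what iSCSI (or Fibre Channel, or FireWire) hands the OS is a flat array of fixed-size blocks; the file system is a separate layer that assigns meaning to them.

```python
import io

BLOCK_SIZE = 512  # classic disk sector size

class BlockDevice:
    """Toy block device: roughly what an iSCSI LUN looks like to the OS.
    It knows nothing about files -- only numbered, fixed-size blocks."""

    def __init__(self, backing: io.BytesIO, nblocks: int):
        self.backing = backing
        self.nblocks = nblocks

    def read_block(self, n: int) -> bytes:
        assert 0 <= n < self.nblocks
        self.backing.seek(n * BLOCK_SIZE)
        return self.backing.read(BLOCK_SIZE)

    def write_block(self, n: int, data: bytes) -> None:
        assert len(data) == BLOCK_SIZE and 0 <= n < self.nblocks
        self.backing.seek(n * BLOCK_SIZE)
        self.backing.write(data)

# A "file system" is whatever higher layer interprets those blocks;
# here, trivially, block 0 holds a volume label.
dev = BlockDevice(io.BytesIO(bytes(BLOCK_SIZE * 8)), nblocks=8)
dev.write_block(0, b"MYVOLUME".ljust(BLOCK_SIZE, b"\x00"))
print(dev.read_block(0)[:8])
```

The point of the sketch: an iSCSI driver only has to implement `read_block`/`write_block` over the network; whatever file system sits on top is somebody else's layer.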
So yes, it would be an I/O Kit KEXT. However, to say that it's 'totally un-BSD' is a stretch. BSD drivers are relatively easy to port to Mac OS X if they are written correctly. The wrapper tends to be relatively small, with additional changes needed for synchronization where applicable. Nobody 'really needs' iSCSI.
iSCSI isn't real yet. It's still one of those 'coming soon' things, like InfiniBand. And we saw how well InfiniBand worked out.
iSCSI is just another way of solving a problem that's already been solved in any number of other ways. You need to attach a computer to some storage. You can use direct-attach FireWire storage. That has the advantage of being absolutely bullet-proof. Or you can use Fibre Channel to attach to a switched fabric. That works fine, too; just present a LUN to the Mac and let it format and mount it. Or you can use a network storage technology, like AppleShare or NFS.
Those work fine, too, and the Power Macs, PowerBooks, and Xserves are all shipping with 1000BASE-T, so that's not a problem. There are any number of ways to ameliorate your so-called 'real need' for iSCSI. These work today.
1) FireWire - no management, just loose drives attached to single machines. Might as well suggest a USB memory stick. FireWire drives don't make a SAN. 2) Fibre Channel - cost of entry approaching $50k. That adds up to about 50k reasons not to use it on a home machine or small network.
3) Network storage - not really a block-level disk-access technology, is it? I think the real reason is that very few people are using Macs in a data center serving up real applications to lots of clients - the sorta place where a well-managed SAN makes sense. Now that the draft standard has been finalized (but not ratified), I imagine you'll see iSCSI becoming more commoditized and more software being made available for more OSes. Note that the Windows and Linux software packages are only iSCSI initiators - I haven't seen any software-based iSCSI targets. This means that even if you did port the code to Darwin, you'd still have to have some storage device out there speaking iSCSI to point your Mac at.
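For what it's worth, software targets for commodity OSes did eventually appear. Purely as an illustration of what the poster is asking for: on Linux, the iSCSI Enterprise Target project exposes a plain file or block device as a LUN through a configuration roughly like the following (the IQN and path here are invented, and the exact syntax may differ by version):

```
# /etc/ietd.conf (illustrative only; IQN and path are made up)
Target iqn.2004-01.com.example:storage.disk0
    # Export a plain file as LUN 0 over iSCSI
    Lun 0 Path=/var/lib/iscsi/disk0.img,Type=fileio
```

With something like that on a cheap Linux box, a Mac-side initiator would have a target to point at without dedicated storage hardware.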
Except that in my experience most data centers are migrating to network-attached storage. People are tired of SANs, the high cost of parts and maintenance, the difficulty and expense of backing up, etc.
It SOUNDS great to have 5TB of storage in one unit, but just exactly how do you keep current off-site backups? That's right.
You maintain another 5TB unit in another location and run a dedicated T3 between them. Yeah, THAT'S affordable. The SAN was a great idea, Fibre Channel was a great idea, but it never reached critical mass, and now distributed network storage is taking over. iSCSI will probably make some inroads, but it will never replace a simple device with network ports acting as a server.
The latter is cheap, easily understood, easily maintained, provides 95% of the functionality necessary to any IT department, and the clients are built in to every major OS on the market. There are a lot of apps, though, that demand disk-level access to things. Sure, you could push a lot of things to DFS (in the Windows world), even, conceivably, to a nice system like an XRAID. But for systems that demand disk-level access, such as SQL, Exchange, etc., a SAN or server-attached storage is the only way to go now.
iSCSI would change that, and bring the power of SAN-type storage to a much better budget point. It's true, SANs are too limiting and too finicky and way, way too expensive. That's why the XRAID looks appealing, and it would be more appealing if an attached Xserve could serve up the disk space as an iSCSI drive. I'm less interested in seeing the proliferation of iSCSI clients, and much more interested in the proliferation of iSCSI target software.
It'll make storage that much more flexible. Which is one of the major reasons that people have not been migrating to E2K as quickly as MS would have liked. But still: Exchange is NOT accessing the disk directly; MS just did a brain-dead thing and forces E2K to use the DASD storage stack instead of the one a level higher, at the 'volume' level, where total abstraction is possible. One major reason MS did this was to lock people in to their proprietary Microsoft Cluster Server solutions and higher licensing fees for those components. There is no technical reason that E2K must use local storage.
At least that's my understanding of the state of things. This doesn't fully match my experience, and I work for HP in their storage software division. Fortune 50, 100, and many Fortune 500 companies are using, or starting to set up, storage networks based on Fibre Channel (not much iSCSI yet). Most of the major data centers in the world are using FC interconnects and large storage arrays (5+ TB per array; the bigger, in capacity not physical size, the better generally).
The trend is toward putting the data in storage arrays external to the servers; think about blade servers, for example. So, in other words, toward using storage networks regardless of whether they are based on FC interconnects, iSCSI, etc. Also, I'm not sure what your point is about T3s and 5TB of replicated storage. Many companies do that with far larger amounts of storage using metro fiber, not T3s. This is not a Fibre Channel issue but a disaster-recovery issue.
'1) FireWire - no management, just loose drives attached to single machines. Might as well suggest a USB memory stick. FireWire drives don't make a SAN.' No, FireWire drives can be attached to many machines at the same time. There -are- FireWire SAN solutions out there right now. '2) Fibre Channel - cost of entry approaching $50k. That adds up to about 50k reasons not to use it on a home machine or small network.'
No, have a look at Apple's Xraid box. Much cheaper than $50k. 'I think the real reason is that very few people are using Macs in a data center serving up real applications to lots of clients - the sorta place where a well-managed SAN makes sense. Now that the draft standard has been finalized (but not ratified), I imagine you'll see iSCSI becoming more commoditized and more software being made available for more OSes.' I think the main reason Macs aren't doing that job is that there hasn't been any Mac capable of doing that job.
I got an almost-new quality 16-port Brocade 2800 switch off eBay, fully loaded with GBICs, for around $4k; add about $400-600 per host for adapters, and $5k-11k for something like Apple's Xraid, and you can get into a FC SAN for much less than $50k. If you want 2Gb Fibre Channel it would of course cost you more for the switch (2-3x currently). Not that it is a cost-effective thing to do for small SANs.
iSCSI isn't that cheap either, but in theory you do save on at least the switch costs (most companies are making iSCSI adapters instead of normal NICs for performance reasons). IEEE 1394b is peer-to-peer. SAN management is a job for a user-space tool. Actually, I think it's very consistent, given that a) he's asking Ask Slashdot instead of somebody like EMC, so it's probably not for a major data center, and b) there aren't a hell of a lot of data centers out there hosting large-scale Mac server installs (yet). Which is why iSCSI would actually be nice on a Mac: a software implementation would probably be cheap, would utilize commodity hardware, and would be totally accessible for the home user. I manage a number of Intel systems attached to an EMC SAN at work, and I'd love to be able to implement something similar at home myself, which has me watching the emerging iSCSI standard very closely for these same reasons - I just don't have the quid to drop a Symmetrix in at home. 'Very few people are using Macs in a data center serving up real applications to lots of clients - the sorta place where a well-managed SAN makes sense.' Uh, which one is it, then?
If you're working in a real data center, you're presumably not in your home. And, not incidentally, I've still yet to have it explained to me why a block-level network storage system is a good thing as compared to a network file system (although not NFS particularly), other than for developers who can't wrap their minds around any model that doesn't involve every PC having a 'disk'. 'iSCSI is just another way of solving a problem that's already been solved in any number of other ways.' Absolutely right. And not even an improvement.
I'd say quite the opposite. What's so great about the SCSI command set that makes people think it'll be such a wonderful networked protocol? There are lots of things it doesn't do that you'd like a network protocol to do. Presumably many of these are addressed by the 'i' up front, but why do this stupid layering? Is elegance totally lost on modern programmers? This all doesn't even get into the question of whether a block-level network storage system is a good thing. Can someone explain to me why it's an improvement over a good network file system?
And please don't talk about problems with specific network file systems. We all know NFS sucks.
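The layering being complained about is very concrete: an unmodified SCSI command descriptor block is wrapped in an iSCSI PDU, which rides on TCP. A hedged sketch of just the fixed 48-byte Basic Header Segment from RFC 3720, with a READ(10) CDB tunneled inside it (field offsets follow the spec; the particular values are illustrative):

```python
import struct

def scsi_read10_cdb(lba: int, nblocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) CDB (opcode 0x28), padded to the
    16 bytes the iSCSI SCSI-Command PDU reserves for a CDB."""
    cdb = struct.pack(">BBIBHB", 0x28, 0, lba, 0, nblocks, 0)
    return cdb.ljust(16, b"\x00")

def iscsi_scsi_command_bhs(itt: int, cdb16: bytes, data_len: int) -> bytes:
    """48-byte iSCSI Basic Header Segment for a SCSI Command PDU
    (opcode 0x01), per RFC 3720. Sketch only: LUN, CmdSN, etc. left zero."""
    assert len(cdb16) == 16
    bhs = bytearray(48)
    bhs[0] = 0x01                            # opcode: SCSI Command
    bhs[1] = 0x80 | 0x40                     # F (final) and R (read) bits
    bhs[5:8] = data_len.to_bytes(3, "big")   # DataSegmentLength
    bhs[16:20] = itt.to_bytes(4, "big")      # Initiator Task Tag
    bhs[20:24] = data_len.to_bytes(4, "big") # Expected Data Transfer Length
    bhs[32:48] = cdb16                       # the SCSI CDB, tunneled whole
    return bytes(bhs)

pdu = iscsi_scsi_command_bhs(itt=1, cdb16=scsi_read10_cdb(0, 8), data_len=8 * 512)
print(len(pdu))  # 48
```

Whether you find that elegant or not, the design choice is plain to see: rather than defining a new network storage protocol, iSCSI ships the existing SCSI command set verbatim inside a TCP-friendly envelope.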