
Remote Access from Windows 7 to Ubuntu and Linux Mint with xRDP

Hello Linux Geeksters.

In this article I will show you how to install xRDP on Ubuntu and Linux Mint.

xRDP is a server that allows you to connect remotely to Ubuntu or Linux Mint from Windows 7 via Remote Desktop Connection, the built-in Windows client for Microsoft's RDP protocol.

To install xRDP on Ubuntu and Linux Mint, do:

Just install xrdp on the Linux machine:
$ sudo apt-get install xrdp
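
If the connection does not work right away, it helps to confirm that the xrdp service is running and to find the IP address to enter on the Windows side. A minimal sketch (service and command names can vary slightly between releases):

$ sudo service xrdp status
$ sudo service xrdp restart
$ ifconfig | grep "inet addr"

Enter the reported address into Remote Desktop Connection (mstsc.exe) on the Windows 7 machine.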







http://linuxg.net/remote-access-from-windows-7-to-ubuntu-and-linux-mint-with-xrdp/


http://ubuntuforums.org/showthread.php?t=2092298




Dear Chris,

Thank you very much for your message.

Here you have some helpful commands for the daemon in Linux.

teamviewer --daemon status    show current status of the TeamViewer daemon
teamviewer --daemon start     start the TeamViewer daemon
teamviewer --daemon stop      stop the TeamViewer daemon
teamviewer --daemon restart   stop/start the TeamViewer daemon
teamviewer --daemon disable   disable the TeamViewer daemon - don't start the daemon on system startup
teamviewer --daemon enable    enable the TeamViewer daemon - start the daemon on system startup (default)

If you have any further questions, please do not hesitate to contact us again.
 

Best regards



Enter the following command in a terminal:


teamviewer --daemon enable



2013-07-14 (Sun) 16:19:15 KST

Action: Installing daemon (8.0.17147) for 'upstart' ...

installing /etc/init/teamviewerd.conf (/opt/teamviewer8/tv_bin/script/teamviewerd.conf)

initctl start teamviewerd

teamviewerd start/running, process 9536









If you are using snapshots, you need to DELETE all of them before resizing the VDI! If you have already attempted to resize the VDI without deleting the snapshots, then hopefully you have backups of the virtual machine's files plus any XML files located in ~/.VirtualBox on Linux. (Source: http://www.webdesignblog.asia/software/linux-software/resize-virtualbox-disk-image-manipulate-vdi/)





http://www.webupd8.org/2011/02/how-to-easily-resize-virtualbox-40-hard.html



VirtualBox 4.0 got a very cool new feature that allows you to easily resize a hard disk in just a few seconds. Previously, you had to install Gparted to do this and the procedure was quite slow.


In VirtualBox 4.0+ (see how to install VirtualBox 4.0.x in Ubuntu), to resize a VirtualBox hard disk image (.VDI) firstly locate the folder where the .vdi you want to resize is located - this should be under ~/VirtualBox VMs or ~/.VirtualBox/HardDisks. Then open a terminal, navigate to that folder ("cd /FOLDER/PATH") and run the following command to resize the .VDI:
VBoxManage modifyhd YOUR_HARD_DISK.vdi --resize SIZE_IN_MB


Where YOUR_HARD_DISK.vdi is the VirtualBox hard disk you want to resize and SIZE_IN_MB is the new virtual hard disk size, in megabytes. For example, the following command will resize the VirtualBox hard disk called "natty.vdi" to 12000 megabytes:
VBoxManage modifyhd natty.vdi --resize 12000
That's it! The process takes just a few seconds and you should now have a resized VirtualBox hard disk.
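
To confirm that the resize actually took effect, you can query the image afterwards; a quick check using the same example disk:

VBoxManage showhdinfo natty.vdi

The Capacity line in the output should now report the new size.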






This also works on Windows.


http://www.webdesignblog.asia/software/linux-software/resize-virtualbox-disk-image-manipulate-vdi/





source : http://linux.101hacks.com/unix/shutdown/

Shutdown the machine immediately

# shutdown -h now

Broadcast message from sathiya@sathiya-laptop
	(/dev/pts/1) at 11:28 ...

The system is going down for halt NOW!

Reboot the machine immediately

# shutdown -r now

Broadcast message from sathiya@sathiya-laptop
	(/dev/pts/1) at 11:28 ...

The system is going down for reboot NOW!

Shutdown the machine with user defined message

# shutdown -h now 'System is going down for replacement of primary memory'

Broadcast message from sathiya@sathiya-laptop
	(/dev/pts/1) at 11:28 ...

The system is going down for halt NOW!
System is going down for replacement of primary memory

Scheduling the shutdown with 24 hour format

# shutdown -h 20:00

It sends the following message immediately to all terminals.

Broadcast message from sathiya@sathiya-laptop
	(/dev/pts/3) at 10:25 ...

The system is going down for halt in 575 minutes!

Similar to shutdown (halt), you can also schedule a reboot at a specified time, as shown below.

# shutdown -r 20:00

Broadcast message from sathiya@sathiya-laptop
	(/dev/pts/3) at 10:27 ...

The system is going down for reboot in 573 minutes!

Cancel a running shutdown

You can cancel a running shutdown by using the -c option:

# shutdown -c

Syntax and Options

shutdown [OPTION]… TIME [MESSAGE]

Short Option    Description
-r              Requests that the system be rebooted after it has been brought down
-h              Requests that the system be either halted or powered off after it has been brought down, with the choice as to which left up to the system
-H              Requests that the system be halted after it has been brought down
-P              Requests that the system be powered off after it has been brought down
-c              Cancels a running shutdown. TIME is not specified with this option; the first argument is MESSAGE
-k              Only send out the warning messages and disable logins; do not actually bring the system down
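
Besides an absolute 24-hour time, shutdown also accepts a relative time of the form +m (minutes from now). A small sketch combining this with a custom warning message (as root):

# shutdown -h +10 'System is going down for maintenance in 10 minutes'

A pending timed shutdown like this can still be cancelled with shutdown -c, as described above.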




Source: http://stackoverflow.com/questions/1120095/split-files-using-tar-gz-zip-or-bzip2




You can use the split command with the -b option:

split -b 1024m file.tar.gz

It can be reassembled on a Windows machine using @Joshua's answer.

copy /b file1 + file2 + file3 + file4 filetogether

Edit: As @Charlie stated in the comment below, you might want to set a prefix explicitly because it will use x otherwise, which can be confusing.

split -b 1024m "file.tar.gz" "file.tar.gz.part-"

// Creates files: file.tar.gz.part-aa, file.tar.gz.part-ab, file.tar.gz.part-ac, ...

Edit: Editing the post because the question is closed and the most effective solution is very close to the content of this answer:

# create archives
$ tar cz my_large_file_1 my_large_file_2 | split -b 1024MiB - myfiles_split.tgz_
# uncompress
$ cat myfiles_split.tgz_* | tar xz

This solution avoids the need to use an intermediate large file when (de)compressing. Use the tar -C option to place the resulting files in a different directory. By the way, if the archive consists of only a single file, tar can be avoided and gzip used on its own:

# create archives
$ gzip -c my_large_file | split -b 1024MiB - myfile_split.gz_
# uncompress
$ cat myfile_split.gz_* | gunzip -c > my_large_file

For Windows, you can download ported versions of the same commands or use Cygwin.
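
Whichever platform the parts end up on, it is worth checking the reassembled archive against a checksum of the original; a minimal sketch, assuming md5sum (or an equivalent tool) is available on both ends:

# on the source machine, before splitting
$ md5sum file.tar.gz
# after reassembling the parts; the checksum should match the original
$ cat file.tar.gz.part-* > file_rejoined.tar.gz
$ md5sum file_rejoined.tar.gz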


TeamViewer download for Linux Mint 15 (64-bit):

http://download.teamviewer.com/download/teamviewer_linux_x64.deb


After downloading the file, install it with the following command:

  • sudo dpkg -i teamviewer_linux_x64.deb



The latest version can be downloaded from the official website:

http://www.teamviewer.com/en/download/linux.aspx


For details, refer to the instructions below.


How do I install TeamViewer on my Linux distribution?

Graphical installation

For installing TeamViewer, we recommend using the graphical installer. The graphical installer can be invoked by (double) clicking the downloaded package.

If this is not the case and, for example, the Archive Manager opens up instead, open the context menu (right-click on the downloaded package). Depending on your distribution you will get different options to install the package, e.g. “Open with software installation”, “Open with GDebi package installer”, “Open with Ubuntu Software Center”, or “Open with QApt package installer”.

RedHat, CentOS, Fedora, SUSE

You need the teamviewer_linux.rpm package.

For installing TeamViewer, we recommend using the graphical installer.

If you prefer to use the command line or if there is no graphical installer available you can use either one of these commands:

  • yum install teamviewer_linux.rpm (recommended, as it will install missing dependencies)
  • rpm -i teamviewer_linux.rpm

In case “yum” is asking for a missing public key, you can download it here and import the key by using the following command:

  • rpm --import TeamViewer_Linux_PubKey.asc

After importing the public key, please execute the “yum”-command again to install the TeamViewer rpm.

Notes to Red Hat 4.x:

We do not offer packages for RedHat/CentOS 4.x. If you have a need to run TeamViewer on RedHat/CentOS 4.x, please contact our technical support.


Debian, Mint, Ubuntu, Kubuntu, Xubuntu…

For 32-bit DEB-systems you need the teamviewer_linux.deb package.

For 64-bit DEB-systems without Multiarch you need the teamviewer_linux_x64.deb package. Please see the note on Multiarch below.

For installing TeamViewer, we recommend using the graphical installer.

If you prefer to use the command line or if there is no graphical installer available you can use either one of these commands:

For the 32-bit package:

  • sudo dpkg -i teamviewer_linux.deb

For the 64-bit package:

  • sudo dpkg -i teamviewer_linux_x64.deb

In case “dpkg” indicates missing dependencies, complete the installation by executing the following command:

  • sudo apt-get install -f

Notes to Multiarch:

On newer 64-bit DEB-systems with Multiarch-support (Debian 7) teamviewer_linux_x64.deb cannot be installed because the package ia32-libs is not available anymore on these systems. In this case you can use teamviewer_linux.deb instead.

In case you get the error “wrong architecture i386” you have to execute the following command lines:

  • dpkg --add-architecture i386
  • apt-get update

For further information: http://wiki.debian.org/Multiarch/HOWTO
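
Putting the Multiarch notes together, a complete command-line installation on a 64-bit Debian 7 system might look roughly like this (a sketch only; run as root and adjust the package filename to the one you downloaded):

dpkg --add-architecture i386     # allow i386 packages on an amd64 system
apt-get update
dpkg -i teamviewer_linux.deb     # the 32-bit package is used on Multiarch systems
apt-get install -f               # pull in any missing dependencies reported by dpkg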


Other platforms

TeamViewer also runs on many other distributions, although these are not officially supported.

You can use our tar.gz package, which will only create files in the directory where you extract it. The tar.gz package works if the libraries that TeamViewer depends on are installed, which is often the case.

On Mandriva/Mageia TeamViewer can be installed using the following command:

  • urpmi --force --allow-nodeps teamviewer_linux.rpm

On PCLinuxOS TeamViewer can be installed from the repository. The package is provided by the PCLinuxOS team.


Introduction

Solid State Drives (SSDs) are not PnP devices. Special considerations such as partition alignment, choice of file system, TRIM support, etc. are needed to set up SSDs for optimal performance. This article attempts to capture referenced, key learnings to enable users to get the most out of SSDs under Linux. Users are encouraged to read this article in its entirety before acting on recommendations as the content is organized by topic, not necessarily by any systematic or chronologically relevant order.

Note: This article is targeted at users running Linux, but much of the content is also relevant to our friends using other operating systems like BSD, Mac OS X or Windows.

Advantages over HDDs

  • Fast read speeds - 2-3x faster than modern desktop HDDs (7,200 RPM using SATA2 interface).
  • Sustained read speeds - no decrease in read speed across the entirety of the device. HDD performance tapers off as the drive heads move from the outer edges to the center of HDD platters.
  • Minimal access time - approximately 100x faster than an HDD. For example, 0.1 ms (100 us) vs. 12-20 ms (12,000-20,000 us) for desktop HDDs.
  • High degree of reliability.
  • No moving parts.
  • Minimal heat production.
  • Minimal power consumption - fractions of a W at idle and 1-2 W while reading/writing vs. 10-30 W for a HDD depending on RPMs.
  • Light-weight - ideal for laptops.

Limitations

  • Per-storage cost (close to a dollar per GB, vs. around a dime or two per GB for rotating media).
  • Capacity of marketed models is lower than that of HDDs.
  • Large cells require different filesystem optimizations than rotating media. The flash translation layer hides the raw flash access which a modern OS could use to optimize access.
  • Partitions and filesystems need some SSD-specific tuning. Page size and erase page size are not autodetected.
  • Cells wear out. Consumer MLC cells at mature 50nm processes can handle 10000 writes each; 35nm generally handles 5000 writes, and 25nm 3000 (smaller being higher density and cheaper). If writes are properly spread out, are not too small, and align well with cells, this translates into a lifetime write volume for the SSD that is a multiple of its capacity. Daily write volumes have to be balanced against life expectancy.
  • Firmwares and controllers are complex. They occasionally have bugs. Modern ones consume power comparable with HDDs. They implement the equivalent of a log-structured filesystem with garbage collection. They translate SATA commands traditionally intended for rotating media. Some of them do on the fly compression. They spread out repeated writes across the entire area of the flash, to prevent wearing out some cells prematurely. They also coalesce writes together so that small writes are not amplified into as many erase cycles of large cells. Finally they move cells containing data so that the cell does not lose its contents over time.
  • Performance can drop as the disk gets filled. Garbage collection is not universally well implemented, meaning freed space is not always collected into entirely free cells.

Pre-Purchase Considerations

There are several key features to look for prior to purchasing a contemporary SSD.

  • Native TRIM support is a vital feature that both prolongs SSD lifetime and reduces loss of performance for write operations over time.
  • Buying the right sized SSD is key. As with all filesystems, target <75 % occupancy for all SSD partitions to ensure efficient use by the kernel.

Reviews

This section is not meant to be all-inclusive, but does capture some key reviews.

Tips for Maximizing SSD Performance

TRIM

Most SSDs support the ATA_TRIM command for sustained long-term performance and wear-leveling. For more, including some before-and-after benchmarks, see this tutorial.

As of Linux kernel version 3.7, the following filesystems support TRIM: ext4, btrfs, JFS, and XFS.

The Choice of Filesystem section of this article offers more details.

Enable TRIM by Mount Flags

Using the discard flag in one's /etc/fstab enables the benefits of the TRIM command described above.

/dev/sda1  /       ext4   defaults,noatime,discard   0  1
/dev/sda2  /home   ext4   defaults,noatime,discard   0  2
Note: It does not work with ext3; using the discard flag for an ext3 root partition will result in it being mounted read-only.
Warning: Users need to be certain that kernel version 2.6.33 or above is being used AND that their SSD supports TRIM before attempting to mount a partition with the discard flag. Data loss can occur otherwise!
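
A quick way to check the second condition is to ask the drive itself; a sketch using hdparm (run as root, replacing sdX with the SSD):

# hdparm -I /dev/sdX | grep -i trim

If the drive supports TRIM, a line such as "Data Set Management TRIM supported" should appear in the output.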

Apply TRIM via cron

Enabling TRIM on supported SSDs is definitely recommended. But sometimes it may cause some SSDs to perform slowly during deletion of files. If this is the case, one may choose to use fstrim as an alternative.

# fstrim -v /

The partition for which fstrim is to be applied must be mounted, and must be indicated by the mount point.

If this method seems like a better alternative, it might be a good idea to have this run from time to time using cron. To have this run daily, the default cron package (cronie) includes an anacron implementation which, by default, is set up for hourly, daily, weekly, and monthly jobs. To add to the list of daily cron tasks, simply create a script that takes care of the desired actions and put it in /etc/cron.daily, /etc/cron.weekly, etc. Appropriate nice and ionice values are recommended if this method is chosen. If implemented, remove the "discard" option from /etc/fstab.
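
A minimal sketch of such a script, saved for example as /etc/cron.daily/trim and made executable (the mount points and the nice/ionice values are assumptions to adapt to your system):

#!/bin/sh
# batch-TRIM the SSD-backed filesystems at low CPU and I/O priority
ionice -c3 nice -n19 fstrim -v /
ionice -c3 nice -n19 fstrim -v /home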

Note: Use the 'discard' mount option as a first choice. This method should be considered second to the normal implementation of TRIM.
Enable TRIM for LVM

Enable the issue_discards option in /etc/lvm/lvm.conf.
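
For reference, the relevant excerpt of /etc/lvm/lvm.conf would then look something like this (only the issue_discards value changes; 1 enables it):

devices {
    ...
    # pass discards down to the physical volumes when logical volumes are removed or reduced
    issue_discards = 1
}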

Enable TRIM With mkfs.ext4 or tune2fs (Discouraged)

One can set the trim flag statically with tune2fs or when the filesystem is created.

# tune2fs -o discard /dev/sdXY

or

# mkfs.ext4 -E discard /dev/sdXY
Note: After this option is set as described above, any time the user checks mounted filesystems with "mount", the discard option will not show up. Even when discard is passed on the CLI in addition to the option being set with tune2fs or mkfs.ext4, it will not show up. See the following thread for a discussion about this: https://bbs.archlinux.org/viewtopic.php?id=137314

I/O Scheduler

Consider switching from the default CFQ scheduler (Completely Fair Queuing) to NOOP or Deadline. The latter two offer performance boosts for SSDs. The NOOP scheduler, for example, implements a simple queue for all incoming I/O requests, without re-ordering and grouping the ones that are physically closer on the disk. On SSDs seek times are identical for all sectors, thus invalidating the need to re-order I/O queues based on them.

The CFQ scheduler is enabled by default on Arch. Verify this by viewing the contents of /sys/block/sdX/queue/scheduler:

$ cat /sys/block/sdX/queue/scheduler
noop deadline [cfq]

The scheduler currently in use is the one shown in brackets among the available schedulers.

Users can change this on the fly without the need to reboot.

As root:

# echo noop > /sys/block/sdX/queue/scheduler

As a regular user:

$ sudo tee /sys/block/sdX/queue/scheduler <<< noop

This method is non-persistent (i.e. the change will be lost upon reboot). Confirm the change was made by viewing the contents of the file again and ensuring noop is between brackets.

Kernel parameter (for a single device)

If the sole storage device in the system is an SSD, consider setting the I/O scheduler for the entire system via the elevator=noop kernel parameter. See Kernel parameters for more info.
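
On GRUB2-based systems this amounts to appending the parameter to the kernel command line; a sketch for a Debian/Ubuntu-style setup (Arch users would regenerate the config with grub-mkconfig instead of update-grub, and the existing default options may differ):

/etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"

Then regenerate the GRUB configuration and reboot:

# update-grub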

Using the sys virtual filesystem (for multiple devices)

This method is preferred when the system has several physical storage devices (for example an SSD and an HDD).

Create the following tmpfiles.d entry, where "X" is the letter for the SSD device.

 /etc/tmpfiles.d/set_IO_scheduler.conf 
w /sys/block/sdX/queue/scheduler - - - - noop

Because of the potential for udev to assign different /dev/ nodes to drives before and after a kernel update, users must take care that the NOOP scheduler is applied to the correct device upon boot. One way to do this is by using the SSD's device ID to determine its /dev/ node. To do this automatically, use the following snippet instead of the line above and add it to /etc/rc.local:

declare -ar SSDS=(
  'scsi-SATA_SAMSUNG_SSD_PM8_S0NUNEAB861972'
  'ata-SAMSUNG_SSD_PM810_2.5__7mm_256GB_S0NUNEAB861972'
)

for SSD in "${SSDS[@]}" ; do
  BY_ID=/dev/disk/by-id/$SSD

  if [[ -e $BY_ID ]] ; then
    # resolve the symlink target (e.g. ../../sda) to a bare device name (sda)
    DEV_NAME=`ls -l $BY_ID | awk '{ print $NF }' | sed -e 's/[/\.]//g'`
    SCHED=/sys/block/$DEV_NAME/queue/scheduler

    if [[ -w $SCHED ]] ; then
      # select the noop scheduler for this SSD
      echo noop > $SCHED
    fi
  fi
done

where SSDS is a Bash array containing the device IDs of all SSD devices. Device IDs are listed in /dev/disk/by-id/ as symbolic links pointing to their corresponding /dev/ nodes. To view the links listed with their targets, issue the following command:

ls -l /dev/disk/by-id/

Using udev for one device or HDD/SSD mixed environment

Though the above will undoubtedly work, it is really just a workaround. It should also be noted that with the move to systemd there will be no rc.local. Ergo, it is preferable to use the system that is responsible for the devices in the first place to implement the scheduler. In this case that is udev, and all one needs is a simple udev rule.

To do this, create and edit a file in /etc/udev/rules.d named something like '60-schedulers.rules'. In the file include the following:

# set deadline scheduler for non-rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

# set cfq scheduler for rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"

Of course, set deadline/cfq to the desired schedulers. Changes should occur upon next boot. To check success of the new rule:

$ cat /sys/block/sdX/queue/scheduler   #where X is the device in question
Note: Keep in mind cfq is the default scheduler, so the second rule with the standard kernel is not actually necessary. Also, in the example sixty is chosen because that is the number udev uses for its own persistent naming rules. Thus, it would seem that block devices are at this point able to be modified and this is a safe position for this particular rule. But the rule can be named anything so long as it ends in '.rules'. (Credit: falconindy and w0ng for posting on his blog)

Swap Space on SSDs

One can place a swap partition on an SSD. Note that most modern desktops with more than 2 GiB of memory rarely use swap at all. The notable exception is systems which make use of the hibernate feature. The following is a recommended tweak for SSDs using a swap partition; it reduces the "swappiness" of the system, thus avoiding writes to swap:

# echo 1 > /proc/sys/vm/swappiness

Or one can simply modify /etc/sysctl.conf as recommended in the Maximizing Performance wiki article:

vm.swappiness=1
vm.vfs_cache_pressure=50
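
To apply the values from /etc/sysctl.conf without rebooting and to confirm the result (as root):

# sysctl -p
# cat /proc/sys/vm/swappiness

The second command should print 1 once the new value is active.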

SSD Memory Cell Clearing

On occasion, users may wish to completely reset an SSD's cells to the same virgin state they were at the time the device was installed thus restoring it to its factory default write performance. Write performance is known to degrade over time even on SSDs with native TRIM support. TRIM only safeguards against file deletes, not replacements such as an incremental save.

The reset is easily accomplished in a three step procedure denoted on the SSD Memory Cell Clearing wiki article.

Tips for Minimizing SSD Read/Writes

An overarching theme for SSD usage should be 'simplicity' in terms of locating high-read/write operations either in RAM (Random Access Memory) or on a physical HDD rather than on an SSD. Doing so will add longevity to an SSD. This is primarily due to the large erase block size (512 KiB in some cases); a lot of small writes result in huge effective writes.

Note: A 32 GB SSD with a mediocre 10x write amplification factor, a standard 10,000 write/erase cycle, and 10 GB of data written per day would have a life expectancy of about 8 years. It gets better with bigger SSDs and modern controllers with less write amplification.

Use iotop -oPa and sort by disk writes to see how much each program is writing to disk.

Intelligent Partition Scheme

  • For systems with both an SSD and an HDD, consider relocating the /var partition to a magnetic disc on the system rather than on the SSD itself to avoid read/write wear.

noatime Mount Flag

Using this flag in one's /etc/fstab halts the logging of read accesses to the file system via an update to the atime information associated with the file. The importance of the noatime setting is that it eliminates the need for the system to make writes to the file system for files which are simply being read. Since writes can be somewhat expensive, as mentioned in the previous section, this can result in measurable performance gains. Note that the write time information for a file will continue to be updated anytime the file is written to with this option enabled.

/dev/sda1  /       ext4   defaults,noatime   0  1
/dev/sda2  /home   ext4   defaults,noatime   0  2
Note: This setting will cause issues with some programs such as Mutt, as the access time of the file will eventually be earlier than the modification time, which makes no sense. Using the relatime option instead of noatime will ensure that the atime field is never earlier than the last modification time of a file.

Locate High-Use Files to RAM

Browser Profiles

One can easily mount browser profile(s) such as chromium, firefox, opera, etc. into RAM via tmpfs and also use rsync to keep them synced with HDD-based backups. In addition to the obvious speed enhancements, users will also save read/write cycles on their SSD by doing so.

The AUR contains several packages to automate this process, for example profile-sync-daemon.

Others

For the same reasons a browser's profile can be relocated to RAM, so can highly used directories such as /srv/http (if running a web server). A sister project to profile-sync-daemon is anything-sync-daemon, which allows users to define any directory to sync to RAM using the same underlying logic and safe guards.

Compiling in tmpfs

Intentionally compiling in /tmp is a great idea to minimize this problem. Arch Linux defaults /tmp to 50 % of the physical memory. For systems with >4 GB of memory, one can create a /scratch directory and mount a tmpfs on it sized to more than 50 % of the physical memory.

Example of a machine with 8 GB of physical memory:

tmpfs     /scratch     tmpfs     nodev,nosuid,size=7G     0     0

Disabling Journaling on the filesystem

Using a journaling filesystem such as ext4 on an SSD WITHOUT a journal is an option to decrease read/writes. The obvious drawback of using a filesystem with journaling disabled is data loss as a result of an ungraceful dismount (i.e. post power failure, kernel lockup, etc.). With modern SSDs, Ted Ts'o advocates that journaling can be enabled with minimal extraneous read/write cycles under most circumstances:

Amount of data written (in megabytes) on an ext4 file system mounted with noatime.

operation     journal    w/o journal    percent change
git clone     367.0      353.0          3.81 %
make          207.6      199.4          3.95 %
make clean    6.45       3.73           42.17 %

"What the results show is that metadata-heavy workloads, such as make clean, do result in almost twice the amount data written to disk. This is to be expected, since all changes to metadata blocks are first written to the journal and the journal transaction committed before the metadata is written to their final location on disk. However, for more common workloads where we are writing data as well as modifying filesystem metadata blocks, the difference is much smaller."

Note: The make clean example from the table above typifies the importance of intentionally doing compiling in tmpfs as recommended in the preceding section of this article!
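
For those who decide to go without a journal anyway, the feature can be toggled on an existing ext4 filesystem; a hedged sketch (the partition must be unmounted first, and a forced fsck afterwards is a good idea):

# tune2fs -O ^has_journal /dev/sdXY
# e2fsck -f /dev/sdXY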

Choice of Filesystem

Btrfs

Btrfs support has been included with the mainline 2.6.29 release of the Linux kernel. Some feel that it is not mature enough for production use while there are also early adopters of this potential successor to ext4. Users are encouraged to read the Btrfs article for more info.

Ext4

Ext4 is another filesystem that has support for SSDs. It has been considered stable since 2.6.28 and is mature enough for daily use. ext4 users must explicitly enable TRIM support using the discard mount option in fstab (or with tune2fs -o discard /dev/sdaX). See the official in-kernel documentation for further information on ext4.

XFS

Many users do not realize that in addition to ext4 and btrfs, XFS has TRIM support as well. This can be enabled in the usual ways. That is, the choice may be made of either using the discard option mentioned above, or by using the fstrim command. More information can be found on the XFS wiki.

JFS

As of Linux kernel version 3.7, proper TRIM support has been added. So far there is not a great wealth of information on the topic, but it has certainly been picked up by Linux news sites. It appears that it can be enabled via the discard mount option, or by using the batch-TRIM method with fstrim.

Firmware Updates

ADATA

ADATA has a utility available for Linux (i686) on their support page here. The link to the utility will appear after selecting the model.

Crucial

Crucial provides an option for updating the firmware with an ISO image. These images can be found after selecting your product here and downloading the "Manual Boot File."

Kingston

Kingston has a Linux utility to update the firmware of their Sandforce-based drives. It can be found on their support page.

Mushkin

The lesser known Mushkin brand Solid State drives also use Sandforce controllers, and have a Linux utility (nearly identical to Kingston's) to update the firmware.

OCZ

OCZ has a command line utility available for Linux (i686 and x86_64) on their forum here.

Samsung

Samsung notes that update methods other than their Magician Software are “not supported”, but it is possible. Apparently the Magician Software can be used to make a USB drive bootable with the firmware update; however, I could not get the Magician Software to cooperate with me. The easiest method is to use the bootable ISO images they provide for updating the firmware. They can be grabbed from here. Note that Samsung does not make it obvious at all that they actually provide these; they seem to have 4 different firmware update pages, each referencing different ways of doing things.

SanDisk

SanDisk makes ISO firmware images to allow SSD firmware updates on operating systems that are unsupported by their SanDisk SSD Toolkit. Note that one must choose the firmware for the right SSD model, as well as for its capacity (e.g. 60GB or 256GB). After burning the adequate ISO firmware image, simply restart your computer to boot from the newly created CD/DVD boot disk (it may also work from a USB stick).

I could not find a single page listing the firmware updates yet (site is a mess IMHO), but here are some relevant links:

SanDisk Extreme SSD Firmware Release notes and Manual Firmware update version R211

SanDisk Ultra SSD Firmware release notes and Manual Firmware update version 365A13F0




5.5.5. The network interface with the static IP

The network interface served by a static IP is configured by creating a configuration entry in the "/etc/network/interfaces" file as follows.

allow-hotplug eth0
iface eth0 inet static
 address 192.168.11.100
 netmask 255.255.255.0
 gateway 192.168.11.1
 dns-domain example.com
 dns-nameservers 192.168.11.1

When the Linux kernel detects the physical interface eth0, the allow-hotplug stanza causes ifup to bring up the interface and the iface stanza causes ifup to use the static IP to configure the interface.

Here, I assumed the following.

  • IP address range of the LAN network: 192.168.11.0 - 192.168.11.255

  • IP address of the gateway: 192.168.11.1

  • IP address of the PC: 192.168.11.100

  • The resolvconf package: installed

  • The domain name: "example.com"

  • IP address of the DNS server: 192.168.11.1

When the resolvconf package is not installed, DNS-related configuration needs to be done manually by editing "/etc/resolv.conf" as follows.

nameserver 192.168.11.1
domain example.com
Caution:

The IP addresses used in the above example are not meant to be copied literally. You have to adjust the IP addresses to your actual network configuration.
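
After saving the configuration, one way to apply it without rebooting is to cycle the interface with the ifupdown tools and then inspect the result (as root; a small sketch):

# ifdown eth0 && ifup eth0
# ip addr show eth0
# ip route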


Source: http://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_the_network_interface_with_the_static_ip
